4 - 8 years
10 - 15 Lacs
Pune
Remote
Position: AWS Data Engineer

About bluCognition:
bluCognition is an AI/ML-based start-up specializing in developing data products that leverage alternative data sources and in providing servicing support to clients in the financial services sector. Founded in 2017 by senior professionals from the financial services industry, the company is headquartered in the US, with its delivery centre based in Pune. We build all our solutions with the latest technology stack in AI, ML, and NLP, combined with decades of experience in risk management at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com

The Role:
We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making.

Key Responsibilities:
1) Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services.
2) Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion.
3) Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency.
4) Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance.
5) Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
6) Monitor, troubleshoot, and improve the reliability and quality of data systems.
7) Implement data quality checks, logging, and error handling in data pipelines.
8) Use Infrastructure as Code (IaC) tools such as AWS CloudFormation or Terraform for environment management.
9) Stay up to date with the latest developments in AWS services and big data technologies.

Required Qualifications:
1) Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
2) 4+ years of experience as a data engineer or in a similar role.
3) Strong experience with AWS services such as AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, and AWS Step Functions.
4) Proficiency in SQL and Python.
5) Solid understanding of data modeling, ETL processes, and data warehouse architecture.
6) Experience with orchestration tools such as Apache Airflow or Amazon Managed Workflows for Apache Airflow.
7) Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.).
8) Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.).

Preferred Qualifications:
1) Good to have: AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect certification.
2) Experience with real-time data streaming technologies such as Kinesis or Kafka.
3) Familiarity with DevOps practices and CI/CD pipelines.
4) Knowledge of machine learning data preparation and MLOps workflows.

Soft Skills:
1) Excellent problem-solving and analytical skills.
2) Strong communication skills with both technical and non-technical stakeholders.
3) Ability to work independently and collaboratively in a team environment.
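To make the ETL/ELT responsibilities above concrete, here is a minimal, hedged PySpark sketch of the kind of batch pipeline this posting describes. The bucket paths, column names, and quality rule are hypothetical placeholders, not the company's actual pipeline; in practice such a job might run on AWS Glue or EMR and be orchestrated with Step Functions or Airflow.

```python
# Minimal PySpark batch ETL sketch: S3 CSV -> cleaned, partitioned Parquet.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-transactions-etl").getOrCreate()

# Extract: raw CSV files dropped into a landing zone by an upstream process.
raw = spark.read.csv("s3://example-landing/transactions/",
                     header=True, inferSchema=True)

# Transform: basic cleansing plus a simple data quality gate.
clean = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       .withColumn("ingest_date", F.current_date())
)
if clean.count() == 0:
    raise ValueError("Data quality check failed: no valid rows after filtering")

# Load: columnar, partitioned output suitable for Redshift Spectrum or Athena.
clean.write.mode("append").partitionBy("ingest_date") \
     .parquet("s3://example-lake/transactions/")
```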
Posted 2 months ago
2 - 3 years
4 - 5 Lacs
Bengaluru
Work from Office
Python Developer

As a skilled Developer, you will be responsible for building tools and applications that use the data held within company databases. The primary responsibility will be to design and develop these layers of our applications and to coordinate with the rest of the team working on the different layers of the IT infrastructure. A commitment to collaborative problem solving, sophisticated design, and quality products is essential.

Necessary Skills:
- Experience in data wrangling and manipulation with Python/Pandas.
- Experience with Docker containers.
- Knowledge of data structures, algorithms, and data modeling.
- Experience with versioning (Git, Azure DevOps).
- Design and implementation of ETL/ELT pipelines.
- Good knowledge of and experience with web scraping (Scrapy, BeautifulSoup, Selenium).
- Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid).
- Ability to design, build, and maintain efficient, reusable, and reliable Python code (SOLID design principles).
- Experience with SQL databases (views, stored procedures, etc.).

Responsibilities and Activities:
Aside from the core development role, this position includes auxiliary roles that are not related to development. The role includes, but is not limited to:
- Support and maintenance of custom and previously developed tools, as well as ensuring the performance and responsiveness of new applications.
- Delivering high-quality and reliable applications, covering both development and front-end work; maintaining code quality, prioritizing organization, and driving automation.
- Participating in the peer review of plans, technical solutions, and related documentation (mapping/documenting technical procedures).
- Identifying security issues, bottlenecks, and bugs, and implementing solutions to mitigate and address issues of service data security and data breaches.
- Working with SQL/Postgres databases: installing and maintaining database systems and supporting server management, including backups, in addition to troubleshooting issues raised by the Data Processing team.
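As an illustration of the data wrangling and web scraping skills listed above, here is a small hedged sketch combining requests/BeautifulSoup with Pandas. The URL, CSS selectors, and column names are invented for the example; a production scraper would add politeness (rate limiting, robots.txt checks) and error handling.

```python
# Hypothetical sketch: scrape a table from a page and tidy it with pandas.
# The URL, selectors, and column names are illustrative placeholders.
import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/prices", timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Collect one record per table row.
rows = []
for tr in soup.select("table#prices tbody tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 2:
        rows.append({"item": cells[0], "price": cells[1]})

# Wrangle: type coercion, de-duplication, and a derived column.
df = pd.DataFrame(rows)
df["price"] = pd.to_numeric(df["price"].str.replace(",", ""), errors="coerce")
df = df.dropna(subset=["price"]).drop_duplicates(subset=["item"])
df["price_band"] = pd.cut(df["price"], bins=[0, 100, 1000, float("inf")],
                          labels=["low", "mid", "high"])
print(df.head())
```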
Posted 2 months ago
3 - 5 years
6 - 8 Lacs
Pune
Work from Office
Job Title: Senior Data Engineer
Experience Required: 3 to 5 Years
Location: Baner, Pune
Job Type: Full-Time (WFO)

Job Summary:
We are seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing scalable data pipelines, working with cloud platforms such as Microsoft Azure and AWS, and using tools such as data lakes, PySpark, and Azure Data Factory. The role involves collaborating with cross-functional teams to design and implement robust data solutions that support business intelligence, analytics, and decision-making processes.

Key Responsibilities:
- Design, develop, and maintain scalable ETL pipelines to ingest, transform, and process large datasets from various sources.
- Build and optimize data pipelines and architectures for efficient and secure data processing.
- Work extensively with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics for cloud data integration and management.
- Use Databricks and PySpark for advanced big data processing and analytics.
- Implement data modelling and design data warehouses to support business intelligence tools such as Power BI.
- Ensure data quality, governance, and security using Azure DevOps and Azure Functions.
- Develop and maintain SQL Server databases and write optimized SQL queries for analytics and reporting.
- Collaborate with stakeholders to gather requirements and translate them into effective data engineering solutions.
- Apply data architecture best practices to support big data initiatives and analytics use cases.
- Monitor, troubleshoot, and improve data workflows and processes to ensure seamless data flow.

Required Skills and Qualifications:
Educational Background: Bachelor's or master's degree in Computer Science, Information Systems, or a related field.
Technical Skills:
- Strong expertise in ETL development, data engineering, and data pipeline development.
- Proficiency in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics.
- Advanced knowledge of Databricks, PySpark, and Python for data processing.
- Hands-on experience with Azure SQL, SQL Server, and data warehousing solutions.
- Knowledge of Power BI for reporting and dashboard creation.
- Familiarity with Azure Functions, Azure DevOps, and cloud computing on Microsoft Azure.
- Understanding of data architecture and data modelling principles.
- Experience with big data tools and frameworks.
Experience:
- Proven experience in designing and implementing large-scale data processing systems.
- Hands-on experience with data warehouses (DWH) and handling big data workloads.
- Ability to work with both structured and unstructured datasets.
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities to work effectively in a team environment.
- A proactive mindset with a passion for learning and adopting new technologies.

Preferred Skills:
- Experience with Azure data warehouse technologies.
- Knowledge of Azure Machine Learning or similar AI/ML frameworks.
- Familiarity with data governance and data compliance practices.
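For flavour, here is a minimal Databricks-style PySpark sketch of the kind of Azure Data Lake aggregation job this role involves. The storage account, container, and schema are hypothetical placeholders; a real pipeline would typically be triggered and orchestrated via Azure Data Factory, with the output feeding Power BI.

```python
# Hedged sketch of a Databricks-style PySpark job: read raw events from
# ADLS Gen2, aggregate them, and write a curated summary dataset.
# Storage account, container, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-daily-summary").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/events/"
out_path = "abfss://curated@examplestorage.dfs.core.windows.net/events_daily/"

events = spark.read.json(raw_path)

# Aggregate to one row per customer per day for downstream reporting.
daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("customer_id", "event_date")
          .agg(F.count("*").alias("event_count"),
               F.sum("value").alias("total_value"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(out_path)
```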
Posted 2 months ago
6 - 10 years
15 - 20 Lacs
Gurugram
Remote
Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months

Key Responsibilities:
- Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies.
- LookML Proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards.
- Data Modeling Expertise: Understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions.
- SQL Knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL.
- Looker Environment: Familiarity with the Looker interface, including the IDE, the LookML Validator, and SQL Runner, is necessary for efficient development.

Education and/or Experience:
- Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required.
- 6+ years of IT industry experience in the data management field.
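Since this posting centres on LookML, here is a short hedged LookML sketch of a view, showing how a database table maps onto dimensions, a dimension group, and measures. The table and field names are hypothetical placeholders, not any specific employer's model.

```lookml
# Minimal LookML view sketch: map a hypothetical orders table to
# dimensions and measures that explores and dashboards can use.
view: orders {
  sql_table_name: public.orders ;;

  dimension: order_id {
    primary_key: yes
    type: number
    sql: ${TABLE}.order_id ;;
  }

  dimension_group: created {
    type: time
    timeframes: [date, week, month]
    sql: ${TABLE}.created_at ;;
  }

  measure: order_count {
    type: count
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.amount ;;
  }
}
```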
Posted 2 months ago