4.0 - 8.0 years
0 Lacs
rajkot, gujarat
On-site
As a Senior Data Engineer, your role involves building and maintaining automated data pipelines to ensure smooth data flow. You will also be responsible for ensuring data integrity, validation, and transformation for accurate analysis. Working efficiently with large datasets and maintaining data quality standards will be crucial. Additionally, collaborating effectively with cross-functional teams to meet project requirements is an essential part of your responsibilities. Qualifications Required: - Strong hands-on experience in SQL (Postgres/Other RDBMS), Python for data processing, Apache Spark, and cloud-based ETL tools such as AWS Glue, Lambda, Step Functions, Azure Data Factory, or Databr...
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
haryana
On-site
You will be responsible for working on small/medium-scale technology solution delivery engagements, with a focus on ETL interfacing technologies such as Informatica, Talend, SSIS, data warehousing, SQL, and exposure to Cloud Platforms like AWS Glue, Azure, GCP. Additionally, you will be leading a team in this role. - Work on small/medium-scale technology solution delivery engagements - Utilize ETL interfacing technologies like Informatica, Talend, SSIS - Handle data warehousing tasks and SQL queries - Gain exposure to Cloud Platforms such as AWS Glue, Azure, GCP - Lead and manage a team effectively Qualifications Required: - Bachelor's degree in Computer Science, IT, or related field - 4+ ye...
Posted 2 months ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. Your focus will be on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions. - Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles. - Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms. - Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data. - Implement dat...
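The bronze-silver-gold (medallion) layering named above can be sketched in plain Python. This is a simplified illustration only: a real Databricks pipeline would use Spark DataFrames and Delta tables, and the record fields here are hypothetical.

```python
# Minimal illustration of medallion (bronze-silver-gold) layering.
# Field names and records are hypothetical; a real pipeline would use
# Spark DataFrames and Delta tables on Databricks.

# Bronze: raw records exactly as ingested, including bad rows.
bronze = [
    {"order_id": "1", "amount": "120.50", "region": "south"},
    {"order_id": "2", "amount": "bad-value", "region": "south"},
    {"order_id": "3", "amount": "80.00", "region": "north"},
]

def to_silver(rows):
    """Silver: validated and typed rows; invalid rows are dropped."""
    silver = []
    for row in rows:
        try:
            silver.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "region": row["region"],
            })
        except (KeyError, ValueError):
            continue  # in practice, route bad rows to a quarantine table
    return silver

def to_gold(rows):
    """Gold: business-level aggregate (revenue per region)."""
    revenue = {}
    for row in rows:
        revenue[row["region"]] = revenue.get(row["region"], 0.0) + row["amount"]
    return revenue

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'south': 120.5, 'north': 80.0}
```

The point of the layering is that each stage is reproducible from the one before it: bronze preserves the raw input, silver enforces schema and validity, gold serves analytics.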
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As an Integration Developer at Worley, you will play a crucial role in developing and maintaining data pipelines for ingesting and collecting data from various sources into a centralized data platform. Your responsibilities will include optimizing and troubleshooting AWS Glue jobs, utilizing Python and PySpark for data handling, collaborating with data architects on designing data models, and creating ETL processes using tools like Airflow, Python, and PySpark. Additionally, you will be involved in implementing monitoring solutions, managing databases, and working with Infrastructure as Code tools like Terraform and AWS CDK. Key Responsibilities: - Develop and implement data pipelines for co...
Posted 2 months ago
6.0 - 11.0 years
5 - 9 Lacs
bengaluru
Work from Office
Job Duties: At least 6+ years of relevant experience in the design, development, and complete end-to-end design of an enterprise-wide big data solution. Experience in designing and developing a big data solution using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch is a must. Strong application development experience in Scala/Python. Strong database SQL experience, preferably Redshift. Experience in Snowflake is an added advantage. Experience with ETL/ELT processes and frameworks is a must. Create integration and application technical design documentation. Conduct peer reviews of functional design documentation. Complete development, configuration, test cases, and unit testing. Perform code reviews and ensu...
Posted 2 months ago
6.0 - 11.0 years
5 - 14 Lacs
gurugram, bengaluru
Hybrid
6+ years of experience in Data Engineering or related role. Hands-on experience with Snowflake (data modelling, performance tuning, query optimization, Snowpipe, Time Travel, Streams & Tasks). Strong expertise in AWS Glue for ETL job development, orchestration, and optimization. Proficiency with AWS services such as S3, Lambda, Redshift, Athena, Step Functions, Kinesis, and CloudWatch. Strong programming skills in Python and/or PySpark. Knowledge of SQL and experience with performance tuning for large datasets. Experience with data warehouse and data lake architectures. Familiarity with CI/CD pipelines and infrastructure-as-code tools (Terraform, CloudFormation). Preferred Qualifications: AW...
Posted 2 months ago
4.0 - 8.0 years
12 - 20 Lacs
hyderabad
Work from Office
Job description: Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift). Handling of data-related activities such as data parsing, cleansing, quality definition, data pipelines, storage, and ETL scripts. Experience in the programming languages Python/PySpark/SQL. Hands-on experience with data migration. Experience consuming REST APIs using various authentication options within an AWS Lambda architecture. Orchestrate triggers, debug, and schedule batch jobs using AWS Glue, Lambda, and Step Functions. Understanding of AWS security features such as IAM roles and policies. Exposure to DevOps tools. AWS certification is highly pre...
Posted 2 months ago
5.0 - 9.0 years
7 - 11 Lacs
bengaluru
Work from Office
5-10 years of experience in database development or a related field. Proven experience with database design, development, and management. Experience working with large-scale databases and complex data environments. Experience with data modelling and database design. Knowledge of database performance tuning and optimization. Architect, develop, and maintain tables, views, procedures, functions, and packages in the database (MUST HAVE). Performing complex relational database queries using SQL (AWS RDS for PostgreSQL) and Oracle PL/SQL (MUST HAVE). Familiarity with ETL processes and tools (AWS Batch, AWS Glue, etc.) (MUST HAVE). Familiarity with CI/CD pipelines, Jenkins deployment, and Git repositories (MUST HAVE). Perfo...
Posted 2 months ago
6.0 - 11.0 years
12 - 18 Lacs
pune
Work from Office
Job Description Job Title: Vanguard AWS Glue Developer Location: Pune Experience: 6 - 11 Years Employment Type: Contract Work Mode: Onsite About the Role: We are seeking an experienced AWS Glue Developer with strong expertise in building scalable ETL pipelines and data integration solutions on the AWS cloud platform. The ideal candidate will have hands-on experience with Vanguard applications and the AWS Glue ecosystem, and be capable of designing, developing, and optimizing large-scale data workflows. Key Responsibilities: Design, develop, and maintain ETL pipelines using AWS Glue for data ingestion, transformation, and loading. Build and optimize Glue jobs using Python/PySpark for processing large-scale d...
Posted 2 months ago
7.0 - 12.0 years
18 - 30 Lacs
bengaluru
Work from Office
Key Responsibilities: Design, build, and maintain scalable ETL pipelines using AWS Glue, PySpark, and SQL. Develop and optimize data models for Snowflake and other cloud data warehouses. Ensure efficient data integration from multiple structured and unstructured sources. Work closely with data architects and analysts to define data requirements and best practices. Implement data quality, validation, and performance tuning processes. Monitor and troubleshoot ETL workflows and ensure high data reliability. Mentor junior engineers and review code for best practices and scalability. Required Skills & Qualifications: 7+ years of experience is a must, with at least 2+ years as a lead. Strong hands-o...
Posted 2 months ago
3.0 - 5.0 years
12 - 14 Lacs
bengaluru
Work from Office
Job Description: We are looking for an experienced Data Engineer with strong expertise in data pipeline development, cloud-based data processing, and analytics solutions. The ideal candidate should have hands-on experience working with AWS data services and be proficient in programming and data integration tools. Key Responsibilities: Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL workflows. Work with large and complex datasets using Python and SQL for data transformation and analysis. Develop distributed data processing solutions using Spark (Scala or PySpark); Spark with Scala preferred. Implement and manage AWS data services including S3, Lambda, Glue C...
Posted 2 months ago
8.0 - 12.0 years
12 - 19 Lacs
pune
Work from Office
Skills: communication, problem-solving. Responsibilities: * Collaborate with cross-functional teams on pipeline design & governance * Design, develop & maintain AWS data pipelines using Glue, Redshift, Lambda & Python. Benefits: Provident fund.
Posted 2 months ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As a Data Engineer at Zupee, you will play a crucial role in developing the next generation of the Data platform. Your collaboration across various functions like product, marketing, design, innovation/growth, strategy/scale, customer experience, data science & analytics, and technology will be essential for the success of the platform. **Core Responsibilities:** - Understand, implement, and automate ETL and data pipelines with up-to-date industry standards. - Hands-on involvement in the design, development, and implementation of optimal and scalable AWS services. **Qualifications and Skills:** - 2-4 years of professional experience in a data engineering profile. - BS or MS in Computer Scien...
Posted 2 months ago
4.0 - 8.0 years
8 - 16 Lacs
bengaluru
Hybrid
Role & responsibilities: Sr. Data Engineer with AWS, Python, Spark/Snowpark, and SQL.
Posted 2 months ago
7.0 - 11.0 years
35 - 45 Lacs
hyderabad, pune, delhi / ncr
Hybrid
Design and develop the integration of Snowflake with a cloud platform, along with platform enhancements, integrations, and performance optimisation. Work on data ingestion using Python, cataloguing, and lineage tracking. Develop and architect ETL workflows. Required Candidate profile: 5 years in developing and scaling data platforms centered around Snowflake, with Azure. Hands-on Python. Understanding of modern data architectures such as Data Mesh, Lakehouse, and ELT.
Posted 2 months ago
7.0 - 12.0 years
15 - 25 Lacs
hyderabad
Remote
Position Overview: We are seeking a highly skilled and motivated Data Engineer to join our team. This role will be responsible for designing, developing, and maintaining robust data pipelines that transform raw data into actionable insights. The ideal candidate will have strong technical expertise in Python, SQL, and AWS data services, with experience in dimensional modeling, data governance, and building analytical solutions. This position offers an opportunity to work in a collaborative environment where innovation and problem-solving are encouraged. The Data Engineer will play a key role in ensuring that high-quality, reliable, and scalable data solutions support business intelligence, an...
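The dimensional modeling mentioned in this role can be sketched as splitting denormalized rows into a dimension table with surrogate keys and a fact table that references it. This is an illustrative sketch only; the column names and data are hypothetical, and a warehouse would do this in SQL or an ETL tool.

```python
# Sketch of dimensional modeling: split denormalized sales rows into a
# customer dimension (with surrogate keys) and a fact table referencing it.
# Column names and data are hypothetical.

sales = [
    {"customer": "Acme", "city": "Pune", "amount": 100},
    {"customer": "Beta", "city": "Delhi", "amount": 250},
    {"customer": "Acme", "city": "Pune", "amount": 75},
]

def build_star_schema(rows):
    dim_customer, facts = {}, []
    for row in rows:
        key = (row["customer"], row["city"])
        if key not in dim_customer:
            dim_customer[key] = len(dim_customer) + 1  # assign surrogate key
        facts.append({"customer_sk": dim_customer[key], "amount": row["amount"]})
    dim = [{"customer_sk": sk, "customer": c, "city": ci}
           for (c, ci), sk in dim_customer.items()]
    return dim, facts

dim, facts = build_star_schema(sales)
print(len(dim), len(facts))  # 2 3
```

Repeated customers collapse into one dimension row, so the fact table stays narrow and the descriptive attributes live in one place.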
Posted 2 months ago
4.0 - 8.0 years
8 - 12 Lacs
noida, chennai, bengaluru
Work from Office
Job Profile: Develop innovative and visually appealing Qlik Sense dashboards and reports that provide actionable insights to stakeholders. Hands-on experience in designing, implementing, testing, and supporting reports and dashboards within the agreed SLA. Power BI to Qlik Sense migration experience is a plus. Good skill set in advanced expression building. Working knowledge of extensions in Qlik Sense. Strong knowledge of dashboard optimization techniques. Good knowledge of AWS features such as S3 buckets, Athena, Glue jobs, crawlers, AWS Lake Formation, Python, etc. Relocation Supported: Visa Sponsorship Approved: At Fujitsu, we are committed to creating a diverse and inclusive workplace where everyone fe...
Posted 2 months ago
7.0 - 12.0 years
11 - 16 Lacs
chennai
Work from Office
Role Description. Experience Required: 7+ years. Location: offshore. Role Overview: We are looking for a skilled Backend Developer with strong Python expertise and experience in building scalable microservices. The candidate should be comfortable working in AWS environments and collaborating with cross-functional teams. Key Responsibilities: - Develop and maintain microservices-based backend systems - Design and implement efficient APIs - Optimize database queries and schema design - Support production systems and troubleshoot issues - Collaborate with team members on code reviews and development practices - Work with GitHub workflows and CI/CD pipelines - Integrate with ElasticSearch where needed - ...
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Governance and Data Quality Specialist at our company, your role will involve developing, implementing, and maintaining data governance frameworks and processes to ensure the accuracy, completeness, and reliability of organizational data. You will collaborate with various stakeholders to define data standards, policies, and procedures, and oversee compliance with regulatory requirements related to data management. **Key Responsibilities:** - Collaborate with cross-functional teams to define data quality standards and metrics and establish processes for monitoring and improving data quality. - Design and implement data quality assurance processes, including data profiling, cleansing...
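The data profiling this role describes (measuring accuracy, completeness, and reliability) can be illustrated with a minimal per-column profile. A sketch only: the column names, dataset, and metrics chosen are hypothetical, and production monitoring would run such checks continuously against real tables.

```python
# Minimal data-profiling sketch: per-column completeness and distinct counts,
# the kind of metric a data-quality monitoring process might track.
# Column names and records are illustrative.

def profile(rows, columns):
    report = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v not in (None, "")]
        report[col] = {
            "completeness": len(non_null) / len(rows) if rows else 0.0,
            "distinct": len(set(non_null)),
        }
    return report

customers = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": ""},
    {"id": 3, "email": "c@x.com"},
]
report = profile(customers, ["id", "email"])
print(report["email"])
```

A governance process would compare such metrics against agreed thresholds (e.g. completeness above 99%) and flag columns that drift below them.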
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You are a skilled and motivated Data Engineer Lead at DevOn, a leading provider of innovative technology solutions focusing on data-driven decision-making, cloud computing, and advanced analytics. Your role involves leading the design, development, and maintenance of data pipelines and ETL workflows using modern cloud technologies like Python, PySpark, AWS Glue, RedShift, SQL, Jenkins, Bitbucket, EKS, and Airflow. **Key Responsibilities:** - Lead the design and implementation of scalable data pipelines and ETL workflows on AWS. - Develop and manage data ingestion, transformation, and storage frameworks using AWS Glue, PySpark, and RedShift. - Architect complex SQL queries for large datasets ...
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
kerala
On-site
As a Senior Data Engineer specializing in Data Quality, Ingestion & API Development, your role will involve leading the development of a scalable data ingestion framework with a strong focus on ensuring high data quality and validation. You will also be responsible for designing and implementing robust APIs for seamless data integration. Your expertise in building and managing big data pipelines using modern AWS-based technologies will be crucial for driving quality and efficiency in data processing systems. - **Data Ingestion Framework:** - **Design & Development:** Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data ...
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
hyderabad, chennai, bengaluru
Work from Office
Azure: ADF, SQL, Synapse, Azure Data Lake Store, Key Vaults. AWS: Redshift, Glue.
Posted 2 months ago
5.0 - 10.0 years
17 - 32 Lacs
hyderabad, pune, bengaluru
Work from Office
Job Description: Strong expertise in Python (including data processing/manipulation libraries such as pandas and PySpark). Hands-on experience in designing and maintaining ETL pipelines. Proficiency with AWS cloud services (Glue, Lambda, EMR, ECS, Lake Formation, etc.). Experience with orchestration tools such as Airflow and AWS Step Functions. Exposure to databases like Redshift, Aurora, Postgres, or Snowflake; experience with Redshift is preferred.
Posted 2 months ago
5.0 - 8.0 years
11 - 16 Lacs
chennai
Work from Office
Role Overview: We are looking for a skilled Backend Developer with strong Python expertise and experience in building scalable microservices. The candidate should be comfortable working in AWS environments and collaborating with cross-functional teams. Key Responsibilities: - Develop and maintain microservices-based backend systems. - Design and implement efficient APIs. - Optimize database queries and schema design. - Support production systems and troubleshoot issues. - Collaborate with team members on code reviews and development practices. - Work with GitHub workflows and CI/CD pipelines. - Integrate with ElasticSearch where needed. - Utilize AWS services including AWS Glue for ETL and d...
Posted 2 months ago
2.0 - 3.0 years
5 - 5 Lacs
kochi, chennai, thiruvananthapuram
Work from Office
Role Proficiency: This role requires proficiency in data pipeline development, including coding, testing, and implementing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be adept at using ETL tools such as Informatica, Glue, Databricks, and Dataproc, along with coding skills in Python, PySpark, and SQL. Works independently according to work allocation. Outcomes: Operate with minimal guidance to develop error-free code, test applications, and document the development process. Understand application features and component designs to develop them in accordance with user stories and requirements. Code, debug, test, document, and communicate the stages of product...
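The joining step in a pipeline like the one described here amounts to matching records from two sources on a shared key, as in a hash join. This is an illustrative sketch; the datasets and field names are hypothetical, and ETL tools express the same operation declaratively.

```python
# Sketch of joining data from two sources on a shared key (a hash join),
# as an ETL pipeline's transform step might do. Datasets are hypothetical.

orders = [
    {"order_id": 1, "customer_id": "c1", "total": 50},
    {"order_id": 2, "customer_id": "c2", "total": 30},
]
customers = [
    {"customer_id": "c1", "name": "Asha"},
    {"customer_id": "c2", "name": "Ravi"},
]

def inner_join(left, right, key):
    index = {r[key]: r for r in right}  # build side: index the smaller table
    return [{**l, **index[l[key]]}      # probe side: look up each left row
            for l in left if l[key] in index]

joined = inner_join(orders, customers, "customer_id")
print(joined[0]["name"])  # prints Asha
```

Building the index on the smaller input keeps the join linear in the size of both tables, which is the same reason distributed engines broadcast small tables.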
Posted 2 months ago