268 AWS EMR Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

1.0 - 3.0 years

2 - 5 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

We are seeking a skilled Data Engineer with strong experience in SQL, PySpark, and cloud technologies to join our dynamic team. The ideal candidate will have a solid background in designing and implementing data engineering pipelines, with a focus on performance, scalability, and reliability. You will work closely with other data engineers, data scientists, and stakeholders to develop, maintain, and optimize data infrastructure. Key Responsibilities: SQL Development: - Write medium-complexity SQL queries to extract, transform, and load data efficiently. - Optimize SQL queries for performance and scalability. - Collaborate with team members to understand data requirements and translate them...
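
For illustration, a minimal PySpark sketch of the kind of medium-complexity SQL ETL this role describes; the tables, columns, and S3 paths are hypothetical placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-etl-example").getOrCreate()

# Hypothetical raw inputs registered as SQL views.
spark.read.parquet("s3://example-bucket/raw/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3://example-bucket/raw/customers/").createOrReplaceTempView("customers")

# Extract/transform: join, filter, aggregate.
daily_revenue = spark.sql("""
    SELECT c.region,
           DATE(o.order_ts)     AS order_date,
           SUM(o.amount)        AS revenue,
           COUNT(DISTINCT o.id) AS order_count
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    WHERE o.status = 'COMPLETED'
    GROUP BY c.region, DATE(o.order_ts)
""")

# Load: write partitioned output for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/")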

Posted 20 hours ago

4.0 - 9.0 years

15 - 25 Lacs

Pune

Work from Office

AIA-Pune Job Summary: We are seeking a skilled Developer with 4 to 9 years of experience to join our team in a hybrid work model. The ideal candidate will have expertise in Amazon RDS, AWS Glue, AWS DevOps, and other AWS technologies. This role involves working with big data and requires proficiency in Python and Apache Spark. Experience in regulatory and process control is a plus. Responsibilities: Develop and maintain scalable data processing systems using Amazon RDS, AWS Glue, and AWS DevOps to ensure efficient data management and processing. Collaborate with cross-functional teams to design and implement data solutions that meet business requirements and enhance operational efficiency. Utilize ...
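
A minimal AWS Glue job sketch in Python of the sort this role implies; it only runs inside the Glue job runtime, and the catalog database, table, and output path are assumed placeholders.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source, apply a simple transform, and write back out.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events")  # hypothetical catalog entries
df = source.toDF().filter("event_type IS NOT NULL")

df.write.mode("append").parquet("s3://example-bucket/processed/events/")
job.commit()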

Posted 1 day ago

2.0 - 6.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education: Bachelor...

Posted 2 days ago

8.0 - 10.0 years

9 - 13 Lacs

Noida

Work from Office

AWS data developers with 8-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and able to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Beh - Communication; Big Data - Big Data - PySpark; Database - Database Programming - SQL; Programming Language - Python - Apache Airflow.
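
A brief boto3 sketch of submitting a PySpark step to an existing EMR cluster, the core task behind many of these listings; the cluster ID, region, and script location are placeholder assumptions.

import boto3

# Submits a spark-submit step to an existing EMR cluster.
emr = boto3.client("emr", region_name="ap-south-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # hypothetical cluster ID
    Steps=[{
        "Name": "daily-etl",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit", "--deploy-mode", "cluster",
                "s3://example-bucket/jobs/daily_etl.py",
            ],
        },
    }],
)
print(response["StepIds"])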

Posted 2 days ago

6.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and able to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Big Data - Big Data - PySpark; Beh - Communication; Database - Database Programming - SQL.

Posted 2 days ago

1.0 - 4.0 years

3 - 6 Lacs

Noida

Work from Office

Contract Duration: 3 months. Skills: - PySpark or Scala with Spark, Spark architecture, Hadoop, SQL - Streaming technologies such as Kafka - Proficiency in advanced SQL (window functions) - Airflow, S3, and StreamSets or similar ETL tools - Basic knowledge of AWS IAM, AWS EMR, and Snowflake. Responsibilities: - Data Pipeline Development: Design, implement, and maintain scalable and efficient data pipelines to collect, process, and store large volumes of structured and unstructured data. - Data Modeling: Develop and maintain data models, schemas, and metadata to support the organization's data initiatives. Ensure data integrity and optimize data storage and retrieval processes. - Data Integ...
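
A short PySpark sketch of the advanced-SQL / window-function skill mentioned here, keeping only the latest record per key; the dataset and column names are hypothetical.

from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("window-fn-example").getOrCreate()

# Hypothetical input: multiple change records per order_id.
events = spark.read.parquet("s3://example-bucket/raw/order_events/")

# DataFrame equivalent of ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC).
w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
latest = (events
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn"))

latest.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_latest/")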

Posted 2 days ago

6.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

AWS data developers with 6-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and able to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Big Data - Big Data - PySpark; Beh - Communication; Database - Database Programming - SQL.

Posted 3 days ago

8.0 - 10.0 years

9 - 13 Lacs

Noida

Work from Office

AWS data developers with 8-10 years of experience; certified candidates (AWS Data Engineer Associate or AWS Solutions Architect) are preferred. Skills required: SQL, AWS Glue, PySpark, Airflow, CDK, Redshift. Good communication skills and able to deliver independently. Mandatory Competencies: Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift; Beh - Communication; Big Data - Big Data - PySpark; Database - Database Programming - SQL; Programming Language - Python - Apache Airflow.

Posted 3 days ago

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Python Developer, you will be joining a team working with a renowned financial institution to deliver business value through your expertise. Your role will involve analyzing existing SAS DI pipelines and SQL-based transformations, translating and optimizing SAS SQL logic into Python code using frameworks like PySpark, developing scalable ETL pipelines on AWS EMR, implementing data transformation and aggregation logic, designing modular code for distributed data processing tasks, integrating EMR jobs with various systems, and developing Tableau reports for business reporting. Key Responsibilities: - Analyze existing SAS DI pipelines and SQL-based transformations. - Translate and optimize...
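
A rough sketch of the SQL-to-PySpark translation work this role describes, restating a simple grouped aggregation in the DataFrame API; the source table and columns are invented for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-to-pyspark-example").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/raw/transactions/")  # hypothetical source

# DataFrame equivalent of a PROC SQL-style aggregation:
# SELECT account_id, SUM(amount), COUNT(*) FROM txns GROUP BY account_id HAVING SUM(amount) > 0
summary = (txns
           .groupBy("account_id")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("txn_count"))
           .filter(F.col("total_amount") > 0))

summary.write.mode("overwrite").parquet("s3://example-bucket/curated/account_summary/")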

Posted 4 days ago

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a skilled and proactive Python / PySpark Developer at our company, you will join our data engineering or analytics team. Your primary responsibility will be to build scalable data pipelines, perform large-scale data processing, and collaborate with data scientists, analysts, and business stakeholders. Key Responsibilities: - Design, develop, and optimize ETL data pipelines using PySpark on big data platforms (e.g., Hadoop, Databricks, EMR). - Write clean, efficient, and modular code in Python for data processing and integration tasks. - Work with large datasets to extract insights, transform raw data, and ensure data quality. - Collaborate with cross-functional teams to understand busines...

Posted 4 days ago

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Role Overview: You are a highly skilled and experienced Senior ETL & Data Streaming Engineer with over 10 years of experience. Your main responsibility will be to design, develop, and maintain robust data pipelines. Your expertise in both batch ETL processes and real-time data streaming technologies, along with hands-on experience with AWS data services, will be crucial. Your track record in working with Data Lake architectures and traditional Data Warehousing environments will be essential for this role. Key Responsibilities: - Design, develop, and implement highly scalable, fault-tolerant, and performant ETL processes using industry-leading ETL tools to extract, transform, and load data fr...

Posted 4 days ago

5.0 - 10.0 years

6 - 16 Lacs

Pune, Chennai, Bengaluru

Work from Office

Role & responsibilities: Role: PySpark Developer. Location: Bangalore, Chennai, Pune, Hyderabad, Gurugram. Date of Interview: 30th Oct, 2025. Time of Interview: 10:00 AM to 4:00 PM. Job Description: Overall 5+ years of experience in PySpark and big data; AWS EMR, S3, IAM, Lambda, SNS, SQS. Good working experience on big data platforms such as Hadoop, Spark, Scala, Hive, Impala, SQL. Good Spark, PySpark, and big data experience; Spark UI, optimization, and debugging techniques. Good Python scripting skills. Intermediate SQL exposure: subqueries, joins, CTEs. Database technologies: AWS EMR, S3, IAM, Lambda, SNS, SQS.
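
A brief PySpark sketch of the Spark optimization and debugging techniques the posting lists (broadcast joins, caching, plan inspection alongside the Spark UI); paths and table names are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-tuning-example").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/raw/fact_sales/")   # large table (hypothetical)
dims = spark.read.parquet("s3://example-bucket/raw/dim_product/")   # small lookup table

# Broadcast the small dimension table to avoid a shuffle-heavy sort-merge join.
joined = facts.join(F.broadcast(dims), "product_id")

# Cache a reused intermediate result instead of recomputing it per action.
joined.cache()

# Inspect the physical plan (and the Spark UI) to confirm the broadcast took effect.
joined.explain()

# Control partitioning and output layout before writing.
joined.repartition("sale_date") \
      .write.mode("overwrite").partitionBy("sale_date") \
      .parquet("s3://example-bucket/curated/sales_enriched/")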

Posted 4 days ago

5.0 - 10.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job description: Hiring for AWS Developer. Mandatory Skills: AWS Cloud, AWS Lambda/Glue. Responsibilities: A day in the life of an Infoscion - As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation and design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes and detailed fun...

Posted 4 days ago

3.0 - 5.0 years

0 Lacs

India

On-site

Overview: We are seeking a Data Engineer to join our data engineering team, focusing on data processing and analytics solutions using modern big data technologies. Responsibilities: Develop and maintain data processing pipelines using Python, Spark, and Hadoop ecosystems. Build ETL workflows on AWS EMR clusters for processing large datasets. Assist in optimizing Spark applications for performance and reliability. Support data visualization and exploration using Hue and other analytics platforms. Work with data scientists and analysts on machine learning pipeline implementation. Monitor distributed computing environments and assist with troubleshooting. Qualifications: 3+ years of software devel...
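
A minimal sketch of an ETL entry point of the kind run on an EMR cluster via spark-submit; the argument names and S3 locations are assumptions for illustration.

import argparse
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("emr-etl-example").getOrCreate()
    df = spark.read.json(input_path)
    # Drop incomplete records and derive a partition column.
    cleaned = (df.dropna(subset=["event_id"])
                 .withColumn("event_date", F.to_date("event_ts")))
    cleaned.write.mode("overwrite").partitionBy("event_date").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()
    main(args.input, args.output)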

Posted 6 days ago

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Role & responsibilities: Proficient in Python for data processing and automation. Strong experience with Apache Spark for large-scale data processing. Familiarity with AWS S3 for data storage and management. Experience with Kafka for real-time data streaming. Knowledge of Redshift for data warehousing solutions. Proficient in Oracle databases for data management. Experience with AWS Glue for ETL processes. Familiarity with Apache Airflow for workflow orchestration. Experience with EMR for big data processing. Mandatory: Strong AWS data engineering skills. Apply link: https://careers.ey.com/job-invite/1625618/
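
A compact Spark Structured Streaming sketch matching the Kafka-to-storage streaming skill listed here; it assumes the Spark Kafka connector package is available on the cluster, and the broker, topic, and S3 paths are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

# Read a hypothetical Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "clickstream")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
events = raw.select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))

# Land micro-batches on S3 as Parquet; the checkpoint directory tracks progress.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/streams/clickstream/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/clickstream/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()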

Posted 1 week ago

4.0 - 9.0 years

10 - 20 Lacs

Kochi, Kolkata, Thiruvananthapuram

Hybrid

Role & responsibilities: Experience building on AWS using S3, EC2, Redshift, Glue, EMR, DynamoDB, Lambda, QuickSight, etc. Experience in PySpark/Spark/Scala. Experience using software version control tools (Git, Jenkins, Apache Subversion). AWS certifications or other related professional technical certifications. Experience with cloud or on-premises middleware and other enterprise integration technologies. Experience in writing MapReduce and/or Spark jobs. Demonstrated strength in architecting data warehouse solutions and integrating technical components. Good analytical skills with excellent knowledge of SQL. 3+ years of work experience with a very large data warehousing environment. 3+ years ...

Posted 1 week ago

3.0 - 8.0 years

0 - 3 Lacs

Ghaziabad

Remote

Project Duration: 3 months, with possible extension. Data Engineer (Databricks experience is mandatory). Exp: 3-8 years. Qualification: B.Tech/B.Sc. (IT). Expertise in Python, PySpark, and Apache Spark for data engineering. Strong experience in building and optimizing ETL/data transformation pipelines. Proficient in Databricks and the Delta Lake storage format (Databricks experience is mandatory). Cloud platforms like AWS (Glue, EMR), Snowflake, Dataiku, Alteryx. Skilled in managing large-scale, multi-dimensional datasets and designing scalable data architectures. Solid understanding of data modeling and data warehouse concepts (on-prem & cloud). Familiar with modern data tools and continually updated on em...
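
A small sketch of writing a curated Delta Lake table with PySpark, as the Databricks/Delta requirement suggests; on Databricks the Delta libraries are preconfigured (elsewhere the delta-spark package is needed), and the paths and columns here are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-example").getOrCreate()

# Hypothetical landing-zone input.
raw = spark.read.json("s3://example-bucket/landing/customers/")

curated = (raw.dropDuplicates(["customer_id"])
              .withColumn("load_ts", F.current_timestamp()))

# Delta format adds ACID transactions and time travel on top of Parquet files.
(curated.write
        .format("delta")
        .mode("overwrite")
        .save("s3://example-bucket/delta/customers/"))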

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Role Overview: You will be responsible for designing, developing, and maintaining data pipelines and solutions using Databricks, Apache Spark, and other related tools to process large datasets and extract actionable business insights. Your role will involve building a metrics deck and dashboards for KPIs, including the underlying data models. Additionally, you will be required to design, implement, and maintain a platform providing secured access to large datasets. Key Responsibilities: - Hold a Bachelor's degree in computer science, Data Science, engineering, mathematics, information systems, or a related technical discipline - Possess 5+ years of relevant experience in data engineering roles...

Posted 1 week ago

5.0 - 8.0 years

10 - 17 Lacs

Chennai, Bengaluru

Hybrid

Data Engineer - Immediate joiners only - Bangalore/Chennai (local candidates). Job Duties: Develop and support data solutions in support of Geo and Marketplace reporting and analytics requirements. Engage with the product owner, technology lead, report developers, product analysts, and business partners to understand capability requirements and develop data solutions based on product backlog priorities. Skills / Qualifications: 5+ years of experience with data engineering, with emphasis on data analytics and reporting. Expert-level experience with Python, demonstrating: proficiency authoring distributable packages; proficiency authoring and automating unit tests using tools such as tox and pytest, pre...
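
A tiny pytest sketch of the unit-test automation this posting asks about; the helper function under test is invented purely for illustration.

# test_transforms.py
import pytest

def normalise_region(value: str) -> str:
    """Hypothetical helper under test: trim and upper-case a region code."""
    if value is None:
        raise ValueError("region is required")
    return value.strip().upper()

def test_normalise_region_strips_and_uppercases():
    assert normalise_region("  apac ") == "APAC"

def test_normalise_region_rejects_none():
    with pytest.raises(ValueError):
        normalise_region(None)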

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a PySpark Developer at our company, you will be responsible for optimizing PySpark applications to improve performance and resource utilization, ensuring efficient data processing and analysis. Your key responsibilities will include: - Developing and maintaining scalable solutions using AWS EC2 and AWS EMR to support business needs and data-driven decision-making. - Implementing PySpark frameworks to process large datasets, enhancing data accessibility and insights for stakeholders. - Collaborating with cross-functional teams to design and deploy robust data pipelines, ensuring seamless integration with existing systems. - Utilizing Python programming to automate tasks and streamline work...

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Role Overview: As an IT professional at Capgemini Invent, you will be responsible for creating data warehouses, data lakes, ETL/ELT, and data pipelines on cloud platforms. Your focus will be on delivering cutting-edge solutions tailored to the challenges faced by clients in the Life Sciences Domain. You will leverage your expertise in cloud storage, cloud database, and data warehousing solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, you will use cloud data integration services like Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc for handling structured, semi-structured, and unstructured data. Your role ...

Posted 2 weeks ago
