268 AWS EMR Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Senior Data Engineer - Big Data, ETL & Java Experience Level: 5+ Years Employment Type: Full-time About the Role EXL is seeking a Senior Software Engineer with a strong foundation in Java, along with expertise in Big Data technologies and ETL development. In this role, you'll design and implement scalable, high-performance data and backend systems for clients in retail, media, and other data-driven industries. You'll work across cloud platforms such as AWS and GCP to build end-to-end data and application pipelines. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL workflows using Apache Spark, Apache Airflow, and cloud platforms (AW...

Posted 3 months ago


6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Chennai

Work from Office

Interested candidates can also apply with Sanjeevan Natarajan - 94866 21923 sanjeevan.natarajan@careernet.in Role & responsibilities: Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms. End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks. Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets. Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, catalo...

Posted 3 months ago


2.0 - 7.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Looking for an AWS & DevOps trainer to take 1-hour daily virtual classes (Mon-Fri). Should cover AWS services, DevOps tools (Jenkins, Docker, K8s, etc.), give hands-on tasks, guide on interviews & certs, and support doubt sessions.

Posted 3 months ago


4.0 - 8.0 years

4 - 8 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage...

Posted 3 months ago


6.0 - 10.0 years

0 Lacs

Kolkata, West Bengal

On-site

You must have knowledge of Azure Datalake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with time-series data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage. Knowledge of Palantir is required. You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in using statistical computer languages like Python/PySpark, Pandas,...

Posted 3 months ago


5.0 - 10.0 years

9 - 13 Lacs

Noida

Work from Office

We are looking for a skilled AI/ML Ops Engineer to join our team to bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale. This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable. Key Responsibilities: Design and implement ML pipelines for model training, validation, testing, and deployment. Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar. Deploy machine learning models to production environments (cloud). Monitor model performance, drift, and data q...
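The pipeline stages this listing names (train, validate, deploy, monitor) can be sketched in a few lines of plain Python. This is a toy illustration with invented names, data, and thresholds, not any team's real MLflow/Kubeflow/Airflow setup:

```python
# Toy ML pipeline: each stage is a plain function, and a runner chains
# them with a quality gate before "deployment". Hypothetical stand-in
# for what MLflow/Kubeflow/Airflow DAGs formalize in production.

def train(data):
    # Pretend the "model" is just the mean of the training data.
    return {"mean": sum(data) / len(data)}

def validate(model, holdout):
    # Score = mean absolute error of predicting the training mean.
    return sum(abs(x - model["mean"]) for x in holdout) / len(holdout)

def run_pipeline(train_data, holdout, max_mae=2.0):
    model = train(train_data)
    mae = validate(model, holdout)
    deployed = mae <= max_mae  # quality gate before deployment
    return {"model": model, "mae": mae, "deployed": deployed}

result = run_pipeline([1, 2, 3], [2, 2], max_mae=2.0)
```

The gate in `run_pipeline` is the part the listing's "monitor model performance and drift" duties generalize: the same comparison runs continuously in production, not just at deploy time.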

Posted 3 months ago


5.0 - 7.0 years

15 - 30 Lacs

Gurugram

Remote

Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS. Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions. Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks. Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs. Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency. Ensure data quality, security, and governance across all systems. Communicate technical solutions clearly and effectively with both technical and non-technical stakeh...
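As a rough illustration of the ingestion/transformation work such roles describe: the snippet below applies the kind of cleaning logic a Glue or Lambda job would run, using only the standard library. The CSV sample and field names are invented:

```python
import csv
import io

# Minimal "transform" stage of an ETL job, stdlib only. In practice the
# same logic would run inside an AWS Glue/PySpark or Lambda job reading
# from S3; this sample data is hypothetical.
RAW = "order_id,amount\n1, 10.5 \n2,\n3,4.0\n"

def transform(raw_csv):
    rows = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:  # drop records failing a basic quality check
            continue
        out.append({"order_id": int(row["order_id"]),
                    "amount": float(amount)})
    return out

records = transform(RAW)
```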

Posted 3 months ago


14.0 - 18.0 years

15 - 25 Lacs

Hyderabad

Work from Office

15+ yrs in designing distributed systems & 10+ yrs in building data lakes. Expert in AWS, Python, PySpark, ETL, serverless, MongoDB. Skilled in data governance, RAG, NoSQL, Parquet/Iceberg, and AWS AI services like Comprehend & Entity Resolution.

Posted 3 months ago


3.0 - 7.0 years

3 - 7 Lacs

Pune, Maharashtra, India

On-site

Job Description Responsibilities Be able to lead an effort to design, architect and write software components. Be able to independently handle activities related to builds and deployments. Create design documentation for new software development and subsequent versions. Identify opportunities to improve and optimize applications. Diagnose complex developmental & operational problems and recommend upgrades & improvements at a component level. Collaborate with global stakeholders and business partners for product delivery. Follow company software development processes and standards. Work on POC or guide the team members. Unblock the team members from technical and solutioning perspective. Know...

Posted 3 months ago


1.0 - 5.0 years

0 Lacs

Karnataka

On-site

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. In this role, you should have developed/worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud database, cloud data warehousing, and Data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, l...

Posted 3 months ago


10.0 - 15.0 years

30 - 40 Lacs

Bengaluru

Hybrid

We are looking for a Cloud Data Engineer with strong hands-on experience in data pipelines, cloud-native services (AWS), and modern data platforms like Snowflake or Databricks. Alternatively, we're open to Data Visualization Analysts with strong BI experience and exposure to data engineering or pipelines. You will collaborate with technology and business leads to build scalable data solutions, including data lakes, data marts, and virtualization layers using tools like Starburst. This is an exciting opportunity to work with modern cloud tech in a dynamic, enterprise-scale financial services environment. Key Responsibilities: Design and develop data pipelines for structured/unstructured data i...

Posted 3 months ago


10.0 - 15.0 years

35 - 100 Lacs

Bengaluru

Work from Office

Senior AWS Administrator Req number: R5740 Employment type: Full time Worksite flexibility: Remote Who we are CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise. Job Summary We are seeking an experienced Senior AWS Administrator to join our ...

Posted 3 months ago


6.0 - 8.0 years

5 - 7 Lacs

Mumbai, Maharashtra, India

On-site

Key Responsibilities: Design, architect, and implement end-to-end big data solutions using MapR , Apache Hadoop , and associated ecosystem tools (e.g., Hive, HBase, Spark, Kafka). Lead data platform modernization efforts including architecture reviews, platform upgrades, and migrations. Collaborate with data engineers, data scientists, and application teams to gather requirements and build scalable, secure data pipelines. Define data governance, security, and access control strategies in the MapR ecosystem. Optimize performance of distributed systems, including storage and compute workloads. Guide teams on best practices in big data development, deployment, and maintenance. Conduct code revi...

Posted 3 months ago


5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: EMR_Spark SME Experience: 5-10 Years Location: Bangalore Technical Skills: 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark. Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing. Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS). Solid understanding of distributed systems architecture and cluster resource management (YARN). Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena). Experience in scripting and programming languages such as Python, Scala, and Java. Knowledge of containerization and orchestration (Docker, Kubernetes) ...

Posted 3 months ago


8.0 - 13.0 years

10 - 15 Lacs

Noida

Work from Office

8+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in: Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration; Amazon S3, DataSync, EMR, and Redshift for data warehousing and analytics. Proficiency in SQL, Python, or PySpark for data processing. Experience with data modeling, partitioning strategies, and performance optimization. Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows. Strong understanding of data lake and data warehouse architectures. Excellent problem-solving and communication skills. Mandatory Competencies Beh - Communication ETL - ETL - AWS Glue Bi...
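The "partitioning strategies" these data-lake listings ask about usually mean Hive-style `key=value` prefixes on S3, which Athena, Glue, and Spark use to prune scans. A minimal sketch, with hypothetical bucket and table names:

```python
from datetime import date

# Hive-style partition paths (year=/month=/day=) are the layout
# convention Athena/Glue/Spark rely on for partition pruning.
# Bucket and table names below are invented for illustration.

def partition_key(bucket, table, d):
    return (f"s3://{bucket}/{table}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/")

path = partition_key("my-datalake", "orders", date(2024, 1, 5))
```

Zero-padding the month and day keeps prefixes lexicographically sortable, which matters for range queries over partitions.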

Posted 3 months ago


2.0 - 6.0 years

0 Lacs

Karnataka

On-site

Propel operational success with your expertise in technology support and a commitment to continuous improvement. As a Technology Support II team member within JPMorgan Chase, you will play a vital role in ensuring the operational stability, availability, and performance of our production application flows. You will be responsible for troubleshooting, maintaining, identifying, escalating, and resolving production service interruptions for all internally and externally developed systems, thereby supporting a seamless user experience and fostering a culture of continuous improvement. You will analyze and troubleshoot production application flows to ensure end-to-end application or infrastructur...

Posted 3 months ago


8.0 - 12.0 years

12 - 18 Lacs

Noida

Work from Office

General Roles & Responsibilities: Technical Leadership: Demonstrate leadership, and ability to guide business and technology teams in adoption of best practices and standards Design & Development: Design, develop, and maintain robust, scalable, and high-performance data estate Architecture: Architect and design robust data solutions that meet business requirements & include scalability, performance, and security. Quality: Ensure the quality of deliverables through rigorous reviews, and adherence to standards. Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement. Collaboration: Work closely with system architects, ...

Posted 3 months ago


5.0 - 10.0 years

6 - 11 Lacs

Noida

Work from Office

5+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in: Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration; Amazon S3, DataSync, EMR, and Redshift for data warehousing and analytics. Proficiency in SQL, Python, or PySpark for data processing. Experience with data modeling, partitioning strategies, and performance optimization. Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows. Strong understanding of data lake and data warehouse architectures. Excellent problem-solving and communication skills. Mandatory Competencies Beh - Communication ETL - ETL - AWS Glue Bi...

Posted 3 months ago


6.0 - 11.0 years

11 - 16 Lacs

Gurugram

Work from Office

Project description We are looking for a star Python Developer who is not afraid of work and challenges! Having become a partner to a well-known financial institution, we are assembling a team of professionals with a wide range of skills to successfully deliver business value to the client. Responsibilities Analyse existing SAS DI pipelines and SQL-based transformations. Translate and optimize SAS SQL logic into Python code using frameworks such as PySpark. Develop and maintain scalable ETL pipelines using Python on AWS EMR. Implement data transformation, cleansing, and aggregation logic to support business requirements. Design modular and reusable code for distributed data processing tasks on ...
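To show the flavor of the SAS-SQL-to-Python translation work described here, below is a GROUP BY/SUM redone in plain stdlib Python; in the actual role this would be written against the PySpark DataFrame API (e.g. `df.groupBy(...).agg(...)`) on EMR. Data and names are invented:

```python
from collections import defaultdict

# A SAS SQL "SELECT acct, SUM(amt) ... GROUP BY acct" expressed in
# plain Python, as a first step before rewriting it with the PySpark
# DataFrame API. Sample rows are hypothetical.
rows = [("A", 10), ("B", 5), ("A", 7)]

def sum_by_key(rows):
    totals = defaultdict(int)
    for key, amt in rows:
        totals[key] += amt
    return dict(totals)

totals = sum_by_key(rows)
```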

Posted 3 months ago


7.0 - 9.0 years

7 - 17 Lacs

Pune

Remote

Requirements for the candidate: The role will require deep knowledge of data engineering techniques to create data pipelines and build data assets. At least 4+ years of strong hands-on programming experience with PySpark / Python / Boto3, including Python frameworks and libraries, following Python best practices. Strong experience in code optimization using Spark SQL and PySpark. Understanding of code versioning, Git repositories, and JFrog Artifactory. AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, and CloudFormation, and the ability to explain the benefits of each. Code refactoring of legacy codebases: clean, modernize, and improve readability and maintainability. Unit Tests/TDD: Write tests...

Posted 3 months ago


8.0 - 13.0 years

30 - 40 Lacs

Noida, Hyderabad

Hybrid

Job Title: Data Engineer Location: Noida / Hyderabad (Hybrid, 3 days/week) Shift Timings: 2:30 PM to 10:30 PM IST Start Date: Immediate / July 2025 Experience: 8+ years Tech Stack: AWS, Python, PySpark, EMR, Athena, Glue, Lambda, EC2, S3, Git, Data Warehousing, Parquet, Avro, ORC Job Description: We're hiring experienced Data Engineers with a strong background in building scalable data pipelines using AWS and PySpark. You'll work with distributed systems, big data tools, and analytics services to deliver solutions for high-volume data processing. Key Responsibilities: Build and optimize PySpark applications Work with AWS services: EMR, Glue, Lambda, Athena, etc. Implement data modeling...

Posted 3 months ago


3.0 - 5.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing...

Posted 3 months ago


4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developin...

Posted 3 months ago


5.0 - 10.0 years

0 - 1 Lacs

Hyderabad, Pune, Ahmedabad

Hybrid

Contractual (Project-Based) Notice Period: Immediate - 15 Days Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u Resume: shweta.soni@panthsoftech.com

Posted 3 months ago


12.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Work Location: Bangalore Experience: 10+ yrs Required Skills: Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues; Experience with batch job scheduling and identifying data/job dependencies; Experience with data engineering using the AWS platform and Python; Familiar with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway; Familiar with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and Shell Script. Thanks & Regards Suganya R suganya@spstaffing.in
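"Identifying data/job dependencies", as this posting puts it, typically reduces to topologically sorting the job graph; Python's standard-library graphlib (3.9+) does this directly. The job names below are hypothetical:

```python
from graphlib import TopologicalSorter

# Batch job scheduling: each job maps to the set of jobs it depends on.
# TopologicalSorter.static_order() yields a valid run order in which
# every job appears after all of its dependencies.
deps = {
    "load_raw": set(),
    "transform": {"load_raw"},
    "publish": {"transform", "load_raw"},
}

order = list(TopologicalSorter(deps).static_order())
```

Schedulers like Airflow and AWS Step Functions formalize the same idea, adding retries, triggers, and per-job state on top of the dependency graph.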

Posted 3 months ago
