1333 AWS Glue Jobs - Page 39

JobPe aggregates results for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS Glue
Good to have skills: Python (Programming Language), Amazon Web Services (AWS), Machine Learning
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutio...

Posted 4 months ago

4.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Responsibilities:
* Design, develop & maintain data pipelines using ETL, Python, PySpark & AWS tools.
* Collaborate with cross-functional teams on project requirements & deliverables.
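
To give a feel for the pipeline work this listing describes, here is a minimal PySpark batch-transform sketch; the bucket names, paths, and columns are hypothetical placeholders, not details from the posting.

# Minimal PySpark batch ETL sketch: read raw CSV from S3, clean it, write Parquet.
# All bucket names, paths, and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)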

Posted 4 months ago

5.0 - 10.0 years

12 - 18 Lacs

Chennai

Work from Office

Sr. ETL Developer, 5-10 yrs
Client: US (1-10 pm shift)
Location: Chennai/Madurai (Hybrid)
Third-party payroll: Smiligence
Skills: Talend, Informatica, SSIS, PostgreSQL, AWS (S3, Glue, RDS, Redshift), Linux, Shell/Python, Airflow, Git, Quilt, NiFi, Databricks

Posted 4 months ago

6.0 - 9.0 years

10 - 20 Lacs

Pune

Hybrid

Pattern values data and the engineering required to take full advantage of it. As a Senior Data Engineer at Pattern, you will work on business problems that have a huge impact on how the company maintains its competitive edge. Essential Duties and Responsibilities: Develop, deploy, and support real-time, automated, scalable data streams from a variety of sources into the data lake or data warehouse. Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures. Collaborate with technology teams...

Posted 4 months ago

5.0 - 10.0 years

20 - 25 Lacs

Gurugram

Work from Office

Role & responsibilities
Key Responsibilities:
* Design, build, and maintain scalable and efficient data pipelines to move data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python
* Implement and manage ETL/ELT processes to ensure seamless data integration and transformation
* Ensure information security and compliance with data governance standards
* Maintain and enhance data environments, including data lakes, warehouses, and distributed processing systems
* Utilize version control systems (e.g., GitHub) to manage code and collaborate effectively with the team
Primary Skills: Enhancements, new development, defect resolution, and production support of ETL devel...
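
As a rough illustration of the AWS Glue work mentioned above, here is a skeleton Glue (PySpark) job that reads a cataloged table and writes Parquet to S3; the database, table, and bucket names are hypothetical, and a real Snowflake/SaaS integration would additionally need a Glue connection or JDBC driver.

# Skeleton AWS Glue (PySpark) job: read a cataloged source table, write Parquet to S3.
# Database, table, and bucket names are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_source"
)

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-staging-bucket/export/"},
    format="parquet",
)

job.commit()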

Posted 4 months ago

5.0 - 10.0 years

0 - 0 Lacs

Hyderabad

Remote

Data Engineering / Big Data, part time, Work from Home (anywhere in the world). Warm greetings from Excel Online Classes. We are a team of industry professionals running an institute that provides comprehensive online IT training, technical support, and development services. We are currently seeking Data Engineering / Big Data experts who are passionate about technology and can collaborate with us in their free time. If you're enthusiastic, committed, and ready to share your expertise, we would love to work with you! We're hiring for the following services: Online Training, Online Development, Online Technical Support, Conducting Online Interviews, Corporate Training, Proof of Concept (POC) Projects, Res...

Posted 4 months ago

12.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Work Location: Bangalore
Experience: 10+ yrs
Required Skills: Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues; experience with batch job scheduling and identifying data/job dependencies; experience with data engineering using the AWS platform and Python; familiarity with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway; familiarity with software DevOps CI/CD tools such as Git, Jenkins, Linux, and shell scripting.
Thanks & Regards, Suganya R, suganya@spstaffing.in
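
As a small illustration of the Lambda/SQS/S3 combination named in this listing, here is a hypothetical Lambda handler for an SQS-triggered batch step that lands each message in S3; the bucket name and key layout are placeholders.

# Hypothetical Lambda handler for an SQS trigger: write each message body to S3 as JSON.
# Bucket name and key prefix are placeholders.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-landing-bucket"

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])          # SQS message body
        key = f"inbound/{record['messageId']}.json"   # one object per message
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(payload))
    return {"processed": len(records)}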

Posted 4 months ago

4.0 - 9.0 years

6 - 11 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developin...
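
To illustrate the kind of database-to-lake ingestion described here, a minimal PySpark JDBC sketch follows; the connection URL, credentials, table, and output path are hypothetical, and in practice credentials would come from a secrets manager.

# Sketch: ingest a relational table over JDBC with Spark and land it on S3 as Parquet.
# URL, credentials, table, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc_ingest").getOrCreate()

events = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/appdb")
    .option("dbtable", "public.events")
    .option("user", "reader")
    .option("password", "REPLACE_ME")  # use a secrets manager in real jobs
    .load()
)

events.write.mode("overwrite").parquet("s3://example-landing-bucket/events/")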

Posted 4 months ago

7.0 - 10.0 years

25 - 30 Lacs

Navi Mumbai

Work from Office

We are looking for a highly skilled Data Catalog Engineer to join our team at Serendipity Corporate Services, with 6-8 years of experience in the IT Services & Consulting industry. Roles and Responsibility Design and implement data cataloging solutions to meet business requirements. Develop and maintain large-scale data catalogs using various tools and technologies. Collaborate with cross-functional teams to identify and prioritize data needs. Ensure data quality and integrity by implementing data validation and testing procedures. Optimize data catalog performance by analyzing query logs and identifying improvement areas. Provide technical support and training to end-users on data catalog u...

Posted 4 months ago

5.0 - 10.0 years

20 - 27 Lacs

Pune

Hybrid

Job Description Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get an opportunity to: Design and implement data engineering solutions that are scalable, reliable and secure on the Cloud environment; understand and translate business needs into data engineering solutions; build large-scale data pipelines that can handle big data sets using distributed data processing techniques that support the efforts of the data scien...

Posted 4 months ago

7.0 - 10.0 years

25 - 35 Lacs

Hyderabad

Work from Office

Job Summary As a Senior Data Engineer, you will play a key role in developing and maintaining the databases and scripts that power Creditsafe's products and websites. You will be responsible for handling large datasets, designing scalable data pipelines, and ensuring seamless data processing across cloud environments. This role provides an excellent opportunity to contribute to an exciting, fast-paced, and rapidly expanding organization. Key Responsibilities: Develop and maintain scalable, metadata-driven, event-based distributed data processing platforms. Design and implement data solutions using Python, Airflow, Redshift, DynamoDB, AWS Glue, and S3. Build and optimize APIs to securely handle...
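
As a rough sketch of how the Airflow/Glue/Redshift stack in this listing typically fits together, here is a minimal Airflow 2.x-style DAG that starts a (hypothetical) Glue job and then runs a placeholder validation task; the DAG id, job name, and schedule are illustrative only.

# Minimal Airflow 2.x DAG sketch: trigger a hypothetical Glue job, then validate the load.
# DAG id, Glue job name, and schedule are illustrative placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def start_glue_job():
    glue = boto3.client("glue")
    glue.start_job_run(JobName="example-nightly-load")

def check_row_counts():
    # Placeholder for a data-quality check against Redshift/DynamoDB.
    pass

with DAG(
    dag_id="nightly_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_glue = PythonOperator(task_id="run_glue_job", python_callable=start_glue_job)
    validate = PythonOperator(task_id="validate_load", python_callable=check_row_counts)
    run_glue >> validate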

Posted 4 months ago

7.0 - 9.0 years

25 - 30 Lacs

Navi Mumbai

Work from Office

Key Responsibilities: Lead the end-to-end implementation of a data cataloging solution within AWS (preferably AWS Glue Data Catalog or third-party tools like Apache Atlas, Alation, Collibra, etc.). Establish and manage metadata frameworks for structured and unstructured data assets in the data lake and data warehouse environments. Integrate the data catalog with AWS-based storage solutions such as S3, Redshift, Athena, Glue, and EMR. Collaborate with Data Governance/BPRG/IT project teams to define metadata standards, data classifications, and stewardship processes. Develop automation scripts for catalog ingestion, lineage tracking, and metadata updates using Python, Lambda, PySpark or Glue/...
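
To make the catalog-automation idea concrete, here is a small, hypothetical audit script against the AWS Glue Data Catalog that lists tables missing a classification parameter; the database name and parameter key are assumptions, not details from the posting.

# Hypothetical Glue Data Catalog audit: list tables missing a "data_classification" parameter.
# Database name and parameter key are placeholders.
import boto3

glue = boto3.client("glue")

def untagged_tables(database):
    missing = []
    paginator = glue.get_paginator("get_tables")
    for page in paginator.paginate(DatabaseName=database):
        for table in page["TableList"]:
            if "data_classification" not in table.get("Parameters", {}):
                missing.append(table["Name"])
    return missing

if __name__ == "__main__":
    for name in untagged_tables("example_lake_db"):
        print(name)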

Posted 4 months ago

4.0 - 8.0 years

6 - 15 Lacs

Hyderabad

Remote

Role & responsibilities: Design and develop scalable data pipelines. Integrate and transform financial data in flat file, JSON and XML formats. Create optimised queries using Hive SQL, PostgreSQL and other data tools. Preferred candidate profile: 4+ years of experience in Databricks using Python and Scala; hands-on experience with the Azure cloud platform.

Posted 4 months ago

3.0 - 5.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized high qu...

Posted 4 months ago

5.0 - 10.0 years

10 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals. Key Responsibilities: Deliver end-to-end training on the Data Engineering with AWS curriculum, including:
- Oracle SQL and ANSI SQL
- Data Warehousing Concepts, ETL & ELT
- Da...

Posted 4 months ago

6.0 - 11.0 years

0 - 2 Lacs

Chennai

Work from Office

Requirement 1
Skills: AWS Redshift dev with Apache Airflow
Role: Senior Data Engineer (AWS Redshift & Apache Airflow)
Location: Chennai
Experience Required: 8+ Years
Work Mode: Hybrid
Job Summary: We are seeking a highly experienced Senior Data Engineer to lead the design, development, and optimization of scalable data pipelines using AWS Redshift and Apache Airflow. The ideal candidate will have deep expertise in cloud-based data warehousing, workflow orchestration, and ETL processes, with a strong background in SQL and Python.
Key Responsibilities: Design, build, and maintain robust ETL/ELT pipelines using Apache Airflow. Integrate da...
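
Since an Airflow DAG sketch already appears earlier on this page, here is the Redshift side instead: a minimal Python sketch that issues a COPY from S3 through the Redshift Data API; the cluster, database, user, IAM role, and S3 path are hypothetical placeholders.

# Sketch: load an S3 Parquet extract into Redshift via the Redshift Data API.
# Cluster, database, user, IAM role ARN, and S3 path are hypothetical placeholders.
import boto3

client = boto3.client("redshift-data")

COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-staging-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=COPY_SQL,
)
print("Statement id:", response["Id"])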

Posted 4 months ago

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses. Your Role: Should have developed or worked on at least one Gen AI project. Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP. Experience with cloud storage, cloud database, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3. Has good knowledge of cloud compute services and load balancing. Has good knowledg...

Posted 4 months ago

1.0 - 3.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Key Responsibilities: Create and maintain optimal data pipeline architecture. Assemble large, complex data sets that meet functional/non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. using Python/open source technologies. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud database technologies. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support thei...

Posted 4 months ago

15.0 - 20.0 years

10 - 14 Lacs

Noida

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS BigData
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategi...

Posted 4 months ago

6.0 - 10.0 years

15 - 30 Lacs

Kolkata, Mumbai (All Areas)

Work from Office

Experience: 6 to 10 Years
Job Locations: Kolkata & Mumbai
Notice Period: 30 Days
Job Role: ETL Lead with strong AWS expertise (AWS Glue, Lambda, RDS, e.g. MySQL)
Primary Responsibilities:
• 6 to 9 years of experience in data engineering or ETL development.
• Proven expertise in AWS Glue, Lambda, S3, and RDS (MySQL) for ETL workflows.
• Strong SQL and Python/PySpark development skills.
• Solid understanding of data warehousing concepts and data modeling (star/snowflake schemas).
• Experience delivering data solutions consumed by Power BI dashboards.
• Ability to lead and manage a small team of developers.
• Understanding of data modeling concepts and dimensional models (star/snowflake sch...
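
As a small, hypothetical illustration of the star-schema modeling this listing calls for, the PySpark sketch below joins a cleaned orders feed to two dimension tables to produce a fact table; all paths and column names are assumed.

# Sketch: assemble a simple star-schema fact table with PySpark.
# All S3 paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("build_fact_orders").getOrCreate()

orders = spark.read.parquet("s3://example-curated-bucket/orders/")
dim_customer = spark.read.parquet("s3://example-curated-bucket/dim_customer/")
dim_product = spark.read.parquet("s3://example-curated-bucket/dim_product/")

fact_orders = (
    orders.join(dim_customer, "customer_id", "left")
          .join(dim_product, "product_id", "left")
          .select("order_id", "customer_key", "product_key", "order_date", "amount")
)

fact_orders.write.mode("overwrite").parquet("s3://example-curated-bucket/fact_orders/")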

Posted 4 months ago

8.0 - 13.0 years

7 - 14 Lacs

Pune, Mumbai (All Areas)

Hybrid

Job Title: Lead Data Engineer
Location: Mumbai / Pune
Experience: 8+ yrs
Job Summary: We are seeking a technically strong and delivery-focused Lead Engineer to support and enhance enterprise-grade data and application products under the Durables model. The ideal candidate will act as the primary technical interface for the client, ensuring high system availability, performance, and continuous improvement. This role requires a hands-on technologist with strong team management experience, cloud (AWS) expertise, and excellent communication skills to handle client interactions and drive technical decisions.
Key Responsibilities: Support & Enhancement Leadership: Act as the primary technical lead ...

Posted 4 months ago

5.0 - 8.0 years

7 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). What do you need for this opportunity? Must have skills required: Data Governance, Lakehouse architecture, Medallion Architecture, Azure DataBricks, Azure Synapse, Data Lake Storage, Azure Data Factory. Intelebee LLC is looking for a Data Engineer: We are seeking a skilled and hands-on Cloud Data Engineer with 5-8 years of experience to drive end-to-end data engineering solutions. The ideal candidate will have a deep understanding of dimensional modeling, data warehousing (DW), Lakehouse architecture, and the Medallion architecture. This role will focus on leveraging the Azure/AWS ecosystem to build scalable, efficient, and s...

Posted 4 months ago

7.0 - 8.0 years

9 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python. MatchMove is looking for a Technical Lead - Data Platform. As Technical Lead - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and obse...
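
For the stream-processing piece of this stack, here is a minimal, hypothetical Kinesis producer in Python; the stream name, partition key, and event shape are assumptions for illustration.

# Hypothetical Kinesis producer: push JSON events that a downstream Glue/S3 pipeline could land.
# Stream name, partition key, and event fields are placeholders.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_event(event, stream_name="example-events-stream"):
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("account_id", "unknown")),
    )

publish_event({"account_id": 42, "type": "wallet_topup", "amount": 100.0})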

Posted 4 months ago

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases. Role & responsibilities. Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full own...
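
To illustrate the Athena side of this role (CTEs and window functions over a gold-zone table), here is a short sketch using boto3; the database, table, and results location are hypothetical.

# Sketch: run an Athena query that uses a CTE and a window function, via boto3.
# Database, table, and output location are hypothetical placeholders.
import boto3

athena = boto3.client("athena")

QUERY = """
    WITH ranked AS (
        SELECT order_id,
               customer_id,
               amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
        FROM gold.orders
    )
    SELECT order_id, customer_id, amount
    FROM ranked
    WHERE rn = 1
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "gold"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution id:", response["QueryExecutionId"])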

Posted 4 months ago
