
572 Glue Jobs - Page 12

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

8 - 13 Lacs

Hyderabad

Work from Office

P2-C3-STS JD Data Warehouse. In this role you will be part of a team developing solutions that enable the business to leverage data as an asset at the bank. As a Lead ETL Developer, you will lead teams to develop, maintain, and enhance code, ensuring all IT SDLC processes are documented and practiced, and working closely with multiple technology teams across the enterprise. The Lead ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Key Responsibilities:
- Translate requirements and data mapping documents into a technical design.
- Develop, enhance, and maintain code following best practices and standards.
- Create and execute unit test plans; support regression and system testing efforts.
- Debug and resolve issues found during testing and/or production.
- Communicate status, issues, and blockers to the project team.
- Support continuous improvement by identifying and pursuing opportunities.

Basic Qualifications:
- Bachelor's degree or military experience in a related field (preferably computer science).
- At least 5 years of experience in ETL development within a data warehouse.
- Deep understanding of enterprise data warehousing best practices and standards.
- Strong software engineering experience designing, developing, and operating robust, highly scalable cloud infrastructure services.
- Strong experience with Python/PySpark, DataStage ETL, and SQL development.
- Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably with Snowflake.
- Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies.
- Understanding of Authentication & Authorization Services and Identity & Access Management.
- Strong communication and interpersonal skills; strong organization skills and the ability to work independently as well as with a team.

Preferred Qualifications:
- AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional.
- Experience defining future-state roadmaps for data warehouse applications.
- Experience leading teams of developers within a project.
- Experience in the financial services (banking) industry.

Mandatory Skills: ETL/data warehouse concepts; AWS, Glue; SQL; Python; Snowflake; CI/CD tools (Jenkins, GitHub). Secondary Skills: Zena, PySpark, Infogix.
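
For a flavor of the hands-on work this posting describes, here is a minimal AWS Glue PySpark job sketch, assuming hypothetical source/target S3 paths and a hypothetical customer_id key column:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Job parameters supplied by the Glue trigger; names are illustrative.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path"])

glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw CSV files from S3.
raw = spark.read.option("header", "true").csv(args["source_path"])

# Transform: basic cleansing (dedupe and drop rows missing the business key).
cleaned = raw.dropDuplicates().na.drop(subset=["customer_id"])  # hypothetical key

# Load: write the curated dataset back to S3 as Parquet.
cleaned.write.mode("overwrite").parquet(args["target_path"])

job.commit()
```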

Posted 1 month ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Bengaluru

Work from Office

6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.

Posted 1 month ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Hyderabad

Work from Office

10+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.

Posted 1 month ago

Apply

6.0 - 11.0 years

3 - 7 Lacs

Pune

Work from Office

Experience: 7-9 years. Must have experience with AWS services such as S3, Lambda, Airflow, Glue, Athena, Lake Formation, Step Functions, etc. Experience programming in Java and Python. Experience performing data analysis (not data science) on AWS platforms.
Nice to have:
- Experience with big data technologies (Teradata, Snowflake, Spark, Redshift, Kafka, etc.)
- Experience with data management processes on AWS is a huge plus
- Experience implementing complex ETL transformations on AWS using Glue
- Familiarity with relational database environments (Oracle, Teradata, etc.), leveraging databases, tables/views, stored procedures, agent jobs, etc.
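
The Airflow-plus-Glue combination this posting asks for typically looks something like the sketch below: a minimal DAG using the Amazon provider's GlueJobOperator, where the DAG id, job name, script location, and schedule are all hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# A minimal daily pipeline; all names below are hypothetical placeholders.
with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    run_glue_job = GlueJobOperator(
        task_id="run_sales_transform",
        job_name="sales-transform-job",
        script_location="s3://etl-scripts/sales_transform.py",
        region_name="ap-south-1",
    )
```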

Posted 1 month ago

Apply

8.0 - 13.0 years

3 - 7 Lacs

Hyderabad

Work from Office

P1-C3-STS. Seeking a developer with good experience in Athena, Python, Glue, Lambda, DMS, RDS, Redshift, CloudFormation, and other AWS serverless resources. The developer can optimize data models for performance and efficiency, and is able to write SQL queries to support data analysis and reporting. Responsibilities include designing, implementing, and maintaining the data architecture for all AWS data services; working with stakeholders to identify business needs and requirements for data-related projects; and designing and implementing ETL processes to load data into the data warehouse.
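
Writing SQL to support analysis and reporting on this stack often means driving Athena from Python; a minimal boto3 sketch, where the database, table, and output bucket are hypothetical:

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Submit a reporting query; database, table, and output bucket are hypothetical.
execution = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes (production code would add backoff and timeouts).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```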

Posted 1 month ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Minimum 6 years of hands-on experience in data engineering or big data development roles. Strong programming skills in Python and experience with Apache Spark (PySpark preferred). Proficient in writing and optimizing complex SQL queries. Hands-on experience with Apache Airflow for orchestration of data workflows. Deep understanding and practical experience with AWS services:
- Data Storage & Processing: S3, Glue, EMR, Athena
- Compute & Execution: Lambda, Step Functions
- Databases: RDS, DynamoDB
- Monitoring: CloudWatch
Experience with distributed data processing, parallel computing, and performance tuning. Strong analytical and problem-solving skills. Familiarity with CI/CD pipelines and DevOps practices is a plus.
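
On the performance-tuning side, a common PySpark pattern is broadcasting a small dimension table to avoid a shuffle join and aligning partitions with the write layout; a minimal sketch with hypothetical S3 paths and column names:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-tuning-sketch").getOrCreate()

# Paths are hypothetical placeholders.
events = spark.read.parquet("s3://datalake/events/")        # large fact table
countries = spark.read.parquet("s3://datalake/countries/")  # small dimension table

# Broadcast the small dimension so the join avoids an expensive shuffle.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

# Repartition by the write key so output files align with the partition layout.
(enriched
    .repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://datalake/curated/events_enriched/"))
```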

Posted 1 month ago

Apply

6.0 - 11.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Good knowledge of data warehouse tools to understand the existing system. The candidate should also have experience with data lakes, Teradata, and Snowflake, and should be proficient in Terraform.
- 8-10 years of experience designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of data lake concepts and able to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and CI/CD pipelines.

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Gurugram

Work from Office

Data Engineer. Location: PAN India. Work mode: Hybrid. Work timing: 2 PM to 11 PM. Primary skill: Data Engineer.

Responsibilities:
- Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark.
- Design and implement a customizable data processing framework using Python/PySpark, capable of handling diverse scenarios and evolving data processing requirements.
- Implement data pipelines for data ingestion, transformation, and extraction, leveraging AWS cloud services.
- Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQS, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
- Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
- Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
- Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Must have:
- Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases.
- Strong SQL experience, demonstrating the ability to write complex queries from scratch; strong working experience in Redshift along with other SQL databases.
- Strong scripting experience, with the ability to build intricate data pipelines using AWS serverless architecture, and a complete understanding of building an end-to-end data pipeline.

Nice to have:
- Strong understanding of Kinesis, Kafka, and CDK; experience with Kafka and ECS.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
- Experience in Node.js and CDK.
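
A "metadata-driven ingestion framework" of the kind described above can be reduced to a small dispatch loop; a minimal sketch, assuming an invented inline metadata format (real deployments would keep this in DynamoDB, RDS, or a JSON file in S3):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# Hypothetical ingestion metadata; the schema of these entries is invented
# for illustration.
SOURCES = [
    {"name": "orders", "format": "csv", "path": "s3://raw/orders/",
     "options": {"header": "true"}, "target": "s3://curated/orders/"},
    {"name": "customers", "format": "json", "path": "s3://raw/customers/",
     "options": {}, "target": "s3://curated/customers/"},
]

def ingest(source: dict) -> None:
    """Read one source as described by its metadata and land it as Parquet."""
    df = (spark.read
               .options(**source["options"])
               .format(source["format"])
               .load(source["path"]))
    df.write.mode("overwrite").parquet(source["target"])

for source in SOURCES:
    ingest(source)  # new sources are onboarded by adding metadata, not code
```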

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

P2-C3-STS JD. In this role you will be part of a team developing solutions that enable the business to leverage data as an asset at the bank. The Senior ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Responsibilities:
- Translate requirements and data mapping documents into a technical design.
- Develop, enhance, and maintain code following best practices and standards.
- Create and execute unit test plans; support regression and system testing efforts.
- Debug and resolve issues found during testing and/or production.
- Communicate status, issues, and blockers to the project team.
- Support continuous improvement by identifying and pursuing opportunities.

Basic Qualifications:
- Bachelor's degree or military experience in a related field (preferably computer science).
- At least 5 years of experience in ETL development within a data warehouse.
- Deep understanding of enterprise data warehousing best practices and standards.
- Strong software engineering experience designing, developing, and operating robust, highly scalable cloud infrastructure services.
- Strong experience with Python/PySpark, DataStage ETL, and SQL development.
- Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably with Snowflake.
- Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies.
- Understanding of Authentication & Authorization Services and Identity & Access Management.
- Strong communication and interpersonal skills; strong organization skills and the ability to work independently as well as with a team.

Preferred Qualifications:
- AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional.
- Experience defining future-state roadmaps for data warehouse applications.
- Experience leading teams of developers within a project.
- Experience in the financial services (banking) industry.

Mandatory Skills: ETL/data warehouse concepts; Snowflake; AWS, Glue; CI/CD tools (Jenkins, GitHub); Python; DataStage. Secondary Skills: Zena, PySpark, Infogix.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Mumbai

Work from Office

Senior Developer with 8 to 10 years of experience in Python and PySpark, along with hands-on experience with AWS data components such as AWS Glue and Athena. Good knowledge of data warehouse tools to understand the existing system. The candidate should also have experience with data lakes, Teradata, and Snowflake, and should be proficient in Terraform.
- 8-10 years of experience designing and developing Python and PySpark applications.
- Creating or maintaining data lake solutions using Snowflake, Teradata, and other data warehouse tools.
- Good knowledge of and hands-on experience with AWS Glue, Athena, etc.
- Sound knowledge of data lake concepts and able to work on data migration projects.
- Providing ongoing support and maintenance for applications, including troubleshooting and resolving issues.
- Expertise in practices like Agile, peer reviews, and CI/CD pipelines.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Experience:
- 8 years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in PySpark for distributed data processing and transformation.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Deep understanding of ETL concepts and best practices.
- Familiarity with AWS Glue (ETL jobs, Data Catalog, and Crawlers).
- Experience building and maintaining data pipelines with AWS Data Pipeline or similar orchestration tools.
- Familiarity with AWS S3 for data storage and management, including file formats (CSV, Parquet, Avro).
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with data warehousing and big data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with data lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and build data pipelines: design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL development: develop and maintain extract, transform, and load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data workflow automation: build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data integration: work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and scaling: optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
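
Tying together Glue's Data Catalog, Crawlers, and the file formats listed above often looks like this minimal job sketch (the catalog database, table, and output path are hypothetical; the table is assumed to have been populated by a Crawler over raw CSV files in S3):

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (names are hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="clickstream_csv"
)

# Convert CSV rows to compressed, columnar Parquet for cheaper Athena scans.
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://curated-zone/clickstream/"},
    format="parquet",
)

job.commit()
```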

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Gurugram

Work from Office

1. Experience working with AWS cloud services (i.e., S3, AWS Glue, Glue Catalog, Step Functions, Lambda, EventBridge, etc.)
2. Must have hands-on experience with DQ libraries for data quality checks
3. Proficiency in data modelling and database management
4. Strong programming skills in Python and Unix, and in ETL technologies like Informatica
5. Experience with DevOps and Agile methodology and associated toolsets (including working with code repositories)
6. Knowledge of big data technologies like Hadoop and Spark
7. Must have hands-on experience with reporting tools: Tableau, QuickSight, and MS Power BI
8. Must have hands-on experience with databases like Postgres and MongoDB
9. Experience using industry-recognised frameworks; experience with StreamSets and Kafka is preferred
10. Experience in data sourcing, including real-time data integration
11. Proficiency in Snowflake Cloud and associated data migration from on-premise to cloud, with knowledge of databases like Snowflake, Azure Data Lake, and Postgres
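
For the hands-on data-quality requirement, dedicated DQ libraries exist, but the underlying checks can also be hand-rolled in PySpark; a minimal sketch with a hypothetical input path, key column, and domain rule:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

# Path and column names are hypothetical placeholders.
df = spark.read.parquet("s3://curated-zone/customers/")
total = df.count()

# Completeness: the business key must never be null.
null_ids = df.filter(F.col("customer_id").isNull()).count()

# Uniqueness: the business key must not contain duplicates.
duplicates = total - df.select("customer_id").distinct().count()

# Validity: a simple domain rule on a status column.
invalid_status = df.filter(~F.col("status").isin("active", "inactive")).count()

failures = {"null_ids": null_ids, "duplicates": duplicates, "invalid_status": invalid_status}
failed = {name: count for name, count in failures.items() if count > 0}
if failed:
    # A production framework would publish metrics/alerts instead of raising.
    raise ValueError(f"Data quality checks failed over {total} rows: {failed}")
```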

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

P2-C2-STS. You are passionate about driving an SRE/DevSecOps mindset and culture in a fast-paced, challenging environment, where you get the opportunity to work with a spectrum of the latest tools and technologies to drive forward automation, observability, and CI/CD. You actively look to improve implemented solutions, understand the efficacy of collaboration, and work with cross-functional teams to build and improve the CI/CD pipeline and increase automation (reduce toil). As a member of this team, you possess the ability to inspire and leverage your experience to inject new knowledge and skills into an already high-performing team.
- Help identify areas of improvement, especially in observability, proactiveness, automation, and toil management.
- Take a strategic approach with clear objectives to improve system availability, optimize performance, and improve incident MTTR.
- Build and maintain reliable engineering systems using SRE and DevSecOps models, with special focus on event management (monitoring/alerts), self-healing, and reliability testing.
- Strong programming skills, with experience in API and webhook development using Dynatrace, GitHub workflows, Ansible, CDK, TypeScript/JavaScript, Python, Node.js, Ruby, PowerShell, and shell scripting.
- Strong understanding of cloud computing (AWS), the SDLC, and DevSecOps.
- Experience with CI/CD pipeline tools such as JIRA, GitHub, Bitbucket, Artifactory, Ansible, or equivalent.
- Working knowledge of Lambda, Glue, and CDK.
- Knowledge of cloud services: application integration, functions, cloud databases, data warehousing and analytics, machine learning, developer tools, and security and identity management.
- Knowledge of software development practices, concepts, and technology obtained through formal training and/or work experience; able to code in the required programming languages with minimal guidance.
- Understanding of the functional aspects and technical behavior of the underlying operating system, development environment, and deployment practices.
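
The "self-healing" event-management pattern mentioned above is often just a Lambda subscribed to an alarm topic; a minimal sketch, assuming a CloudWatch alarm delivered via SNS with the instance id carried in its dimensions:

```python
import json

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Reboot an unhealthy instance when a CloudWatch alarm fires via SNS."""
    for record in event["Records"]:
        alarm = json.loads(record["Sns"]["Message"])
        if alarm.get("NewStateValue") != "ALARM":
            continue
        # Instance id is read from the alarm's metric dimensions.
        dimensions = alarm["Trigger"]["Dimensions"]
        instance_ids = [d["value"] for d in dimensions if d["name"] == "InstanceId"]
        if instance_ids:
            ec2.reboot_instances(InstanceIds=instance_ids)
            print(f"Self-healing: rebooted {instance_ids} for alarm {alarm['AlarmName']}")
```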

Posted 1 month ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 5 year(s) of experience is required. Educational qualification: 15 years of full-time education.

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and optimize data workflows, ensuring that the data infrastructure supports the organization's analytical needs effectively.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processing workflows to enhance efficiency and performance.

Professional & Technical Skills:
- Must-have: proficiency in Apache Spark.
- Good-to-have: experience with AWS Glue.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark. Good-to-have skills: AWS Glue. Minimum 3 year(s) of experience is required. Educational qualification: 15 years of full-time education.

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data accessible and actionable for stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with data architects and analysts to design data models that meet business needs.
- Develop and maintain documentation for data processes and workflows to ensure clarity and compliance.

Professional & Technical Skills:
- Must-have: proficiency in Apache Spark.
- Good-to-have: experience with AWS Glue.
- Strong understanding of data processing frameworks and methodologies.
- Experience in building and optimizing data pipelines for performance and scalability.
- Familiarity with data warehousing concepts and best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

7.0 - 10.0 years

13 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Please note: notice period should be 0-15 days. Skills: AWS, Kafka, ETL, Glue, Lambda.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

The candidate must possess knowledge relevant to the functional area, act as a subject matter expert in providing advice in the area of expertise, and focus on continuous improvement for maximum efficiency. It is vital to focus on a high standard of delivery excellence, provide top-notch service quality, and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage the career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives.

Process Manager roles and responsibilities:
- Designing and implementing scalable, reliable, and maintainable data architectures on AWS.
- Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments.
- Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc.
- Integrating AWS data solutions with existing systems and third-party services.
- Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval.
- Implementing data security and encryption best practices in AWS environments.
- Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed.
- Working closely with cross-functional teams, including data scientists, analysts, and stakeholders, to understand data requirements and deliver solutions.

Technical and functional skills:
- Typically a bachelor's degree in Computer Science, Engineering, or a related field, along with 5+ years of experience in data engineering and AWS cloud environments.
- Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java.
- Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Proficiency in version control tools like Git for managing code, and infrastructure as code (e.g., CloudFormation, Terraform).
- Ability to analyze complex technical problems and propose effective solutions.
- Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Process Manager - AWS Data Engineer. Mumbai/Pune | Full-time (FT) | Technology Services. Shift timings: EMEA (1 PM-9 PM) | Management level: PM | Travel: NA.

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role enables identifying discrepancies and proposing optimal solutions by using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager roles and responsibilities:
- Understand client requirements and provide effective and efficient solutions in AWS using Snowflake.
- Assemble large, complex sets of data that meet non-functional and functional business requirements.
- Use Snowflake/Redshift architecture and design to create data pipelines and consolidate data in the data lake and data warehouse.
- Demonstrated strength and experience in data modeling, ETL development, and data warehousing concepts.
- Understand data pipelines and modern ways of automating them using cloud-based tooling; test and clearly document implementations so others can easily understand the requirements, implementation, and test conditions.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL.

Technical and functional skills:
- AWS services: strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc.
- Programming languages: proficiency in languages commonly used in data engineering such as Python, SQL, Scala, or Java.
- Data warehousing: experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/Amazon Redshift.
- ETL tools: familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines.
- Database management: knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts.
- Big data technologies: understanding of Hadoop, Spark, Kafka, etc., and their integration with AWS.
- Version control: proficiency in tools like Git for managing code, and infrastructure as code (e.g., CloudFormation, Terraform).
- Problem-solving skills: ability to analyze complex technical problems and propose effective solutions.
- Communication skills: strong verbal and written communication for documenting processes and collaborating with team members and stakeholders.
- Education and experience: typically a bachelor's degree in Computer Science, Engineering, or a related field, along with 5+ years of experience in data engineering and AWS cloud environments.

About eClerx: eClerx is a global leader in productized services, bringing together people, technology, and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics, and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

About eClerx Technology: eClerx's Technology Group collaboratively delivers Analytics, RPA, AI, and Machine Learning digital technologies that enable our consultants to help businesses thrive in a connected world. Our consultants and specialists partner with our global clients and colleagues to build and implement digital solutions through a broad spectrum of activities. To know more about us, visit https://eclerx.com

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 month ago

Apply

9.0 - 12.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We have immediate openings for an AWS IAM engineer.

JD:
Primary skills:
- Extensive experience with AWS services: IAM, S3, Glue, CloudFormation, and CloudWatch
- In-depth understanding of AWS IAM policy evaluation for permissions and access control
- Proficient in using Bitbucket, Confluence, GitHub, and Visual Studio Code
- Proficient in policy languages, particularly Rego scripting

Good-to-have skills:
- Experience with the Wiz tool for security and compliance
- Good programming skills in Python
- Advanced knowledge of additional AWS services: ECS, EKS, Lambda, SNS, and SQS

Roles & responsibilities: Senior Developer on the Wiz team specializing in Rego and AWS. Project Manager - one to three years; AWS CloudFormation - four to six years; AWS IAM - four to six years. PSP Defined SCU in Data Engineer.
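
IAM policy evaluation, the core of this role, can also be exercised programmatically with boto3's policy simulator; a minimal sketch, where the role ARN, actions, and resource ARN are hypothetical placeholders:

```python
import boto3

iam = boto3.client("iam")

# Check what a principal is actually allowed to do before a change ships.
# The role ARN, actions, and bucket ARN below are hypothetical placeholders.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/etl-glue-role",
    ActionNames=["s3:GetObject", "s3:PutObject"],
    ResourceArns=["arn:aws:s3:::curated-zone/*"],
)

for result in response["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
    print(f"{result['EvalActionName']}: {result['EvalDecision']}")
```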

Posted 1 month ago

Apply

8.0 - 10.0 years

7 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Mandatory skills: AWS, Python.

Posted 1 month ago

Apply

6.0 - 7.0 years

3 - 7 Lacs

Hyderabad

Work from Office

We are looking for a skilled AWS Data Engineer with 6 to 7 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in designing and implementing data pipelines on AWS.

Roles and responsibilities:
- Design, develop, and maintain large-scale data pipelines using AWS services such as S3, Lambda, Step Functions, etc.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and implement data quality checks and validation processes to ensure data integrity.
- Optimize data processing workflows for performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve complex technical issues related to data engineering projects.
- Ensure compliance with industry standards and best practices for data security and privacy.

Job requirements:
- Strong understanding of the AWS ecosystem, including S3, Lambda, Step Functions, Redshift, Glue, Athena, etc.
- Experience with data modeling, data warehousing, and ETL processes.
- Proficiency in programming languages such as Python, Java, or Scala.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced environment.
- Strong communication and interpersonal skills.
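
Pipelines built on S3, Lambda, and Step Functions are typically started and monitored like the minimal boto3 sketch below (the state machine ARN and input payload are hypothetical):

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Kick off a pipeline run; the state machine ARN and input are hypothetical.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:daily-etl",
    input=json.dumps({"run_date": "2024-01-01", "source": "s3://raw/orders/"}),
)

# Check the run's status (a scheduler or EventBridge rule would normally do this).
status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
print(f"Execution {execution['executionArn']} is {status}")
```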

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
• 10+ years of experience in data engineering with a minimum of 6 years on AWS.
• Proficiency in AWS data services, including S3, Redshift, DynamoDB, Glue, Lambda, and EMR.
• Strong SQL skills and experience with NoSQL databases on AWS.
• Programming skills in Python, Java, or Scala for data processing and ETL tasks.
• Solid understanding of data warehousing concepts, data modeling, and ETL best practices.
• Experience with machine learning model deployment on AWS SageMaker.
• Familiarity with data orchestration tools, such as Apache Airflow, AWS Step Functions, or AWS Data Pipeline.
• Excellent problem-solving and analytical skills with attention to detail.
• Strong communication skills and ability to collaborate effectively with both technical and non-technical stakeholders.
• Experience with advanced AWS analytics services such as Athena, Kinesis, QuickSight, and Elasticsearch.
• Hands-on experience with Amazon Bedrock and generative AI tools for exploring and implementing AI-based solutions.
• AWS Certifications, such as AWS Certified Big Data – Specialty, AWS Certified Machine Learning – Specialty, or AWS Certified Solutions Architect.
• Familiarity with CI/CD pipelines, containerization (Docker), and serverless computing concepts on AWS.

Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization.
• Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes.
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
• Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate.
• Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio.
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 1 month ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Pune

Work from Office

Position Overview/Summary: The Data Engineer will expand and optimize the data and data pipeline architecture, as well as optimize data flow and collection for cross-functional teams. The Data Engineer will perform data architecture analysis, design, development, and testing to deliver data applications, services, interfaces, ETL processes, reporting, and other workflow and management initiatives. The role will also follow modern SDLC principles, test-driven development, source code reviews, and change control standards in order to maintain compliance with policies. This role requires a highly motivated individual with strong technical ability, data capability, excellent communication and collaboration skills, and the ability to develop and troubleshoot a diverse range of problems. Responsibilities: design and develop enterprise data architecture solutions using Hadoop and other data technologies like Spark and Scala.

Posted 1 month ago

Apply

0.0 - 5.0 years

2 - 7 Lacs

Gurugram

Work from Office

Company: Oliver Wyman
Role: Data Engineer

Who We Are
Oliver Wyman is a global leader in management consulting. With offices in 50+ cities across 30 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, finance, operations, technology, risk management, and organizational transformation. Our 4000+ professionals help clients optimize their business, improve their IT, operations, and risk profile, and accelerate their organizational performance to seize the most attractive opportunities. Our professionals see what others don't, challenge conventional thinking, and consistently deliver innovative, customized solutions. As a result, we have a tangible impact on clients' top and bottom lines. Our clients are the CEOs and executive teams of the top global 1000 companies. Oliver Wyman is a business of Marsh McLennan [NYSE: MMC]. For more information, visit www.oliverwyman.com. Follow Oliver Wyman on Twitter @OliverWyman.

Practice Overview
Practice: Data and Analytics (DNA) - Analytics Consulting. Location: Gurugram, India.
At Oliver Wyman DNA, we partner with clients to solve tough strategic business challenges with the power of analytics, technology, and industry expertise. We drive digital transformation, create customer-focused solutions, and optimize operations for the future. Our goal is to achieve lasting results in collaboration with our clients and stakeholders. We value and offer opportunities for personal and professional growth. Join our entrepreneurial team focused on delivering impact globally.

Our Mission and Purpose
Mission: Leverage India's high-quality talent to provide exceptional analytics-driven management consulting services that empower clients globally to achieve their business goals and drive sustainable growth, by working alongside Oliver Wyman consulting teams.
Purpose: Our purpose is to bring together a diverse team of the highest-quality talent, equipped with innovative analytical tools and techniques, to deliver insights that drive meaningful impact for our global client base. We strive to build long-lasting partnerships with clients based on trust, mutual respect, and a commitment to deliver results. We aim to build a dynamic and inclusive organization that attracts and retains the top analytics talent in India and provides opportunities for professional growth and development. Our goal is to provide a sustainable work environment while fostering a culture of innovation and continuous learning for our team members.

The Role and Responsibilities
We have open positions ranging from Associate Data Engineer to Lead Data Engineer, providing talented and motivated professionals with excellent career and growth opportunities. We seek individuals with relevant prior experience in quantitatively intense areas to join our team. You'll be working with varied and diverse teams to deliver unique and unprecedented solutions across all industries. In the data engineering track, you will be primarily responsible for developing and monitoring high-performance applications that can rapidly deploy the latest machine learning frameworks and other advanced analytical techniques at scale. This role requires you to be a proactive learner and quickly pick up new technologies whenever required. Most of the projects require handling big data, so you will be required to work on related technologies extensively. You will work closely with other team members to support project delivery and ensure client satisfaction.

Your responsibilities will include:
- Working alongside Oliver Wyman consulting teams and partners, engaging directly with clients to understand their business challenges
- Exploring large-scale data and designing, developing, and maintaining data/software pipelines and ETL processes for internal and external stakeholders
- Explaining, refining, and developing the necessary architecture to guide stakeholders through the journey of model building
- Advocating application of best practices in data engineering, code hygiene, and code reviews
- Leading the development of proprietary data engineering assets, ML algorithms, and analytical tools on varied projects
- Creating and maintaining documentation to support stakeholders, and runbooks for operational excellence
- Working with partners and principals to shape proposals that showcase our data engineering and analytics capabilities
- Travelling to client locations across the globe when required, understanding their problems, and delivering appropriate solutions in collaboration with them
- Keeping up with emerging state-of-the-art data engineering techniques in your domain

Your Attributes, Experience & Qualifications
- Bachelor's or master's degree in a computational or quantitative discipline from a top academic program (Computer Science, Informatics, Data Science, or related)
- Exposure to building cloud-ready applications
- Exposure to test-driven development and integration
- Pragmatic and methodical approach to solutions and delivery with a focus on impact
- Independent worker with the ability to manage workload and meet deadlines in a fast-paced environment
- Collaborative team player
- Excellent verbal and written communication skills and command of English
- Willingness to travel
- Respect for confidentiality

Technical Background
- Prior experience in designing and deploying large-scale technical solutions
- Fluency in modern programming languages (Python is mandatory; R, SAS desired)
- Experience with AWS/Azure/Google Cloud, including familiarity with services such as S3, EC2, Lambda, Glue
- Strong SQL skills and experience with relational databases such as MySQL, PostgreSQL, or Oracle
- Experience with big data tools like Hadoop, Spark, Kafka
- Demonstrated knowledge of data structures and algorithms
- Familiarity with version control systems like GitHub or Bitbucket
- Familiarity with modern storage and computational frameworks
- Basic understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security

Valued but not required:
- Compelling side projects or contributions to the open-source community
- Prior experience with machine learning frameworks (e.g., Scikit-Learn, TensorFlow, Keras/Theano, Torch, Caffe, MxNet)
- Familiarity with containerization technologies, such as Docker and Kubernetes
- Experience with UI development using frameworks such as Angular, Vue, or React
- Experience with NoSQL databases such as MongoDB or Cassandra
- Experience presenting at data science conferences and connections within the data science community
- Interest/background in Financial Services in particular, as well as other sectors where Oliver Wyman has a strategic presence

Interview Process
The application process will include testing technical proficiency, a case study, and team-fit interviews. Please include a brief note introducing yourself, what you're looking for when applying for the role, and your potential value-add to our team.

Roles and Levels
We are hiring for engineering roles across levels, from Associate Data Engineer to Lead Data Engineer, for experience ranging from 0-8 years. In addition to the base salary, this position may be eligible for performance-based incentives. We offer a competitive total rewards package that includes comprehensive health and welfare benefits as well as employee assistance programs.

Oliver Wyman is an equal-opportunity employer. Our commitment to diversity is genuine, deep, and growing. We're not perfect, but we're working hard right now to make our teams balanced, representative, and diverse. Marsh McLennan and its Affiliates are EOE Minority/Female/Disability/Vet/Sexual Orientation/Gender Identity employers.

Posted 1 month ago

Apply