
572 Glue Jobs - Page 17

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 10.0 years

6 - 15 Lacs

Bengaluru

Work from Office

Urgent hiring: Azure Data Engineer with a leading management consulting company, Bangalore location.

- Strong expertise in Databricks and PySpark for batch and live (streaming) data sources
- 4+ years of relevant experience in Databricks and PySpark/Scala; 7+ years of total experience
- Good at data modelling and design
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives such as CoE and CoP

CTC: hike considered on current/last drawn pay. Apply: rohita.robert@adecco.com

Mandatory skills: Azure (master level); ELT; data modeling; data integration and ingestion; data manipulation and processing; GitHub, GitHub Actions, Azure DevOps; Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest.

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Pune

Remote

We're Hiring! | Senior Data Engineer (Remote)
Location: Remote | Shift: US CST | Department: Data Engineering

Are you a data powerhouse who thrives on solving complex data challenges? Do you love working with Python, AWS, and cutting-edge data tools? If yes, Atidiv wants YOU! We're looking for a Senior Data Engineer to build and scale data pipelines, transform how we manage data lakes and warehouses, and power real-time data experiences across our products.

What You'll Do:
- Architect and develop robust, scalable data pipelines using Python and PySpark
- Drive real-time and batch data ingestion from diverse data sources
- Build and manage data lakes and data warehouses using AWS (S3, Glue, Redshift, EMR, Lambda, Kinesis)
- Write high-performance SQL queries and optimize ETL/ELT jobs
- Collaborate with data scientists, analysts, and engineers to ensure high data quality and availability
- Implement monitoring, logging, and alerting for workflows
- Ensure top-tier data security, compliance, and governance

What We're Looking For:
- 5+ years of hands-on experience in data engineering
- Strong skills in Python, dbt, SQL, and working with Snowflake
- Proven experience with Airflow, Kafka/Kinesis, and the AWS ecosystem
- Deep understanding of CI/CD practices
- Passion for clean code, automation, and scalable systems

Why Join Atidiv?
- 100% remote | flexible work culture
- Opportunity to work with cutting-edge technologies
- Collaborative, supportive team that values innovation and ownership
- Work on high-impact, global projects

Ready to transform data into impact? Send your resume to: nitish.pati@atidiv.com

Posted 1 month ago

Apply

5.0 - 8.0 years

18 - 30 Lacs

Hyderabad

Work from Office

AWS Data Engineer with Glue, Terraform, Business Intelligence (Tableau) development * Design, develop & maintain AWS data pipelines using Glue, Lambda & Redshift * Collaborate with BI team on ETL processes & dashboard creation with Tableau

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Must-Have Qualifications:
- AWS Expertise: Strong hands-on experience with AWS data services including Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch.
- ETL/ELT Engineering: Deep proficiency in designing robust ETL/ELT pipelines with AWS Glue (PySpark/Scala), Python, dbt, or other automation frameworks.
- Data Modeling: Advanced knowledge of dimensional (star/snowflake) and normalised data modeling, optimised for Redshift and S3-based lakehouses.
- Programming Skills: Proficient in Python, SQL, and PySpark, with automation and scripting skills for data workflows.
- Architecture Leadership: Demonstrated experience leading large-scale AWS data engineering projects across teams and domains.
- Pre-sales & Consulting: Proven experience working with clients, responding to technical RFPs, and designing cloud-native data solutions.
- Advanced PySpark Expertise: Deep hands-on experience writing optimized PySpark code for distributed data processing, including transformation pipelines using DataFrames, RDDs, and Spark SQL, with a strong grasp of lazy evaluation, the Catalyst optimizer, and the Tungsten execution engine.
- Performance Tuning & Partitioning: Proven ability to debug and optimize Spark jobs through custom partitioning strategies, broadcast joins, caching, and checkpointing, with proficiency in tuning executor memory and shuffle configurations and leveraging the Spark UI for performance diagnostics in large-scale (TB+) data workloads.
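The broadcast-join point in the listing above is easier to see in miniature. Spark itself isn't assumed available here; this plain-Python sketch shows the map-side hash join that PySpark's `broadcast()` enables (the data and column names are invented for illustration):

```python
# Toy illustration of why a broadcast (map-side hash) join avoids a shuffle:
# the small dimension table becomes an in-process hash map, so each "executor"
# can join its partition of the fact rows locally.
# In PySpark this would be: df_large.join(broadcast(df_small), "sku")

def broadcast_hash_join(fact_rows, dim_rows, key):
    lookup = {row[key]: row for row in dim_rows}   # the "broadcast" side, built once
    joined = []
    for row in fact_rows:
        match = lookup.get(row[key])
        if match:                                  # inner-join semantics
            joined.append({**row, **match})
    return joined

facts = [{"sku": 1, "qty": 5}, {"sku": 2, "qty": 3}, {"sku": 9, "qty": 1}]
dims  = [{"sku": 1, "name": "widget"}, {"sku": 2, "name": "gadget"}]
print(broadcast_hash_join(facts, dims, "sku"))
```

The unmatched fact row (`sku` 9) is dropped, just as in an inner join; the key property is that only the small side is replicated, never the large one.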

Posted 1 month ago

Apply

8.0 - 10.0 years

30 - 35 Lacs

Pune, Chennai, Bengaluru

Work from Office

Role & responsibilities: AWS Architect

Primary skills: AWS (Redshift, Glue, Lambda, ETL, and Aurora), advanced SQL and Python, PySpark. Note: Aurora database is a mandatory skill.
Experience: 8+ years. Notice period: immediate joiner. Location: any Brillio location (Bangalore preferred).

Job Description:
- 8+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue, and Lambda services.
- At least one instance of proven experience developing a data platform end to end using AWS.
- Hands-on programming experience with DataFrames and Python, including unit testing of Python as well as Glue code.
- Experience with orchestration mechanisms such as Airflow, Step Functions, etc.
- Experience working on AWS Redshift is mandatory: must have experience writing stored procedures, understanding the Redshift Data API, and writing federated queries.
- Experience in Redshift performance tuning.
- Good communication and problem solving; very good stakeholder communication and management.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 - 3 Lacs

Pune

Hybrid

- Knowledge and hands-on experience writing effective SQL queries and statements
- Understanding of AWS services
- At least 5 years of experience in a similar capacity
- At least 3 years of proficiency using Python to develop and modify scripts
- At least 3 years of proficiency managing data ingestion and DAG maintenance in Airflow

Preferred Requirements:
- Knowledge of the Hadoop ecosystem, e.g. Spark or PySpark
- Knowledge of AWS services such as S3, Data Lake, Redshift, EMR, EC2, Lambda, Glue, Aurora, RDS, Airflow.
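The DAG maintenance mentioned above boils down to dependency ordering: Airflow runs each task only after its upstream tasks finish. As a toy illustration (Airflow itself is not imported; the task names are invented), Kahn's algorithm recovers a valid run order from declared dependencies:

```python
# Minimal stand-in for how a DAG scheduler orders tasks: each task maps to the
# set of tasks that must run before it; Kahn's algorithm emits a valid order.
from collections import deque

def topo_order(deps):
    # deps: {task: set of upstream tasks}; all tasks must appear as keys
    indeg = {t: len(u) for t, u in deps.items()}
    downstream = {t: [] for t in deps}
    for task, ups in deps.items():
        for up in ups:
            downstream[up].append(task)
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in downstream[task]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

dag = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
print(topo_order(dag))  # ['extract', 'transform', 'load']
```

In real Airflow the same dependency would be declared as `extract >> transform >> load`; the scheduler derives the execution order the same way.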

Posted 1 month ago

Apply

7.0 - 10.0 years

8 - 15 Lacs

Hyderabad, Bengaluru

Hybrid

Key Responsibilities:
- Use data mappings and models provided by the data modeling team to build robust Snowflake data pipelines.
- Design and implement pipelines adhering to 2NF/3NF normalization standards.
- Develop and maintain ETL processes for integrating data from multiple ERP and source systems.
- Build scalable and secure Snowflake data architecture supporting Data Quality (DQ) needs.
- Raise CAB requests via Carrier's change process and manage production deployments.
- Provide UAT support and ensure smooth transition of finalized pipelines to support teams.
- Create and maintain comprehensive technical documentation for traceability and handover.
- Collaborate with data modelers, business stakeholders, and governance teams to enable DQ integration.
- Optimize complex SQL queries, perform performance tuning, and ensure DataOps best practices.

Requirements:
- Strong hands-on experience with Snowflake
- Expert-level SQL skills and deep understanding of data transformation
- Solid grasp of data architecture and 2NF/3NF normalization techniques
- Experience with cloud-based data platforms and modern data pipeline design
- Exposure to AWS data services like S3, Glue, Lambda, Step Functions (preferred)
- Proficiency with ETL tools and working in Agile environments
- Familiarity with the Carrier CAB process or similar structured deployment frameworks
- Proven ability to debug complex pipeline issues and enhance pipeline scalability
- Strong communication and collaboration skills
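The 2NF/3NF requirement above amounts to moving each attribute next to the key it depends on. A tiny, illustrative split of a denormalized feed (all table and column names here are invented):

```python
# A denormalized order feed repeats customer_name for every order. In 3NF,
# customer_name depends only on customer_id, so it moves to its own table.

denormalized = [
    {"order_id": 1, "customer_id": "C1", "customer_name": "Acme",   "amount": 100},
    {"order_id": 2, "customer_id": "C1", "customer_name": "Acme",   "amount": 250},
    {"order_id": 3, "customer_id": "C2", "customer_name": "Globex", "amount": 75},
]

customers = {}   # one row per customer_id (deduplicated)
orders = []      # orders keep only the foreign key
for row in denormalized:
    customers[row["customer_id"]] = {"customer_id": row["customer_id"],
                                     "customer_name": row["customer_name"]}
    orders.append({"order_id": row["order_id"],
                   "customer_id": row["customer_id"],
                   "amount": row["amount"]})

print(len(customers), len(orders))  # 2 3
```

The payoff is the usual one: a customer rename now touches one row instead of every order, which is exactly the update-anomaly 3NF exists to prevent.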

Posted 1 month ago

Apply

3.0 - 6.0 years

2 - 6 Lacs

Chennai

Work from Office

Tech stack: AWS Lambda, Glue, Kafka/Kinesis; RDBMS (Oracle, MySQL, Redshift, PostgreSQL, Snowflake); API Gateway; CloudFormation/Terraform; Step Functions; CloudWatch; Python; PySpark.

Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge of building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development on AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR).

Required skills: Amazon Kinesis, Amazon Aurora, data warehousing, SQL, AWS Lambda, Spark, AWS QuickSight; advanced Python; data engineering ETL and ELT skills; experience with cloud platforms (AWS, GCP, or Azure).

Mandatory skills: data warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.

Posted 1 month ago

Apply

3.0 - 5.0 years

4 - 8 Lacs

Pune

Work from Office

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud database, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions.
- Experience using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. AWS Data/API Gateway Pipeline Engineer responsible for designing, building, and maintaining real-time, serverless data pipelines and API services. This role requires extensive hands-on experience with Java, Python, Redis, DynamoDB Streams, and PostgreSQL, along with working knowledge of AWS Lambda and AWS Glue for data processing and orchestration. This position involves collaboration with architects, backend developers, and DevOps engineers to deliver scalable, event-driven data solutions and secure API services across cloud-native systems. Key Responsibilities API & Backend Engineering Build and deploy RESTful APIs using AWS API Gateway, Lambda, and Java and Python. Integrate backend APIs with Redis for low-latency caching and pub/sub messaging. Use PostgreSQL for structured data storage and transactional processing. Secure APIs using IAM, OAuth2, and JWT, and implement throttling and versioning strategies. Data Pipeline & Streaming Design and develop event-driven data pipelines using DynamoDB Streams to trigger downstream processing. Use AWS Glue to orchestrate ETL jobs for batch and semi-structured data workflows. Build and maintain Lambda functions to process real-time events and orchestrate data flows. Ensure data consistency and resilience across services, queues, and databases. 
Cloud Infrastructure & DevOps Deploy and manage cloud infrastructure using CloudFormation, Terraform, or AWS CDK. Monitor system health and service metrics using CloudWatch, SNS and structured logging. Contribute to CI/CD pipeline development for testing and deploying Lambda/API services. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Skills and Experience Bachelor's degree in computer science, Engineering, or a related field. Over 6 years of experience in developing backend or data pipeline services using Java and Python . Strong hands-on experience with: AWS API Gateway , Lambda , DynamoDB Streams Redis (caching, messaging) PostgreSQL (schema design, tuning, SQL) AWS Glue for ETL jobs and data transformation Solid understanding of REST API design principles, serverless computing, and real-time architecture. 
Preferred Skills and Experience Familiarity with Kafka, Kinesis, or other message streaming systems Swagger/OpenAPI for API documentation Docker and Kubernetes (EKS) Git, CI/CD tools (e.g., GitHub Actions) Experience with asynchronous event processing, retries, and dead-letter queues (DLQs) Exposure to data lake architectures (S3, Glue Data Catalog, Athena) Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! 
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
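The event-driven core of this role, DynamoDB Streams records processed by a Lambda function, can be sketched as a plain handler. The `handler(event, context)` signature is the standard AWS Lambda one, and the event below mirrors the documented Streams record shape, though the keys and values in it are invented for illustration:

```python
# Sketch of a Lambda consuming DynamoDB Streams records. Stream records carry
# typed attribute maps, e.g. {"S": "abc"} for strings and {"N": "42"} for
# numbers; this handler flattens INSERT/MODIFY images and skips REMOVEs.

def handler(event, context):
    processed = []
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            # each attribute value is a one-entry typed map; take its value
            processed.append({k: list(v.values())[0] for k, v in new_image.items()})
    return {"processed": processed}

sample_event = {
    "Records": [
        {"eventName": "INSERT",
         "dynamodb": {"NewImage": {"pk": {"S": "user#1"}, "score": {"N": "42"}}}},
        {"eventName": "REMOVE", "dynamodb": {}},
    ]
}
print(handler(sample_event, None))
```

In production the stream would be wired to the function as an event source mapping, and the flattened records would typically land in Redshift, S3, or a downstream queue rather than being returned.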

Posted 1 month ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Roles and Responsibilities:
- Experience in AWS Glue
- Experience with one or more of the following: Spark, Scala, Python, and/or R
- Experience in API development with NodeJS
- Experience with AWS (S3, EC2) or another cloud provider
- Experience with data virtualization tools like Dremio and Athena is a plus
- Technically proficient in big data concepts
- Technically proficient in Hadoop and NoSQL (MongoDB)
- Good communication and documentation skills

Posted 1 month ago

Apply

8.0 - 13.0 years

1 - 4 Lacs

Pune

Work from Office

Roles & Responsibilities:
- Provides expert-level development, system analysis, design, and implementation of applications using AWS services, specifically using Python for Lambda.
- Translates technical specifications and/or design models into code for new or enhancement projects (for internal or external clients). Develops code that reuses objects, is well structured, includes sufficient comments, and is easy to maintain.
- Provides follow-up production support when needed. Submits change control requests and documents.
- Participates in design, code, and test inspections throughout the life cycle to identify issues and ensure methodology compliance.
- Participates in systems analysis activities, including system requirements analysis and definition (e.g., prototyping), and in other meetings such as those for use-case creation and analysis.
- Performs unit testing and writes appropriate unit test plans to ensure requirements are satisfied. Assists in integration, systems acceptance, and other related testing as needed.
- Ensures developed code is optimized to meet client performance specifications associated with page rendering time by completing page performance tests.

Technical Skills Required:
- Experience building large-scale batch and data pipelines with data processing frameworks on the AWS cloud platform using PySpark (on EMR) and Glue ETL.
- Deep experience developing data processing and data manipulation tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations.
- Experience in deployment and operationalizing code using CI/CD tools (Bitbucket and Bamboo).
- Strong AWS cloud computing experience; extensive experience with Lambda, S3, EMR, and Redshift.
- Should have worked on data warehouse/database technologies for at least 8 years.
- Any AWS certification will be an added advantage.
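The read, merge, enrich, load flow described above, shrunk to a stdlib sketch. In the role itself this would be PySpark DataFrames on EMR or Glue; the file contents and column names here are invented:

```python
# Miniature ETL: read two "sources", join them, enrich the result, and write
# to a "target" (in-memory CSV standing in for S3/Redshift).
import csv
import io

orders_csv = "order_id,customer_id,amount\n1,C1,100\n2,C2,250\n"
customers_csv = "customer_id,region\nC1,APAC\nC2,EMEA\n"

orders = list(csv.DictReader(io.StringIO(orders_csv)))                     # extract
regions = {r["customer_id"]: r["region"]
           for r in csv.DictReader(io.StringIO(customers_csv))}

enriched = [{**o,                                                          # merge + enrich
             "region": regions.get(o["customer_id"], "UNKNOWN"),
             "amount": int(o["amount"])}
            for o in orders]

out = io.StringIO()                                                        # load
writer = csv.DictWriter(out, fieldnames=["order_id", "customer_id", "amount", "region"])
writer.writeheader()
writer.writerows(enriched)
print(out.getvalue())
```

The PySpark equivalent of the middle step would be a `join` on `customer_id` followed by `withColumn` calls; the shape of the pipeline is the same.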

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework, Python or Scala, and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies: Apache Spark, Kafka, any cloud computing, etc.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: certification in AWS, and Databricks- or Cloudera-certified Spark developers.

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework, Python or Scala, and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies: Apache Spark, Kafka, any cloud computing, etc.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
- Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Exposure to streaming solutions and message brokers like Kafka.
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS, and Databricks- or Cloudera-certified Spark developers.
- AWS S3, Redshift, and EMR for data storage and distributed processing.
- AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.

Posted 1 month ago

Apply

9.0 - 12.0 years

35 - 40 Lacs

Bengaluru

Work from Office

We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role. Key Responsibilities Design and implement end-to-end data platforms leveraging AWS services. Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness. Develop and optimize solutions using Redshift, including stored procedures, federated queries, and Redshift Data API. Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines. Write efficient Python code and data frame transformations, along with unit testing. Manage orchestration tools such as AWS Step Functions and Airflow. Perform Redshift performance tuning to ensure optimal query execution. Collaborate with stakeholders to understand requirements and communicate technical solutions clearly. Required Skills & Qualifications Minimum 9 years of IT experience with proven AWS expertise. Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda . Mandatory experience working with AWS Redshift , including stored procedures and performance tuning. Experience building end-to-end data platforms on AWS . Proficiency in Python , especially working with data frames and writing testable, production-grade code. Familiarity with orchestration tools like Airflow or AWS Step Functions . Excellent problem-solving skills and a collaborative mindset. Strong verbal and written communication and stakeholder management abilities. Nice to Have Experience with CI/CD for data pipelines. Knowledge of AWS Lake Formation and Data Governance practices.
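The Redshift Data API this listing calls for is typically driven from Python via boto3's `redshift-data` client. AWS access is not assumed here, so this sketch only assembles the request and leaves the actual call commented out; the cluster, database, user, and procedure names are invented:

```python
# The Redshift Data API lets you run SQL (including stored-procedure calls)
# without managing JDBC connections. Here we only build the request dict;
# the boto3 calls are shown but not executed.

request = {
    "ClusterIdentifier": "analytics-cluster",   # hypothetical cluster name
    "Database": "dev",
    "DbUser": "etl_user",
    "Sql": "CALL refresh_daily_sales();",       # hypothetical stored procedure
}

# With credentials configured this would be:
#   import boto3
#   client = boto3.client("redshift-data")
#   resp = client.execute_statement(**request)
#   client.describe_statement(Id=resp["Id"])    # poll until FINISHED

print(sorted(request))
```

The Data API is asynchronous, which is why a real pipeline pairs `execute_statement` with `describe_statement` polling (or an EventBridge completion event) before reading results with `get_statement_result`.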

Posted 1 month ago

Apply

4.0 - 7.0 years

15 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities: Looking for AWS Data Engineers, immediate joiners, for Hyderabad, Chennai, Noida, Pune and Bangalore locations.

Mandatory skills: Python, PySpark, SQL, AWS Glue. Strong technical skills in services like S3, Athena, Lambda, Glue (PySpark), SQL, data warehousing, Informatica, Oracle. Design, develop, and implement custom solutions within the Collibra platform to support data governance initiatives.

Preferred candidate profile:
- Snowflake, Agile methodology and Tableau.
- Proficiency in Python/Scala, Spark architecture, complex SQL, and RDBMS.
- Hands-on experience with ETL tools (e.g., Informatica) and SCD1, SCD2.
- 2-6 years of DWH, AWS services and ETL design knowledge.
- Develop ETL processes for data ingestion, transformation, and loading into data lakes and warehouses.
- Collaborate with data scientists and analysts to ensure data availability for analytics and reporting.

Posted 1 month ago

Apply

4.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS cloud services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.

Posted 1 month ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Pune

Work from Office

Job Description: Sr. Software Engineer (Python)
Company: Karini AI | Location: Pune (Wakad) | Experience Required: 3-5 years | Compensation: Not disclosed

Role Overview: We are seeking a skilled Sr. Software Engineer with advanced Python skills, a passion for product development, and knowledge of machine learning and/or generative AI. You will collaborate with a talented team of engineers and AI engineers to design and develop a high-quality generative AI platform on AWS.

Key Responsibilities:
- Design and develop backend applications and APIs using Python.
- Work on product development, building robust, scalable, and maintainable solutions.
- Integrate generative AI models into production environments to solve real-world problems.
- Collaborate with cross-functional teams, including data scientists, product managers, and designers, to understand requirements and deliver solutions.
- Optimize application performance and ensure scalability across cloud environments.
- Write clean, maintainable, and efficient code while adhering to best practices.

Requirements:
- 3-5 years of hands-on experience in product development.
- Demonstrable experience with advanced Python concepts for building scalable systems.
- Demonstrable experience working with a FastAPI server in a production environment.
- Familiarity with unit testing, version control and CI/CD.
- Good understanding of machine learning concepts and frameworks (e.g., TensorFlow, PyTorch); experience integrating and deploying ML models into applications is a plus.
- Knowledge of database systems (SQL/NoSQL) and RESTful API development.
- Exposure to containerization (Docker) and cloud platforms (AWS).
- Strong problem-solving skills and attention to detail.

Preferred Qualifications:
- Bachelor of Engineering in Computer Science, Information Technology, or any other engineering discipline; M.Tech, M.E., or B.E. in Computer Science preferred.
- Hands-on experience in product-focused organizations.
- Experience working with data pipelines or data engineering tasks.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with version control tools like Git.
- Interest or experience in generative AI or NLP applications.

What We Offer:
- Top-tier compensation package, aligned with industry benchmarks.
- Comprehensive employee benefits including Provident Fund (PF) and medical insurance.
- Experience working with an ex-AWS founding team at a fast-growing company.
- Work on innovative AI-driven products that solve complex problems.
- Collaborate with a talented and passionate team in a dynamic environment.
- Opportunities for professional growth and skill enhancement in generative AI.
- A supportive, inclusive, and flexible work culture that values creativity and ownership.

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Kochi

Remote

We are looking for a skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience in managing and implementing cloud-based solutions on AWS. The ideal candidate will have expertise in AWS core services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with strong programming and containerization skills using Python and Docker.

Responsibilities:
- Design, implement, and manage scalable AWS cloud infrastructure solutions.
- Hands-on experience with AWS services: S3, EC2, MSK, Glue, DMS, and SageMaker.
- Develop, deploy, and maintain Python-based applications in cloud environments.
- Containerize applications using Docker and manage deployment pipelines.
- Troubleshoot infrastructure and application issues, review designs, and code solutions.
- Ensure high availability, performance, and security of cloud resources.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Hyderabad, Delhi / NCR

Hybrid

Develop ETL pipelines using SQL, C#, and Python; performance tuning; design scalable DB architecture; maintain technical documentation.

Required candidate profile: 5+ years in SQL/T-SQL, performance tuning, and developing ETL processes. Hands-on C# (WPF a plus). Experience in AWS and Azure is a must.
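The SQL-plus-Python ETL combination described above can be sketched end to end with stdlib `sqlite3` standing in for the production database (the schema and values are invented):

```python
# Load raw rows into a staging table, then push the transform down to SQL,
# which is the usual division of labor in a SQL + Python ETL pipeline.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO staging VALUES (?, ?, ?)",
                 [(1, 100.0, "USD"), (2, 80.0, "EUR"), (3, 50.0, "USD")])

# Transform in SQL: aggregate per currency into a target table.
conn.execute("""CREATE TABLE fact_totals AS
                SELECT currency, SUM(amount) AS total, COUNT(*) AS n
                FROM staging GROUP BY currency""")

rows = conn.execute(
    "SELECT currency, total, n FROM fact_totals ORDER BY currency").fetchall()
print(rows)  # [('EUR', 80.0, 1), ('USD', 150.0, 2)]
conn.close()
```

Against SQL Server or Redshift the same pattern holds; only the driver and the `CREATE TABLE AS` dialect change.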

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Hyderabad

Work from Office

Dear Candidate,

We are pleased to invite you to participate in the EY GDS face-to-face hiring event for the position of AWS Data Engineer.

Role: AWS Data Engineer | Experience Required: 5-8 years | Location: Hyderabad | Mode of interview: face to face

Technical skills:
• Strong experience in AWS data services: Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark.
• Strong exposure to IAM, CloudTrail, cluster optimization, Python and SQL.
• Expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment and go-live.
• Experience with version control systems like SVN and Git.
• Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across various structured and unstructured data sources.
• Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog.
• Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance.
• Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity).

Kindly confirm your availability by applying to this job.

Posted 1 month ago

Apply

9.0 - 14.0 years

20 - 30 Lacs

Bengaluru

Hybrid

My profile: linkedin.com/in/yashsharma1608. Hiring manager profile: on payroll of https://www.nyxtech.in/. Client: Brillio payroll.
AWS Architect. Primary skills: AWS (Redshift, Glue, Lambda, ETL and Aurora), advanced SQL and Python, PySpark. Note: Aurora Database is a mandatory skill. Experience: 9+ yrs. Notice period: immediate joiner. Location: any Brillio location (Bangalore preferred). Budget: 30 LPA.
Job Description: 9+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue and Lambda services. At least one instance of proven experience in developing a data platform end to end using AWS. Hands-on programming experience with Data Frames, Python, and unit testing the Python as well as Glue code. Experience in orchestration mechanisms like Airflow, Step Functions etc. Experience working on AWS Redshift is mandatory. Must have experience writing stored procedures, understanding of the Redshift Data API, and writing federated queries. Experience in Redshift performance tuning. Good communication and problem-solving skills. Very good stakeholder communication and management.
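The Redshift Data API mentioned above runs SQL, including stored-procedure calls, without managing database connections. A minimal sketch of how a call is assembled; the cluster, database, user, and procedure names are hypothetical, and the boto3 calls are commented out since they require AWS credentials.

```python
def build_statement(cluster_id, database, db_user, sql):
    """Assemble keyword arguments for redshift-data execute_statement(...)."""
    return {
        "ClusterIdentifier": cluster_id,
        "Database": database,
        "DbUser": db_user,
        "Sql": sql,
    }

params = build_statement(
    "analytics-cluster",               # hypothetical cluster
    "dev",
    "etl_user",
    "CALL refresh_daily_sales();",     # hypothetical stored procedure
)
# With credentials configured:
# import boto3
# client = boto3.client("redshift-data")
# response = client.execute_statement(**params)
# client.describe_statement(Id=response["Id"])  # poll for completion status
```

The same API shape also covers federated queries, since those are just SQL statements referencing external schemas.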

Posted 1 month ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled, hands-on and technically proficient Test Automation Engineering Manager with strong experience in data quality, data integration, and a specific focus on semantic layer validation. This role combines technical ownership of automated data testing solutions with team leadership responsibilities, ensuring that the data infrastructure across platforms remains accurate, reliable, and high performing. As a leader in the QA and Data Engineering space, you will be responsible for building robust automated testing frameworks, validating GraphQL-based data layers, and driving the team's technical growth. Your work will ensure that all data flows, transformations, and API interactions meet enterprise-grade quality standards across the data lifecycle. You will be responsible for the end-to-end design and development of test automation frameworks, working collaboratively with your team. As the delivery owner for test automation, your primary responsibilities will include building and automating comprehensive validation frameworks for semantic layer testing, GraphQL API validation, and schema compliance, ensuring alignment with data quality, performance, and integration reliability standards. You will also work closely with data engineers, product teams, and platform architects to validate data contracts and integration logic, supporting the integrity and trustworthiness of enterprise data solutions. This is a highly technical and hands-on role, with strong emphasis on automation, data workflow validation, and the seamless integration of testing practices into CI/CD pipelines.
Roles & Responsibilities:
Design and implement robust data validation frameworks focused on the semantic layer, ensuring accurate data models, schema compliance, and contract adherence across services and platforms.
Build and automate end-to-end data pipeline validations across ingestion, transformation, and consumption layers using Databricks, Apache Spark, and AWS services such as S3, Glue, Athena, and Lake Formation.
Lead test automation initiatives by developing scalable, modular test frameworks and embedding them into CI/CD pipelines for continuous validation of semantic models, API integrations, and data workflows.
Validate GraphQL APIs by testing query/mutation structures, schema compliance, and end-to-end integration accuracy using tools like Postman, Python, and custom test suites.
Oversee UI and visualization testing for tools like Tableau, Power BI, and custom front-end dashboards, ensuring consistency with backend data through Selenium with Python and backend validations.
Define and drive the overall QA strategy with emphasis on performance, reliability, and semantic data accuracy, while setting up alerting and reporting mechanisms for test failures, schema issues, and data contract violations.
Collaborate closely with product managers, data engineers, developers, and DevOps teams to align quality assurance initiatives with business goals and agile release cycles.
Actively contribute to architecture and design discussions, ensuring quality and testability are embedded from the earliest stages of development.
Mentor and manage QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth.
Must-Have Skills:
Team leadership experience is also required.
Strong 6+ years of experience in DataOps/testing is required.
7 to 12 years of overall experience in test automation is expected.
Strong experience in designing and implementing test automation frameworks integrated with CI/CD pipelines.
Expertise in validating data pipelines at the syntactic layer, including schema checks, null/duplicate handling, and transformation validation.
Hands-on experience with Databricks, Apache Spark, and AWS services (S3, Glue, Athena, Lake Formation).
Proficiency in Python, PySpark, and SQL for writing validation scripts and automation logic.
Solid understanding of GraphQL APIs, including schema validation and query/mutation testing.
Experience with API testing tools like Postman and Python-based test frameworks.
Proficient in UI and visualization testing using Selenium with Python, especially for tools like Tableau, Power BI, or custom dashboards.
Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI for test orchestration.
Ability to implement alerting and reporting for test failures, anomalies, and validation issues.
Strong background in defining QA strategies and leading test automation initiatives in data-centric environments.
Excellent collaboration and communication skills, with the ability to work closely with cross-functional teams in Agile settings.
Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation.
Understanding of DataOps methodologies and practices.
Contributions to internal quality dashboards or data observability systems.
Awareness of metadata-driven testing approaches and lineage-based validations.
Experience working with agile testing methodologies such as Scaled Agile.
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Education and Professional Certifications:
Bachelor's/Master's degree in computer science and engineering preferred.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
We provide reasonable accommodations for individuals with disabilities during the application, interview process, job functions, and employment benefits. Contact us to request an accommodation.
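The schema-compliance and null/duplicate checks this posting centres on can be sketched in plain Python. This is illustrative only; the column names and contract are hypothetical, and a production framework would run the same logic over Spark DataFrames rather than lists of dicts.

```python
from collections import Counter

# Hypothetical data contract: expected columns and their types
EXPECTED_SCHEMA = {"order_id": int, "customer": str, "amount": float}

def validate_rows(rows, schema=EXPECTED_SCHEMA, key="order_id"):
    """Return a list of human-readable violations for a batch of rows."""
    errors = []
    for i, row in enumerate(rows):
        if set(row) != set(schema):
            errors.append(f"row {i}: columns {sorted(row)} != contract {sorted(schema)}")
            continue
        for col, typ in schema.items():
            if row[col] is None:
                errors.append(f"row {i}: null in {col}")
            elif not isinstance(row[col], typ):
                errors.append(f"row {i}: {col} expected {typ.__name__}")
    # Duplicate detection on the business key
    dupes = [k for k, n in Counter(r.get(key) for r in rows).items() if n > 1]
    errors.extend(f"duplicate {key}: {k}" for k in dupes)
    return errors

rows = [
    {"order_id": 1, "customer": "acme", "amount": 9.99},
    {"order_id": 1, "customer": "acme", "amount": 9.99},  # duplicate key
    {"order_id": 2, "customer": None, "amount": 5.00},    # null value
]
issues = validate_rows(rows)  # one null violation, one duplicate-key violation
```

Embedded in a CI/CD pipeline, a non-empty violation list would fail the build and feed the alerting/reporting mechanisms the role describes.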

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.
Roles & Responsibilities:
Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation.
Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification.
Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines.
Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection.
Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations.
Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data.
Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices.
Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption.
Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture.
Experience building or maintaining test data generators.
Contributions to internal quality dashboards or data observability systems.
Awareness of metadata-driven testing approaches and lineage-based validations.
Experience working with agile testing methodologies such as Scaled Agile.
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Must-Have Skills:
6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation.
Strong in SQL, Python, and optionally PySpark; comfortable with writing complex queries and validation scripts.
Practical experience with manual validation of data pipelines and source-to-target testing.
Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance.
Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery.
Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation.
Experience working in Agile/Scrum environments with standard QA processes.
Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation.
Proven experience in manual and automated testing of batch and real-time data pipelines.
Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift).
Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr.
Ability to troubleshoot data issues independently and collaborate with engineering for root cause analysis.
Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro.
Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation.
Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation.
Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.
Education and Professional Certifications:
Bachelor's/Master's degree in computer science and engineering preferred.
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
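The GraphQL contract-compliance checks this role asks for reduce to verifying that a response's fields and types match an agreed contract. A minimal sketch follows; the query shape, field names, and response payload are all hypothetical.

```python
# Hypothetical contract for a `product` node in a GraphQL response
CONTRACT = {"id": str, "name": str, "unitsSold": int}

def check_contract(payload, contract=CONTRACT):
    """Return a list of contract violations for a GraphQL response payload."""
    violations = []
    if "errors" in payload:
        violations.append(f"GraphQL errors: {payload['errors']}")
    node = payload.get("data", {}).get("product")
    if node is None:
        return violations + ["missing data.product"]
    for field, typ in contract.items():
        if field not in node:
            violations.append(f"missing field: {field}")
        elif not isinstance(node[field], typ):
            violations.append(
                f"{field}: expected {typ.__name__}, got {type(node[field]).__name__}"
            )
    return violations

# Simulated response where unitsSold arrives as a string instead of an int
response = {"data": {"product": {"id": "p-1", "name": "Widget", "unitsSold": "42"}}}
violations = check_contract(response)
```

In practice the payload would come from an HTTP call (e.g., via `requests` or Postman collections) and the assertions would live in a pytest suite wired into CI/CD.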

Posted 1 month ago

Apply

3.0 - 7.0 years

4 - 7 Lacs

Hyderabad

Work from Office

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and driving data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing.
Be a key team member that assists in design and development of the data pipeline.
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
Implement data security and privacy measures to protect sensitive data.
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
Collaborate and communicate effectively with product teams.
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions.
Identify and resolve complex data-related challenges.
Adhere to standard methodologies for coding, testing, and designing reusable code/components.
Explore new tools and technologies that will help to improve ETL platform performance.
Participate in sprint planning meetings and provide estimations on technical implementation.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Doctorate degree OR Master's degree and 4 to 6 years of Computer Science, IT or related field experience OR Bachelor's degree and 6 to 8 years of Computer Science, IT or related field experience OR Diploma and 10 to 12 years of Computer Science, IT or related field experience.
Preferred Qualifications:
Functional Skills: Must-Have Skills
Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience in using Databricks for building ETL pipelines and handling big data processing.
Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
Strong knowledge of SQL and experience with relational (e.g., PostgreSQL, MySQL) databases.
Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
Good-to-Have Skills:
Experience with cloud platforms such as AWS, particularly in data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
Strong understanding of data modeling, data warehousing, and data integration concepts.
Understanding of machine learning pipelines and frameworks for ML/AI models.
Professional Certifications: AWS Certified Data Engineer (preferred); Databricks Certified (preferred).
Soft Skills:
Excellent critical-thinking and problem-solving skills.
Strong communication and collaboration skills.
Demonstrated awareness of how to function in a team setting.
Demonstrated presentation skills.
Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
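Maintaining data dictionaries, one of the documentation duties this role lists, can be partially automated by inferring each column's type and null count from sample records. A minimal sketch, with illustrative column names and data:

```python
def build_data_dictionary(rows):
    """Infer a simple data dictionary (type names, null counts) from sample rows."""
    dictionary = {}
    for row in rows:
        for col, value in row.items():
            entry = dictionary.setdefault(col, {"types": set(), "nulls": 0})
            if value is None:
                entry["nulls"] += 1
            else:
                entry["types"].add(type(value).__name__)
    # Collapse the type sets into readable labels
    return {
        col: {"type": "/".join(sorted(e["types"])) or "unknown", "nulls": e["nulls"]}
        for col, e in dictionary.items()
    }

sample = [
    {"patient_id": 101, "visit_date": "2024-01-05", "cost": 120.0},
    {"patient_id": 102, "visit_date": None, "cost": 80.5},
]
dd = build_data_dictionary(sample)  # per-column inferred type and null count
```

The same idea scales to Spark (e.g., reading `df.schema` and aggregating null counts) when the datasets are too large to sample in memory.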

Posted 1 month ago

Apply