Home
Jobs

267 AWS Glue Jobs - Page 2

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Stellantis is seeking a passionate, innovative, results-oriented Information Communication Technology (ICT) Manufacturing AWS Cloud Architect to join the team. As a Cloud Architect, the selected candidate will leverage business analysis, data management, and data engineering skills to develop sustainable data tools supporting Stellantis's Manufacturing Portfolio Planning. This role will collaborate closely with data analysts and business intelligence developers within the Product Development IT Data Insights team.

Job responsibilities include but are not limited to:
- Deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms
- Assembling large, complex sets of data that meet functional and non-functional business requirements
- Identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS, cloud, and other SQL technologies
- Working with stakeholders to support their data infrastructure needs while assisting with data-related technical issues
- Maintaining high-quality ontology and metadata for data systems
- Establishing a strong relationship with the central BI/data engineering COE to ensure alignment in leveraging corporate-standard technologies, processes, and reusable data models
- Ensuring data security and developing traceable procedures for user access to data systems

Qualifications, Experience and Competency

Education: Bachelor's or Master's degree in Computer Science or a related IT-focused field

Experience (Essential):
- Overall 10-15 years of IT experience
- Develop, automate, and maintain the build of AWS components and operating systems
- Work with application and architecture teams to conduct proofs of concept (POC) and implement the designs in a production AWS environment
- Migrate and transform existing workloads from on-premises to AWS
- Minimum 5 years of experience in data engineering or data architecture: concepts, approach, data lakes, data extraction, data transformation
- Proficient in ETL optimization and in designing, coding, and tuning big data processes using Apache Spark or similar technologies
- Experience operating very large data warehouses or data lakes
- Investigate and develop new microservices and features using the latest AWS technology stacks
- Self-starter with the desire and ability to quickly learn new technologies
- Strong interpersonal skills with the ability to communicate and build relationships at all levels
- Hands-on experience with AWS cloud technologies such as S3, AWS Glue, Glue Catalog, Athena, AWS Lambda, AWS DMS, PySpark, and Snowflake (see the sketch after the skill summary below)
- Experience building data pipelines and applications to stream and process large datasets at low latency

Desirable:
- Familiarity with data analytics, engineering processes, and technologies
- Ability to work successfully within a global, cross-functional team
- A passion for technology. We are looking for someone keen to leverage their existing skills while trying new approaches, and to share that knowledge with others to help grow the data and analytics teams at Stellantis to their full potential!
Specific Skill Requirement: AWS services (Glue, DMS, EC2, RDS, S3, VPCs and all core services, Lambda, API Gateway, CloudFormation, CloudWatch, Route 53, Athena, IAM), SQL, Qlik Sense, Python/Spark, ETL optimization.

If interested, please share your updated resume along with the following details: First Name, Last Name, Date of Birth, Passport No. and Expiry Date, Alternate Contact Number, Total Experience, Relevant Experience, Current CTC, Expected CTC, Current Location, Preferred Location, Current Organization, Payroll Company, Notice Period, Holding any offer.
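For context, a minimal sketch of the kind of AWS Glue PySpark job this posting describes: reading a table from the Glue Data Catalog, filtering it, and writing partitioned Parquet to S3. The database, table, and bucket names are hypothetical placeholders, not anything named in the listing.

```python
# Minimal AWS Glue PySpark job sketch: catalog read -> transform -> partitioned S3 write.
# Database, table, and bucket names below are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="manufacturing_raw",   # hypothetical database
    table_name="plant_telemetry",   # hypothetical table
)

# Drop obviously bad rows and convert to a Spark DataFrame for SQL-style work.
df = dyf.toDF().filter("plant_id IS NOT NULL")

# Write back as Parquet partitioned by date for efficient Athena queries.
(df.write.mode("overwrite")
   .partitionBy("event_date")
   .parquet("s3://example-curated-bucket/plant_telemetry/"))  # hypothetical bucket

job.commit()
```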

Posted 1 day ago

Apply

6.0 - 11.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Bachelor's degree (Computer Science), master's degree, Technical Diploma, or equivalent
- At least 8 years of experience in a similar role
- At least 5 years of experience on AWS and/or Azure
- At least 5 years of experience on Databricks
- At least 5 years of experience on multiple Azure and AWS PaaS solutions: Azure Data Factory, MSSQL, Azure Storage, AWS S3, Cognitive Search, Cosmos DB, Event Hub, AWS Glue
- Strong knowledge of AWS and Azure architecture design best practices
- Knowledge of ITIL & Agile methodologies (certifications are a plus)
- Experience working with DevOps tools such as Git, CI/CD pipelines, Ansible, Azure DevOps
- Knowledge of Airflow and Kubernetes is an added advantage
- Solid understanding of networking, security, and Linux
- Business-fluent English is required
- Curious to continuously learn and explore new approaches/technologies
- Able to work under pressure in a multi-vendor, multicultural team
- Flexible, agile, and adaptive to change
- Customer-focused approach
- Good communication skills
- Analytical mindset
- Innovation

Posted 1 day ago

Apply

12.0 - 16.0 years

14 - 20 Lacs

Pune

Work from Office

AI/ML/GenAI AWS SME

Role Overview: An AWS SME with a data science background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role combines cloud computing expertise and data science skills to optimize and innovate business processes.

Key Responsibilities:
- Data Analysis and Modelling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services such as SageMaker, Bedrock, and Textract
- Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows
- Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services (see the sketch below)
- Security and Compliance: Ensuring data security and compliance with industry standards and best practices
- Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps, and business stakeholders, to deliver data-driven solutions
- Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure
- Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders

Skills & Qualifications:
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker)
- Strong programming skills in Python
- Experience with the AI/ML project life cycle
- Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn)
- Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow)
- Excellent communication and collaboration abilities
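As a concrete illustration of the ML deployment work mentioned above, here is a minimal sketch of calling an already-deployed SageMaker endpoint with boto3. The region, endpoint name, and payload schema are assumptions for the example, not details from the posting.

```python
# Sketch: invoking a deployed SageMaker endpoint with boto3.
# Region, endpoint name, and payload schema are hypothetical.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # hypothetical feature vector

response = runtime.invoke_endpoint(
    EndpointName="demand-forecast-prod",  # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)

prediction = json.loads(response["Body"].read())
print(prediction)
```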

Posted 1 day ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Hybrid

PF detection is mandatory.
- Managing data storage solutions on AWS, such as Amazon S3, Amazon Redshift, and Amazon DynamoDB
- Implementing and optimizing data processing workflows using AWS services like AWS Glue, Amazon EMR, and AWS Lambda
- Working with Spotfire engineers and business analysts to ensure data is accessible and usable for analysis and visualization
- Collaborating with other engineers and business stakeholders to understand requirements and deliver solutions
- Writing code in languages like SQL, Python, or Scala to build and maintain data pipelines and applications
- Using Infrastructure as Code (IaC) tools to automate the deployment and management of data infrastructure
- A strong understanding of core AWS services, cloud concepts, and the AWS Well-Architected Framework
- Conducting an extensive inventory/evaluation of existing environments and workflows
- Designing and developing scalable data pipelines using AWS services to ensure efficient data flow and processing
- Integrating/combining diverse data sources to maintain data consistency and reliability
- Working closely with data engineers and other stakeholders to understand data requirements and ensure seamless data integration
- Building and maintaining CI/CD pipelines

Kindly acknowledge this mail with your updated resume.

Posted 1 day ago

Apply

6.0 - 11.0 years

8 - 15 Lacs

Pune

Hybrid

- Experience in developing applications using Python, Glue (ETL), Lambda, and Step Functions in AWS; EKS, S3, Glue, EMR, RDS data stores, CloudFront, API Gateway
- Experience with AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, Lambda

Required Candidate Profile:
- 7+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications
- Research and evaluate new technologies

Posted 1 day ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Mumbai

Hybrid

PF detection is mandatory.
1. Minimum 5 years of experience in database development and ETL tools
2. Strong expertise in SQL and database platforms (e.g., SQL Server, Oracle, PostgreSQL)
3. Proficiency in ETL tools (e.g., Informatica, SSIS, Talend, DataStage) and scripting languages (e.g., Python, Shell)
4. Experience with data modeling and schema design
5. Familiarity with cloud databases and ETL tools (e.g., AWS Glue, Azure Data Factory, Snowflake)
6. Understanding of data warehousing concepts and best practices

Posted 1 day ago

Apply

3.0 - 7.0 years

2 - 5 Lacs

Hyderabad

Work from Office

i. PySpark, Spark SQL, SQL, and Glue
ii. AWS cloud experience
iii. Good understanding of dimensional modelling (see the sketch after this list)
iv. Good understanding of DevOps, CloudOps, DataOps, CI/CD, with an SRE mindset
v. Understanding of Lakehouse and DW architecture
vi. Strong analysis and analytical skills
vii. Understanding of version control systems, specifically Git
viii. Strong in software engineering: APIs, microservices, etc.

Soft skills:
i. Written and oral communication skills
ii. Ability to translate business needs to systems
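A minimal sketch of the dimensional-modelling skill above in PySpark: deriving a date dimension from a fact table and joining through a surrogate key. The S3 path and column names are hypothetical.

```python
# Sketch: simple date dimension + fact join in PySpark. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

orders = spark.read.parquet("s3://example-lake/raw/orders/")  # hypothetical path

# Build a conformed date dimension from the fact data.
dim_date = (orders
    .select(F.to_date("order_ts").alias("date"))
    .distinct()
    .withColumn("date_key", F.date_format("date", "yyyyMMdd").cast("int"))
    .withColumn("year", F.year("date"))
    .withColumn("month", F.month("date")))

# Fact table keyed to the dimension via the surrogate date_key.
fact_orders = (orders
    .withColumn("date_key", F.date_format(F.to_date("order_ts"), "yyyyMMdd").cast("int"))
    .select("date_key", "order_id", "customer_id", "amount"))

(fact_orders.join(dim_date, "date_key")
    .groupBy("year", "month")
    .agg(F.sum("amount").alias("revenue"))
    .show())
```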

Posted 1 day ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Chennai

Hybrid

Data Engineer, AWS

We're looking for a skilled Data Engineer with 5+ years of experience to join our team. You'll play a crucial role in building and optimizing our data infrastructure on AWS, transforming raw data into actionable insights that drive our business forward.

What You'll Do:
- Design & Build Data Pipelines: Develop robust, scalable, and efficient ETL/ELT pipelines using AWS services and Informatica Cloud to move and transform data from diverse sources into our data lake (S3) and data warehouse (Redshift)
- Optimize Data Models & Architecture: Create and maintain performant data models and contribute to the overall data architecture to support both analytical and operational needs
- Ensure Data Quality & Availability: Monitor and manage data flows, ensuring data accuracy, security, and consistent availability for all stakeholders
- Collaborate for Impact: Work closely with data scientists, analysts, and business teams to understand requirements and deliver data solutions that drive business value

What You'll Bring:
- 5+ years of experience as a Data Engineer, with a strong focus on AWS
- Proficiency in SQL for complex data manipulation and querying
- Hands-on experience with core AWS data services (see the crawler sketch below):
  - Storage: Amazon S3 (data lakes, partitioning)
  - Data Warehousing: Amazon Redshift
  - ETL/ELT: AWS Glue (Data Catalog, crawlers)
  - Serverless & Orchestration: AWS Lambda, AWS Step Functions
  - Security: AWS IAM
  - Reports: Looker and Power BI
- Experience with big data technologies like PySpark
- Experience with Informatica Cloud or similar ETL tools
- Strong problem-solving skills and the ability to optimize data processes
- Excellent communication skills and a collaborative approach

Added Advantage:
- Experience with Python for data manipulation and scripting
- Looker or Power BI experience is desirable
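A small sketch of the Glue crawler workflow referenced above: triggering a crawler so new S3 partitions are registered in the Data Catalog, then polling for completion. The crawler name and region are hypothetical.

```python
# Sketch: start a Glue crawler and wait for the Data Catalog refresh to finish.
# Crawler name and region are hypothetical.
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

CRAWLER = "sales-s3-crawler"  # hypothetical crawler name

glue.start_crawler(Name=CRAWLER)

# Poll the crawler state; READY means the run (and catalog update) is complete.
while True:
    state = glue.get_crawler(Name=CRAWLER)["Crawler"]["State"]
    if state == "READY":
        break
    time.sleep(30)

print("Catalog refreshed; new partitions are queryable from Athena or Redshift Spectrum.")
```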

Posted 1 day ago

Apply

8.0 - 13.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Position: Python Developer + AWS
Location: Hyderabad
Job type: Contract-to-hire, on the payroll of Randstad Digital
Experience: 8+ years
Face-to-face interview: 2nd July 2025
Number of positions: 5
Need only immediate joiners; 4 days WFO

Primary Skills (mandatory top 3 skills):
- AWS working experience
- AWS Glue or equivalent product experience
- Lambda functions
- Python programming
- Kubernetes knowledge

Secondary Skills (good to have):
- Data quality
- Data governance knowledge
- Migration experience
- CI/CD (Jules) working knowledge

Posted 1 day ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Indore, Pune, Bengaluru

Hybrid

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues.

Required Candidate Profile:
- 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow
- Good understanding of the investment domain
- Experience with dbt; cloud experience (AWS, Azure); DevOps

Posted 1 day ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Gurugram

Hybrid

Role & Responsibilities

Skill: Data Engineer, PySpark, Athena, AWS Glue
Notice period: Immediate joiners
Location: Gurgaon
Experience: 5 to 12 years

Responsibilities:
- Designing and implementing cloud-based solutions, with a focus on AWS, and operationalizing development in production
- Building and managing infrastructure on AWS using infrastructure-as-code tools like Terraform
- Developing efficient and clean automation scripts using languages such as Python (see the sketch below)
- Designing and building the reporting layer for various data sources
- Leading key data architecture decisions throughout the development lifecycle
- Developing data pipelines and ETL processes, utilizing tools such as AWS Lambda, Redshift, and Glue
- Collaborating with cross-functional teams to support the productionalization of ML/AI models
- Identifying, designing, and implementing internal process improvements, with a focus on automating manual tasks
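A minimal sketch of the kind of Python automation script this role describes: starting an AWS Glue job run with boto3 and polling until it reaches a terminal state. The job name and region are hypothetical.

```python
# Sketch: start a Glue job run and poll until it finishes. Names are hypothetical.
import sys
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

JOB_NAME = "nightly-reporting-etl"  # hypothetical Glue job

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job {JOB_NAME} finished with state {state}")
sys.exit(0 if state == "SUCCEEDED" else 1)
```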

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Purpose

The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do

1. Instrumental in understanding the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas, following the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them to feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of existing systems
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback regularly to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document necessary details and reports formally for proper understanding of the software from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Deliver (Performance Parameter: Measure)
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. MIS & reporting: 100% on-time MIS & report generation

Mandatory Skills: AWS Glue.
Experience: 3-5 years.

Posted 2 days ago

Apply

5.0 - 8.0 years

5 - 15 Lacs

Kolkata, Pune

Work from Office

Data Engineer
Mandatory skills: Python, PySpark & AWS Glue
Location: Pune & Kolkata
Share CV at Muktai.S@alphacom.in

Posted 2 days ago

Apply

4.0 - 8.0 years

18 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Python Data Engineer: 4+ years of experience in backend development with Python. Strong experience with AWS services and cloud architecture. Proficiency in developing RESTful APIs and microservices. Experience with database technologies such as SQL, PostgreSQL, and NoSQL databases. Familiarity with containerization and orchestration tools like Docker and Kubernetes. Knowledge of CI/CD pipelines and tools such as Jenkins, GitLab CI, or AWS CodePipeline. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
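As an illustration of the RESTful API skill this posting lists, here is a minimal microservice endpoint sketch. FastAPI is an assumed framework choice (the posting names none), and the routes, model, and in-memory store are hypothetical.

```python
# Sketch: a minimal RESTful endpoint pair. FastAPI is an assumption; the posting
# names no framework. Run with: uvicorn module_name:app
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Hypothetical in-memory store standing in for PostgreSQL/NoSQL backends.
ITEMS: dict[int, dict] = {1: {"name": "sample", "price": 9.99}}


class Item(BaseModel):
    name: str
    price: float


@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return ITEMS[item_id]


@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item) -> dict:
    ITEMS[item_id] = item.model_dump()
    return ITEMS[item_id]
```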

Posted 2 days ago

Apply

3.0 - 8.0 years

5 - 11 Lacs

Pune, Mumbai (All Areas)

Hybrid

Overview: TresVista is looking to hire an Associate in its Data Intelligence Group, who will be primarily responsible for managing clients as well as monitoring/executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers & Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.

Roles and Responsibilities:
- Interacting with the client (internal or external) to understand their problems and work on solutions that address their needs
- Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information/tasks to the various team members
- Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams
- Creating ad-hoc reports for multiple internal requests across departments
- Automating processes using data transformation tools

Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in reporting and dashboard creation using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience would be preferred
- Experience in the Financial Services domain will be an added plus
- Experience in Machine Learning tools and techniques will be good to have

Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards

Posted 3 days ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Role: Senior Data Engineer
Location: Bangalore (Hybrid)
Experience: 10+ years

Job Requirements:
- ETL & Data Pipelines: Experience building and maintaining ETL pipelines over large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
- Programming & Data Processing: Strong Python development experience with proficiency in Spark or PySpark; experience in using APIs
- Database Management: Strong skills in writing SQL queries and performance tuning in AWS Redshift; proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
- AWS Services: Proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and ML models

Interested candidates can share their resume at Neesha1@damcogroup.com

Posted 3 days ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Python (Programming Language)
Good-to-have skills: AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Mentor junior team members to enhance their skills and knowledge
- Continuously evaluate and improve application performance and user experience

Professional & Technical Skills:
- Must-have skills: Proficiency in Python (Programming Language), AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
- Good-to-have skills: Experience with AWS S3 (Simple Storage Service), AWS Lambda Administration, AWS Glue
- Strong understanding of software development life cycle methodologies
- Experience with version control systems such as Git
- Familiarity with RESTful APIs and web services

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language)
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Posted 3 days ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office

We are looking for energetic, self-motivated, and exceptional data engineers to work on extraordinary enterprise products based on AI and big data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of architects, data scientists/AI specialists, data engineers, and integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack
- 2+ years of experience in building data pipelines with Databricks/PySpark/SQL
- Experience in writing and interpreting SQL queries, designing data models and data standards
- Experience in SQL Server databases, Oracle, and/or cloud databases
- Experience in data warehousing and data marts, Star and Snowflake models
- Experience in loading data into databases from databases and files
- Experience in analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships of systems and applications
- Must be comfortable conversing with end-users
- Must have the ability to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders and build data solutions to address analytical & reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow (see the Delta Lake sketch below)
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop/optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta tables, Parquet), and views to ensure data integrity and performance
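A minimal sketch of the Delta Lake pipeline work described above: a bronze-to-silver cleanup step in PySpark. It assumes a Spark session with Delta support (as on Databricks); paths and column names are hypothetical.

```python
# Sketch: bronze -> silver Delta transformation. Assumes Delta Lake support is
# available (e.g., a Databricks cluster). Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-etl-sketch").getOrCreate()

# Bronze: raw ingested events.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")  # hypothetical path

# Silver: cleaned and de-duplicated, with a partition column derived.
silver = (bronze
    .filter(F.col("event_id").isNotNull())
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts")))

(silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/lake/silver/events"))  # hypothetical path
```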

Posted 3 days ago

Apply

6.0 - 8.0 years

22 - 25 Lacs

Bengaluru

Work from Office

We are looking for an energetic, self-motivated, and exceptional data engineer to work on extraordinary enterprise products based on AI and big data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of architects, data scientists/AI specialists, data engineers, and integration specialists.

Skills and Qualifications:
- 5+ years of experience in the DWH/ETL domain; Databricks/AWS tech stack
- 2+ years of experience in building data pipelines with Databricks/PySpark/SQL
- Experience in writing and interpreting SQL queries, designing data models and data standards
- Experience in SQL Server databases, Oracle, and/or cloud databases
- Experience in data warehousing and data marts, Star and Snowflake models
- Experience in loading data into databases from databases and files
- Experience in analyzing and drawing design conclusions from data profiling results
- Understanding of business processes and the relationships of systems and applications
- Must be comfortable conversing with end-users
- Must have the ability to manage multiple projects/clients simultaneously
- Excellent analytical, verbal, and communication skills

Role and Responsibilities:
- Work with business stakeholders and build data solutions to address analytical & reporting requirements
- Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements
- Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow
- Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation
- Develop/optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases
- Conduct root cause analysis and resolve production problems and data issues
- Create and maintain up-to-date documentation of the data model, data flow, and field-level mappings
- Provide support for production problems and daily batch processing
- Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta tables, Parquet), and views to ensure data integrity and performance

Immediate joiners.

Posted 3 days ago

Apply

7.0 - 9.0 years

15 - 30 Lacs

Thiruvananthapuram

Work from Office

Job Title: Senior Data Associate - Cloud Data Engineering
Experience: 7+ years
Employment Type: Full-time
Industry: Information Technology / Data Engineering / Cloud Platforms

Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both the AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory
- Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions
- Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets
- Implement data quality checks, monitoring, and governance practices (see the sketch below)
- Optimize data workflows for performance, scalability, and cost-efficiency
- Support data migration and modernization initiatives across cloud platforms
- Document data flows, architecture, and technical specifications

Required Skills & Qualifications:
- 7+ years of experience in data engineering, data integration, or related roles
- Strong hands-on experience with: AWS Redshift, Athena, Glue, S3; Azure Data Lake, Synapse Analytics, Databricks; PySpark for distributed data processing; MongoDB and Oracle databases
- Proficiency in SQL, Python, and data modeling
- Experience with ETL/ELT design and implementation
- Familiarity with data governance, security, and compliance standards
- Strong problem-solving and communication skills

Preferred Qualifications:
- Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate)
- Experience with CI/CD pipelines and DevOps for data workflows
- Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview)
- Exposure to real-time data processing and streaming technologies

Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake
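A minimal sketch of the data-quality checks mentioned above, written in PySpark. The dataset path, key column, and thresholds are hypothetical choices for illustration.

```python
# Sketch: simple completeness and uniqueness checks in PySpark.
# Path, column names, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()

df = spark.read.parquet("s3://example-lake/curated/customers/")  # hypothetical path

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the pipeline step if completeness or uniqueness thresholds are breached.
assert null_ids == 0, f"{null_ids} rows with null customer_id"
assert dupes / max(total, 1) < 0.01, f"duplicate ratio too high: {dupes}/{total}"

print(f"DQ passed: {total} rows, 0 null keys, {dupes} duplicates")
```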

Posted 3 days ago

Apply

12.0 - 15.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Senior Data Architect - Big Data & Cloud Solutions
Experience: 10+ years
Industry: Information Technology / Data Engineering / Cloud Computing

Job Summary: We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of scalable, high-performance data solutions. The ideal candidate will have deep expertise in Apache Kafka, Apache Spark, AWS Glue, PySpark, and cloud-native architectures, with a strong background in solution architecture and enterprise data strategy.

Key Responsibilities:
- Design and implement end-to-end data architecture solutions on AWS using Glue, S3, Redshift, and other services
- Architect and optimize real-time data pipelines using Apache Kafka and Spark Streaming (see the sketch below)
- Lead the development of ETL/ELT workflows using PySpark and AWS Glue
- Collaborate with stakeholders to define data strategies, governance, and best practices
- Ensure data quality, security, and compliance across all data platforms
- Provide technical leadership and mentorship to data engineers and developers
- Evaluate and recommend new tools and technologies to improve data infrastructure
- Translate business requirements into scalable and maintainable data solutions

Required Skills & Qualifications:
- 10+ years of experience in data engineering, architecture, or related roles
- Strong hands-on experience with: Apache Kafka (event streaming, topic design, schema registry); Apache Spark (batch and streaming); AWS Glue, S3, Redshift, Lambda, CloudFormation/Terraform; PySpark for large-scale data processing
- Proven experience in solution architecture and designing cloud-native data platforms
- Deep understanding of data modeling, data lakes, and data warehousing concepts
- Strong programming skills in Python and SQL
- Experience with CI/CD pipelines and DevOps practices for data workflows
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- AWS Certified Solutions Architect or Big Data Specialty certification
- Experience with data governance tools and frameworks
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Step Functions)
- Exposure to machine learning pipelines and MLOps is a plus

Required Skills: Apache, PySpark, AWS Cloud, Kafka
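A minimal sketch of the Kafka-plus-Spark-Streaming pattern named above: a Structured Streaming job reading a Kafka topic and landing the stream to S3. The broker, topic, and paths are hypothetical, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Sketch: Spark Structured Streaming from Kafka to S3. Broker, topic, and paths
# are hypothetical; requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load())

# Kafka values arrive as bytes; cast to string before parsing downstream.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (parsed.writeStream
    .format("parquet")
    .option("path", "s3://example-lake/streaming/orders/")           # hypothetical
    .option("checkpointLocation", "s3://example-lake/checkpoints/orders/")
    .start())

query.awaitTermination()
```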

Posted 3 days ago

Apply

7.0 - 9.0 years

5 - 5 Lacs

Thiruvananthapuram

Work from Office

Azure Infrastructure Consultant - Cloud & Data Integration
Experience: 8+ years
Employment Type: Full-time
Industry: Information Technology / Cloud Infrastructure / Data Engineering

Job Summary: We are looking for a seasoned Azure Infrastructure Consultant with a strong foundation in cloud infrastructure, data integration, and real-time data processing. The ideal candidate will have hands-on experience across Azure and AWS platforms, with deep knowledge of Apache NiFi, Kafka, AWS Glue, and PySpark. This role involves designing and implementing secure, scalable, and high-performance cloud infrastructure and data pipelines.

Key Responsibilities:
- Design and implement Azure-based infrastructure solutions, ensuring scalability, security, and performance
- Lead hybrid cloud integration projects involving Azure and AWS services
- Develop and manage ETL/ELT pipelines using AWS Glue, Apache NiFi, and PySpark
- Architect and support real-time data streaming solutions using Apache Kafka
- Collaborate with cross-functional teams to gather requirements and deliver infrastructure and data solutions
- Implement infrastructure automation using tools like Terraform, ARM templates, or Bicep
- Monitor and optimize cloud infrastructure and data workflows for cost and performance
- Ensure compliance with security and governance standards across cloud environments

Required Skills & Qualifications:
- 8+ years of experience in IT infrastructure and cloud consulting
- Strong hands-on experience with: Azure IaaS/PaaS (VMs, VNets, Azure AD, App Services, etc.); AWS services including Glue, S3, Lambda; Apache NiFi for data ingestion and flow management; Apache Kafka for real-time data streaming; PySpark for distributed data processing
- Proficiency in scripting (PowerShell, Python) and Infrastructure as Code (IaC)
- Solid understanding of networking, security, and identity management in cloud environments
- Strong communication and client-facing skills

Preferred Qualifications:
- Azure or AWS certifications (e.g., Azure Solutions Architect, AWS Data Analytics Specialty)
- Experience with CI/CD pipelines and DevOps practices
- Familiarity with containerization (Docker, Kubernetes) and orchestration
- Exposure to data governance tools and frameworks

Required Skills: Azure, Microsoft Azure, Azure PaaS, AWS Glue

Posted 3 days ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

The resource should have a strong background in working with cloud platforms, APIs, and data processing. Experience with tools like AWS Glue, Athena, and Databricks will be highly beneficial.

- AWS Glue Jobs: The QE should be familiar with AWS Glue jobs for ETL processes. We expect them to validate the successful execution of around Glue jobs, ensuring that transformations and data ingestion tasks work smoothly without errors.
- Athena Querying: Experience querying data using AWS Athena is a must, as the QE will be required to validate queries across multiple datasets. We expect the resource to run and validate Athena queries for data accuracy and integrity (see the sketch below).
- Databricks Testing: The candidate should also have experience with Databricks, particularly in validating data pipelines and transformations within the Databricks environment. The QE will need to test Databricks notebooks or jobs, ensuring data accuracy in the Bronze, Silver, and Gold layers.
- Boomi integrations
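A minimal sketch of the Athena validation duty above: running a data-integrity query with boto3 and asserting on the result. The database, table, output location, and region are hypothetical.

```python
# Sketch: run and validate an Athena query with boto3. Database, table, output
# location, and region are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT COUNT(*) AS n FROM curated.orders WHERE order_id IS NULL",
    QueryExecutionContext={"Database": "curated"},                       # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

assert state == "SUCCEEDED", f"Athena query ended in state {state}"

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
null_count = int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row
assert null_count == 0, f"{null_count} orders with null order_id"
```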

Posted 3 days ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Ahmedabad

Work from Office

Travel Designer Group

Founded in 1999, Travel Designer Group has consistently achieved remarkable milestones in a relatively short span of time. While we embody the agility, growth mindset, and entrepreneurial energy typical of start-ups, we bring with us over 24 years of deep-rooted expertise in the travel trade industry. As a leading global travel wholesaler, we serve as a vital bridge connecting hotels, travel service providers, and an expansive network of travel agents worldwide. Our core strength lies in sourcing, curating, and distributing high-quality travel inventory through our award-winning B2B reservation platform, RezLive.com. This enables travel trade professionals to access real-time availability and competitive pricing to meet the diverse needs of travelers globally.

Our expanding portfolio includes innovative products such as:
- Rez.Tez
- Affiliate.Travel
- Designer Voyages
- Designer Indya
- RezRewards
- RezVault

With a presence in 32+ countries and a growing team of 300+ professionals, we continue to redefine travel distribution through technology, innovation, and a partner-first approach.

Website: https://www.traveldesignergroup.com/

Profile: ETL Developer

ETL tools (any 1): Talend / Apache NiFi / Pentaho / AWS Glue / Azure Data Factory / Google Dataflow
Workflow & orchestration (any 1, good to have, not mandatory): Apache Airflow / dbt (Data Build Tool) / Luigi / Dagster / Prefect / Control-M
Programming & scripting: SQL (advanced); Python (mandatory); Bash/Shell (mandatory); Java or Scala (optional, for Spark)
Databases & data warehousing: MySQL / PostgreSQL / SQL Server / Oracle (mandatory); Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse Analytics, MongoDB / Cassandra (good to have)
Cloud & data storage (any 1-2): AWS S3 / Azure Blob Storage / Google Cloud Storage (mandatory); Kafka / Kinesis / Pub/Sub

Interested candidates can also share their resume at shivani.p@rezlive.com

Posted 4 days ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: Data Engineer II (Python, SQL)
As a Data Engineer II, you will work on designing, building, and maintaining scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines (ETL/ELT)
- Ingest, clean, transform, and curate data for analytics and ML usage
- Work with orchestration tools like Airflow to schedule and manage workflows (see the DAG sketch below)
- Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect)
- Build data models and enable real-time and batch processing using Spark and AWS services
- Collaborate with DevOps and architects on system scalability and performance
- Optimize Redshift-based data solutions for performance and reliability

Must-Have Skills & Experience:
- 3+ years in data engineering or data science with strong ETL and pipeline experience
- Expertise in Python and SQL
- Strong experience in data warehousing, data lakes, data modeling, and ingestion
- Working knowledge of Airflow or similar orchestration tools
- Hands-on data extraction experience, batch-based and CDC, using Debezium, Kafka Connect, AWS DMS
- Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
- Knowledge of Spark or similar distributed systems
- Experience with queuing/messaging systems like SQS, Kinesis, RabbitMQ
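A minimal sketch of the Airflow orchestration mentioned above: a daily extract-transform-load DAG (Airflow 2.x API). The task bodies are hypothetical placeholders standing in for real pipeline steps.

```python
# Sketch: a minimal daily ETL DAG in Airflow 2.x. Task bodies are hypothetical
# placeholders for real extract/transform/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull increments from source (e.g., via AWS DMS or Debezium)")


def transform():
    print("clean and model the data (e.g., with Spark)")


def load():
    print("load curated data into Redshift")


with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3
```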

Posted 4 days ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies