8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Cloud Architect to design and oversee scalable, secure, and cost-efficient cloud solutions. Great for architects who bridge technical vision with business needs.

Key Responsibilities:
- Design cloud-native solutions using AWS, Azure, or GCP
- Lead cloud migration and transformation projects
- Define cloud governance, cost control, and security strategies
- Collaborate with DevOps and engineering teams on implementation

Required Skills & Qualifications:
- Deep expertise in cloud architecture and multi-cloud environments
- Experience with containers, serverless, and microservices
- Proficiency in Terraform, CloudFormation, or an equivalent IaC tool
- Bonus: Cloud certification (AWS/Azure/GCP Architect)

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
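For a flavor of the infrastructure-as-code work this role calls for, here is a minimal sketch in Python using the AWS CDK, one alternative to the Terraform/CloudFormation tooling named above. The stack and bucket names are hypothetical; the point is expressing governance concerns (encryption, public-access blocking, retention) declaratively.

```python
# A minimal sketch assuming AWS CDK v2 for Python (pip install aws-cdk-lib).
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class GovernedDataStack(Stack):
    """Illustrative stack enforcing encryption and blocking public access."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "AnalyticsBucket",               # hypothetical logical id
            encryption=s3.BucketEncryption.S3_MANAGED,    # encrypt at rest
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,   # guard against accidental deletion
        )

app = App()
GovernedDataStack(app, "governed-data")
app.synth()
```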
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

Looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.

Key Responsibilities:
- Develop ETL workflows using cloud data services
- Manage data storage, lakes, and warehouses
- Ensure data quality and pipeline reliability

Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse
- Proficiency in SQL, Python, or Spark
- Familiarity with data lake architecture and batch/streaming

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
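As an illustration of the cloud ETL work described above, here is a minimal sketch that loads a CSV file from Cloud Storage into BigQuery with the official Python client. The bucket, dataset, and table names are hypothetical.

```python
# pip install google-cloud-bigquery; assumes application-default credentials.
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical source file and destination table.
uri = "gs://example-bucket/raw/sales.csv"
table_id = "my_project.analytics.sales"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,                 # skip the header row
    autodetect=True,                     # infer the schema from the file
    write_disposition="WRITE_TRUNCATE",  # replace the table on each run
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load completes

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```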
Posted 2 months ago
5 - 7 years
16 - 25 Lacs
Gurugram
Work from Office
Key Responsibilities:
1. Understand, implement, and automate ETL pipelines to current industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lake / data warehouse) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience with distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc. (see the streaming sketch below)
7. At least some experience with other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
10. Must have 5-7 years of experience
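Point 6 above mentions Kafka and Spark Streaming; as a hedged illustration, here is a minimal PySpark Structured Streaming sketch that aggregates a hypothetical stream of energy meter readings. The broker address, topic, and field names are assumptions.

```python
# Requires the spark-sql-kafka-0-10 package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("energy-stream").getOrCreate()

schema = StructType([
    StructField("meter_id", StringType()),
    StructField("reading_kwh", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "meter-readings")             # hypothetical topic
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("r"))
          .select("r.*"))

# Windowed consumption totals per meter, tolerating 10 minutes of late data.
agg = (parsed.withWatermark("ts", "10 minutes")
       .groupBy(F.window("ts", "5 minutes"), "meter_id")
       .agg(F.sum("reading_kwh").alias("total_kwh")))

query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```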
Posted 2 months ago
3 - 8 years
4 - 9 Lacs
Kolkata
Work from Office
Role Overview:
We are seeking a skilled and analytical Climate Risk Analyst to join our sustainability team. In this role, you will apply NGFS and IPCC frameworks to evaluate and quantify both physical and transition climate risks across diverse client portfolios. You'll build probabilistic models, design stress-testing scenarios, and integrate datasets ranging from GIS layers to supply-chain metrics. Partnering with finance, operations, and data engineering teams, you'll translate complex climate trajectories such as carbon pricing, regulatory shifts, and technology adoption into clear, actionable insights and robust risk-management strategies that help our clients enhance resilience and capitalize on emerging opportunities.

Key Responsibilities:
- Conduct climate risk assessments using NGFS and IPCC frameworks to identify client-specific risks and opportunities
- Develop and execute stress-testing scenarios to evaluate climate impacts on financial portfolios
- Collect, clean, and integrate diverse datasets (e.g., geospatial layers, historical weather records, supply-chain metrics)
- Build probabilistic models to quantify hazard frequency, exposure, and vulnerability at the asset and portfolio levels
- Parameterize transition-risk factors (carbon-pricing trajectories, regulatory changes, technology adoption rates) across multiple climate pathways
- Collaborate with cross-functional teams to produce comprehensive reports and presentations on climate risk management strategies
- Translate quantitative findings into clear recommendations for risk mitigation and opportunity capture

Required Skills:
- Strong understanding of physical and transition climate-risk concepts and frameworks (NGFS, IPCC)
- Proven experience in stress-testing and scenario analysis within a financial or risk-management context
- Proficiency in statistical and probabilistic modeling techniques (e.g., Monte Carlo simulation)
- Hands-on experience with data integration and analysis tools (Python/R, SQL, GIS software)
- Familiarity with cloud-based data platforms and ETL pipelines
- Excellent quantitative and analytical abilities, with strong attention to detail
- Exceptional communication skills, able to distill complex analyses into concise, client-ready insights
- Proven ability to work collaboratively in cross-disciplinary teams

Perks and Benefits:
- Work on a ground-breaking product that significantly contributes to sustainability
- Exposure to advanced AI tools and methodologies to enhance your development skills and productivity
- Competitive salary with a comprehensive benefits package
- Flexible work arrangements
- A vibrant, inclusive, and supportive team environment
- Opportunities for professional growth and continuous learning
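The probabilistic-modeling requirement above (e.g., Monte Carlo simulation) can be illustrated with a short Python sketch of simulated annual flood losses. Every parameter here (event frequency, damage-ratio distribution, asset value) is invented for demonstration, not a calibrated figure.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

asset_value = 50e6   # hypothetical asset value
lam = 0.2            # assumed mean flood events per year (Poisson)

events = rng.poisson(lam, n_sims)   # number of events in each simulated year
losses = np.zeros(n_sims)
for i, k in enumerate(events):
    if k:
        # Damage ratio per event ~ Beta(2, 8); cap total loss at asset value.
        losses[i] = min(asset_value, asset_value * rng.beta(2, 8, k).sum())

print(f"Expected annual loss: {losses.mean():,.0f}")
print(f"99th-percentile annual loss: {np.percentile(losses, 99):,.0f}")
```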
Posted 2 months ago
5 - 9 years
7 - 11 Lacs
Kochi, Coimbatore, Thiruvananthapuram
Work from Office
Job Title: Senior Data Engineer (Graph DB Specialist), Global Song
Management Level: 9, Specialist
Location: Kochi, Coimbatore
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: Proficiency in Python and PySpark programming

Job Summary: We are seeking a highly skilled Data Engineer with expertise in graph databases to join our dynamic team. The ideal candidate will have a strong background in data engineering, graph querying languages, and data modeling, with a keen interest in leveraging cutting-edge technologies like vector databases and LLMs to drive functional objectives.

Your responsibilities will include:
- Design, implement, and maintain ETL pipelines to prepare data for graph-based structures.
- Develop and optimize graph database solutions using querying languages such as Cypher, SPARQL, or GQL; Neo4j experience is preferred.
- Build and maintain ontologies and knowledge graphs, ensuring efficient and scalable data modeling.
- Integrate vector databases and implement similarity search techniques, with a focus on Retrieval-Augmented Generation (RAG) methodologies and GraphRAG.
- Collaborate with data scientists and engineers to operationalize machine learning models and integrate them with graph databases.
- Work with Large Language Models (LLMs) to achieve functional and business objectives.
- Ensure data quality, integrity, and security while delivering robust and scalable solutions.
- Communicate effectively with stakeholders to understand business requirements and deliver solutions that meet objectives.

Roles & Responsibilities:
- Experience: At least 5 years of hands-on experience in data engineering, including 2 years working with graph databases.
- Querying: Advanced knowledge of Cypher, SPARQL, or GQL.
- ETL Processes: Expertise in designing and optimizing ETL processes for graph structures.
- Data Modeling: Strong skills in creating ontologies and knowledge graphs, and in presenting data for GraphRAG-based solutions.
- Vector Databases: Understanding of similarity search techniques and RAG implementations.
- LLMs: Experience working with Large Language Models for functional objectives.
- Communication: Excellent verbal and written communication skills.
- Cloud Platforms: Experience with Azure analytics platforms, including Function Apps, Logic Apps, and Azure Data Lake Storage (ADLS).
- Graph Analytics: Familiarity with graph algorithms and analytics.
- Agile Methodology: Hands-on experience working in Agile teams and processes.
- Machine Learning: Understanding of machine learning models and their implementation.

Qualifications:
- Experience: A minimum of 5-10 years of experience is required
- Educational Qualification: Any graduation / BE / B.Tech
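Since the role prefers Neo4j and Cypher, here is a minimal sketch using the official Neo4j Python driver. The connection details, node labels, and property names are hypothetical.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Hypothetical connection details and schema, for illustration only.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (d:Drug)-[:TREATS]->(c:Condition {name: $condition})
RETURN d.name AS drug
ORDER BY drug
"""

with driver.session() as session:
    result = session.run(CYPHER, condition="hypertension")
    for record in result:
        print(record["drug"])

driver.close()
```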
Posted 2 months ago
6 - 8 years
25 - 27 Lacs
Noida
Work from Office
We are seeking a highly skilled Python AI/ML professional to join our growing data science and machine learning team. The ideal candidate will have solid experience in developing, deploying, and optimizing AI/ML models using Python and related tools. You will work on real-world problems, build intelligent systems, and contribute to cutting-edge projects across various domains.

Key Responsibilities:
- Design, build, and deploy machine learning models and AI solutions using Python
- Clean, preprocess, and analyze large datasets to extract meaningful insights
- Implement models using libraries such as Scikit-learn, TensorFlow, PyTorch, or similar frameworks
- Build scalable data pipelines and APIs for ML model deployment
- Collaborate with data engineers, analysts, and product teams to deliver business-driven AI solutions

Key Skills: Python AI/ML, integration, machine learning model deployment, ML monitoring, ETL pipelines, FastAPI, Flask, cloud platforms, Docker, Kubernetes, Git, and CI/CD tools

Education: Bachelor's degree in Computer Science, Information Technology, or a related field
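The posting pairs model deployment with FastAPI; a minimal sketch of serving a pre-trained scikit-learn model over HTTP might look like the following. The model file and feature layout are assumptions.

```python
# pip install fastapi uvicorn scikit-learn joblib
from fastapi import FastAPI
from pydantic import BaseModel
import joblib
import numpy as np

app = FastAPI(title="Illustrative ML scoring service")
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    values: list[float]  # assumed flat numeric feature vector

@app.post("/predict")
def predict(features: Features):
    X = np.array(features.values).reshape(1, -1)  # single-row batch
    prediction = model.predict(X)
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn main:app --reload
```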
Posted 2 months ago
7 - 10 years
15 - 25 Lacs
Bengaluru
Hybrid
Job Title: Python / PySpark Engineer
Experience: 8+ years
Location: Hyderabad/Bangalore (hybrid)
Job Type: Full-time

Job Description:
- 8+ years of experience in data engineering with supply chain analytics
- Strong knowledge of data pipelines, data modelling, and metadata management
- Experience with data lakes, data warehouses, data hubs, and ETL tools such as PySpark and Oracle GoldenGate replication
- Languages and technologies: SQL, Python, PySpark, Scala, Spark, SQL/NoSQL databases
- Develop and maintain scalable PySpark-based ETL pipelines for big data processing
- Optimize Spark jobs through partitioning, caching, and performance tuning techniques
- Ensure data quality with validation frameworks and error-handling mechanisms
- Work with structured and unstructured data, handling transformations efficiently
- Implement CI/CD pipelines for automated data pipeline deployment and monitoring
- Relational databases: experience managing and optimizing relational databases (e.g., Oracle, PostgreSQL, SQL Server)
- NoSQL databases: experience managing and optimizing NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data
- Expertise in testing and deployment of data applications (e.g., provisioning resources, deploying and monitoring workflows and data quality)
- Expertise in Apache Airflow
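The description highlights Spark optimization through partitioning and caching; here is a short PySpark sketch of those techniques. Paths, column names, and table shapes are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("etl-tuning").getOrCreate()

orders = spark.read.parquet("s3://bucket/orders/")        # hypothetical paths
suppliers = spark.read.parquet("s3://bucket/suppliers/")

# Broadcast the small dimension table to avoid a shuffle join.
enriched = orders.join(broadcast(suppliers), "supplier_id")

# Cache a DataFrame that several downstream aggregations reuse.
enriched.cache()

daily = enriched.groupBy("order_date").agg(F.sum("qty").alias("total_qty"))
by_region = enriched.groupBy("region").agg(F.countDistinct("order_id").alias("orders"))

# Partition output by date so downstream reads can prune files.
daily.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://bucket/curated/daily/")
```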
Posted 2 months ago
3 - 6 years
7 - 11 Lacs
Hyderabad
Work from Office
Sr Semantic Engineer - Research Data and Analytics

What you will do
Let's do this. Let's change the world. In this vital role you will join Research's Semantic Graph Team as a dedicated and skilled Semantic Data Engineer, building and optimizing knowledge graph-based software and data resources. The role primarily focuses on technologies such as RDF, SPARQL, and Python, and also involves semantic data integration and cloud-based data engineering. The ideal candidate has experience in the pharmaceutical or biotech industry, deep technical skills, proficiency with big data technologies, and experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential. You will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively.

Roles & Responsibilities:
- Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies.
- Develop and maintain semantic data models for biopharma scientific data.
- Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks.
- Ensure interoperability across federated data sources, linking relational and graph-based data.
- Implement and optimize CI/CD pipelines using GitLab and AWS.
- Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions.
- Collaborate with global multi-functional teams, including research scientists, Data Architects, Business SMEs, Software Engineers, and Data Scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Collaborate with data scientists, engineers, and domain experts to improve research data accessibility.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Maintain comprehensive documentation of processes, systems, and solutions.
- Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in those fields, OR
- Diploma with 10-12 years of experience in those fields

Preferred Qualifications and Experience:
- 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:
Must-Have Skills:
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g., AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g., Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices.
- Cloud and automation expertise: good experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions.
- Technical problem-solving: excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets.

Good-to-Have Skills:
- Experience in biotech/drug discovery data engineering
- Experience applying knowledge graphs, taxonomy, and ontology concepts in the life sciences and chemistry domains
- Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune)
- Familiarity with Cypher, GraphQL, or other graph query languages
- Experience with big data tools (e.g., Databricks)
- Experience in biomedical or life sciences research data management

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
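To illustrate the RDF/SPARQL skills this role centers on, here is a minimal Python sketch using rdflib. The ontology prefix, data file, and predicates are hypothetical placeholders, not Amgen's actual schema.

```python
from rdflib import Graph  # pip install rdflib

g = Graph()
g.parse("compounds.ttl", format="turtle")  # hypothetical RDF dataset

SPARQL = """
PREFIX ex: <http://example.org/schema#>
SELECT ?compound ?target
WHERE {
    ?compound a ex:Compound ;
              ex:inhibits ?target .
}
LIMIT 10
"""

# Each row is a tuple of bound variables from the SELECT clause.
for row in g.query(SPARQL):
    print(row.compound, row.target)
```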
Posted 2 months ago
2 - 4 years
4 - 7 Lacs
Hyderabad
Work from Office
Associate Data Engineer, Graph - Research Data and Analytics

What you will do
Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a dedicated and skilled Data Engineer to design, build, and maintain solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance, graph-based data engineering solutions for large scientific datasets and collaborate with Research partners. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, has experience with semantic data modeling and graph databases, and understands data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain documentation of processes, systems, and solutions

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Bachelor's degree and 1 to 3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Diploma and 4 to 7 years of experience in those fields

Functional Skills:
Must-Have Skills:
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g., AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g., Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices.
- Hands-on experience with big data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning of data processing.
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- System administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining user documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
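One responsibility above is ensuring data accuracy and consistency before publishing curated data; a minimal PySpark sketch of pre-publication data-quality checks might look like this. Dataset paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://bucket/assay-results/")  # hypothetical dataset

# Simple data-quality assertions before publishing a curated table.
total = df.count()
null_ids = df.filter(F.col("sample_id").isNull()).count()
dupes = total - df.dropDuplicates(["sample_id"]).count()

assert null_ids == 0, f"{null_ids} rows missing sample_id"
assert dupes == 0, f"{dupes} duplicate sample_id values"

df.write.mode("overwrite").parquet("s3://bucket/curated/assay-results/")
```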
Posted 2 months ago
3 - 8 years
6 - 15 Lacs
Hyderabad
Work from Office
Dear Tech Aspirants,

Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on:

Walk-In Date: Saturday, 17-May-2025
Time: 9:30 AM - 3:30 PM
Kindly fill in this form to confirm your presence: https://forms.office.com/r/CEcuYGsFPS

Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081.
Google Maps location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8
*Carry your updated CV.

Job Title: AWS Data Engineer (SQL Mandatory)
Location: Hyderabad, India
Experience: 3 to 6 years

We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient.

Roles & Responsibilities:
- Implement & maintain AWS data services: deploy, configure, and manage AWS Glue and associated workflows.
- Data processing & transformation: clean, process, and transform raw data into structured, usable formats for analytics and machine learning.
- Develop ETL pipelines: design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline.
- Data governance & security: ensure data quality, integrity, and security in compliance with organizational standards.
- Performance optimization: continuously improve data processing pipelines for efficiency and scalability.
- Collaboration: work closely with data scientists, analysts, and software engineers to enable seamless data accessibility.
- Documentation & best practices: maintain technical documentation and enforce best practices in data engineering.
- Modern data transformation: develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines.
- Data mesh & governance: contribute to the implementation of a data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance.
- Workflow orchestration: design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies (a sketch follows below).
- Good hands-on experience with SQL programming.

Technical Requirements & Skills:

Essential Skills:
- Proficiency in AWS: strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS.
- Expertise in AWS Glue: deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation.
- Programming skills: proficiency in Python and Scala for data engineering and processing.
- SQL expertise: strong knowledge of SQL for querying and managing structured data.
- ETL & data pipelines: experience in designing and maintaining ETL/ELT workflows.
- Big data technologies: knowledge of Hadoop, Spark, and distributed computing frameworks.
- Orchestration tools: experience with Apache Airflow or similar tools for scheduling and monitoring data workflows.
- Data transformation frameworks: familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations.
- Data mesh concepts: understanding of data mesh architecture and its role in scaling data across decentralized domains.
- Version control & CI/CD: experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment.

Nice to Have:
- AWS Certified Data Analytics - Specialty certification.
- Machine learning familiarity: understanding of machine learning concepts and integration with AWS SageMaker.
- Streaming data processing: experience with Amazon Kinesis or Spark Streaming.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data engineering, cloud technologies, and AWS Glue.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

If interested, please attend and bring your updated CV.*
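The posting asks for Apache Airflow orchestration experience; here is a minimal sketch of a three-step ETL DAG, assuming Airflow 2.x. The task bodies are stubs and the DAG id is hypothetical.

```python
# pip install apache-airflow  (sketch assumes Airflow 2.4+ for `schedule=`)
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and model the data")

def load():
    print("publish to the warehouse")

with DAG(
    dag_id="etl_daily",            # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                 # linear dependency chain
```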
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Chandigarh
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected; see the sketch below).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
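As a hedged illustration of the Glue-based ETL work described above, here is a minimal AWS Glue job script in PySpark; the awsglue library is provided inside the Glue runtime rather than installed locally. The catalog database, table, and S3 path are hypothetical.

```python
# Minimal AWS Glue job sketch; runs inside the Glue job runtime.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical Glue Data Catalog database/table names.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="events")

# Rename and cast fields: (source name, source type, target name, target type).
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("event_id", "string", "event_id", "string"),
              ("ts", "string", "event_time", "timestamp")])

glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://bucket/curated/events/"},
    format="parquet")

job.commit()
```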
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Vadodara
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Visakhapatnam
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Thiruvananthapuram
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Coimbatore
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Hyderabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Nagpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Jaipur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Lucknow
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Kanpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Pune
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Ahmedabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 37 Lacs
Surat
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Develop ETL pipelines using AWS Glue and EMR with PySpark/Scala (extensive hands-on experience expected).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta Lake, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.

Must-Have Skills: AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills: Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
3 - 6 years
9 - 15 Lacs
Hyderabad
Work from Office
Dear Tech Aspirants,

Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on:

Walk-In Date: Saturday, 17-May-2025
Time: 9:30 AM - 3:30 PM
Kindly fill in this form to confirm your presence: https://forms.office.com/r/CEcuYGsFPS

Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081.
Google Maps location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8
*Carry your updated CV.

Job Title: AWS Data Engineer (SQL Mandatory)
Location: Hyderabad, India
Experience: 3 to 6 years

We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient.

Roles & Responsibilities:
- Implement & maintain AWS data services: deploy, configure, and manage AWS Glue and associated workflows.
- Data processing & transformation: clean, process, and transform raw data into structured, usable formats for analytics and machine learning.
- Develop ETL pipelines: design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline.
- Data governance & security: ensure data quality, integrity, and security in compliance with organizational standards.
- Performance optimization: continuously improve data processing pipelines for efficiency and scalability.
- Collaboration: work closely with data scientists, analysts, and software engineers to enable seamless data accessibility.
- Documentation & best practices: maintain technical documentation and enforce best practices in data engineering.
- Modern data transformation: develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines.
- Data mesh & governance: contribute to the implementation of a data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance.
- Workflow orchestration: design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies.
- Good hands-on experience with SQL programming.

Technical Requirements & Skills:

Essential Skills:
- Proficiency in AWS: strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS.
- Expertise in AWS Glue: deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation.
- Programming skills: proficiency in Python and Scala for data engineering and processing.
- SQL expertise: strong knowledge of SQL for querying and managing structured data.
- ETL & data pipelines: experience in designing and maintaining ETL/ELT workflows.
- Big data technologies: knowledge of Hadoop, Spark, and distributed computing frameworks.
- Orchestration tools: experience with Apache Airflow or similar tools for scheduling and monitoring data workflows.
- Data transformation frameworks: familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations.
- Data mesh concepts: understanding of data mesh architecture and its role in scaling data across decentralized domains.
- Version control & CI/CD: experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment.

Nice to Have:
- AWS Certified Data Analytics - Specialty certification.
- Machine learning familiarity: understanding of machine learning concepts and integration with AWS SageMaker.
- Streaming data processing: experience with Amazon Kinesis or Spark Streaming.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data engineering, cloud technologies, and AWS Glue.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

If interested, please attend and bring your updated CV.*
Posted 2 months ago
4 - 9 years
6 - 11 Lacs
Mumbai
Work from Office
Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must-have skills: GCP, SQL, Data Engineering, Python
Good-to-have skills: Managing ETL pipelines

Job Summary:

We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win, and grow. We provide everything they need to grow their client portfolios, optimize their deals, and enable their sales talent, all driven by sales intelligence. The team is aligned to Client Success, a new function that supports Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future, beyond their expectations.

You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.

The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools, providing insight into underlying trends and key drivers of the business.

Roles & Responsibilities:
- Collaborate with Client Success, the Analytics COE, the CIO Engineering/DevOps team, and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for dashboards and reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support business operations and data transfers.
- Document and build efficient data models for reporting and analytics use cases.
- Assure the data lake's accuracy, consistency, and timeliness while ensuring user acceptance and satisfaction.
- Work with Client Success, Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake.

Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience in developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience in managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.

Additional Information:
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.

About Our Company | Accenture

Qualifications:
- Experience: Minimum of 5+ years of experience is required.
- Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
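The SQL transformation and validation responsibilities above can be sketched with the BigQuery Python client. Dataset and table names here are hypothetical, and the completeness check shown is just one simple pattern.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical source and destination tables.
TRANSFORM = """
CREATE OR REPLACE TABLE analytics.client_kpis AS
SELECT client_id,
       SUM(deal_value) AS total_value,
       COUNT(*) AS deal_count
FROM raw.deals
GROUP BY client_id
"""
client.query(TRANSFORM).result()  # run the transform and wait for completion

# Validate completeness: one output row per distinct source client.
rows = client.query(
    "SELECT COUNT(DISTINCT client_id) AS src FROM raw.deals").result()
src_clients = next(iter(rows)).src
out_clients = client.get_table("analytics.client_kpis").num_rows
assert out_clients == src_clients, "row count mismatch after transform"
print(f"Loaded {out_clients} client rows")
```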
Posted 2 months ago