6 - 8 years
25 - 27 Lacs
Noida
Work from Office
We are seeking a highly skilled Python AI/ML professional to join our growing data science and machine learning team. The ideal candidate will have solid experience in developing, deploying, and optimizing AI/ML models using Python and related tools. You will work on real-world problems, build intelligent systems, and contribute to cutting-edge projects across various domains. Key Responsibilities: Design, build, and deploy machine learning models and AI solutions using Python. Clean, preprocess, and analyze large datasets to extract meaningful insights. Implement models using libraries such as scikit-learn, TensorFlow, PyTorch, or similar frameworks. Build scalable data pipelines and APIs for ML model deployment. Collaborate with data engineers, analysts, and product teams to deliver business-driven AI solutions. Skills: Python AI/ML, integration, machine learning model deployment, ML monitoring, ETL pipelines, FastAPI, Flask, cloud platforms, Docker, Kubernetes, Git, and CI/CD tools. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
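For illustration only, the sketch below shows the deployment pattern this listing describes: a scikit-learn model exposed through a FastAPI endpoint. The model choice, service name, and /predict payload are assumptions made up for the example, not part of the role.

```python
# Minimal sketch, assuming FastAPI + scikit-learn are installed; names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a toy model at startup; a production service would load a persisted artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

app = FastAPI(title="ml-inference-service")  # hypothetical service name

class Features(BaseModel):
    values: list[float]  # the four iris features in this toy example

@app.post("/predict")
def predict(features: Features) -> dict:
    pred = model.predict([features.values])[0]
    return {"prediction": int(pred)}

# Run locally with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```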
Posted 1 month ago
7 - 10 years
15 - 25 Lacs
Bengaluru
Hybrid
Job Title: Python, PySpark. Experience: 8+ yrs. Location / Workplace: Hyderabad/Bangalore (hybrid mode). Job Type: Full-time. Job Description: 8+ years of experience in data engineering with supply chain analytics. Strong knowledge of data pipelines, data modelling, and metadata management. Experience with data lakes, data warehouses, data hubs, and ETL tools such as PySpark and Oracle GoldenGate replication. Languages and technologies: SQL, Python, PySpark, Scala, Spark, SQL/NoSQL databases. Develop and maintain scalable PySpark-based ETL pipelines for big data processing. Optimize Spark jobs through partitioning, caching, and performance tuning techniques. Ensure data quality with validation frameworks and error-handling mechanisms. Work with structured and unstructured data, handling transformations efficiently. Implement CI/CD pipelines for automated data pipeline deployment and monitoring. Relational databases: experience with managing and optimizing relational databases (e.g., Oracle SQL, PostgreSQL, SQL Server). NoSQL databases: experience with managing and optimizing NoSQL databases (e.g., MongoDB, Cassandra) for handling unstructured and semi-structured data. Expertise in testing and deployment of data applications (e.g., provisioning resources, deploying and monitoring workflows and data quality). Expertise in Apache Airflow.
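As a rough sketch of the PySpark ETL work this listing calls for (the S3 paths and column names are invented for illustration), the snippet below shows the partitioning and caching techniques mentioned above:

```python
# Illustrative PySpark ETL step; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("supply-chain-etl").getOrCreate()

# Extract: read raw shipment events from a data lake location (placeholder path).
raw = spark.read.parquet("s3://example-bucket/raw/shipments/")

# Transform: basic cleansing plus a derived date; cache because it is reused below.
clean = (
    raw.dropDuplicates(["shipment_id"])
       .filter(F.col("quantity") > 0)
       .withColumn("ship_date", F.to_date("ship_timestamp"))
       .cache()
)

daily_totals = clean.groupBy("ship_date", "warehouse_id").agg(
    F.sum("quantity").alias("total_quantity")
)

# Load: repartition by date before writing so downstream reads can prune partitions.
(daily_totals
    .repartition("ship_date")
    .write.mode("overwrite")
    .partitionBy("ship_date")
    .parquet("s3://example-bucket/curated/daily_shipments/"))

clean.unpersist()
spark.stop()
```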
Posted 1 month ago
3 - 6 years
7 - 11 Lacs
Hyderabad
Work from Office
Sr Semantic Engineer – Research Data and Analytics What you will do Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. This role primarily focuses on working with technologies such as RDF, SPARQL, and Python. In addition, the position involves semantic data integration and cloud-based data engineering. The ideal candidate should possess experience in the pharmaceutical or biotech industry, demonstrate deep technical skills, be proficient with big data technologies, and have demonstrated experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential for this role. In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively. Roles & Responsibilities: Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies. Develop and maintain semantic data models for biopharma scientific data. Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks. Ensure interoperability across federated data sources, linking relational and graph-based data. Implement and optimize CI/CD pipelines using GitLab and AWS. Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions. Collaborate with global multi-functional teams, including research scientists, Data Architects, Business SMEs, Software Engineers, and Data Scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Collaborate with data scientists, engineers, and domain experts to improve research data accessibility. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain comprehensive documentation of processes, systems, and solutions. Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Doctorate degree OR Master's degree with 4 - 6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field OR Bachelor's degree with 6 - 8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field OR Diploma with 10 - 12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field. Preferred Qualifications and Experience: 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms). Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g.
AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices. Cloud and Automation Expertise: Good experience in using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions. Technical Problem-Solving: Excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets. Good-to-Have Skills: Experience in biotech/drug discovery data engineering. Experience applying knowledge graphs, taxonomy, and ontology concepts in life sciences and chemistry domains. Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune). Familiarity with Cypher, GraphQL, or other graph query languages. Experience with big data tools (e.g. Databricks). Experience in biomedical or life sciences research data management. Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
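As an illustrative sketch of the core loop of a semantic data pipeline described in this listing (building an RDF graph and querying it with SPARQL), the example below uses the rdflib Python library; the ontology namespace, property names, and output file are assumptions invented for the example:

```python
# Hedged sketch using rdflib; IRIs, properties, and values are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/research/")  # hypothetical ontology namespace

g = Graph()
g.bind("ex", EX)

# Ingest: assert a few triples describing a compound and an assay readout.
g.add((EX.compound_42, RDF.type, EX.Compound))
g.add((EX.compound_42, EX.label, Literal("Example inhibitor")))
g.add((EX.compound_42, EX.ic50_nM, Literal(12.5)))

# Query: SPARQL over the in-memory graph; in practice this would target a triple store.
results = g.query("""
    PREFIX ex: <http://example.org/research/>
    SELECT ?compound ?ic50
    WHERE {
        ?compound a ex:Compound ;
                  ex:ic50_nM ?ic50 .
        FILTER(?ic50 < 100)
    }
""")

for compound, ic50 in results:
    print(compound, ic50)

# Serialize to Turtle, e.g. for loading into a graph database such as AllegroGraph.
g.serialize("compounds.ttl", format="turtle")
```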
Posted 1 month ago
2 - 4 years
4 - 7 Lacs
Hyderabad
Work from Office
Associate Data Engineer Graph – Research Data and Analytics What you will do Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a dedicated and skilled Data Engineer to design, build, and maintain solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance, graph-based data engineering solutions for large scientific datasets and collaborate with Research partners. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, has experience with semantic data modeling and graph databases, and understands data architecture and ETL processes. Roles & Responsibilities: Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions. Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks. Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency. Optimize large datasets for query performance. Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain documentation of processes, systems, and solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Bachelor's degree and 1 to 3 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience OR Diploma and 4 to 7 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience. Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development and semantic modeling practices. Hands-on experience with big data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning on data processing. Excellent problem-solving skills and the ability to work with large, complex datasets. Good-to-Have Skills: A passion for tackling complex challenges in drug discovery with technology and data. Experience with system administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups.
Solid understanding of data modeling, data warehousing, and data integration concepts. Solid experience using an RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL). Knowledge of cloud data platforms (AWS preferred). Experience with data visualization tools (e.g. Dash, Plotly, Spotfire). Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming. Experience writing and maintaining user documentation in Confluence. Professional Certifications: Databricks Certified Data Engineer Professional preferred. Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
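To illustrate the relational-to-graph integration this role describes (lifting rows from an RDBMS into RDF triples), here is a small sketch; the table, namespace, and values are invented, and SQLite stands in for the Oracle/PostgreSQL/MySQL sources named above:

```python
# Minimal sketch of relational-to-RDF lifting; schema and IRIs are hypothetical.
import sqlite3
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/assay/")  # hypothetical namespace

# Stand-in relational source; in practice this would be Oracle/PostgreSQL/MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assays (id INTEGER, target TEXT, readout REAL)")
conn.executemany(
    "INSERT INTO assays VALUES (?, ?, ?)",
    [(1, "KRAS", 0.82), (2, "EGFR", 0.34)],
)

g = Graph()
for row_id, target, readout in conn.execute("SELECT id, target, readout FROM assays"):
    subject = EX[f"assay_{row_id}"]
    g.add((subject, RDF.type, EX.Assay))
    g.add((subject, EX.target, Literal(target)))
    g.add((subject, EX.readout, Literal(readout)))

# Emit Turtle that a graph store could load alongside the relational data.
print(g.serialize(format="turtle"))
```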
Posted 1 month ago
3 - 8 years
6 - 15 Lacs
Hyderabad
Work from Office
Dear Tech Aspirants, Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on: Walk-In Date: Saturday, 17-May-2025. Time: 9:30 AM - 3:30 PM. Kindly fill in this form to confirm your presence - https://forms.office.com/r/CEcuYGsFPS Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081. Google Maps location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8 *Carry your updated CV. Job Title: AWS Data Engineer (SQL Mandatory). Location: Hyderabad, India. Experience: 3 to 6 years. We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient. Roles & Responsibilities: Implement & Maintain AWS Data Services: Deploy, configure, and manage AWS Glue and associated workflows. Data Processing & Transformation: Clean, process, and transform raw data into structured and usable formats for analytics and machine learning. Develop ETL Pipelines: Design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline. Data Governance & Security: Ensure data quality, integrity, and security in compliance with organizational standards. Performance Optimization: Continuously improve data processing pipelines for efficiency and scalability. Collaboration: Work closely with data scientists, analysts, and software engineers to enable seamless data accessibility. Documentation & Best Practices: Maintain technical documentation and enforce best practices in data engineering. Modern Data Transformation: Develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines. Data Mesh & Governance: Contribute to the implementation of data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance. Workflow Orchestration: Design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies. Good hands-on experience with SQL programming. Technical Requirements & Skills: Essential Skills: Proficiency in AWS: Strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS. Expertise in AWS Glue: Deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation. Programming Skills: Proficiency in Python and Scala for data engineering and processing. SQL Expertise: Strong knowledge of SQL for querying and managing structured data. ETL & Data Pipelines: Experience in designing and maintaining ETL/ELT workflows. Big Data Technologies: Knowledge of Hadoop, Spark, and distributed computing frameworks. Orchestration Tools: Experience with Apache Airflow or similar tools for scheduling and monitoring data workflows. Data Transformation Frameworks: Familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations. Data Mesh Concepts: Understanding of data mesh architecture and its role in scaling data across decentralized domains.
Version Control & CI/CD: Experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment. Nice to Have: AWS Certified Data Analytics - Specialty. Machine Learning Familiarity: Understanding of machine learning concepts and integration with AWS SageMaker. Streaming Data Processing: Experience with Amazon Kinesis or Spark Streaming. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. 4+ years of experience in data engineering, cloud technologies, and AWS Glue. Strong problem-solving skills and the ability to work in a fast-paced environment. If interested, please walk in with your updated CV.*
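As a sketch of the workflow orchestration responsibility described in this listing (an Airflow DAG triggering an AWS Glue job), the example below uses boto3 to start and poll the job; the Glue job name, DAG id, and schedule are assumptions, and the exact Airflow operator imports may vary by version:

```python
# Hedged sketch: Airflow DAG that runs a Glue job via boto3; all names are hypothetical.
import time
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

GLUE_JOB_NAME = "raw-to-curated-etl"  # hypothetical Glue job name

def run_glue_job(**_):
    """Start the Glue job and poll until it reaches a terminal state."""
    glue = boto3.client("glue")
    run_id = glue.start_job_run(JobName=GLUE_JOB_NAME)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=GLUE_JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "ERROR", "TIMEOUT"):
            break
        time.sleep(30)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Glue job finished in state {state}")

with DAG(
    dag_id="daily_glue_etl",              # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_glue_job", python_callable=run_glue_job)
```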
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Chandigarh
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
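To illustrate the "Automate ETL workflows with AWS Step Functions and Lambda" responsibility, here is a minimal Lambda handler sketch that starts a Step Functions execution when new data lands in S3; the state machine ARN and event payload shape are placeholders, not details from the listing:

```python
# Hedged sketch of an S3-triggered Lambda starting a Step Functions ETL workflow.
import json

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN; a real deployment would inject this via an environment variable.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline"

def handler(event, context):
    """Triggered by an S3 ObjectCreated event; starts one execution per new object."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"status": "started", "records": len(records)}
```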
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Vadodara
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Visakhapatnam
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Thiruvananthapuram
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Coimbatore
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Hyderabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Nagpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Jaipur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Lucknow
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Kanpur
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Pune
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Ahmedabad
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
12 - 16 years
35 - 37 Lacs
Surat
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi. - Desirable to have experience provisioning AWS data analytics resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 1 month ago
3 - 6 years
9 - 15 Lacs
Hyderabad
Work from Office
Dear Tech Aspirants, Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on: Walk-In Date: Saturday, 17-May-2025. Time: 9:30 AM - 3:30 PM. Kindly fill in this form to confirm your presence - https://forms.office.com/r/CEcuYGsFPS Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081. Google Maps location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8 *Carry your updated CV. Job Title: AWS Data Engineer (SQL Mandatory). Location: Hyderabad, India. Experience: 3 to 6 years. We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient. Roles & Responsibilities: Implement & Maintain AWS Data Services: Deploy, configure, and manage AWS Glue and associated workflows. Data Processing & Transformation: Clean, process, and transform raw data into structured and usable formats for analytics and machine learning. Develop ETL Pipelines: Design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline. Data Governance & Security: Ensure data quality, integrity, and security in compliance with organizational standards. Performance Optimization: Continuously improve data processing pipelines for efficiency and scalability. Collaboration: Work closely with data scientists, analysts, and software engineers to enable seamless data accessibility. Documentation & Best Practices: Maintain technical documentation and enforce best practices in data engineering. Modern Data Transformation: Develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines. Data Mesh & Governance: Contribute to the implementation of data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance. Workflow Orchestration: Design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies. Good hands-on experience with SQL programming. Technical Requirements & Skills: Essential Skills: Proficiency in AWS: Strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS. Expertise in AWS Glue: Deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation. Programming Skills: Proficiency in Python and Scala for data engineering and processing. SQL Expertise: Strong knowledge of SQL for querying and managing structured data. ETL & Data Pipelines: Experience in designing and maintaining ETL/ELT workflows. Big Data Technologies: Knowledge of Hadoop, Spark, and distributed computing frameworks. Orchestration Tools: Experience with Apache Airflow or similar tools for scheduling and monitoring data workflows. Data Transformation Frameworks: Familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations. Data Mesh Concepts: Understanding of data mesh architecture and its role in scaling data across decentralized domains.
Version Control & CI/CD: Experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment. Nice to Have: AWS Certified Data Analytics - Specialty. Machine Learning Familiarity: Understanding of machine learning concepts and integration with AWS SageMaker. Streaming Data Processing: Experience with Amazon Kinesis or Spark Streaming. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. 4+ years of experience in data engineering, cloud technologies, and AWS Glue. Strong problem-solving skills and the ability to work in a fast-paced environment. If interested, please walk in with your updated CV.*
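To illustrate the "Data Governance & Security: Ensure data quality, integrity, and security" responsibility in this listing, here is a small PySpark sketch of pre-load data quality checks; the dataset path, column names, and rules are assumptions for the example:

```python
# Illustrative data quality gate; path and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/staging/orders/")  # placeholder path

checks = {
    "no_null_order_ids": df.filter(F.col("order_id").isNull()).count() == 0,
    "positive_amounts": df.filter(F.col("amount") <= 0).count() == 0,
    "non_empty_batch": df.count() > 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail this step so the orchestrator (e.g. Airflow) can alert and stop downstream loads.
    raise ValueError(f"Data quality checks failed: {failed}")

print("All data quality checks passed")
```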
Posted 1 month ago
4 - 9 years
6 - 11 Lacs
Mumbai
Work from Office
Job Title - Sales Excellence - Client Success - Data Engineering Specialist - CF. Management Level: ML9. Location: Open. Must-have skills: GCP, SQL, Data Engineering, Python. Good-to-have skills: managing ETL pipelines. Job Summary: We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win and grow. We provide everything they need to grow their client portfolios, optimize their deals and enable their sales talent, all driven by sales intelligence. The team will be aligned to Client Success, which is a new function to support Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future – beyond their expectations. You are: A builder at heart – curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients. The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools to provide insights on underlying trends and key drivers of the business. Roles & Responsibilities: Collaborate with the Client Success, Analytics COE, CIO Engineering/DevOps team, and stakeholders to build and enhance the Client Success data lake. Write complex SQL scripts to transform data for the creation of dashboards or reports, and validate the accuracy and completeness of the data. Build automated solutions to support any business operation or data transfer. Document and build efficient data models for reporting and analytics use cases. Assure Data Lake data accuracy, consistency, and timeliness while ensuring user acceptance and satisfaction. Work with the Client Success, Sales Excellence COE members, CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake. Professional & Technical Skills: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field. At least 4 years of professional experience in developing and managing ETL pipelines. A minimum of 2 years of GCP experience. Ability to write complex SQL and prepare data for dashboarding. Experience in managing and documenting data models. Understanding of data governance and policies. Proficiency in Python and SQL scripting. Ability to translate business requirements into technical specifications for the engineering team. Curiosity, creativity, a collaborative attitude, and attention to detail. Ability to explain technical information to technical as well as non-technical users. Ability to work remotely with minimal supervision in a global environment. Proficiency with Microsoft Office tools. Additional Information: Master's degree in analytics or a similar field. Data visualization or reporting using text data as well as sales, pricing, and finance data. Ability to prioritize workload and manage downstream stakeholders. About Our Company | Accenture. Qualifications - Experience: Minimum 5+ years of experience is required. Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
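As an illustration of the GCP/SQL work this listing describes (transforming data-lake tables and validating the result), the sketch below uses the BigQuery Python client; the project, dataset, and table names are invented for the example and are not part of the role:

```python
# Hedged sketch: run a transform SQL job and a validation query via the BigQuery client.
from google.cloud import bigquery

client = bigquery.Client(project="example-sales-project")  # hypothetical project id

# Build a reporting table from raw opportunity data (placeholder dataset/table names).
transform_sql = """
CREATE OR REPLACE TABLE `example-sales-project.reporting.client_success_summary` AS
SELECT client_id,
       COUNT(*) AS total_engagements,
       SUM(deal_value) AS total_value
FROM `example-sales-project.raw.opportunities`
GROUP BY client_id
"""
client.query(transform_sql).result()  # .result() blocks until the job finishes

# Validate completeness before any dashboard refresh depends on the table.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `example-sales-project.reporting.client_success_summary`"
).result()
count = next(iter(rows)).n
assert count > 0, "Transformed table is unexpectedly empty"
print(f"Loaded {count} client rows")
```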
Posted 1 month ago
4 - 8 years
10 - 15 Lacs
Pune
Remote
Position: AWS Data Engineer. About bluCognition: bluCognition is an AI/ML-based start-up specializing in developing data products leveraging alternative data sources and providing servicing support to our clients in the financial services sector. Founded in 2017 by senior professionals from the financial services industry, the company is headquartered in the US, with the delivery centre based in Pune. We build all our solutions leveraging the latest technology stack in AI, ML and NLP, combined with decades of experience in risk management at some of the largest financial services firms in the world. Our clients are some of the biggest and most progressive names in the financial services industry. We are entering a significant growth phase and are looking for individuals with an entrepreneurial mindset who want to join us on this exciting journey. https://www.blucognition.com The Role: We are seeking an experienced AWS Data Engineer to design, build, and manage scalable data pipelines and cloud-based solutions. In this role, you will work closely with data scientists, analysts, and software engineers to develop systems that support data-driven decision-making. Key Responsibilities: 1) Design, implement, and maintain robust, scalable, and efficient data pipelines using AWS services. 2) Develop ETL/ELT processes and automate data workflows for real-time and batch data ingestion. 3) Optimize data storage solutions (e.g., S3, Redshift, RDS, DynamoDB) for performance and cost-efficiency. 4) Build and maintain data lakes and data warehouses following best practices for security, governance, and compliance. 5) Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs. 6) Monitor, troubleshoot, and improve the reliability and quality of data systems. 7) Implement data quality checks, logging, and error handling in data pipelines. 8) Use Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for environment management. 9) Stay up to date with the latest developments in AWS services and big data technologies. Required Qualifications: 1) Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field. 2) 4+ years of experience working as a data engineer or in a similar role. 3) Strong experience with AWS services such as: AWS Glue, AWS Lambda, Amazon S3, Amazon Redshift, Amazon RDS, Amazon EMR, AWS Step Functions. 4) Proficiency in SQL and Python. 5) Solid understanding of data modeling, ETL processes, and data warehouse architecture. 6) Experience with orchestration tools like Apache Airflow or AWS Managed Workflows. 7) Knowledge of security best practices for cloud environments (IAM, KMS, VPC, etc.). 8) Experience with monitoring and logging tools (CloudWatch, X-Ray, etc.). Preferred Qualifications: 1) Good to have - AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect certification. 2) Experience with real-time data streaming technologies like Kinesis or Kafka. 3) Familiarity with DevOps practices and CI/CD pipelines. 4) Knowledge of machine learning data preparation and MLOps workflows. Soft Skills: 1) Excellent problem-solving and analytical skills. 2) Strong communication skills with both technical and non-technical stakeholders. 3) Ability to work independently and collaboratively in a team environment.
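As a sketch of responsibility 7 above (data quality checks, logging, and error handling in a pipeline), the example below validates a batch and stages it to S3 as Parquet; the bucket, file names, and required columns are assumptions for illustration:

```python
# Minimal ingestion step with quality checks, logging, and error handling; names are hypothetical.
import io
import logging

import boto3
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

REQUIRED_COLUMNS = {"account_id", "balance", "as_of_date"}  # hypothetical schema

def ingest(csv_path: str, bucket: str, key: str) -> None:
    df = pd.read_csv(csv_path)

    # Data quality checks before anything is written downstream.
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Input is missing required columns: {missing}")
    if df["account_id"].isna().any():
        raise ValueError("Null account_id values found; rejecting batch")

    # Stage as Parquet in S3 (requires pyarrow); keeps storage and scans cheap.
    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())
    log.info("Ingested %d rows to s3://%s/%s", len(df), bucket, key)

if __name__ == "__main__":
    try:
        ingest("daily_balances.csv", "example-data-lake", "raw/balances/daily_balances.parquet")
    except Exception:
        log.exception("Ingestion failed")  # surfaced to CloudWatch when run on Lambda/ECS
        raise
```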
Posted 1 month ago
2 - 3 years
4 - 5 Lacs
Bengaluru
Work from Office
As a skilled Developer, you are responsible for building tools and applications that utilize the data held within company databases. The primary responsibility will be to design and develop these layers of our applications and to coordinate with the rest of the team working on different layers of the IT infrastructure. A commitment to collaborative problem solving, sophisticated design, and quality product is essential. Python Developer - Necessary Skills: Experience in data wrangling and manipulation with Python/Pandas. Experience with Docker containers. Knowledge of data structures, algorithms, and data modeling. Experience with versioning (Git, Azure DevOps). Design and implementation of ETL/ELT pipelines. Good knowledge and experience in web scraping (Scrapy, BeautifulSoup, Selenium). Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid). Design, build, and maintain efficient, reusable, and reliable Python code (SOLID, design principles). Experience with SQL databases (views, stored procedures, etc.). Responsibilities and Activities: Aside from the core development role, this position includes auxiliary roles that are not related to development. The role includes, but is not limited to: Support and maintenance of custom and previously developed tools, as well as excellence in the performance and responsiveness of new applications. Deliver high-quality and reliable applications, including development and front-end work. In addition, you will maintain code quality, prioritize organization, and drive automation. Participate in the peer review of plans, technical solutions, and related documentation (map/document technical procedures). Identify security issues, bottlenecks, and bugs, implementing solutions to mitigate and address issues of service data security and data breaches. Work with SQL / Postgres databases: installing and maintaining database systems and supporting server management, including backups, in addition to troubleshooting issues raised by the Data Processing team.
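To illustrate the web scraping and pandas data wrangling this role lists, here is a short sketch; the URL, table id, and column layout are placeholders, not a real site:

```python
# Illustrative scrape-and-wrangle sketch; URL and table structure are hypothetical.
import pandas as pd
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/prices", timeout=30)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table#price-table tbody tr"):  # hypothetical table id
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 3:
        rows.append(cells)

df = pd.DataFrame(rows, columns=["item", "price", "updated_at"])
df["price"] = pd.to_numeric(df["price"].str.replace(",", ""), errors="coerce")
df = df.dropna(subset=["price"])

# Hand off to the database layer, e.g. df.to_sql("prices", engine, if_exists="append").
print(df.head())
```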
Posted 1 month ago
3 - 5 years
6 - 8 Lacs
Pune
Work from Office
Job Title: Senior Data Engineer. Experience Required: 3 to 5 years. Location: Baner, Pune. Job Type: Full-Time (WFO). Job Summary: We are seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing scalable data pipelines, working with cloud platforms like Microsoft Azure and AWS, and utilizing advanced tools such as data lakes, PySpark, and Azure Data Factory. The role involves collaborating with cross-functional teams to design and implement robust data solutions that support business intelligence, analytics, and decision-making processes. Key Responsibilities: Design, develop, and maintain scalable ETL pipelines to ingest, transform, and process large datasets from various sources. Build and optimize data pipelines and architectures for efficient and secure data processing. Work extensively with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics for cloud data integration and management. Utilize Databricks and PySpark for advanced big data processing and analytics. Implement data modelling and design data warehouses to support business intelligence tools like Power BI. Ensure data quality, governance, and security using Azure DevOps and Azure Functions. Develop and maintain SQL Server databases and write optimized SQL queries for analytics and reporting. Collaborate with stakeholders to gather requirements and translate them into effective data engineering solutions. Implement data architecture best practices to support big data initiatives and analytics use cases. Monitor, troubleshoot, and improve data workflows and processes to ensure seamless data flow. Required Skills and Qualifications: Educational Background: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Technical Skills: Strong expertise in ETL development, data engineering, and data pipeline development. Proficiency in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics. Advanced knowledge of Databricks, PySpark, and Python for data processing. Hands-on experience with SQL Azure, SQL Server, and data warehousing solutions. Knowledge of Power BI for reporting and dashboard creation. Familiarity with Azure Functions, Azure DevOps, and cloud computing in Microsoft Azure. Understanding of data architecture and data modelling principles. Experience with big data tools and frameworks. Experience: Proven experience in designing and implementing large-scale data processing systems. Hands-on experience with DWH and handling big data workloads. Ability to work with both structured and unstructured datasets. Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities to work effectively in a team environment. A proactive mindset with a passion for learning and adopting new technologies. Preferred Skills: Experience with Azure Data Warehouse technologies. Knowledge of Azure Machine Learning or similar AI/ML frameworks. Familiarity with Data Governance and Data Compliance practices.
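As an illustrative sketch of the Databricks/PySpark pipeline work this listing describes (raw zone in Azure Data Lake to a curated Delta table), the example below assumes a Databricks runtime where Delta Lake is available; storage paths and column names are invented for the example:

```python
# Hedged sketch: ADLS raw zone to a partitioned Delta table; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-to-delta").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/sales/"                 # placeholder
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"   # placeholder

sales = (
    spark.read.format("json").load(raw_path)
         .withColumn("sale_date", F.to_date("sale_timestamp"))
         .dropDuplicates(["sale_id"])
)

daily = sales.groupBy("sale_date", "store_id").agg(F.sum("amount").alias("revenue"))

# Write as Delta partitioned by date so Power BI / Synapse queries can prune partitions.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("sale_date")
      .save(curated_path))
```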
Posted 1 month ago
6 - 10 years
15 - 20 Lacs
Gurugram
Remote
Title: Looker Developer. Team: Data Engineering. Work Mode: Remote. Shift Time: 3:00 PM - 12:00 AM IST. Contract: 12 months. Key Responsibilities: Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies. LookML Proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards. Data Modeling Expertise: Understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions. SQL Knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL. Looker Environment: Familiarity with the Looker interface, including the IDE, LookML Validator, and SQL Runner, is necessary for efficient development. Education and/or Experience: Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required. 6+ years of IT industry experience in the data management field.
Posted 1 month ago