
491 Data Pipeline Jobs - Page 19

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

11 - 20 years

20 - 30 Lacs

Bengaluru

Work from Office

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Lucknow

Work from Office

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Kolkata

Work from Office

Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Pune

Work from Office

Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Patna

Work from Office

Role: Principal ML Ops Architect

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Ahmedabad

Work from Office

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Kanpur

Work from Office

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

11 - 20 years

20 - 30 Lacs

Hyderabad

Work from Office

Responsibilities:
1. Strategic Leadership: Define and drive the overall ML Ops strategy and roadmap for the organization, aligning it with business objectives and technical capabilities; oversee the design, development, and implementation of ML Ops platforms, frameworks, and processes; foster a culture of innovation and continuous improvement within the ML Ops team.
2. Technical Architecture: Design and implement scalable, reliable, and efficient ML Ops architectures; select and integrate appropriate tools, technologies, and frameworks to support the ML lifecycle; ensure compliance with industry best practices and standards for ML Ops.
3. Team Management: Lead and mentor a team of ML Ops engineers and architects; foster collaboration and knowledge sharing among team members; provide technical guidance and support to data scientists and engineers.
4. Innovation and Research: Stay up to date with emerging ML Ops trends and technologies; research and evaluate new tools and techniques to enhance ML Ops capabilities; contribute to the development of innovative ML Ops solutions.

Minimum Required Skills:
- 11+ years of experience preferred, with a proven track record of designing and implementing large-scale ML pipelines and infrastructure.
- Bachelor's or Master's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python and SQL.
- Experience with distributed computing frameworks (Spark, Hadoop); knowledge of graph databases and AutoML libraries.
- Solid understanding of containerization technologies (Docker, Kubernetes) and extensive experience with cloud platforms (AWS, GCP, Azure).
- Experience with CI/CD pipelines, model monitoring, and MLOps platforms (Kubeflow, MLflow).
- Proficiency in ML frameworks (TensorFlow, PyTorch).
- Certifications in cloud platforms or ML technologies are a plus.
- Strong problem-solving and analytical skills; ability to plan, execute, and take ownership of tasks.

Keywords: ML Ops / MLOps Architect, Azure DevOps, Docker, Kubernetes, TensorFlow, MLflow, Pipeline, Machine Learning Platform Engineer, Data Science Platform Engineer, DevOps Engineer (with ML focus), AI Engineer, Data Engineer, Cloud Engineer (with ML focus), Software Engineer (with ML focus), Model Deployment Specialist, CI/CD, PyTorch, Scikit-learn, Cloud Computing, Big Data, Azure, Azure Machine Learning, GCP, Vertex AI, AWS, Amazon SageMaker

Posted 2 months ago

Apply

5 - 10 years

15 - 27 Lacs

Bangalore Rural, Chennai, Bengaluru

Hybrid

Responsibilities:
- Design, implement, and maintain RDBMS databases, ensuring high performance, scalability, and integrity.
- Develop and maintain data pipelines using SQL, Python, and Shell scripting.
- Automate database operations, including ETL processes, monitoring, and maintenance tasks.
- Optimize and troubleshoot queries and data loads to ensure efficient performance.
- Write complex SQL queries for data analysis and reporting purposes.
- Implement robust data security and access control mechanisms.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, IT, or a related discipline.
- 5-7 years of experience in complex data environments handling large volumes of data.
- 5+ years of experience with SQL, including writing complex and ad-hoc queries.
- 5+ years of Python programming experience.
- Proficiency in Shell scripting and automation of data workflows.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the following details: candidate's name, email and alternate email ID, contact and alternate contact number, total experience, relevant experience, current organization, notice period, CCTC, ECTC, current location, preferred location, and PAN card number.
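
As a rough illustration of the SQL-plus-Python pipeline work this posting describes, here is a minimal extract-transform-load sketch. The database file, table, and column names are hypothetical, and SQLite stands in for whichever RDBMS the team actually uses.

    import sqlite3
    import pandas as pd

    # Stand-in source database; a real pipeline would connect to the production RDBMS instead.
    conn = sqlite3.connect("sales.db")

    # Extract: pull the most recent day's orders with an ad-hoc SQL query.
    orders = pd.read_sql_query(
        "SELECT order_id, customer_id, amount, order_date "
        "FROM orders WHERE order_date >= date('now', '-1 day')",
        conn,
    )

    # Transform: drop obviously bad rows and add a derived column.
    orders = orders[orders["amount"] > 0].copy()
    orders["amount_with_tax"] = orders["amount"] * 1.18

    # Load: write the cleaned slice into a reporting table.
    orders.to_sql("orders_daily_report", conn, if_exists="replace", index=False)
    conn.close()

In practice the same script would typically be wrapped in a Shell cron job or an orchestrator task, which is the automation aspect the listing mentions.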

Posted 2 months ago

Apply

3 - 8 years

15 - 30 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Salary: 15 to 30 LPA | Experience: 3 to 8 years | Location: Gurgaon / Bangalore / Pune / Chennai | Notice: Immediate to 30 days

Key Responsibilities & Skillsets:

Common skillsets:
- 3+ years of experience in analytics, PySpark, Python, Spark, SQL, and associated data engineering work.
- Must have experience managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas.
- Excellent communication and presentation skills.
- Experience managing Python code bases and collaborating with customers on model evolution.
- Good knowledge of database management and Hadoop/Spark, SQL, Hive, and Python (expertise).
- Superior analytical and problem-solving skills.
- Able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision.
- Good communication skills for client interaction.

Data management skillsets:
- Ability to understand data models and identify ETL optimization opportunities; exposure to ETL tools is preferred.
- Strong grasp of advanced SQL functionality (joins, nested queries, and procedures).
- Strong ability to translate functional specifications and requirements into technical requirements.
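
For context on the "managing and transforming big data sets using PySpark" requirement, here is a minimal illustrative sketch of a PySpark aggregation job; the input path and column names are assumptions, not part of the posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("customer_spend").getOrCreate()

    # Read a raw extract; header and schema handling would be tightened in a real job.
    orders = spark.read.option("header", True).csv("orders.csv")

    # Transform: cast types, filter bad rows, and aggregate per customer.
    customer_spend = (
        orders
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("amount") > 0)
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total_spend"),
             F.count("*").alias("order_count"))
    )

    # Write the result as Parquet for downstream analytics.
    customer_spend.write.mode("overwrite").parquet("output/customer_spend")
    spark.stop()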

Posted 2 months ago

Apply

10 - 17 years

20 - 35 Lacs

Pune

Work from Office

Role & responsibilities: Data Engineer - Banking experience

Primary skills:
- Azure data engineering, with hands-on Databricks and Confluent self-managed services experience.
- SQL/Python, data pipelines, modeling, integration, governance, performance tuning, and cross-functional collaboration.
- Data product development and data mart design.
- Understanding of machine learning, deep learning, time-series, and optimization.
- Banking product knowledge and understanding across retail, business, and wholesale.
- Communication and interpersonal skills; people management and multicultural awareness.

Technical skills:
- Strong skills in Python (plus R, SAS, Spark), SQL, and DAX.
- Strong coding skills in a data programming language: Python, R, or Java.
- Cloud data technologies.

Competencies:
- Expert-level proficiency in SQL and Python, with SAS/R/Spark as a plus.
- Deployment experience across databases and server/cloud environments (AWS, Azure, APIs, ODBC, web apps).
- Excellent banking functional knowledge (a major plus).
- Good written and oral communication and documentation skills, with the ability to communicate effectively with stakeholders.
- Experience working with tools such as Power BI, Tableau, or Qlik.

Secondary skills: Java/Scala, data warehousing, metadata/MIS reporting, vendor coordination, cloud certifications, and banking domain experience.

Soft skills: Analytical thinking, attention to detail, documentation, time management, and teamwork.

Key responsibilities / accountabilities:
- Build user interfaces and dashboards that allow users to interact with and visualize data.
- Test and validate software and data products to ensure accuracy and reliability.
- Collaborate with other developers, data analysts, and stakeholders to ensure the software meets the needs of the business or organization.
- Support business performance and strategic analytics, business departments, and decision-makers.

Posted 2 months ago

Apply

10 - 18 years

12 - 22 Lacs

Pune, Bengaluru

Hybrid

Hi, we are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.

Experience: 10+ years
Location: Bangalore & Pune
CTC: Best in the industry

Job description - technical skills:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience designing data pipelines and data lakes

If interested, kindly share your resume at nupur.tyagi@mounttalent.com

Posted 2 months ago

Apply

3 - 8 years

15 - 30 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Salary: 15 to 30 LPA | Experience: 3 to 8 years | Location: Gurgaon / Bangalore / Pune / Chennai | Notice: Immediate to 30 days

Key Responsibilities & Skillsets:

Common skillsets:
- 3+ years of experience in analytics, PySpark, Python, Spark, SQL, and associated data engineering work.
- Must have experience managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas.
- Excellent communication and presentation skills.
- Experience managing Python code bases and collaborating with customers on model evolution.
- Good knowledge of database management and Hadoop/Spark, SQL, Hive, and Python (expertise).
- Superior analytical and problem-solving skills.
- Able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision.
- Good communication skills for client interaction.

Data management skillsets:
- Ability to understand data models and identify ETL optimization opportunities; exposure to ETL tools is preferred.
- Strong grasp of advanced SQL functionality (joins, nested queries, and procedures).
- Strong ability to translate functional specifications and requirements into technical requirements.

Posted 2 months ago

Apply

5 - 8 years

15 - 25 Lacs

Pune

Hybrid

Role & responsibilities:
- Data pipeline development: Design, develop, and maintain data pipelines utilizing Google Cloud Platform (GCP) services such as Dataflow, Dataproc, and Pub/Sub.
- Data ingestion & transformation: Build and implement data ingestion and transformation processes using tools such as Apache Beam and Apache Spark.
- Data storage management: Optimize and manage data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud SQL.
- Security implementation: Implement data security protocols and access controls with GCP's Identity and Access Management (IAM) and Cloud Security Command Center.
- System monitoring & troubleshooting: Monitor and troubleshoot data pipelines and storage solutions using GCP's Stackdriver and Cloud Monitoring tools.
- Generative AI systems: Develop and maintain scalable systems for deploying and operating generative AI models, ensuring efficient use of computational resources.
- Gen AI capability building: Build generative AI capabilities among engineers, covering areas such as knowledge engineering, prompt engineering, and platform engineering.
- Knowledge engineering: Gather and structure domain-specific knowledge so that large language models (LLMs) can use it effectively.
- Prompt engineering: Design effective prompts to guide generative AI models, ensuring relevant, accurate, and creative text output.
- Collaboration: Work with data experts, analysts, and product teams to understand data requirements and deliver tailored solutions.
- Automation: Automate data processing tasks using scripting languages such as Python.
- Best practices: Participate in code reviews and contribute to establishing best practices for data engineering on GCP.
- Continuous learning: Stay current with GCP service innovations and advancements across core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.).

Skills and experience:
- Experience: 5+ years in data engineering or similar roles.
- GCP proficiency: Expertise in designing, developing, and deploying data pipelines, with strong knowledge of GCP core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.).
- Generative AI & LLMs: Hands-on experience with generative AI models and large language models (LLMs) such as GPT-4, Llama 3, and Gemini 1.5, with the ability to integrate these models into data pipelines and processes.
- Experience in web scraping.
- Technical skills: Strong proficiency in Python and SQL for data manipulation and querying; experience with distributed data processing frameworks such as Apache Beam or Apache Spark is a plus.
- Security knowledge: Familiarity with data security and access control best practices.
- Collaboration: Excellent communication and problem-solving skills, with a demonstrated ability to collaborate across teams.
- Project management: Ability to work independently, manage multiple projects, and meet deadlines.
- Preferred knowledge: Familiarity with Sustainable Finance, ESG Risk, CSRD, regulatory reporting, cloud infrastructure, and data governance best practices.
- Bonus skills: Knowledge of Terraform is a plus.

Education and certification:
- Degree: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience: 3-5 years of hands-on experience in data engineering.
- Certification: Google Professional Data Engineer.
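
To make the Dataflow/Apache Beam requirement concrete, here is a minimal illustrative Beam pipeline in Python; the bucket paths, file layout, and field names are hypothetical, and the same code can run locally or on Dataflow by switching the runner options.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_line(line):
        # Assumes simple "user_id,amount" CSV lines; real input would need schema handling.
        user_id, amount = line.split(",")
        return (user_id, float(amount))

    # Pass --runner=DataflowRunner plus project/region/temp_location flags to run on Dataflow.
    options = PipelineOptions()

    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/events.csv")
            | "Parse" >> beam.Map(parse_line)
            | "SumPerUser" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/user_totals")
        )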

Posted 2 months ago

Apply

7 - 10 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Skills: Python, SQL, PySpark, Azure Databricks, Data Pipelines

- SQL: Strong T-SQL skills, stored procedure troubleshooting and development, schema management, data issue analysis, and query performance analysis.
- Python: Intermediate development knowledge; skillful with data frames, the pandas library, parquet management, and deployment to the cloud.
- Databricks: PySpark and data frames, Azure Databricks notebook management and troubleshooting, and Azure Databricks architecture.
- Azure Data Factory / ADF / Synapse / Data Explorer: Data pipeline design and troubleshooting, and Azure linked services management.
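
As a small illustration of the pandas and parquet handling this listing asks for, here is a sketch of a cleanup step; the file names and columns are assumptions, and reading Parquet with pandas requires pyarrow or fastparquet to be installed.

    import pandas as pd

    # Load a Parquet extract, fix types, and flag data issues before loading downstream.
    df = pd.read_parquet("daily_orders.parquet")
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

    bad_rows = df[df["order_date"].isna() | (df["amount"] <= 0)]
    print(f"{len(bad_rows)} rows need review")

    clean = df.drop(bad_rows.index)
    clean.to_parquet("daily_orders_clean.parquet", index=False)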

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Kolkata

Work from Office

Role proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes:
- Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
- Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
- Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
- Validate results with user representatives, integrating the overall solution seamlessly.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
- Influence and improve customer satisfaction through effective data solutions.

Measures of outcomes:
- Adherence to engineering processes and standards.
- Adherence to schedule/timelines and SLAs where applicable.
- Number of defects post delivery and number of non-compliance issues.
- Reduction of recurrence of known defects; quick turnaround of production bugs.
- Completion of applicable technical/domain certifications and all mandatory training requirements.
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times).
- Average time to detect, respond to, and resolve pipeline failures or data issues.
- Number of data security incidents or compliance breaches.

Outputs expected:
- Code development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
- Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
- Configuration: Define and govern the configuration management plan and ensure compliance within the team.
- Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team, providing clarifications and support as needed.
- Domain relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn customer domains to identify opportunities for value addition and complete relevant domain certifications to enhance expertise.
- Project management: Manage the delivery of modules effectively.
- Defect management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
- Estimation: Create and provide input for effort and size estimation for projects.
- Knowledge management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release management: Execute and monitor the release process to ensure smooth transitions.
- Design contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
- Customer interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
- Team management: Set FAST goals and provide constructive feedback. Understand team members' aspirations, provide guidance and opportunities for growth, and ensure team engagement in projects and initiatives.
- Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud, particularly their data-related services (e.g., AWS Glue, BigQuery).
- Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning of data processes and expertise in designing and optimizing data warehouses for cost efficiency.
- Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Capacity to clearly explain and communicate design and development aspects to customers.
- Ability to estimate time and resource requirements for developing and debugging features or components.

Knowledge examples:
- Knowledge of the ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficiency in SQL for analytics, including windowing functions (see the sketch after this listing).
- Understanding of data schemas and models relevant to various business contexts, and familiarity with domain-related data and its implications.
- Expertise in data warehousing optimization techniques.
- Knowledge of data security concepts and best practices.
- Familiarity with design patterns and frameworks in data engineering.

Additional comments - required skills and qualifications:
- A degree (preferably an advanced degree) in Computer Science, Engineering, or a related field.
- Senior developer with 8+ years of hands-on development experience in Azure using ASB and ADF: extensive experience designing, developing, and maintaining data solutions and pipelines in the Azure ecosystem, including Azure Service Bus and Azure Data Factory.
- Familiarity with MongoDB and Python is an added advantage.

Required skills: Azure Data Factory, Azure Service Bus, Azure, MongoDB
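
The listing above calls out SQL windowing functions for analytics; here is a minimal, self-contained sketch of what that means in practice. The table and values are made up, and SQLite (3.25+) is used only because it ships with Python, the same window syntax applies in Snowflake, BigQuery, and most warehouses.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
      ('North', '2024-01', 100), ('North', '2024-02', 120),
      ('South', '2024-01', 80),  ('South', '2024-02', 95);
    """)

    query = """
    SELECT region, month, revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total,
           RANK() OVER (PARTITION BY month ORDER BY revenue DESC) AS month_rank
    FROM sales;
    """

    # Each row carries its running total per region and its rank within the month.
    for row in conn.execute(query):
        print(row)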

Posted 2 months ago

Apply

3 - 7 years

8 - 11 Lacs

Gurugram

Work from Office

KDataScience (USA & India) is looking for a Senior Data Engineer to join our dynamic team and embark on a rewarding career journey:
- Designing and implementing scalable and reliable data pipelines, data models, and data infrastructure for processing large and complex datasets.
- Developing and maintaining databases, data warehouses, and data lakes that store and manage the organization's data.
- Developing and implementing data integration and ETL (Extract, Transform, Load) processes to ensure that data flows smoothly and accurately between different systems and data sources.
- Ensuring data quality, consistency, and accuracy through data profiling, cleansing, and validation.
- Building and maintaining data processing and analytics systems that support business intelligence, machine learning, and other data-driven applications.
- Optimizing the performance and scalability of data systems and infrastructure to ensure they can handle the organization's growing data needs.

Posted 2 months ago

Apply

3 - 5 years

8 - 11 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: Data Engineer - Snowflake & Python

About the role: We are seeking a skilled and proactive data developer with 3-5 years of hands-on experience in Snowflake, Python, Streamlit, and SQL, along with expertise in consuming REST APIs and working with modern ETL tools such as Matillion and Fivetran. The ideal candidate will have a strong foundation in data modeling, data warehousing, and data profiling, and will play a key role in designing and implementing robust data solutions that drive business insights and innovation.

Key responsibilities:
- Design, develop, and maintain data pipelines and workflows using Snowflake and an ETL tool (e.g., Matillion, dbt, Fivetran, or similar).
- Develop data applications and dashboards using Python and Streamlit.
- Create and optimize complex SQL queries for data extraction, transformation, and loading.
- Integrate REST APIs for data access and process automation.
- Perform data profiling, quality checks, and troubleshooting to ensure data accuracy and integrity.
- Design and implement scalable and efficient data models aligned with business requirements.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver actionable solutions.
- Implement best practices in data governance, security, and compliance.

Required skills and qualifications:
- 3-5 years of professional experience in a data engineering or development role.
- Strong expertise in Snowflake, including performance tuning and warehouse optimization.
- Proficiency in Python, including data manipulation with libraries such as pandas.
- Experience building web-based data tools using Streamlit.
- Solid understanding of and experience with RESTful APIs and JSON data structures.
- Strong SQL skills and experience with advanced data transformation logic.
- Experience with an ETL tool commonly used with Snowflake (e.g., dbt, Matillion, Fivetran, Airflow).
- Hands-on experience in data modeling (dimensional and normalized), data warehousing concepts, and data profiling techniques.
- Familiarity with version control (e.g., Git) and CI/CD processes is a plus.

Preferred qualifications:
- Experience working in cloud environments (AWS, Azure, or GCP).
- Knowledge of data governance and cataloging tools.
- Experience with agile methodologies and working in cross-functional teams.
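
As a rough sketch of the Snowflake-plus-Streamlit dashboard work this posting describes, the snippet below queries a warehouse and renders a simple chart. All account, warehouse, table, and column names are placeholders; credentials would normally come from secrets management rather than the source code.

    import snowflake.connector
    import streamlit as st

    # Placeholder connection parameters for an assumed analytics warehouse.
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="***",
        warehouse="ANALYTICS_WH", database="SALES_DB", schema="PUBLIC",
    )

    query = """
        SELECT order_date, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """
    cur = conn.cursor()
    cur.execute(query)
    df = cur.fetch_pandas_all()   # Snowflake returns upper-case column names by default.

    st.title("Daily Revenue")
    st.line_chart(df.set_index("ORDER_DATE")["REVENUE"])
    st.dataframe(df)

Run with "streamlit run app.py"; in a production setup the query would usually sit behind the ETL layer (Matillion, dbt, or similar) rather than being issued ad hoc from the app.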

Posted 2 months ago

Apply

2 - 6 years

15 - 30 Lacs

Pune

Hybrid

We are on a mission to rid the world of bad customer service by mobilizing the way help is delivered. Today's consumers want an always-available customer service experience that leaves them feeling valued and respected. Helpshift helps B2B brands deliver this modern customer service experience through a mobile-first approach. We have changed how conversations take place, moving away from a slow, outdated email and desktop experience to an in-app chat experience that allows users to interact with brands in their own time. Through our market-leading AI-powered chatbots and automation, we help brands deliver instant and rapid resolutions. Because agents play a key role in delivering help, our platform gives agents superpowers with automation and AI that simply works. Companies such as Scopely, Supercell, Brex, EA, and Square, along with hundreds of other leading brands, use the Helpshift platform to mobilize customer service delivery. Over 900 million active monthly consumers are enabled on 2B+ devices worldwide with Helpshift.

Some numbers that illustrate our scale: 85k requests/second, 30 ms response time, 300 GB data transfer/hour, and 1,000 VMs deployed at peak.

Role & responsibilities:
- Building maintainable data pipelines, both for data ingestion and operational analytics, for data collected from 2 billion devices and 900M monthly active users.
- Building customer-facing analytics products that deliver actionable insights and data and easily detect anomalies.
- Collaborating with data stakeholders to understand their data needs and being a part of the analysis process.
- Writing design specifications and test, deployment, and scaling plans for the data pipelines.
- Mentoring people in the team and organization.

Preferred candidate profile:
- 3+ years of experience building and running data pipelines that scale to TBs of data.
- Proficiency in a high-level object-oriented programming language (Python or Java) is a must.
- Experience with cloud data platforms such as Snowflake and AWS EMR/Athena is a must.
- Experience building modern data lakehouse architectures using Snowflake and columnar formats such as Apache Iceberg/Hudi and Parquet.
- Proficiency in data modeling, SQL query profiling, and data warehousing is a must.
- Experience with distributed data processing engines such as Apache Spark, Apache Flink, and Dataflow/Apache Beam.
- Knowledge of workflow orchestrators such as Airflow and Dagster is a plus.
- Data visualization skills are a plus (Power BI, Metabase, Tableau, Hex, Sigma, etc.).
- Excellent verbal and written communication skills.
- Bachelor's degree in Computer Science (or equivalent).

Perks and benefits: Hybrid setup, worker's insurance, paid time off, and other employee benefits to be discussed by our Talent Acquisition team in India.

Helpshift embraces diversity. We are proud to be an equal opportunity workplace and do not discriminate on the basis of sex, race, color, age, sexual orientation, gender identity, religion, national origin, citizenship, marital status, veteran status, or disability status.

Privacy notice: By providing your information in this application, you understand that we will collect and process your information in accordance with our Applicant Privacy Notice. For more information, please see our Applicant Privacy Notice at https://www.keywordsstudios.com/en/applicant-privacy-notice.

Posted 2 months ago

Apply

3 - 7 years

5 - 9 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

About Emperen Technologies: Emperen Technologies is a leading consulting firm committed to delivering tangible results for clients through a relationship-driven approach. With successful implementations for Fortune 500 companies, non-profits, and startups, Emperen Technologies exemplifies a client-centric model that prioritizes values and scalable, flexible solutions. Emperen specializes in navigating complex technological landscapes, empowering clients to achieve growth and success.

Role description: Emperen Technologies is seeking a highly skilled Senior Master Data Management (MDM) Engineer to join our team on a contract basis. This is a remote position in which the Senior MDM Engineer will be responsible for a variety of key tasks, including data engineering, data modeling, ETL processes, data warehousing, and data analytics. The role demands a strong understanding of MDM platforms, cloud technologies, and data integration, as well as the ability to work collaboratively in a dynamic environment.

Key responsibilities:
- Design, implement, and manage Master Data Management (MDM) solutions to ensure data consistency and accuracy across the organization.
- Oversee the architecture and operation of data modeling, ETL processes, and data warehousing.
- Develop and execute data quality strategies to maintain high-quality data in line with business needs.
- Build and integrate data pipelines using Microsoft Azure, DevOps, and GitLab technologies.
- Implement data governance policies and ensure compliance with data security and privacy regulations.
- Collaborate with cross-functional teams to define and execute business and technical requirements.
- Analyze data to support business intelligence and decision-making processes.
- Provide ongoing support for data integration, ensuring smooth operation and optimal performance.
- Troubleshoot and resolve technical issues related to MDM, data integration, and related processes.
- Work on continuous improvement of the MDM platform and related data processes.

Required skills and experience:
- Proven experience in Master Data Management (MDM), with hands-on experience on platforms such as Profisee MDM and Microsoft Master Data Services (MDS).
- Solid experience with Microsoft Azure cloud technologies.
- Expertise in DevOps processes and in using GitLab for version control and deployment.
- Strong background in data warehousing, Azure Data Lakes, and business intelligence (BI) tools.
- Expertise in data governance, data architecture, data modeling, and data integration (particularly using REST APIs).
- Knowledge of and experience with data quality, data security, and privacy best practices.
- Experience working with business stakeholders and technical teams to analyze business requirements and translate them into effective data solutions.
- Basic business analysis skills: the ability to assess business needs, translate them into technical requirements, and ensure alignment between data management systems and business goals.

Preferred qualifications:
- Experience with big data technologies and advanced analytics platforms.
- Familiarity with data integration tools such as Talend or Informatica.
- Knowledge of data visualization tools such as Power BI or Tableau.
- Certifications in relevant MDM, cloud technologies, or data management platforms are a plus.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
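
To illustrate the "data integration using REST APIs" requirement, here is a minimal sketch of pulling paginated records from a master-data endpoint before loading them into a target store. The URL, parameters, and response shape are entirely hypothetical.

    import requests

    BASE_URL = "https://mdm.example.com/api/customers"   # illustrative endpoint

    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "page_size": 100}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break                      # no more pages
        records.extend(batch)
        page += 1

    print(f"Fetched {len(records)} master-data records for downstream loading")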

Posted 2 months ago

Apply

4 years

15 - 23 Lacs

Pune

Work from Office

The Role: We are seeking an experienced Senior Software Data Engineer to join the Data Integrations Team, a critical component of the Addepar Platform team. The Addepar Platform is a comprehensive data fabric that provides a single source of truth for our product set, encompassing a centralized and self-describing repository, API-driven data services, an integration pipeline, analytics infrastructure, warehousing solutions, and operating tools. The Data Integrations team is responsible for the acquisition, conversion, cleansing, reconciliation, modeling, tooling, and infrastructure related to the integration of market and security master data from third-party data providers. This team plays a crucial role in our core business, enabling alignment across public and alternative investment data products and empowering clients to effectively manage their investment portfolios. As a Senior Software Data Engineer you will collaborate closely with product counterparts in an agile environment to drive business outcomes. Your responsibilities will include contributing to complex engineering projects using a modern and diverse technology stack, including PySpark, Python, AWS, Terraform, Java, Kubernetes, and more.

What You'll Do:
- Partner with multi-functional teams to design, develop, and deploy scalable data solutions that meet business requirements.
- Build pipelines that support the ingestion, analysis, and enrichment of financial data by collaborating with business data analysts.
- Advocate for standard methodologies and find opportunities for automation and optimization in code and processes to increase the throughput and accuracy of data.
- Develop and maintain efficient process controls and accurate metrics that improve data quality and increase operational efficiency.
- Work in a fast-paced, dynamic environment to deliver high-quality results and drive continuous improvement.

Who You Are:
- Minimum 5+ years of professional software data engineering experience.
- A computer science degree or equivalent experience.
- Proficiency with at least one object-oriented programming language (Python or Java).
- Proficiency with PySpark, relational databases, SQL, and data pipelines.
- Rapid learner with strong problem-solving skills.
- Knowledge of financial concepts (e.g., stocks, bonds) is helpful but not necessary.
- Experience in data modeling and visualisation is a plus.
- Passion for the world of FinTech and solving previously intractable problems at the heart of investment management is a plus.
- Experience with any public cloud is highly desired (AWS preferred).
- Experience with data lakes or data platforms such as Databricks is highly preferred.

Important note: This role requires working from our Pune office 3 days a week (hybrid work model).

Posted 2 months ago

Apply

5 - 10 years

20 - 27 Lacs

Hyderabad, Bengaluru

Hybrid

Location: Bangalore, Hyderabad
Notice period: Immediate to 20 days
Experience: 6+ years (6+ years relevant)
Skills: Python, SQL, PySpark, Azure Databricks, Data Pipelines

Posted 2 months ago

Apply

7 - 10 years

16 - 21 Lacs

Mumbai

Work from Office

Position overview: The Google Cloud Data Engineering Lead role is ideal for an experienced Google Cloud data engineer who will drive the design, development, and optimization of data solutions on the Google Cloud Platform (GCP). The candidate will lead a team of data engineers and collaborate with data scientists, analysts, and business stakeholders to enable scalable, secure, and high-performance data pipelines and analytics platforms.

Key responsibilities:
- Lead and manage a team of data engineers delivering end-to-end data pipelines and platforms on GCP.
- Design and implement robust, scalable, and secure data architectures using services such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Develop and maintain batch and real-time ETL/ELT workflows using tools such as Apache Beam, Dataflow, or Composer (Airflow).
- Collaborate with data scientists, analysts, and application teams to gather requirements and ensure data availability and quality.
- Define and enforce data engineering best practices, including version control, testing, code reviews, and documentation.
- Drive automation and infrastructure-as-code approaches using Terraform or Deployment Manager for provisioning GCP resources.
- Implement and monitor data quality, lineage, and governance frameworks across the data platform.
- Optimize query performance and storage strategies, particularly within BigQuery and other GCP analytics tools.
- Mentor team members and contribute to the growth of technical capabilities across the organization.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7+ years in data engineering, including 3+ years working with GCP data services, with proven leadership experience managing and mentoring data engineering teams.
- Skills: Expert-level understanding of BigQuery, Dataflow (Apache Beam), Cloud Storage, and Pub/Sub; strong SQL and Python skills for data processing and orchestration; experience with workflow orchestration tools (Airflow/Composer); hands-on experience with CI/CD, Git, and infrastructure-as-code tools (e.g., Terraform); familiarity with data security, governance, and compliance practices in cloud environments.
- Certifications: GCP Professional Data Engineer certification.
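
For context on the Composer (Airflow) orchestration mentioned above, here is a minimal illustrative DAG. The DAG name, schedule, and task bodies are assumptions; the load task is a placeholder where a real pipeline would call the BigQuery client or a Google provider operator.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def load_to_bigquery(**context):
        # Placeholder: a real task would load the day's partition into BigQuery here.
        print("Loading partition for", context["ds"])

    with DAG(
        dag_id="daily_sales_load",            # illustrative DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract",
                                 python_callable=lambda: print("extracting source data"))
        load = PythonOperator(task_id="load", python_callable=load_to_bigquery)

        extract >> load                        # extract runs before load each day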

Posted 2 months ago

Apply

3 - 6 years

9 - 13 Lacs

Mohali, Gurugram, Bengaluru

Work from Office

Job Title: Data Engineer - Snowflake & Python

About the role: We are seeking a skilled and proactive data developer with 3-5 years of hands-on experience in Snowflake, Python, Streamlit, and SQL, along with expertise in consuming REST APIs and working with modern ETL tools such as Matillion and Fivetran. The ideal candidate will have a strong foundation in data modeling, data warehousing, and data profiling, and will play a key role in designing and implementing robust data solutions that drive business insights and innovation.

Key responsibilities:
- Design, develop, and maintain data pipelines and workflows using Snowflake and an ETL tool (e.g., Matillion, dbt, Fivetran, or similar).
- Develop data applications and dashboards using Python and Streamlit.
- Create and optimize complex SQL queries for data extraction, transformation, and loading.
- Integrate REST APIs for data access and process automation.
- Perform data profiling, quality checks, and troubleshooting to ensure data accuracy and integrity.
- Design and implement scalable and efficient data models aligned with business requirements.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and deliver actionable solutions.
- Implement best practices in data governance, security, and compliance.

Required skills and qualifications:
- Experience with HR data and databases is a must.
- 3-5 years of professional experience in a data engineering or development role.
- Strong expertise in Snowflake, including performance tuning and warehouse optimization.
- Proficiency in Python, including data manipulation with libraries such as pandas.
- Experience building web-based data tools using Streamlit.
- Solid understanding of and experience with RESTful APIs and JSON data structures.
- Strong SQL skills and experience with advanced data transformation logic.
- Experience with an ETL tool commonly used with Snowflake (e.g., dbt, Matillion, Fivetran, Airflow).
- Hands-on experience in data modeling (dimensional and normalized), data warehousing concepts, and data profiling techniques.
- Familiarity with version control (e.g., Git) and CI/CD processes is a plus.

Preferred qualifications:
- Experience working in cloud environments (AWS, Azure, or GCP).
- Knowledge of data governance and cataloging tools.
- Experience with agile methodologies and working in cross-functional teams.
- Experience with HR data and databases.
- Experience with Azure Data Factory.

Posted 2 months ago

Apply

5 - 10 years

15 - 25 Lacs

Bengaluru

Hybrid

Location: Bengaluru (Embassy Tech Village)
Work mode: Hybrid (2-3 days a week from office)
Experience: 5-10 years

Technical capabilities:
- 5+ years of experience as an SDET in data testing of any form (data extract, transform, load).
- Test automation experience with non-GUI applications (backend, API, data).
- Experience with testing frameworks and tools such as PyTest, JIRA Xray, Selenium, Cypress, Mabl, and Postman.
- Hands-on experience in one or more programming languages such as Python or Node.js.
- Expertise in end-to-end test automation and automation frameworks using Python.
- Hands-on experience testing, developing, and deploying frameworks in the AWS Cloud for data-related use cases such as data ingestion, data curation, and data wrangling.

Responsibilities:
- Ensure data validation and integrity during migration and data pipeline workflows.
- ETL automation testing by validating end-to-end data pipelines.
- Automate workflows in AWS for scalability and performance optimization.
- Leverage CI/CD for continuous testing and deployment of data pipelines.
- Write complex SQL queries for debugging and data retrieval.
- Develop a framework to support data testing for regression.

Desired qualifications:
- Experience with ETL testing and data pipelines.
- Proficiency in programming and scripting languages (e.g., Python, SQL).
- Strong background in automation testing and developing test cases.
- Working knowledge of AWS and an understanding of DevOps practices.
- Experience in database testing and data migration.
- Excellent communication and collaboration skills.
- Experience with API testing and testing frameworks.
- Familiarity with container technologies (e.g., Docker, Kubernetes).
- Strong analytical and problem-solving skills.
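
As a small sketch of the PyTest-based data validation this SDET role describes, the tests below check a pipeline output for duplicate keys, nulls, and invalid values. The output file, loader helper, and column names are hypothetical stand-ins for whatever the pipeline under test actually produces.

    import pandas as pd
    import pytest

    def load_output(path="pipeline_output.parquet"):
        # Hypothetical helper: in a real suite this would read the artifact the pipeline wrote.
        return pd.read_parquet(path)

    @pytest.fixture
    def output_df():
        return load_output()

    def test_no_duplicate_keys(output_df):
        assert not output_df["customer_id"].duplicated().any()

    def test_required_columns_not_null(output_df):
        for col in ["customer_id", "order_date", "amount"]:
            assert output_df[col].notna().all(), f"nulls found in {col}"

    def test_amounts_are_positive(output_df):
        assert (output_df["amount"] > 0).all()

Such checks are typically wired into a CI/CD stage so every pipeline change is regression-tested before deployment.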

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies