Senior Data Engineer - AI

Experience: 5 - 10 years

Salary: 10 - 14 Lacs

Posted: 1 week ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

 
The Senior Data Engineer - AI on the Danaher Enterprise AI team, Molecular Design, supports the development of scalable data pipelines and infrastructure to enable AI-driven molecular design across the organization. This role is based onsite in Bangalore, India, and will report through the aligned business and organizational structure.
In this role, you will have the opportunity to:
  • Support data integration efforts by collaborating with scientists, bioinformaticians, and AI engineers to ensure data is accessible, well-structured, and ready for downstream modeling and analysis.
  • Build and maintain data pipelines to ingest, transform, and store structured and unstructured biological data from various sources, including wet-lab experiments, public databases, and internal platforms (a minimal illustrative sketch follows this list).
  • Implement data quality checks and validation routines to ensure reliability and reproducibility of datasets used in molecular design workflows.
  • Contribute to infrastructure development by assisting in the setup and optimization of cloud-based data platforms and tools for scalable storage and compute.
  • Document workflows and processes to support reproducibility, collaboration, and knowledge sharing across teams.
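
To illustrate the kind of pipeline and data-quality work described above, here is a minimal, hypothetical Python sketch of an ingestion step with a basic validation check; the file name, allowed-character rule, and helper names are assumptions for illustration only, not the team's actual tooling.

# Illustrative sketch only: ingest sequence records from a FASTA file and
# flag records that fail a simple data-quality check. The input file name
# and the allowed alphabet are assumptions.
from pathlib import Path

VALID_BASES = set("ACGTN")  # assumed nucleotide alphabet

def parse_fasta(path: Path) -> dict:
    """Parse a FASTA file into a {record_id: sequence} mapping."""
    records, current_id, chunks = {}, None, []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if current_id is not None:
                records[current_id] = "".join(chunks)
            current_id, chunks = line[1:].split()[0], []
        else:
            chunks.append(line.upper())
    if current_id is not None:
        records[current_id] = "".join(chunks)
    return records

def failed_quality_check(records: dict) -> list:
    """Return record IDs that are empty or contain unexpected characters."""
    return [rid for rid, seq in records.items()
            if not seq or set(seq) - VALID_BASES]

if __name__ == "__main__":
    records = parse_fasta(Path("sequences.fasta"))  # hypothetical input file
    bad = failed_quality_check(records)
    print(f"Ingested {len(records)} records; {len(bad)} failed validation")
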
The essential requirements of the job include:
  • Bachelor's or Master's degree with 8+ years of experience in Computer Science, Data Science, Bioinformatics, Information Technology, Engineering, or a related field.
  • 5+ years of experience in data engineering, data platform development, or bioinformatics, preferably supporting AI/ML workloads.
  • Proficiency in Python and SQL, with experience in data pipeline frameworks (e.g., Argo, Nextflow, Valohai, Airflow, Luigi) and parallelization in Python (e.g., Joblib, Dask, Ray).
  • Experience with cloud data platforms (AWS, Azure, GCP, or Snowflake) and related services (e.g., S3, Redshift, BigQuery, Synapse).
  • Hands-on experience with data pipeline orchestration tools (e.g., Airflow, Prefect, Azure Data Factory); a minimal DAG sketch follows this list.
  • Familiarity with data lakehouse architectures and distributed systems.
  • Working knowledge of containerization and CI/CD (Docker, Kubernetes, GitHub Actions, etc.).
  • Experience with APIs, data integration, and real-time streaming pipelines (Kafka, Kinesis, Pub/Sub).
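
As a sketch of the orchestration tooling referenced above, the following is a minimal Apache Airflow DAG wiring an ingest -> transform -> load sequence; it assumes Airflow 2.x, and the DAG name, schedule, and task bodies are placeholders for illustration, not a description of the team's actual pipelines.

# Illustrative sketch only: a minimal Airflow 2.x DAG chaining three
# placeholder tasks. The DAG id, schedule, and task bodies are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("ingest raw data from source systems")

def transform():
    print("transform and validate records")

def load():
    print("load curated data into the warehouse")

with DAG(
    dag_id="molecular_data_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # use schedule_interval on older 2.x releases
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    ingest_task >> transform_task >> load_task
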
It would be a plus if you also possess:
  • Experience working with biological or molecular data (e.g., genomic sequences, protein structures, assay results).
  • Familiarity with data formats and standards used in life sciences (e.g., FASTA, VCF, PDB).
  • Exposure to AI/ML workflows and collaboration with data science teams.
  • Strong understanding of data security, compliance, and governance frameworks.
  • Excellent collaboration and communication skills, with the ability to work across technical and business teams.
