Posted: 2 days ago | Platform: Foundit

Work Mode: On-site

Job Type: Full Time

Job Description

Let's change the world! We're looking for a skilled Data Engineer to join our Enterprise Data RunOps Team in Hyderabad. In this role, you'll be instrumental in developing, supporting, and optimizing the data pipelines and operational workflows that empower our enterprise data teams, ensuring seamless data access, integration, and governance across the organization. We're seeking a hands-on engineer with a deep understanding of modern data architectures, strong experience with cloud-native technologies, and a passion for delivering reliable, well-governed, and high-performing data infrastructure within a regulated biotech environment.

Roles & Responsibilities:

- Design, build, and support data ingestion, transformation, and delivery pipelines across structured and unstructured sources within enterprise data engineering.
- Manage and monitor day-to-day operations of the data engineering environment, ensuring high availability, performance, and data integrity.
- Collaborate with data architects, data governance, platform engineering, and business teams to support data integration use cases across R&D, Clinical, Regulatory, and Commercial functions.
- Integrate data from laboratory systems, clinical platforms, regulatory systems, and third-party data sources into enterprise data repositories.
- Implement and maintain metadata capture, data lineage, and data quality checks across pipelines to meet governance and compliance requirements.
- Support real-time and batch data flows using technologies such as Databricks, Kafka, Delta Lake, or similar (see the sketch after this list).
- Work within GxP-aligned environments, ensuring compliance with data privacy, audit, and quality control standards.
- Partner with data stewards and business analysts to support self-service data access, reporting, and analytics enablement.
- Maintain operational documentation, runbooks, and process automation scripts for continuous improvement of data fabric operations.
- Participate in incident resolution and root cause analysis, ensuring timely and effective remediation of data pipeline issues.
- Create documentation, playbooks, and best practices for metadata ingestion, data lineage, and catalog usage.
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value.
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories.
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle.
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions.
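To make the streaming responsibility concrete, here is a minimal sketch of a Kafka-to-Delta ingestion job in PySpark Structured Streaming, of the kind this role would build and operate. The broker address, topic name, event schema, and storage paths are hypothetical placeholders, and the sketch assumes a Databricks or otherwise Delta-enabled Spark runtime with the Kafka connector available.

from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("lab-events-ingest").getOrCreate()

# Hypothetical schema for an incoming lab-system event.
event_schema = StructType([
    StructField("sample_id", StringType()),
    StructField("assay", StringType()),
    StructField("recorded_at", TimestampType()),
])

# Read the raw event stream from Kafka (placeholder broker and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "lab-events")
    .load()
)

# Kafka delivers the payload as bytes; parse the JSON value into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Land the parsed events in a Delta table; the checkpoint lets the job
# recover and resume after failure without duplicating data.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/lab-events")
    .start("/mnt/delta/lab_events")
)
query.awaitTermination()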
Must-Have Skills:

- Experience building and maintaining data pipelines that ingest and update metadata in enterprise data catalog platforms, preferably in biotech, life sciences, or pharma.
- Hands-on experience with data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, and SQL, plus Scaled Agile methodologies.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- 2+ years of experience in data engineering, data operations, or related roles, including at least 2 years in life sciences, biotech, or pharmaceutical environments.
- Experience with cloud platforms (e.g., AWS, Azure, or GCP) for data pipeline and storage solutions.
- Understanding of data governance frameworks, metadata management, data lineage tracking, and data quality checks (a sketch follows at the end of this description).
- Strong problem-solving and analytical skills, attention to detail, and the ability to manage multiple priorities in a dynamic environment.
- Effective communication, collaboration, and teamwork skills for working across technical and business stakeholders.
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:

- Data engineering experience in the biotechnology or pharma industry.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases, and with vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:

- Master's degree and 3 to 4+ years of experience in Computer Science, IT, or a related field, OR
- Bachelor's degree and 5 to 8+ years of experience in Computer Science, IT, or a related field.
- Preferred: AWS Certified Data Engineer
- Preferred: Databricks certification
- Preferred: Scaled Agile (SAFe) certification

Soft Skills:

- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Quick to learn, organized, and detail-oriented.
- Strong presentation and public speaking skills.
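The metadata, lineage, and data-quality duties named above typically reduce to concrete checks that gate each pipeline run. Here is a minimal sketch of such a quality gate in PySpark, again assuming a Delta-enabled Spark runtime; the table path, key columns, and thresholds are hypothetical placeholders that a governance team would define.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

# Load the table to validate (placeholder Delta path).
df = spark.read.format("delta").load("/mnt/delta/lab_events")

total = df.count()
null_keys = df.filter(col("sample_id").isNull()).count()
dupes = total - df.dropDuplicates(["sample_id", "recorded_at"]).count()

# Fail the run if thresholds are breached, so bad data never reaches
# downstream consumers; real thresholds would come from governance.
assert null_keys == 0, f"{null_keys} rows with null sample_id"
assert total == 0 or dupes / total < 0.01, f"{dupes} duplicate rows"

print(f"DQ passed: {total} rows, {null_keys} null keys, {dupes} duplicates")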
