
Data Engineer - Operations

3 - 5 years

3 - 7 Lacs

Posted: 6 days ago | Platform: Foundit


Work Mode

On-site

Job Type

Full Time

Job Description

As a Data Engineer in Operations, you will work on the operational management, monitoring, and support of scalable data pipelines running in Azure Databricks, Hadoop, and Radium. You will ensure the reliability, performance, and availability of data workflows and maintain production environments. You will collaborate closely with data engineers, architects, and platform teams to implement best practices in data pipeline operations and incident management, ensuring data availability and completeness.

Primary responsibilities:

- Operational support and incident management for Azure Databricks, Hadoop, and Radium data pipelines.
- Collaborating with data engineering and platform teams to define and enforce operational standards, SLAs, and best practices.
- Designing and implementing monitoring, alerting, and logging solutions for Azure Databricks pipelines.
- Coordinating with central teams to ensure compliance with organizational operational standards and security policies.
- Developing and maintaining runbooks, SOPs, and troubleshooting guides for pipeline issues.
- Managing the end-to-end lifecycle of data pipeline incidents, including root cause analysis and remediation.
- Overseeing pipeline deployments, rollbacks, and change management using CI/CD tools such as Azure DevOps.
- Ensuring data quality and validation checks are effectively monitored in production.
- Working closely with platform and infrastructure teams to address pipeline and environment-related issues.
- Providing technical feedback and mentoring junior operations engineers.
- Conducting peer reviews of operational scripts and automation code.
- Automating manual operational tasks using Scala and Python scripts.
- Managing escalations and coordinating critical production issue resolution.
- Participating in post-mortem reviews and continuous improvement initiatives for data pipeline operations.
Qualifications

- Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field.
- 3+ years of experience in data engineering, ETL tools, and working with large-scale data sets in Operations.
- Proven experience with cloud platforms, particularly Azure Databricks.
- Minimum 3 years of hands-on experience working with distributed cluster environments (e.g., Spark clusters).
- Strong operational experience in managing and supporting data pipelines in production environments.

Additional Information

Key Competencies:

- Experience in Azure Databricks operations or data pipeline support.
- Understanding of Scala/Python programming for troubleshooting in Spark environments.
- Hands-on experience with Delta Lake, Azure Data Lake Storage (ADLS), DBFS, and Azure Data Factory (ADF).
- Solid understanding of distributed data processing frameworks and streaming data operations.
- Understanding and hands-on usage of Kafka as a message broker.
- Experience with Azure SQL Database and cloud-based data services.
- Strong skills in monitoring tools such as Splunk, ELK, and Grafana, plus alerting frameworks and incident management.
- Experience working with CI/CD pipelines using Azure DevOps or equivalent.
- Excellent problem-solving, investigative, and troubleshooting skills in large-scale data environments.
- Experience defining operational SLAs and implementing proactive monitoring solutions.
- Familiarity with data governance, security, and compliance best practices in cloud data platforms.
- Strong communication skills and the ability to work independently under pressure.

Soft Skills:

- Good communication skills; extensive use of MS Teams.
- Experience using Azure Boards and JIRA.
- Good level of English as a business language.


Company: Bosch India

Location: Bengaluru / Bangalore, Karnataka, India