Sr. Data Engineer

3 - 5 years

22 - 25 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

As a Data Engineer II, you will play a crucial role in developing, optimizing, and managing several large data lakes and data warehouses that consolidate data from multiple disparate sources.

Responsibilities

Data Pipeline Operations

  • Design, build, and maintain robust and scalable data pipelines to ingest, transform, and deliver structured and unstructured data from multiple sources.
  • Ensure high-quality data by implementing monitoring, validation, and error-handling processes.
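
For orientation only, a minimal ingest-transform-load step in Python (standard library only) might look like the sketch below; the source path, column names, and the SQLite table standing in for a real warehouse are all hypothetical, not details of this role.

  import csv
  import logging
  import sqlite3

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("events_pipeline")

  def extract(path):
      """Read raw rows from a CSV source (the path is a placeholder)."""
      with open(path, newline="") as f:
          return list(csv.DictReader(f))

  def transform(rows):
      """Keep only rows with a non-empty id and normalise the amount field."""
      cleaned = []
      for row in rows:
          if not row.get("id"):
              continue                          # drop rows that fail the basic check
          row["amount"] = float(row.get("amount") or 0)
          cleaned.append(row)
      return cleaned

  def load(rows, db_path="demo_warehouse.db"):
      """Append cleaned rows to a demo SQLite table standing in for the warehouse."""
      with sqlite3.connect(db_path) as conn:
          conn.execute("CREATE TABLE IF NOT EXISTS events (id TEXT, amount REAL)")
          conn.executemany(
              "INSERT INTO events (id, amount) VALUES (?, ?)",
              [(r["id"], r["amount"]) for r in rows],
          )

  def run(path="raw/events.csv"):
      try:
          rows = transform(extract(path))
          load(rows)
          log.info("loaded %d rows", len(rows))  # simple health signal for monitoring
      except (OSError, ValueError) as exc:
          log.error("pipeline failed: %s", exc)  # surface failures instead of silently dropping data
          raise

  if __name__ == "__main__":
      run()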

Platform Engineering and Optimization

  • Create and update data models to represent the structure of the data.
  • Design, implement, and maintain database systems. Optimize database performance and ensure data integrity. Troubleshoot and resolve database issues.
  • Build and manage data warehouses for storage and analysis of large datasets.
  • Collaborate on data modeling, schema design, and performance optimization for large-scale datasets.

Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.

Scripting and Programming: Develop and automate data processes using programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error-handling mechanisms.
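
As a small illustration of the validation and error-handling point above, the sketch below checks a few hypothetical fields and separates bad records from good ones rather than failing the whole batch; the field names and rules are assumptions, not requirements of the posting.

  import logging

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("validation")

  REQUIRED_FIELDS = ("id", "event_time", "amount")  # hypothetical schema

  def validate_record(record):
      """Return a list of problems found in one record (empty list means valid)."""
      errors = []
      for field in REQUIRED_FIELDS:
          if not record.get(field):
              errors.append(f"missing {field}")
      try:
          if float(record.get("amount", 0)) < 0:
              errors.append("amount must be non-negative")
      except (TypeError, ValueError):
          errors.append("amount is not numeric")
      return errors

  def validate_batch(records):
      """Split a batch into good and bad records instead of aborting the whole run."""
      good, bad = [], []
      for record in records:
          problems = validate_record(record)
          (bad if problems else good).append((record, problems))
      if bad:
          log.warning("rejected %d of %d records", len(bad), len(records))
      return [r for r, _ in good], bad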

Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.

Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.

Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.

Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.

Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.

Skills

  • Expertise in data modeling, database design, and data warehousing; proficiency in SQL and programming languages such as Python, Java, or Scala.
  • Cloud-native architecture expertise (AWS, GCP, or Azure), including containerization (Docker, Kubernetes) and infrastructure-as-code (Terraform, CloudFormation).

Experience and Qualifications

  • Bachelor's/Master's degree in Engineering (Computer Science, Information Systems) with 3-5 years of experience in data engineering, BI engineering, and data warehouse development.
  • Excellent command of SQL and one or more programming languages, preferably Python or Java.
  • Knowledge of Flink, Airflow, Apache Spark, DBT (good to have), and Athena/Presto (a brief Airflow sketch follows this list).
  • Experience working with Kubernetes.
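
For candidates less familiar with Airflow, a minimal DAG sketch is shown below (assuming Airflow 2.x); the DAG id, schedule, and placeholder task callables are illustrative only, not part of this role's actual pipelines.

  from datetime import datetime
  from airflow import DAG
  from airflow.operators.python import PythonOperator

  def extract():
      """Placeholder: pull raw data from a source system."""

  def transform():
      """Placeholder: clean and reshape the extracted data."""

  def load():
      """Placeholder: write the result to the warehouse."""

  with DAG(
      dag_id="daily_events_etl",           # hypothetical pipeline name
      start_date=datetime(2024, 1, 1),
      schedule_interval="@daily",
      catchup=False,
  ) as dag:
      extract_task = PythonOperator(task_id="extract", python_callable=extract)
      transform_task = PythonOperator(task_id="transform", python_callable=transform)
      load_task = PythonOperator(task_id="load", python_callable=load)
      extract_task >> transform_task >> load_task

Placing a file like this in the scheduler's DAG folder is enough for Airflow to pick it up and run the three tasks in order each day.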

Zeta Inc.

Financial Technology (FinTech)

Los Angeles
