Data Engineer

Experience: 8 - 10 years

Salary: 8 - 10 Lacs

Posted: 1 month ago | Platform: Foundit

Work Mode

On-site

Job Type

Full Time

Job Description

Job Summary

  • Lead the design and implementation of scalable ETL/ELT data pipelines using Python or C# for efficient data processing (a minimal pipeline sketch follows this list).
  • Architect data solutions for large-scale batch and real-time processing using cloud services (AWS, Azure, Google Cloud).
  • Craft and manage cloud-based data architectures with services like AWS Redshift, Google BigQuery, Azure Data Lake, and Snowflake.
  • Implement cloud data solutions using Azure services such as Azure Data Lake, Blob Storage, SQL Database, Synapse Analytics, and Data Factory.
  • Develop and automate data workflows for seamless integration into Azure platforms for analysis and reporting.
  • Manage and optimize Azure SQL Database, Cosmos DB, and other databases for high availability and performance.
  • Monitor and optimize data pipelines for performance and cost efficiency.
  • Implement data security and governance practices in compliance with regulations (GDPR, HIPAA) using Azure security features.
  • Collaborate with data scientists and analysts to deliver data solutions that meet business analytics needs.
  • Mentor junior data engineers on best practices in data engineering and pipeline design.
  • Set up monitoring and alerting systems for data pipeline reliability.
  • Ensure data accuracy and security through strong governance policies and access controls.
  • Maintain documentation for data pipelines and workflows to support transparency and onboarding.
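
As a rough illustration of the ETL/ELT pipeline work described in the first responsibility above, here is a minimal batch pipeline sketch in Python with Pandas and SQLAlchemy. The source path, connection string, column names, and table name are placeholders invented for the example, not details of this role's actual stack.

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder source and target -- swap in the real lake/warehouse endpoints.
SOURCE_CSV = "s3://example-bucket/raw/orders.csv"              # hypothetical raw drop
TARGET_DSN = "postgresql://user:password@host:5432/analytics"  # hypothetical warehouse


def extract(path: str) -> pd.DataFrame:
    """Batch extract: read raw records from the landing zone."""
    return pd.read_csv(path)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Light cleaning and typing before load."""
    df = df.dropna(subset=["order_id", "amount"])        # assumed column names
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["amount"] = df["amount"].astype(float).round(2)
    return df


def load(df: pd.DataFrame, dsn: str, table: str = "orders_clean") -> None:
    """Load the cleaned frame into a warehouse table."""
    engine = create_engine(dsn)
    df.to_sql(table, engine, if_exists="replace", index=False)


if __name__ == "__main__":
    load(transform(extract(SOURCE_CSV)), TARGET_DSN)
```

In a production pipeline the same extract/transform/load steps would typically be parameterized, scheduled by an orchestrator, and instrumented with monitoring and alerting rather than run as a single script.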

What You Bring

  • 8-10 years of proven experience in data engineering with a focus on large-scale data pipelines and cloud infrastructure.
  • Strong expertise in Python (Pandas, NumPy, ETL frameworks) or C# for building efficient data processing solutions.
  • Extensive experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
  • Deep knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
  • Familiarity with big data technologies (Apache Spark, Hadoop, Kafka).
  • Strong background in data modeling and ETL/ELT development for large datasets.
  • Experience with version control (Git) and CI/CD pipelines for deploying data solutions.
  • Excellent problem-solving skills for troubleshooting data pipeline issues.
  • Experience optimizing queries and data processing for speed and cost efficiency.
  • Preferred: Experience integrating data pipelines with machine learning or AI models.
  • Preferred: Knowledge of Docker, Kubernetes, or containerized services for data workflows.
  • Preferred: Familiarity with automation tools (Apache Airflow, Luigi, DBT) for managing data workflows (see the orchestration sketch after this list).
  • Preferred: Understanding of data privacy regulations (GDPR, HIPAA) and governance practices.
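
The orchestration bullet above mentions Apache Airflow; as a hedged illustration only, the sketch below shows what a minimal daily ETL DAG might look like (assuming Airflow 2.4+ for the schedule keyword). The DAG id, schedule, and task bodies are placeholders, not part of this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


# Placeholder task callables -- real tasks would invoke the actual
# extract/transform/load logic (or trigger Data Factory / Spark jobs).
def extract():
    print("pull raw data from the source system")


def transform():
    print("clean and reshape the extracted data")


def load():
    print("write the result to the warehouse")


with DAG(
    dag_id="example_daily_etl",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

Failure alerting (for example via on_failure_callback or SLA settings) would normally be added so pipeline reliability issues surface quickly, in line with the monitoring responsibilities above.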
