
Senior Data Engineer

Experience: 5 years

Salary: 0 Lacs (not specified)

Posted: 9 hours ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

For a quick response, please fill out the form below:

Job Application Form

  • 34043 - Data Scientist - Senior I - Udaipur


https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform


Experience & Background

  • 3–5 years of experience in Data Engineering or similar roles
  • Strong foundation in cloud-native data infrastructure and scalable architecture design

Responsibilities

  • Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools
  • Design and optimize data lakes and data warehouses for real-time and batch processing
  • Ingest, transform, and organize large volumes of structured and unstructured data
  • Collaborate with analysts, data scientists, and backend engineers to define data needs
  • Monitor, troubleshoot, and improve pipeline performance, cost-efficiency, and reliability
  • Implement data validation, consistency checks, and quality frameworks
  • Apply data governance best practices and ensure compliance with privacy and security standards
  • Use CI/CD tools to deploy workflows and automate pipeline deployments
  • Automate repetitive tasks using scripting, workflow tools, and scheduling systems
  • Translate business logic into data logic while working cross-functionally

Required Skills

  • Strong in Python, with familiarity with libraries such as pandas and PySpark
  • Hands-on experience with at least one major cloud provider (AWS, Azure, GCP)
  • Experience with ETL tools such as AWS Glue, Azure Data Factory, GCP Dataflow, or Apache NiFi
  • Proficiency with storage systems such as S3, Azure Blob Storage, GCP Cloud Storage, or HDFS
  • Familiarity with data warehouses such as Redshift, BigQuery, Snowflake, or Synapse
  • Experience with serverless computing such as AWS Lambda, Azure Functions, or GCP Cloud Functions
  • Familiarity with data streaming tools such as Kafka, Kinesis, Pub/Sub, or Event Hubs
  • Proficiency in SQL, with knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases
  • Familiarity with big data frameworks such as Hadoop or Apache Spark
  • Experience with orchestration tools such as Apache Airflow, Prefect, GCP Workflows, or ADF pipelines
  • Familiarity with CI/CD tools such as GitLab CI, Jenkins, or Azure DevOps
  • Proficiency with Git, GitHub, or GitLab workflows
  • Strong communication, collaboration, and problem-solving mindset

Bonus Points

  • Experience with data observability or monitoring tools
  • Contributions to internal data platform development
  • Comfort working in data mesh or distributed data ownership environments
  • Experience building data validation pipelines with Great Expectations or similar tools
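To illustrate the kind of data validation and consistency checking the role involves (this sketch is not part of the original posting; the column names and rules are hypothetical examples), a minimal pandas-based check might look like:

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Run simple consistency checks on a batch; return failure messages."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].isna().any():
        failures.append("null amounts")
    if (df["amount"] < 0).any():
        failures.append("negative amounts")
    return failures

# Hypothetical sample batch standing in for a pipeline's output.
df = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [10.5, 20.0, 15.25],
})
print(validate(df))  # an empty list means all checks passed
```

In production these kinds of rules would typically live in a quality framework such as Great Expectations and run as a pipeline step, failing the batch (or alerting) when checks do not pass.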

Zigsaw
