Data Engineer

5 - 10 years

4 - 8 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

 
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary:
The ideal candidate will have experience building:
- Reusable Python/PySpark frameworks for standardizing data engineering workflows
- Test frameworks to ensure pipeline reliability and correctness
- Data quality frameworks for monitoring and validation

Additionally, hands-on experience with Datadog or a similar observability tool is required to monitor pipeline performance, optimize resource usage, and ensure system reliability. You will work within a cross-functional team, building scalable, production-grade data pipelines on cloud platforms such as AWS, Azure, or GCP.

Roles & Responsibilities:

Data Engineering & Framework Development
- Develop and maintain ETL/ELT pipelines in Databricks using PySpark and Python.
- Build reusable, modular frameworks to accelerate development and enforce standards across pipelines.
- Implement test frameworks for automated unit, integration, and regression testing of pipelines.
- Design and maintain data quality frameworks to validate ingestion, transformation, and output.
- Optimize Spark jobs for performance, scalability, and cost-efficiency.
- Collaborate with data architects to define robust data models and design patterns.

Cloud & Platform Integration
- Integrate Databricks pipelines with cloud-native storage and warehouse services (e.g., S3, ADLS, Snowflake).
- Implement CI/CD pipelines for Databricks notebooks and jobs using Git, Jenkins, or Azure DevOps.
- Ensure pipelines follow best practices for modularity, reusability, and maintainability.

Monitoring, Observability & Optimization
- Use Datadog to monitor pipeline performance, resource utilization, and system health.
- Build dashboards and alerts for proactive monitoring and troubleshooting.
- Analyze metrics and logs to identify bottlenecks and improve reliability.

Collaboration & Delivery
- Partner with data scientists, analysts, and business stakeholders to translate requirements into scalable solutions.
- Conduct code reviews, enforce best practices, and mentor junior engineers.
- Promote knowledge-sharing of reusable frameworks, testing practices, and data quality approaches.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-8 years of experience in data engineering or software development.
- 3+ years of hands-on experience with Databricks and PySpark.
- Strong Python programming skills, including writing reusable libraries and frameworks.
- Experience designing and implementing test frameworks for ETL/ELT pipelines.
- Experience building data quality frameworks for validation, monitoring, and anomaly detection.
- Proficiency in SQL and experience with cloud data warehouses (Snowflake, Redshift, BigQuery).
- Familiarity with Datadog or similar monitoring tools for metrics, dashboards, and alerts.
- Experience integrating Databricks with AWS, Azure, or GCP services.
- Working knowledge of CI/CD, Git, Docker/Kubernetes, and automated testing.
- Strong understanding of data architecture patterns; medallion/lakehouse architectures preferred.

Nice to Have
- Experience with Airflow, Prefect, or Azure Data Factory for orchestration.
- Exposure to infrastructure-as-code tools (Terraform, CloudFormation).
- Familiarity with MLflow, Delta Live Tables, or Unity Catalog.
- Experience designing frameworks for logging, error handling, or observability.
- Knowledge of data security, access control, and compliance standards.

Soft Skills
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in agile, cross-functional teams.
- Ownership mindset; proactive and self-driven.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Large Language Models.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
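As an illustration of the kind of reusable data quality framework this role involves, below is a minimal sketch in plain Python. All names (`CheckResult`, `not_null`, `in_range`, `run_checks`) are hypothetical; a production version at this level would apply the same pattern to PySpark DataFrames inside Databricks rather than to lists of dicts.

```python
# Minimal sketch of a reusable data-quality check framework.
# All names are hypothetical; a production version would operate on
# PySpark DataFrames inside Databricks rather than lists of dicts.

from dataclasses import dataclass
from typing import Callable


@dataclass
class CheckResult:
    name: str
    passed: bool
    failed_rows: int


def not_null(column: str) -> Callable[[dict], bool]:
    """Row-level rule: the given column must be present and non-null."""
    return lambda row: row.get(column) is not None


def in_range(column: str, lo: float, hi: float) -> Callable[[dict], bool]:
    """Row-level rule: the given column must fall within [lo, hi]."""
    return lambda row: row.get(column) is not None and lo <= row[column] <= hi


def run_checks(rows: list, checks: dict) -> list:
    """Apply each named rule to every row and summarize pass/fail counts."""
    results = []
    for name, rule in checks.items():
        failures = sum(1 for row in rows if not rule(row))
        results.append(CheckResult(name=name, passed=failures == 0, failed_rows=failures))
    return results


rows = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": None},   # fails both checks
    {"id": 3, "amount": -5.0},   # fails the range check
]
results = run_checks(rows, {
    "amount_not_null": not_null("amount"),
    "amount_non_negative": in_range("amount", 0.0, 1e9),
})
for r in results:
    print(f"{r.name}: {'PASS' if r.passed else 'FAIL'} ({r.failed_rows} bad rows)")
```

Keeping rules as small named callables is what makes such a framework reusable across pipelines: each pipeline declares its checks as data, and the same runner produces uniform results that can feed dashboards or alerts.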
    Accenture

    Professional Services

    Dublin
