Experience: 5 - 8 years

Salary: 15 - 25 Lacs

Posted: 1 month ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

Key Responsibilities:

  • Data Pipeline Development:

    Design, develop, and maintain robust and scalable data pipelines to collect, clean, and process large datasets from various sources.

  • Data Integration:

    Integrate and optimize data from multiple platforms and systems, ensuring smooth flow and accessibility across the organization.

  • Data Quality:

    Ensure high-quality, accurate, and consistent data is made available for analysis and reporting, applying appropriate data validation and cleaning techniques.

  • ETL Process:

    Develop, deploy, and maintain Extract, Transform, Load (ETL) processes to move data between systems.

  • Data Warehouse Management:

    Work on the design and optimization of data warehouse architectures for structured and unstructured data.

  • Collaboration:

    Work closely with data analysts, data scientists, and business stakeholders to understand data needs and deliver data solutions that support business goals.

  • Performance Optimization:

    Monitor and optimize the performance of data pipelines and workflows, ensuring efficient processing of large-scale data.

  • Documentation:

    Create and maintain clear and comprehensive documentation for data processes, workflows, and systems.

  • Automation:

    Implement automation in data workflows and processes to streamline operations and reduce manual intervention (see the pipeline sketch after this list).
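
As a concrete illustration of the pipeline, ETL, and automation duties above, here is a minimal sketch of a daily ETL DAG in Apache Airflow 2.x (one of the tools named under Required Qualifications). The DAG id, task bodies, and schedule are hypothetical placeholders, not this team's actual pipeline.

```python
# Minimal sketch of a daily extract -> transform -> load pipeline in Apache Airflow 2.x.
# The DAG id, task bodies, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder logic).
    print("extracting raw data")


def transform(**context):
    # Clean and reshape the extracted records (placeholder logic).
    print("transforming data")


def load(**context):
    # Write the transformed records to the warehouse (placeholder logic).
    print("loading into the warehouse")


with DAG(
    dag_id="daily_etl_sketch",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps strictly in ETL order.
    extract_task >> transform_task >> load_task
```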

Required Qualifications:

  • Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (or equivalent work experience).

  • Proven experience as a Data Engineer or in a similar data-focused role.

  • Proficiency in SQL and other data query languages.

  • Experience with data pipeline tools like Apache Airflow, Luigi, or similar.

  • Solid understanding of ETL processes and data transformation techniques.

  • Strong experience working with data storage technologies (e.g., SQL/NoSQL databases, cloud data warehouses like Databricks).

  • Familiarity with Python and PySpark for data manipulation and automation (a small PySpark cleaning sketch follows this list).

  • Experience working with cloud platforms such as AWS, Azure, or Google Cloud.

  • Knowledge of data warehousing and big data technologies like Hadoop, Spark, or Kafka is a plus.

  • Strong problem-solving skills and the ability to troubleshoot data issues efficiently.

  • Excellent communication skills and the ability to work collaboratively with cross-functional teams.
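
To ground the SQL/PySpark and data-quality items above, the snippet below is a hedged sketch of a typical validation-and-cleaning step. The input/output paths, column names, and rules are assumptions for illustration only.

```python
# Hedged PySpark sketch of a de-duplication and validation step.
# Paths, column names, and rules below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleaning_sketch").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")  # hypothetical input path

cleaned = (
    orders.dropDuplicates(["order_id"])                    # drop duplicate keys
    .filter(F.col("order_id").isNotNull())                 # reject rows missing the key
    .withColumn("amount", F.col("amount").cast("double"))  # enforce a numeric type
    .filter(F.col("amount") >= 0)                          # basic range/sanity check
)

cleaned.write.mode("overwrite").parquet("/data/clean/orders")  # hypothetical output path
```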

Preferred (Not Mandatory) Qualifications:

  • Master's degree in a related field.

  • Experience with containerization technologies such as Docker and Kubernetes; exposure to Kafka and AI/ML (see the Dockerfile sketch after this list).

  • Familiarity with data visualization tools (e.g., Tableau, Power BI) for generating business insights.

  • Familiarity with machine learning frameworks and workflows.

  • Experience with version control systems (e.g., Git).
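
For the containerization item above, a minimal Dockerfile sketch for packaging a Python pipeline job might look like the following. The base image tag, file names, and entry point are assumptions, not a prescribed setup.

```dockerfile
# Hedged sketch of a container image for a Python pipeline job.
# Base image tag, file names, and entry point are hypothetical.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the (hypothetical) pipeline code into the image.
COPY pipeline.py .

# Run the pipeline job when the container starts.
CMD ["python", "pipeline.py"]
```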

What We Offer:

  • Competitive salary and benefits package.

  • A dynamic and collaborative work environment.

  • Opportunities for growth and career advancement.

  • Access to the latest technologies and tools.


Trangile Services

Consulting / Technology

San Francisco
