Data Engineering Manager - Web Crawling & Pipeline Architecture

Experience: 7 - 12 years

Salary: 16 - 20 Lacs


Work Mode: Work from Office

Job Type: Full Time

Job Description

What We Are Looking For:
  • Proven experience leading data engineering teams, with strong ownership of web crawling systems and pipeline architecture.
  • Expertise in designing, building, and optimizing scalable data pipelines, preferably using workflow orchestration tools such as Airflow or Celery (a minimal orchestration sketch follows this list).
  • Hands-on proficiency in Python and SQL for data extraction, transformation, processing, and storage.
  • Experience working with cloud platforms such as AWS, GCP, or Azure for data infrastructure, deployments, and pipeline operations.
  • Deep understanding of web crawling frameworks, proxy rotation, anti-bot strategies, session handling, and compliance with global data collection standards (GDPR/CCPA-safe crawling).
  • Strong expertise in AI-driven automation, including integrating AI agents or frameworks like Crawl4ai into scraping, validation, and pipeline workflows.
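
As a rough illustration of the orchestration requirement above, here is a minimal Airflow DAG sketching an extract-transform-load flow. This is a sketch assuming Airflow 2.4+ with the TaskFlow API; the DAG name, schedule, task logic, and record schema are illustrative assumptions, not details from this posting.

```python
# Minimal pipeline-orchestration sketch, assuming Airflow 2.4+ (TaskFlow API).
# All task names, schedules, and record fields below are hypothetical.
from datetime import datetime, timedelta

from airflow.decorators import dag, task


@dag(
    schedule="@hourly",  # hypothetical schedule
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
)
def crawl_pipeline():
    @task
    def extract() -> list[dict]:
        # A real system would pull crawled pages from a queue or object store.
        return [{"url": "https://example.com", "html": "<html>...</html>"}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Parse, clean, and normalize the raw records.
        return [{"url": r["url"], "length": len(r["html"])} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Persist the transformed records (stubbed here with a print).
        for r in records:
            print(f"loading {r['url']}")

    load(transform(extract()))


crawl_pipeline()
```

Airflow handles retries, scheduling, and backfills here, which is why it (or Celery for distributed task queues) is called out as the preferred orchestration layer.
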
Responsibilities:
  • Lead and mentor data engineering and web crawling teams, ensuring high-quality delivery and adherence to best practices.
  • Architect, implement, and optimize scalable data pipelines that support high-volume data ingestion, transformation, and storage.
  • Build and maintain robust crawling systems using modern frameworks, handling IP rotation, throttling, and dynamic content extraction (a simplified crawler sketch follows this list).
  • Establish pipeline orchestration using Airflow, Celery, or similar distributed processing technologies.
  • Define and enforce data quality, validation, and security measures across all data flows and pipelines.
  • Collaborate with product, engineering, and analytics teams to translate data requirements into scalable technical solutions.
  • Develop monitoring, logging, and performance metrics to ensure high availability and reliability of data systems.
  • Oversee cloud-based deployments, cost optimization, and infrastructure improvements on AWS/GCP/Azure.
  • Integrate AI agents or LLM-based automation for tasks such as error resolution, data validation, enrichment, and adaptive crawling.
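
To make the IP-rotation and throttling responsibility concrete, here is a simplified sketch using the requests library. The proxy addresses, delay value, and user-agent string are placeholders, not details from this role; a production crawler would also handle retries, robots.txt, and per-domain rate limits.

```python
# Proxy-rotation and throttling sketch, assuming the `requests` library
# and Python 3.10+. Proxies, delays, and URLs below are placeholders.
import itertools
import time

import requests

# Hypothetical proxy pool; a production system would source these dynamically.
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
]
proxy_cycle = itertools.cycle(PROXIES)


def fetch(url: str, delay_seconds: float = 1.0) -> str | None:
    """Fetch a URL through the next proxy in the pool, pausing between requests."""
    proxy = next(proxy_cycle)
    try:
        response = requests.get(
            url,
            proxies={"http": proxy, "https": proxy},
            headers={"User-Agent": "example-crawler/1.0"},  # identify the bot
            timeout=10,
        )
        response.raise_for_status()
        return response.text
    except requests.RequestException:
        return None  # a real crawler would retry through another proxy
    finally:
        time.sleep(delay_seconds)  # basic per-request throttling
```
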
Qualifications:
  • Bachelor's or Master's degree in Engineering, Computer Science, or a related field.
  • 7-12 years of relevant experience in data engineering, pipeline design, or large-scale web crawling systems.
  • Strong expertise in Python, SQL, and modern data processing practices.
  • Experience working with Airflow, Celery, or similar workflow automation tools.
  • Solid understanding of proxy systems, anti-bot techniques, and scalable crawler architecture.
  • Hands-on experience with cloud data platforms (AWS/GCP/Azure).
  • Experience with AI/LLM frameworks (Crawl4ai, LangChain, LlamaIndex, AutoGen, OpenAI, or similar); a brief LLM-validation sketch follows this list.
  • Strong analytical, architectural, and leadership skills.
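
As a hedged illustration of integrating LLMs into a validation workflow, the sketch below uses the official openai Python client (v1.x) to sanity-check a crawled record. The model name, prompt, and record schema are illustrative assumptions, not part of the job description.

```python
# LLM-assisted data-validation sketch, assuming the official `openai`
# Python client (v1.x). Model name, prompt, and record schema are
# illustrative assumptions only.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def validate_record(record: dict) -> bool:
    """Ask the model whether a crawled record looks complete and plausible."""
    prompt = (
        "Answer strictly 'yes' or 'no': is this product record complete "
        f"and plausible?\n{json.dumps(record)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer.startswith("yes")
```

In practice, this kind of check would sit behind a deterministic schema validator and only handle the ambiguous cases, keeping LLM calls (and cost) to a minimum.
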
