3 - 5 years

20 - 25 Lacs

Gurgaon

Posted: 2 months ago | Platform: Naukri


Skills Required

Python, Apache Spark, Azure Databricks

Work Mode

Work from Office

Job Type

Full Time

Job Description

Responsibilities:

Data Ingestion and Transformation:

  • Design, develop, and implement data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, cloud storage) into data warehouses and data lakes.
  • Develop and maintain data quality checks and data validation processes.
  • Implement data transformations and enrichments using various tools and techniques (e.g., SQL, Python, Spark).

Data Warehousing and Lakehouse:

  • Design and implement data models and schemas for data warehouses and data lakes.
  • Optimize data storage and retrieval for efficient query performance.
  • Ensure data security and compliance with relevant regulations.

Cloud Technologies:

  • Experience with cloud-based data platforms (e.g., AWS, Azure, GCP), including cloud storage, data warehousing, and data processing services.
  • Leverage cloud-native technologies and services to build scalable and cost-effective data solutions.

Data Pipelines and Automation:

  • Develop and maintain automated data pipelines using tools like Apache Airflow, Luigi, or similar.
  • Implement monitoring and alerting systems to track data pipeline health and performance.

Collaboration and Communication:

  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
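As a rough illustration of the extract/validate/transform/load work described in the responsibilities, here is a minimal sketch in plain Python. In a real role these steps would typically run on Spark/Databricks and be orchestrated by a tool like Airflow; every field name and validation rule below is a hypothetical example, not part of this posting.

```python
# Minimal ETL sketch: extract raw records, apply a data quality check,
# transform/enrich, and load into a stand-in warehouse.
# All field names and validation rules are hypothetical examples.

def extract():
    # Stand-in for reading from a database, API, or cloud storage.
    return [
        {"id": 1, "amount": "120.50", "country": "in"},
        {"id": 2, "amount": "-5", "country": "in"},   # fails validation below
        {"id": 3, "amount": "80.00", "country": "us"},
    ]

def validate(record):
    # Data quality check: amount must parse as a number and be non-negative.
    try:
        return float(record["amount"]) >= 0
    except (KeyError, ValueError):
        return False

def transform(record):
    # Typing/enrichment step: cast amount to float, normalize country code.
    return {
        "id": record["id"],
        "amount": float(record["amount"]),
        "country": record["country"].upper(),
    }

def load(records, warehouse):
    # Stand-in for writing to a warehouse or lake table.
    warehouse.extend(records)

def run_pipeline():
    warehouse = []
    raw = extract()
    clean = [transform(r) for r in raw if validate(r)]
    load(clean, warehouse)
    return warehouse

if __name__ == "__main__":
    print(run_pipeline())
```

The same shape (extract → validate → transform → load) carries over directly to a PySpark job or an Airflow DAG, where each function becomes a task or a DataFrame stage.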

Serendipity Corporate Services

Corporate Services

Business City

50 Employees

108 Jobs

Key People

  • John Doe, CEO
  • Jane Smith, CFO
