Freelance Data Engineer (Snowflake, Databricks, PySpark, Azure)

Experience: 4 years

Salary: 0 Lacs

Posted: 1 day ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Company Description

ThreatXIntel is a startup cybersecurity company dedicated to protecting businesses and organizations from cyber threats. Our team offers a range of services, including cloud security, web and mobile security testing, and DevSecOps. With a focus on delivering customized, affordable solutions, ThreatXIntel provides high-quality cybersecurity for businesses of all sizes. We take a proactive approach to security, continuously monitoring and testing digital environments to identify vulnerabilities. Our mission is to provide exceptional cybersecurity services that give our clients peace of mind.


Role Description

Responsibilities

  • Design, build, and maintain scalable data pipelines and ETL/ELT processes using PySpark, Databricks, and Snowflake (a hedged sketch of such a pipeline follows this list).
  • Develop and optimize data models in Snowflake for analytics and reporting.
  • Implement Azure data services (Azure Data Lake, Azure Synapse, Azure Data Factory, Event Hub, etc.) to integrate structured and unstructured data sources.
  • Engineer batch and streaming data flows to support real-time analytics and BI use cases (see the streaming variant in the sketch below).
  • Ensure data quality, performance, and governance across all pipelines and storage layers.
  • Collaborate with analysts, data scientists, and business stakeholders to deliver reusable datasets and enable self-service analytics.
  • Apply best practices for CI/CD, version control, and DevOps in data engineering workflows.
  • Provide technical recommendations on architecture, performance tuning, and cost optimization in the Azure ecosystem.
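For illustration, the following is a minimal sketch of the kind of pipeline work described above: a PySpark batch job on Databricks that reads raw data from Azure Data Lake, cleans it, and loads it into Snowflake, plus a streaming read from Event Hub over its Kafka-compatible endpoint. All account names, paths, credentials, topic and table names below are placeholder assumptions, not details from this posting.

```python
# Minimal sketch (assumptions throughout): batch ETL from ADLS Gen2 to
# Snowflake with PySpark, as run on a Databricks cluster where the
# Spark-Snowflake connector and the Kafka source are available.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw Parquet files from Azure Data Lake (abfss:// is the ADLS Gen2 scheme).
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Basic cleanup: deduplicate, normalize types, keep only valid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load into Snowflake via the Spark-Snowflake connector (format name
# "snowflake" on Databricks; net.snowflake.spark.snowflake elsewhere).
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",  # placeholder; fetch from Azure Key Vault in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(clean.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "ORDERS_CLEAN")
      .mode("overwrite")
      .save())

# Streaming variant for the real-time flows mentioned above: Event Hub
# exposes a Kafka-compatible endpoint, so the built-in Kafka source works.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers",
                 "examplehub.servicebus.windows.net:9093")
         .option("subscribe", "orders")
         .option("kafka.security.protocol", "SASL_SSL")
         .option("kafka.sasl.mechanism", "PLAIN")
         .option("kafka.sasl.jaas.config",
                 'org.apache.kafka.common.security.plain.PlainLoginModule '
                 'required username="$ConnectionString" password="***";')
         .load()
)
# A writeStream sink (e.g., Delta) would follow here in a real job.
```

In practice, secrets such as the Snowflake password or the Event Hub connection string would come from Azure Key Vault (for example, through a Databricks secret scope) rather than being hard-coded.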

Qualifications

  • Experience: 4+ years in data engineering with a strong focus on Snowflake, Databricks, and PySpark.
  • Cloud: Hands-on expertise with Azure Data Services (ADF, Data Lake, Synapse, Event Hub, Key Vault, etc.).
  • Programming: Advanced proficiency in Python (PySpark) and SQL for data transformations.
  • Data Warehousing: Strong knowledge of schema design, partitioning, clustering, and query optimization in Snowflake (a brief illustration follows this list).
  • Pipelines & Orchestration: Experience with Azure Data Factory, Airflow, or similar tools.
  • Best Practices: Familiarity with CI/CD pipelines, Git, and Terraform/ARM templates.
  • Excellent problem-solving, communication, and stakeholder collaboration skills.
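As a rough illustration of the Snowflake warehousing skills listed above, the sketch below creates a clustered fact table and inspects clustering health using the snowflake-connector-python package. The connection values, table, columns, and clustering key are assumptions for the example only.

```python
# Illustrative sketch: schema design and clustering in Snowflake driven
# from Python. All names and connection values below are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",  # placeholder; fetch from Azure Key Vault in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Cluster a large fact table on the columns most queries filter by so
# Snowflake can prune micro-partitions instead of scanning the full table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS ORDERS_CLEAN (
        order_id STRING,
        order_ts TIMESTAMP_NTZ,
        region   STRING,
        amount   NUMBER(12, 2)
    )
    CLUSTER BY (region, TO_DATE(order_ts))
""")

# Check how well the physical layout matches the clustering key.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION("
    "'ORDERS_CLEAN', '(region, TO_DATE(order_ts))')"
)
print(cur.fetchone()[0])

cur.close()
conn.close()
```

Clustering on the columns most queries filter by lets Snowflake prune micro-partitions, which is central to the query-optimization work this role calls for.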
