Experience: 10 years

Salary: ₹25 - 27 Lacs

Posted: 6 days ago | Platform: GlassDoor

Work Mode: On-site

Job Type: Full Time

Job Description

Data Infrastructure & Pipeline Development: ● Design, develop, and optimize scalable, efficient, and reliable data pipelines for large-scale data processing and transformation. ● Manage and maintain data architecture, ensuring high availability and performance using tools like Snowflake, Dataproc, BigQuery and other cloud technologies. ● Lead the integration of data sources from multiple systems, ensuring seamless data flow across various platforms.

● Build and optimize data pipelines using BigQuery, Snowflake, DBT Cloud, and Airflow (see the orchestration sketch after this list).

● Expertise in data modelling to design and build data warehouses, data marts, and data lakes.

● Manage version control and workflows with GitHub.
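To make the pipeline responsibilities above concrete, here is a minimal sketch of an Airflow DAG that lands a file in BigQuery and then triggers a dbt Cloud job. It assumes Airflow 2.4+ with the Google and dbt Cloud provider packages installed; the DAG id, bucket, table, and dbt Cloud job id are placeholders, not values from this posting.

```python
# Sketch only: load raw CSVs into BigQuery, then run dbt Cloud transformations.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.dbt.cloud.operators.dbt import DbtCloudRunJobOperator

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Land raw CSVs from GCS into a BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_sales",
        bucket="example-landing-bucket",  # placeholder bucket
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.raw.sales",  # placeholder
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    # Hand transformation off to dbt Cloud once the load succeeds.
    run_dbt = DbtCloudRunJobOperator(
        task_id="run_dbt_transformations",
        job_id=12345,  # placeholder dbt Cloud job id
        check_interval=60,
        timeout=3600,
    )

    load_raw >> run_dbt
```

The same load-then-transform pattern extends to Snowflake via its own Airflow provider.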

Performance & Optimization:

● Perform tuning and optimization of queries and data pipelines to ensure high-performance data systems.

● Conduct regular performance reviews and recommend improvements or optimizations for system reliability, speed, and cost-efficiency.
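As one hedged illustration of this tuning work, the sketch below uses the google-cloud-bigquery Python client to build a partitioned, clustered table and then dry-run a query to estimate scanned bytes before paying for it. The project, dataset, table, and column names are hypothetical.

```python
# Sketch of routine BigQuery tuning: partition + cluster, then dry-run.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

# Partitioning by date and clustering by customer_id typically reduces
# the bytes scanned by selective queries.
client.query(
    """
    CREATE OR REPLACE TABLE analytics.sales_optimized
    PARTITION BY DATE(order_ts)
    CLUSTER BY customer_id AS
    SELECT * FROM raw.sales
    """
).result()

# Dry-run the downstream query to check how much data it would scan.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    """
    SELECT customer_id, SUM(amount) AS total
    FROM analytics.sales_optimized
    WHERE DATE(order_ts) = '2024-06-01'
    GROUP BY customer_id
    """,
    job_config=job_config,
)
print(f"Estimated bytes scanned: {job.total_bytes_processed}")
```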

DBT (Data Build Tool) Implementation:

● Implement and maintain DBT models for data transformation workflows.

● Collaborate with data analysts and data scientists to ensure high-quality, well-documented datasets for downstream analysis.

● Ensure the use of best practices for DBT testing, version control, and deployment.
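dbt models are usually SQL, but since this role pairs DBT with Python, here is a minimal sketch of a dbt Python model (supported from dbt 1.3 on Snowflake and BigQuery); `stg_orders` and its columns are hypothetical upstream names used only for illustration.

```python
# Sketch of a dbt Python model as run on Snowflake (Snowpark session).
import snowflake.snowpark.functions as F


def model(dbt, session):
    # Materialize as a table; dbt handles DDL and schema placement.
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # hypothetical staging model

    # Aggregate to one row per customer for downstream analysis.
    return orders.group_by("customer_id").agg(
        F.sum("amount").alias("lifetime_value"),
        F.count("order_id").alias("order_count"),
    )
```

Tests and documentation for such a model would live in the accompanying YAML schema file, per standard dbt practice.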

Snowflake Management:

● Architect and optimize Snowflake data warehouse environments.

● Oversee and manage Snowflake data ingestion, transformation, and storage strategies.

● Collaborate with cross-functional teams to ensure Snowflake is used effectively and efficiently.
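As a hedged sketch of the ingestion and storage work above, the snippet below uses the snowflake-connector-python package to stage a local file and bulk-load it with COPY INTO; the account, credentials, file path, and table name are placeholders.

```python
# Sketch of Snowflake ingestion via a table stage and COPY INTO.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # placeholder
    user="etl_user",            # placeholder
    password="***",             # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # Stage the file on the table's internal stage, then bulk-load it;
    # COPY INTO is the usual high-throughput ingestion path.
    cur.execute("PUT file:///data/sales_2024_06_01.csv @%SALES")
    cur.execute(
        """
        COPY INTO SALES
        FROM @%SALES
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """
    )
finally:
    cur.close()
    conn.close()
```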

Leadership & Mentorship:

● Lead and mentor a team of data engineers, ensuring that best practices are followed in the development and deployment of data pipelines.

● Conduct code reviews, provide feedback, and ensure the implementation of high-quality data solutions.

Preferred Skills:

● 10+ years of experience in Data Engineering with a strong focus on data warehousing, ETL pipelines, and big data technologies.

● 3-5 years of hands-on experience with the Snowflake data warehouse or BigQuery, including setup, configuration, optimization, and maintenance.

● Proficiency in SQL for query optimization and performance tuning.

● In-depth experience with Dataproc for running large-scale data processing workflows (e.g., Spark, Hadoop); see the PySpark sketch after this list.

● Expertise with DBT or another ELT tool for data transformation and model building.
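To illustrate the Dataproc/Spark requirement above, here is a minimal PySpark batch job of the kind one might submit with `gcloud dataproc jobs submit pyspark`; the bucket paths and column names are hypothetical.

```python
# Sketch of a PySpark batch job suitable for a Dataproc cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-rollup").getOrCreate()

# Read raw events from GCS (placeholder bucket/path).
events = spark.read.parquet("gs://example-bucket/raw/sales/2024-06-01/")

# Roll up to per-customer daily totals.
daily = events.groupBy("customer_id").agg(
    F.sum("amount").alias("daily_total"),
    F.count("*").alias("event_count"),
)

# Write results back to GCS for downstream loading into BigQuery/Snowflake.
daily.write.mode("overwrite").parquet(
    "gs://example-bucket/curated/daily_sales/2024-06-01/"
)

spark.stop()
```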

Technical Skills:

● Strong experience in cloud platforms like AWS, GCP, or Azure, with a focus on data engineering tools and services.

● Proficient in programming/scripting languages such as Python, Java, or Scala for data processing.

● Experience with CI/CD pipelines and version control (Git, Jenkins, etc.).

● Knowledge of distributed computing frameworks (e.g., Spark, Hadoop) and related data processing concepts.

Data Architecture & Design:

● Experience building and maintaining data warehouses and data lakes.

● Strong understanding of data modeling concepts, data quality, and governance.

● Familiarity with Kafka, Airflow, or similar tools for orchestrating data workflows.
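As one hedged example of the streaming/orchestration glue mentioned in the item above, the sketch below publishes a pipeline-completion event to Kafka with the kafka-python client; the broker address and topic name are placeholders.

```python
# Sketch of producing pipeline events to Kafka with kafka-python.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Emit a completion event that a downstream consumer (or an Airflow
# sensor) could react to.
producer.send(
    "pipeline-events",
    {"pipeline": "daily_sales", "status": "success", "date": "2024-06-01"},
)
producer.flush()
producer.close()
```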

Job Types: Full-time, Permanent

Pay: ₹2,540,455 - ₹2,764,457 per year

Work Location: In person
