
2 ETL Data Engineering Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, you will develop and maintain a metadata-driven, generic ETL framework that automates ETL code. Your primary responsibilities:

- Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS.
- Ingest data from a variety of structured and unstructured sources such as APIs, RDBMS, flat files, and streaming services.
- Develop and maintain robust batch and streaming data pipelines using Delta Lake and Spark Structured Streaming (see the sketch after this listing).
- Implement data quality checks, validations, and logging mechanisms to ensure data accuracy and reliability.
- Optimize pipeline performance, cost, and reliability, and collaborate closely with data analysts, BI teams, and business stakeholders to deliver high-quality datasets.
- Support data modeling efforts, including star and snowflake schemas and denormalized tables, and assist with data warehousing initiatives.
- Work with orchestration tools like Databricks Workflows to schedule and monitor pipelines effectively.
- Follow best practices for version control, CI/CD, and collaborative development.

Required skills:

- Hands-on experience in ETL/Data Engineering roles.
- Strong expertise in Databricks (PySpark, SQL, Delta Lake).
- Experience with Spark optimization, partitioning, caching, and handling large-scale datasets.
- Proficiency in SQL and scripting in Python or Scala.
- Solid understanding of data lakehouse/medallion architectures and modern data platforms.
- Knowledge of cloud storage systems like AWS S3.
- Familiarity with DevOps practices (Git, CI/CD, Terraform, etc.).
- Strong debugging, troubleshooting, and performance-tuning skills.

If you are passionate about data engineering, enjoy working with cutting-edge technologies, and thrive in a collaborative environment, this role offers an exciting opportunity to contribute to the success of data-driven initiatives within the organization.
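
The Delta Lake and Structured Streaming work described above might look roughly like the following on Databricks. This is a minimal sketch, not this employer's code: the S3 paths, schema, table name, and quality rule are hypothetical placeholders.

```python
# Minimal sketch of a streaming ingest with a data-quality gate on Databricks.
# Paths, schema, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Stream raw JSON files from a landing zone (Auto Loader would be an alternative).
raw = (
    spark.readStream
    .format("json")
    .schema("order_id STRING, amount DOUBLE, updated_at TIMESTAMP")
    .load("s3://example-bucket/landing/orders/")
)

# Basic data-quality check: tag invalid rows instead of silently dropping them,
# so they can be logged or routed to a quarantine table.
validated = raw.withColumn(
    "is_valid",
    F.col("order_id").isNotNull() & (F.col("amount") >= 0)
)

# Append valid rows to a Delta table; the checkpoint makes the stream restartable.
query = (
    validated.filter("is_valid")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
    .outputMode("append")
    .toTable("bronze.orders")
)
```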

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an offshore Tech Lead with Databricks engineering experience, you will lead the team from offshore and develop and maintain a metadata-driven, generic ETL framework that automates ETL code (a sketch of this pattern follows the listing). Your responsibilities:

- Design, build, and optimize ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS.
- Ingest data from various structured and unstructured sources such as APIs, RDBMS, flat files, and streaming.
- Develop and maintain robust batch and streaming data pipelines using Delta Lake and Spark Structured Streaming.
- Implement data quality checks, validations, and logging mechanisms.
- Optimize pipeline performance, cost, and reliability, collaborating with data analysts, BI, and business teams to deliver fit-for-purpose datasets.
- Support data modeling efforts, including star and snowflake schemas and denormalized tables, and assist with data warehousing initiatives.
- Work with orchestration tools like Databricks Workflows to schedule and monitor pipelines.
- Follow best practices for version control, CI/CD, and collaborative development.

Required skills:

- Hands-on experience in ETL/Data Engineering roles.
- Strong expertise in Databricks (PySpark, SQL, Delta Lake); Databricks Data Engineer Certification preferred.
- Experience with Spark optimization, partitioning, caching, and handling large-scale datasets.
- Proficiency in SQL and scripting in Python or Scala.
- Solid understanding of data lakehouse/medallion architectures and modern data platforms.
- Experience with cloud storage systems like AWS S3.
- Familiarity with DevOps practices (Git, CI/CD, Terraform, etc.).
- Strong debugging, troubleshooting, and performance-tuning skills.

In summary, you will play a vital role in developing and maintaining ETL frameworks, optimizing data pipelines, collaborating with various teams, and ensuring data quality and reliability. Your expertise in Databricks, ETL processes, data modeling, and cloud platforms will be instrumental in driving the success of the projects you undertake.

About Virtusa: At Virtusa, we value teamwork, quality of life, and professional and personal development. Joining our team means becoming part of a global workforce of 27,000 people dedicated to your growth. We offer exciting projects, opportunities, and exposure to state-of-the-art technologies throughout your career with us. We believe in collaboration, a team-oriented environment, and providing a dynamic space for great minds to nurture new ideas and achieve excellence.
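
Both listings center on a metadata-driven generic ETL framework. The snippet below is a minimal sketch of that idea, assuming a Databricks runtime: the pipeline metadata, table names, and dedup rule are hypothetical, and a real framework would read its metadata from a control table or config store rather than hard-coding it.

```python
# Sketch of the metadata-driven idea: each entry describes one source-to-target
# movement, so adding a pipeline is a metadata change instead of new ETL code.
# Table names and dedup keys are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

pipelines = [
    {"source": "bronze.orders",    "target": "silver.orders",    "dedup_keys": ["order_id"]},
    {"source": "bronze.customers", "target": "silver.customers", "dedup_keys": ["customer_id"]},
]

for p in pipelines:
    df = spark.read.table(p["source"])
    # Generic cleanup step shared by every pipeline: drop duplicates
    # on the declared business keys.
    cleaned = df.dropDuplicates(p["dedup_keys"])
    cleaned.write.format("delta").mode("overwrite").saveAsTable(p["target"])
    print(f"Loaded {p['target']}: {cleaned.count()} rows")  # simple run log
```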

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Download Now

Featured Companies