Posted: 2 months ago
Work Mode: Work from Office
Employment Type: Full Time
Job Role:
We are seeking a skilled and passionate Data Engineer to join our team. In this role, you will design, implement, and optimize data pipelines to extract, transform, and load (ETL) large datasets across a range of systems and platforms. You will work with tools like Databricks and Azure Data Factory (ADF), leveraging your expertise in Python, SQL, and ETL processes to ensure the scalability and reliability of our data infrastructure.

Roles & Responsibilities:
- ETL Pipeline Development: Build and maintain scalable, efficient ETL pipelines to process large datasets using Azure Data Factory (ADF) and Databricks.
- Data Integration: Integrate data from multiple sources, ensuring data accuracy, consistency, and availability for downstream analytics and reporting.
- Python Scripting: Write and optimize Python code to automate ETL workflows and data transformations.
- SQL Database Management: Design and query SQL databases for data extraction, transformation, and storage.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to meet data requirements and optimize pipeline performance.
- Performance Tuning: Continuously monitor and optimize data pipelines for performance, ensuring they scale as needed.
- Documentation: Create and maintain detailed documentation for data pipelines, architecture, and processes.

Experience Required:
- ETL Processes: Strong experience with ETL design and implementation, including real-time and batch processing.
- Databricks: Hands-on experience with Databricks for data engineering tasks, including building and managing data pipelines and working with Apache Spark.
- Azure Data Factory (ADF): Experience using ADF to build, orchestrate, and monitor data pipelines in Azure.
- Python: Advanced proficiency in Python, particularly for data engineering tasks and automation.
- SQL: Solid understanding of SQL, including complex queries, performance optimization, and database management.
Company: Paraminfo
Location: Pune
Salary: 13.0 - 14.0 Lacs P.A.