Posted: 1 week ago | Remote | Contractual
Job Title: Data Engineer
Location: Pune / Gurgaon / Hyderabad / Bhopal / Indore, and Remote
Employment Type: Full-time / Contract
Experience Level: 3-6 years
Number of Positions: 8

Job Summary:
We are seeking a skilled and detail-oriented Data Engineer to join our growing team. The ideal candidate will have strong experience working with large-scale data pipelines and expertise in Python, PySpark, SQL, Spark SQL, and Databricks. You will play a key role in designing, building, and optimizing scalable data solutions that power analytics and business insights.

Key Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines using PySpark and SQL (a short illustrative sketch follows the qualification lists below).
- Build extraction, transformation, and loading (ETL) processes for a wide variety of data sources.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Optimize data workflows for performance, scalability, and reliability in Databricks and Spark environments.
- Implement data quality and validation checks to ensure data integrity.
- Participate in code reviews and contribute to the continuous improvement of engineering practices.

Required Skills & Qualifications:
- Proficiency in Python and PySpark for building data pipelines.
- Strong command of SQL and Spark SQL for querying and manipulating data.
- Hands-on experience with Databricks and Spark-based distributed processing.
- Familiarity with cloud-based data platforms (e.g., Azure, AWS, or GCP) is a plus.
- Solid understanding of data warehousing concepts and best practices.
- Strong problem-solving and communication skills.

Preferred Qualifications:
- Experience with Azure cloud services, including Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure Machine Learning.
- Familiarity with Microsoft Fabric for modern data architecture.
- Experience with orchestration tools such as Apache Airflow.
- Exposure to Airbyte and dbt (Data Build Tool) for data integration and transformation.
- Hands-on experience with Python, SQL, Spark, and Databricks in enterprise environments.
- Experience working with Delta Lake or similar modern data lake architectures.
- Knowledge of CI/CD practices and data governance standards.
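For candidates wondering what the day-to-day work above looks like, the following is a minimal sketch of a PySpark ETL job with a fail-fast data-quality gate, assuming a Databricks runtime (or a local Spark installation with the delta-spark package). The source path, column names, and target table are hypothetical, not part of this role's actual stack.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw JSON landed by an upstream process
# (the path below is a hypothetical example).
raw = spark.read.json("/mnt/raw/orders/")

# Transform: normalize types and drop rows missing a key.
clean = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
)

# Data-quality check: fail the job if any amounts are negative,
# rather than silently loading bad rows downstream.
bad_rows = clean.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the non-negative amount check")

# Load: write to a Delta table (Delta is the default table
# format on Databricks; the table name is hypothetical).
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders_clean")
)
```

Raising on a failed check, instead of filtering bad rows out quietly, is one common way to satisfy the "data quality and validation" responsibility: the orchestrator sees a failed run and can alert, rather than propagating corrupt data.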
Salary: 15.0 - 30.0 Lacs P.A.