
1262 Azure Databricks Jobs - Page 3

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- At least 3 years of overall experience.
- Experience in building data solutions using Azure Databricks and PySpark.
- Experience in building complex SQL queries and understanding of data warehousing concepts.
- Experience with Azure DevOps CI/CD pipelines for automated deployment and release management.
- Good to have: experience with Snowflake data warehousing.
- Good to have: experience building pipelines and Data Flows using Azure Data Factory.
- Excellent problem-solving and analytical skills.
- Ability to work independently as well as collaboratively in a team environment.
- Strong communication and interpersonal skills.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
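As illustrative context for the PySpark work this posting describes, here is a minimal sketch of a Databricks-style aggregation job. It is not from the posting; the table and column names (sales_raw, region, amount, order_id) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-aggregation").getOrCreate()

# Read a raw table, apply a simple business rule, and aggregate.
raw = spark.read.table("sales_raw")
cleaned = raw.filter(F.col("amount") > 0)  # drop refunds/invalid rows

summary = (
    cleaned.groupBy("region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Persist the result for downstream reporting.
summary.write.mode("overwrite").saveAsTable("sales_summary")
```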

Posted 5 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure Databricks, Apache Spark, Microsoft Azure Data Services
Good to have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications are aligned with business objectives. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application design and functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks, Microsoft Azure Data Services, and Apache Spark.
- Good to have: experience with data integration tools and techniques.
- Strong understanding of application development methodologies.
- Familiarity with cloud computing concepts and services.
- Experience in performance tuning and optimization of applications.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Posted 5 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Microsoft Azure Databricks, Apache Spark, Microsoft Azure Data Services
Good to have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application design and functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must have: proficiency in Microsoft Azure Databricks, Microsoft Azure Data Services, and Apache Spark.
- Good to have: experience with data integration tools and techniques.
- Strong understanding of application development methodologies.
- Familiarity with cloud computing concepts and services.
- Experience in performance tuning and optimization of applications.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 5 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Chennai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, Microsoft Azure Data Services
Good to have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with cross-functional teams to gather requirements, developing application features, and ensuring that the solutions align with organizational goals. You will also participate in testing and debugging processes to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, and Microsoft Azure Data Services.
- Strong understanding of data integration techniques and ETL processes.
- Experience with application development frameworks and methodologies.
- Familiarity with cloud computing concepts and services.
- Ability to troubleshoot and resolve application issues efficiently.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.

Posted 5 days ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Integration Engineer
Project Role Description: Provide consultative Business and System Integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, and functional/application designs, ensuring technology and business solutions represent business requirements.
Must have skills: Infrastructure as Code (IaC)
Good to have skills: Google Cloud Storage, Microsoft Azure Databricks, Ansible on Microsoft Azure
Minimum experience: 5 year(s)
Educational Qualification: 15 years full time education

Summary: As an Integration Engineer, you will provide consultative Business and System Integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
- Develop and maintain documentation related to integration processes and solutions.
- Infrastructure as Code (IaC): knowledge of tools such as Terraform, Terraform linkage, Helm, and Ansible, including Ansible Tower, dependency, and package management.
- Broad knowledge of operating systems.
- Network management knowledge: understanding of network protocols, configuration, and troubleshooting, with proficiency in configuring and managing network settings within cloud platforms.
- Security: knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.
- Expert proficiency in the Linux CLI.
- Monitoring of the environment from a technical perspective.
- Monitoring the costs of the development environment.

Professional & Technical Skills:
- Must have: proficiency in Infrastructure as Code (IaC).
- Good to have: experience with Hitachi Data Systems (HDS), Google Cloud Storage, and Microsoft Azure Databricks.
- Strong understanding of cloud infrastructure and deployment strategies.
- Experience with automation tools and frameworks for infrastructure management.
- Familiarity with version control systems and CI/CD pipelines.
- Solid understanding of data modelling, data warehousing, and data platform design.
- Working knowledge of databases and SQL.
- Proficient with version control such as Git, GitHub, or GitLab.
- Experience supporting BAT teams and BAT test environments.
- Experience with workflow and batch scheduling; Control-M and Informatica experience is an added advantage.
- Good know-how of financial markets; knowledge of clearing, trading, and risk business processes is an added advantage.
- Know-how of Java, Spark, and BI reporting is an added advantage.
- Know-how of cloud platforms and an affinity for modern technology is an added advantage.
- Experience with CI/CD pipelines and exposure to DevOps methodologies is an added advantage.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Infrastructure as Code (IaC).
- This position is based in Hyderabad.
- A 15 years full time education is required.

Posted 5 days ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Infrastructure as Code (IaC)
Good to have skills: Microsoft Azure Architecture, Google Cloud Platform Administration
Minimum experience: 5 year(s)
Educational Qualification: 15 years full time education

Summary: As an Integration Engineer, you will provide consultative Business and System Integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
- Develop and maintain documentation related to integration processes and solutions.
- Infrastructure as Code (IaC): knowledge of tools such as Terraform, Terraform linkage, Helm, and Ansible, including Ansible Tower, dependency, and package management.
- Broad knowledge of operating systems.
- Network management knowledge: understanding of network protocols, configuration, and troubleshooting, with proficiency in configuring and managing network settings within cloud platforms.
- Security: knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.
- Expert proficiency in the Linux CLI.
- Monitoring of the environment from a technical perspective.
- Monitoring the costs of the development environment.

Professional & Technical Skills:
- Must have: proficiency in Infrastructure as Code (IaC).
- Good to have: experience with Hitachi Data Systems (HDS), Google Cloud Storage, and Microsoft Azure Databricks.
- Strong understanding of cloud infrastructure and deployment strategies.
- Experience with automation tools and frameworks for infrastructure management.
- Familiarity with version control systems and CI/CD pipelines.
- Solid understanding of data modelling, data warehousing, and data platform design.
- Working knowledge of databases and SQL.
- Proficient with version control such as Git, GitHub, or GitLab.
- Experience supporting BAT teams and BAT test environments.
- Experience with workflow and batch scheduling; Control-M and Informatica experience is an added advantage.
- Good know-how of financial markets; knowledge of clearing, trading, and risk business processes is an added advantage.
- Know-how of Java, Spark, and BI reporting is an added advantage.
- Know-how of cloud platforms and an affinity for modern technology is an added advantage.
- Experience with CI/CD pipelines and exposure to DevOps methodologies is an added advantage.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Infrastructure as Code (IaC).
- This position is based in Hyderabad.
- A 15 years full time education is required.

Posted 5 days ago

Apply

6.0 - 8.0 years

4 - 6 Lacs

Kochi, Chennai, Coimbatore

Hybrid

Required Skills: 5+ years of experience; Azure Databricks; PySpark; Azure Data Factory

Posted 5 days ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Bengaluru

Hybrid

Roles & Responsibilities:
- Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
- Design, develop, and maintain end-to-end data pipelines using Spark, ensuring scalable, reliable, and cost-optimized solutions.
- Collaborate with internal teams and clients to understand business requirements and translate them into robust technical solutions.
- Conduct performance tuning and troubleshooting to identify and resolve issues.
- Implement data governance and security best practices, including role-based access control, encryption, and auditing.
- Translate business requirements into high-quality technical documents, including data mappings, data processes, and operational support guides.
- Work closely with architects, product managers, and the reporting team to collect functional and system requirements.
- Perform effectively in a fast-paced, agile development environment.

Requirements:
- 8+ years of experience in designing and implementing data solutions, with at least 4+ years in data engineering.
- Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
- Minimum 6+ years of experience in Spark, PySpark, and Python.
- Strong experience in SQL, Spark SQL, data modeling, and RDBMS concepts.
- Strong knowledge of Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence.
- Strong problem-solving skills, with the ability to multi-task.
- Familiarity with security best practices in cloud environments: Active Directory, encryption, and data privacy compliance.
- Effective oral and written communication.
- Experience with agile development, Scrum, and Application Lifecycle Management (ALM).
- Preference given to current or former Labcorp employees.

Education: Bachelor's in Engineering, MCA, or equivalent.
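Since the posting emphasizes Databricks pipelines with Spark, a common building block is an incremental upsert into a Delta table. The sketch below, with hypothetical table names (staging.orders_updates, gold.orders) and key (order_id), shows one way this is typically done with the Delta Lake Python API; it assumes a Databricks or Delta-enabled Spark environment.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incoming batch of changed records (hypothetical staging table).
updates = spark.read.table("staging.orders_updates")

# Target curated Delta table (hypothetical name).
target = DeltaTable.forName(spark, "gold.orders")

# Upsert on the business key: update matches, insert new rows.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```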

Posted 5 days ago

Apply

6.0 - 11.0 years

15 - 27 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Role: Azure Data Engineer
Location: Pune, Gurgaon, or Bangalore
Work Mode: Hybrid

Key Roles and Responsibilities:
- Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT.
- Build and maintain data integration workflows from various data sources to Snowflake.
- Write efficient and optimized SQL queries for data extraction and transformation.
- Work with stakeholders to understand business requirements and translate them into technical solutions.
- Monitor, troubleshoot, and optimize data pipelines for performance and reliability.
- Maintain and enforce data quality, governance, and documentation standards.
- Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment.

Must Have:
- Strong experience with Azure Cloud Platform services.
- Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines.
- Proficiency in SQL for data analysis and transformation.
- Hands-on experience with Snowflake and SnowSQL for data warehousing.
- Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse.
- Experience working in cloud-based data environments with large-scale datasets.

Good to Have:
- Experience with Azure Data Lake, Azure Synapse, or Azure Functions.
- Familiarity with Python or PySpark for custom data transformations.
- Understanding of CI/CD pipelines and DevOps for data workflows.
- Exposure to data governance, metadata management, or data catalog tools.
- Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.

Education: Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience.

Key Skills: Azure (Data Factory, Databricks), Snowflake, DBT
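To make the Snowflake side of such a role concrete, here is a minimal sketch using the snowflake-connector-python package to run one ELT transformation step. All connection values and table names are placeholders; in practice, DBT models or ADF activities would typically own this logic.

```python
import snowflake.connector

# Hypothetical account, warehouse, and database names; in practice,
# credentials would come from a secret store, never hard-coded.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # A typical ELT step: transform staged data into a reporting table.
    cur.execute("""
        CREATE OR REPLACE TABLE PUBLIC.DAILY_REVENUE AS
        SELECT order_date, SUM(amount) AS revenue
        FROM STAGING.ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```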

Posted 5 days ago

Apply

6.0 - 11.0 years

15 - 27 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Role: Azure Data Engineer
Location: Pune, Gurugram, Bangalore, or Hyderabad
Work Mode: Hybrid

Key Roles and Responsibilities:
- Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT.
- Build and maintain data integration workflows from various data sources to Snowflake.
- Write efficient and optimized SQL queries for data extraction and transformation.
- Work with stakeholders to understand business requirements and translate them into technical solutions.
- Monitor, troubleshoot, and optimize data pipelines for performance and reliability.
- Maintain and enforce data quality, governance, and documentation standards.
- Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment.

Must Have:
- Strong experience with Azure Cloud Platform services.
- Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines.
- Proficiency in SQL for data analysis and transformation.
- Hands-on experience with Snowflake and SnowSQL for data warehousing.
- Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse.
- Experience working in cloud-based data environments with large-scale datasets.

Good to Have:
- Experience with Azure Data Lake, Azure Synapse, or Azure Functions.
- Familiarity with Python or PySpark for custom data transformations.
- Understanding of CI/CD pipelines and DevOps for data workflows.
- Exposure to data governance, metadata management, or data catalog tools.
- Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus.

Education: Bachelor's degree in Computer Science, Software Engineering, MIS, or an equivalent combination of education and experience.

Key Skills: Azure (Data Factory, Databricks), Snowflake, DBT

Posted 5 days ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Pune

Work from Office

Role & Responsibilities:
- Lead data engineering projects: oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies, ensuring scalability, performance, and efficiency.
- Design & architecture: design high-performance, resilient, and scalable data architectures for data ingestion and processing, following best practices for Azure Databricks and Spark.
- Team leadership: provide technical guidance and mentorship to a team of data engineers, fostering a culture of collaboration, continuous learning, and innovation.
- Collaboration: work closely with data scientists, business analysts, and other stakeholders to understand data requirements and ensure smooth integration of various data sources into the data lake/warehouse.
- Optimization & performance tuning: ensure data pipelines are optimized for speed, reliability, and cost efficiency in an Azure environment; conduct performance tuning, troubleshooting, and debugging of Spark jobs and Databricks clusters.
- Code quality & best practices: enforce and advocate for best practices in coding standards, version control, testing, and documentation.
- Integration with Azure services: work with other Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage to integrate data seamlessly.
- Continuous improvement: stay current with industry trends and emerging technologies in data engineering and recommend improvements to the team's tools and processes.
- Data quality: implement data validation and data quality checks as part of the ingestion process to ensure the consistency, accuracy, and integrity of ingested data.
- Risk & issue management: proactively identify risks and blockers and resolve complex technical issues in a timely and effective manner.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 8+ years in data engineering or a related field.
- Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing.
- Experience with multiple file formats such as Parquet, Delta, and Iceberg.
- Knowledge of Kafka or similar streaming technologies for real-time data ingestion (see the sketch after this posting).
- Experience with data governance and data security in Azure.
- Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure.
- Deep understanding of Azure data services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions).
- Familiarity with data lakes, data warehouses, and modern data architectures.
- Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
- Understanding of cloud infrastructure and architecture principles, especially within Azure.

Technical Skills:
- Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting of Spark jobs.
- Solid knowledge of Azure Databricks for scalable, distributed data processing.
- Strong coding skills in Python and Scala for data processing.
- Experience working with SQL, especially on large datasets.
- Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake.

Leadership Skills:
- Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices.
- Excellent communication skills, capable of interacting with both technical and non-technical stakeholders.
- Strong problem-solving, analytical, and troubleshooting abilities.
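The posting's emphasis on Spark Streaming and Kafka ingestion typically translates into a Structured Streaming job like the sketch below. The broker address, topic, checkpoint path, and target table are hypothetical; it assumes Spark 3.1+ with the Kafka connector and Delta Lake available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read a Kafka topic as a stream; broker and topic are placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "ingest-events")
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp"),
    )
)

# Land the raw stream in a bronze Delta table, with checkpointing
# so the job can recover exactly where it left off.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/ingest-events")
    .outputMode("append")
    .toTable("bronze.ingest_events")
)
query.awaitTermination()
```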

Posted 5 days ago

Apply

7.0 - 11.0 years

15 - 25 Lacs

Hyderabad

Hybrid

Role Purpose: The Senior Data Engineer will support and enable the Data Architecture and the Data Strategy, supporting solution architecture and engineering for data ingestion and modelling challenges. The role will support the deduplication of enterprise data tools, working with the Lonza Data Governance Board, Digital Council, and IT to drive towards a single Data and Information Architecture. This is a hands-on engineering role with a focus on business and digital transformation. The role is responsible for managing and maintaining the Data Architecture and the solutions that deliver the platform, including operational support and troubleshooting. The Senior Data Engineer will also coordinate the day-to-day delivery of the Data Engineering team members (internal and external) working on the various project implementations, with no reporting-line changes.

Experience:
- 7-10 years' experience with digital transformation and data projects.
- Experience designing, delivering, and managing data infrastructures.
- Proficiency in using cloud services (Azure) for data engineering, storage, and analytics.
- Strong SQL and NoSQL experience.
- Data modelling.
- Hands-on experience developing pipelines and setting up architectures in Azure Fabric.
- Team management experience (internal and external resources).
- Good understanding of data warehousing, data virtualization, and analytics.
- Experience working with data analysts, data scientists, and BI teams to deliver on data requirements.
- Data catalogue experience is a plus.
- ETL pipeline design is a plus.
- Python development skills are a plus.
- Real-time data ingestion (e.g., Kafka).

Licenses or Certifications: Beneficial: ITIL, PM, CSM, Six Sigma, Lean.

Knowledge:
- Good understanding of integration, ETL, API, and data-sharing concepts.
- Understanding or awareness of visualization tools is a plus.
- Knowledge and understanding of relevant legal and regulatory requirements, such as CFR 21 Part 11, the EU General Data Protection Regulation, the Health Insurance Portability and Accountability Act (HIPAA), and the GxP validation process, would be a plus.

Skills:
- The position requires a pragmatic leader with sound knowledge of data, integration, and analytics.
- Excellent written and verbal communication skills, interpersonal and collaborative skills, and the ability to communicate technical concepts to non-technical audiences.
- Excellent analytical skills, the ability to manage and contribute to multiple projects under strict timelines, and the ability to work well in a demanding, dynamic environment while meeting overall objectives.
- Project management skills: scheduling and resource management are a plus.
- Ability to motivate cross-functional, interdisciplinary teams to achieve tactical and strategic goals.
- Data catalogue, project, and team management skills are a plus.
- Strong SAP skills are a plus.

Posted 5 days ago

Apply

10.0 - 15.0 years

27 - 35 Lacs

Noida, New Delhi

Hybrid

Description:
- Develop comprehensive digital analytics solutions utilizing Adobe Analytics for web tracking, measurement, and insight generation.
- Design, manage, and optimize interactive dashboards and reports using Power BI to support business decision-making.
- Lead the design, development, and maintenance of robust ETL/ELT pipelines integrating diverse data sources.
- Architect scalable data solutions leveraging Python for automation, scripting, and engineering tasks.
- Oversee workflow orchestration using Apache Airflow to ensure timely and reliable data processing (see the sketch after this posting).
- Provide leadership and develop robust forecasting models to support sales and marketing strategies.
- Develop advanced SQL queries for data extraction, manipulation, analysis, and database management.
- Implement best practices in data modeling and transformation using Snowflake and DBT; exposure to Cosmos DB is a plus.
- Ensure code quality through version control best practices using GitHub.
- Collaborate with cross-functional teams to understand business requirements and translate them into actionable analytics solutions.
- Stay updated with the latest trends in digital analytics; familiarity or hands-on experience with Adobe Experience Platform (AEP) / Customer Journey Analytics (CJA) is highly desirable.

Qualifications:
- Undergraduate degree or equivalent experience; Master's or Bachelor's degree in Computer Science, Information Systems, Engineering, Mathematics, Statistics, Business Analytics, or a related field.
- 8-10+ years of progressive experience in digital analytics, data analytics, or business intelligence roles.
- Advanced proficiency in web and digital analytics platforms (Adobe Analytics).
- Experience with data modeling and transformation using tools such as DBT and Snowflake; familiarity with Cosmos DB is a plus.
- Proficiency in ETL/ELT pipeline development and workflow orchestration (Apache Airflow).
- Skilled in creating interactive dashboards and reports using Power BI or similar BI tools.
- Deep understanding of digital marketing metrics, KPIs, attribution models, and customer journey analysis.
- Experience developing forecasting models and conducting predictive analytics to drive business strategy.
- Industry certifications relevant to digital analytics or cloud data platforms.
- Ability to deliver clear digital reporting and actionable insights to stakeholders at all organizational levels.
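For the Apache Airflow orchestration mentioned above, a minimal DAG might look like the sketch below (Airflow 2.x; the DAG id, schedule, and task bodies are illustrative only).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw exports from the source system (stub)."""


def transform():
    """Apply business rules and load the warehouse (stub)."""


with DAG(
    dag_id="daily_digital_analytics",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    # Run the transform only after the extract has succeeded.
    t_extract >> t_transform
```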

Posted 5 days ago

Apply

6.0 - 10.0 years

15 - 18 Lacs

Chennai

Work from Office

Role & Responsibilities:
- 8-10 years of experience, with a minimum of 5 years working on core data engineering responsibilities on a cloud platform. Project management experience is a big plus.
- Proven track record of implementing data-driven solutions in areas such as plant automation, operational analytics, quality control, and supply chain optimization.
- Expertise in cloud-based data platforms, particularly within the Azure ecosystem (Azure Data Factory, Synapse Analytics, Databricks).
- Familiarity with SAP as a data source.
- Proficiency in programming languages such as SQL, Python, and R for analytics and reporting.

Posted 5 days ago

Apply

5.0 - 8.0 years

25 - 27 Lacs

Mumbai, Pune

Hybrid

Data Engineer

Experience & Education:
- 3+ years in ADF (Azure Data Factory).
- Proficient in Azure.
- Bachelor's/Master's in CS, Engineering, or a related field.

Key Requirements:
- Experience with Azure cloud development and architecture, as well as experience with REST APIs to support the integration of infrastructure, cloud, and platform-as-a-service technology.
- Strong experience designing cloud solutions in large enterprise environments.
- Experience managing and deploying Azure cloud services: App Gateway, Functions, Vault, and ADLS.
- Knowledge of integration with Power Apps and Power BI.
- Experience building and upgrading Terraform templates and deploying services.
- Strong troubleshooting and problem-solving skills.
- Ability to work collaboratively with global cross-functional teams.
- Excellent written and verbal communication skills.
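As a rough illustration of the Azure Functions work this posting mentions, here is a minimal Python HTTP-triggered function (v1 programming model; the accompanying function.json binding configuration is omitted, and the route and response payload are hypothetical).

```python
import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Echo a query parameter back as JSON; a stand-in for real
    # platform logic such as querying ADLS or a database.
    name = req.params.get("name", "world")
    body = json.dumps({"message": f"hello, {name}"})
    return func.HttpResponse(body, mimetype="application/json")
```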

Posted 5 days ago

Apply

6.0 - 10.0 years

16 - 30 Lacs

Amritsar

Remote

Job Title: Senior Azure Data Engineer
Location: Remote
Experience Required: 5+ years

About the Role: We are seeking a highly skilled Senior Azure Data Engineer to design and develop robust, scalable, and high-performance data pipelines using Azure technologies. The ideal candidate will have strong experience with modern data platforms and tools, including Azure Data Factory, Synapse, Databricks, and Data Lake, as well as expertise in SQL, Python, and CI/CD workflows.

Key Responsibilities:
- Design and implement end-to-end data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage Gen2.
- Ingest and integrate data from various sources such as SQL Server, APIs, blob storage, and on-premises systems, ensuring security and performance.
- Develop and manage ETL/ELT workflows and orchestrations in a scalable, optimized manner.
- Build and maintain data models, data marts, and data warehouse structures for analytics and reporting.
- Write and optimize complex SQL queries, stored procedures, and Python scripts.
- Ensure data quality, consistency, and integrity through validation frameworks and best practices.
- Support and enhance CI/CD pipelines using Azure DevOps, Git, and ARM/Bicep templates.
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver impactful solutions.
- Enforce data governance, security, and compliance policies, including the use of Azure Key Vault and access controls.
- Mentor junior data engineers, lead design discussions, and conduct code reviews.
- Monitor and troubleshoot issues related to performance, cost, and scalability across data systems.

Required Skills & Experience:
- 6+ years of experience in data engineering or related fields.
- 3+ years of hands-on experience with Azure cloud services, specifically: Azure Data Factory (ADF); Azure Synapse Analytics (dedicated and serverless SQL pools); Azure Databricks (Spark preferred); Azure Data Lake Storage Gen2 (ADLS); Azure SQL / Managed Instance / Cosmos DB.
- Strong proficiency in SQL, PySpark, and Python.
- Solid experience with CI/CD tools: Azure DevOps, Git, ARM/Bicep templates.
- Experience with data warehousing, dimensional modeling, and medallion/lakehouse architecture (see the sketch after this posting).
- In-depth knowledge of data security best practices, including encryption, identity management, and network configurations in Azure.
- Expertise in performance tuning, data partitioning, and cost optimization.
- Excellent communication, problem-solving, and stakeholder management skills.
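The medallion/lakehouse experience this posting asks for usually means promoting data through bronze (raw), silver (cleansed), and gold (curated) layers. A minimal bronze-to-silver sketch in PySpark follows; the ADLS paths, storage account, and column names are placeholders, and it assumes the cluster is already configured for ADLS Gen2 access.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested records, landed as-is (hypothetical path).
bronze = spark.read.format("delta").load(
    "abfss://lake@myaccount.dfs.core.windows.net/bronze/customers"
)

# Silver: cleansed and conformed records.
silver = (
    bronze.dropDuplicates(["customer_id"])           # remove replays
    .withColumn("email", F.lower(F.trim("email")))   # normalize
    .filter(F.col("customer_id").isNotNull())        # basic quality gate
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .save("abfss://lake@myaccount.dfs.core.windows.net/silver/customers")
)
```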

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

It is exciting to be part of a company where employees truly believe in the mission and values of the organization. At Fractal Analytics, we are dedicated to bringing passion and customer focus to our business operations. Our vision is to empower every human decision in the enterprise, creating a world where individual choices, freedom, and diversity are celebrated. We believe in fostering an ecosystem where human imagination plays a vital role in every decision-making process, constantly challenging ourselves to innovate and improve. We value individuals who empower imagination with intelligence, and we call them true Fractalites.

We are currently seeking a Data Engineer with 2-5 years of experience to join our team in Bangalore, Gurgaon, Chennai, Coimbatore, Pune, or Mumbai. The ideal candidate will be responsible for ensuring that production-related activities are delivered within the agreed Service Level Agreements (SLAs). This role involves working on issues, bug fixes, and minor changes, and collaborating with the development team when necessary to address challenges and implement enhancements.

Key technical skills required for this role:
- Strong proficiency in Azure data engineering services, specifically Azure Data Factory, Azure Databricks, and storage (ADLS Gen2)
- Experience in web app/App Service development
- Proficiency in programming languages such as Python, PySpark, and SQL
- Hands-on experience with Log Analytics and Application Insights
- Strong expertise in Azure SQL

In addition to technical skills, the following non-technical skills are mandatory:
- Drive incident and problem resolution to support key operational activities
- Collaborate on change-ticket review, approvals, and planning with internal teams
- Support the transition of projects from project teams to support teams
- Serve as an escalation point for operations-related issues
- Experience with ServiceNow is preferred
- Strong attention to detail with a focus on quality and accuracy
- Ability to manage multiple tasks with appropriate prioritization and time management
- Flexibility in work content and eagerness to learn
- Knowledge of service support, operation, and design processes (ITIL)
- Strong relationship-building skills to collaborate with stakeholders at all levels and across organizational boundaries

If you thrive in a dynamic environment and enjoy working with motivated individuals who are passionate about growth, a career at Fractal Analytics may be the perfect fit for you. If this role does not align with your experience, you can express interest in future opportunities via the Introduce Yourself feature on our website or by creating an account to receive email alerts for new job postings matching your interests.

Posted 6 days ago

Apply

5.0 - 10.0 years

0 Lacs

Lucknow, Uttar Pradesh

On-site

HCLTech is seeking a passionate and experienced Azure Data Engineer to join our expanding team. If you have solid hands-on expertise in Azure Data Factory, Azure Databricks, and Oracle, and are eager to contribute to impactful data projects, we want to hear from you!

As an Azure Data Engineer at HCLTech, you will be responsible for the design, development, and maintenance of data pipelines using Azure Data Factory and Azure Databricks. You will work with Oracle databases for data extraction, transformation, and loading processes. Collaborating closely with cross-functional teams, you will provide support and enhancements to data solutions. Additionally, you will play a key role in optimizing and troubleshooting data workflows and performance issues, and actively participate in support and development tasks across various projects.

Joining our team offers the opportunity to work with cutting-edge Azure technologies in a collaborative, growth-oriented environment. You will enjoy the flexibility of working in Lucknow, with no customer interviews, which streamlines the onboarding process. If you are ready to advance your career in ADF data engineering, apply by submitting your resume to sushma-bisht@hcltech.com.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining YASH Technologies, a leading technology integrator focused on helping clients enhance competitiveness, optimize costs, and drive business transformation in an increasingly virtual world. As a Microsoft Fabric professional, you will work with cutting-edge technologies across Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure SQL, and ETL processes.

Your key responsibilities will include creating pipelines, datasets, dataflows, and integration runtimes, and monitoring pipelines in Azure. You will extract, transform, and load data from source systems using Azure Databricks and create SQL scripts for complex queries. Additionally, you will develop Synapse pipelines to migrate data from Gen2 to Azure SQL and work on data migration pipelines to the Azure cloud (Azure SQL). Experience with Azure Data Catalog and with big data batch processing, interactive processing, and real-time processing solutions will be beneficial for this role. Certifications are considered good to have.

YASH Technologies provides an inclusive team environment where you are empowered to create a career path aligned with your aspirations. The workplace culture is grounded in principles such as flexible work arrangements, emotional positivity, trust, transparency, open collaboration, and all the support needed to realize business goals. Join us at YASH Technologies for stable employment and a great atmosphere with an ethical corporate culture.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Data Engineer, you will be responsible for designing and building efficient data pipelines using Azure Databricks (PySpark). You will implement business logic for data transformation and enrichment at scale, and manage and optimize Delta Lake storage solutions. Additionally, you will develop REST APIs using FastAPI to expose processed data and deploy them on Azure Functions for scalable, serverless data access (a sketch follows after this posting).

You will also develop and manage Airflow DAGs to orchestrate ETL processes, ingesting and processing data from various internal and external sources on a scheduled basis. You will handle data storage and access using PostgreSQL and MongoDB, writing optimized SQL queries to support downstream applications and analytics.

Collaboration is key in this role, as you will work cross-functionally with teams to deliver reliable, high-performance data solutions. Following best practices in code quality, version control, and documentation is essential to the success of projects.

To excel in this position, you should have at least 5 years of hands-on experience as a Data Engineer and strong expertise in Azure cloud services. Proficiency in Azure Databricks, PySpark, Delta Lake, Python, and FastAPI for API development is required, as is experience with Azure Functions for serverless API deployments, managing ETL pipelines with Apache Airflow, and hands-on work with PostgreSQL and MongoDB. Strong SQL skills and experience handling large datasets will also be beneficial for this role.
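For the FastAPI portion of this role, a minimal read-only endpoint over processed results might look like the sketch below. To keep it self-contained, an in-memory dict stands in for the PostgreSQL/MongoDB lookup the posting implies; the route and field names are hypothetical.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stub for processed pipeline results; in the real service this
# would be a query against PostgreSQL or MongoDB.
PROCESSED = {"2024-01-01": {"rows": 1200, "status": "ok"}}


@app.get("/runs/{run_date}")
def get_run(run_date: str):
    """Return metadata for one pipeline run, or 404 if unknown."""
    run = PROCESSED.get(run_date)
    if run is None:
        raise HTTPException(status_code=404, detail="run not found")
    return run
```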

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

At EY, you will have the opportunity to shape a career that reflects your uniqueness, supported by a global network, an inclusive environment, and cutting-edge technology to empower you to reach your full potential. Your individual voice and perspective are crucial in contributing to EY's continuous improvement. Join our team to create an exceptional journey for yourself and contribute to building a better working world for all.

In EY's GDS Tax Technology team, the focus is on developing, implementing, and integrating technological solutions that enhance client service and support engagement teams. As a member of the core Tax practice, you will gain in-depth tax technical knowledge along with exceptional database, data analytics, and programming skills. With ever-evolving regulations, tax departments are required to manage, organize, and analyze vast amounts of data. Meeting these complex regulatory demands often involves gathering data from various systems and departments within an organization, and handling these diverse data sources efficiently presents significant challenges and time constraints for companies.

Collaborating closely with partners, clients, and tax technical experts, the GDS Tax Technology team at EY designs and implements technology solutions that add value, streamline processes, and equip clients with innovative tools for tax support. The team engages with clients and professionals in areas such as Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. Providing solution architecture, application development, testing, and maintenance support, the team contributes proactively and responsively to the global Tax service line.

EY is currently looking for a Data Engineer - Staff to join the Tax Technology practice in India.

Key Responsibilities:
- Proficiency in Azure Databricks is a must.
- Strong expertise in Python and PySpark programming.
- Sound knowledge of Azure SQL Database and Azure SQL Data Warehouse.
- Design, maintain, and optimize data layer components for new and existing systems, including databases, stored procedures, ETL packages, and SQL queries.
- Experience with Azure data platform offerings.
- Effective communication with team members and stakeholders.

Qualifications & Experience Required:
- 1.5 to 3 years of experience in the Azure data platform (Azure Databricks) with a strong grasp of Python and PySpark.
- Excellent verbal and written communication skills.
- Ability to function as an individual contributor.
- Familiarity with Azure Data Factory, SSIS, or other ETL tools.

Join EY in its mission to build a better working world, where diverse teams across 150+ countries leverage data and technology to provide trust through assurance and support clients in their growth and transformation. EY teams across various disciplines strive to address complex global challenges through innovative solutions and insightful questions.
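One routine task in such a role is landing a curated Databricks dataset into Azure SQL. A minimal JDBC write sketch follows; the server, database, table, and credentials are placeholders (in practice they would come from a secret scope), and it assumes the SQL Server JDBC driver is available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated source table.
df = spark.read.table("curated.tax_filings")

# JDBC write to Azure SQL Database; connection values are placeholders.
(
    df.write.format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;"
                   "database=taxdb")
    .option("dbtable", "dbo.tax_filings")
    .option("user", "etl_user")
    .option("password", "***")  # fetch from a secret scope in practice
    .mode("overwrite")
    .save()
)
```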

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Your role at Prudential is to design, build, and maintain data pipelines that ingest data from multiple sources into the cloud data platform. These pipelines must be built according to defined standards and documented comprehensively, and data governance standards must be adhered to and enforced to maintain data integrity and compliance. You will also implement data quality rules to ensure the accuracy and reliability of the data.

As part of your responsibilities, you will implement data security and protection controls around Databricks Unity Catalog. You will use Azure Data Factory, Azure Databricks, and other Azure services to build and optimize data pipelines; proficiency in SQL, Python/PySpark, and other programming languages for data processing and transformation is crucial. Staying current with the latest Azure technologies and best practices is essential for this role.

You will provide technical guidance and support to team members and stakeholders, and maintain detailed documentation of data pipelines, processes, and data quality rules. Debugging, fine-tuning, and optimizing large-scale data processing jobs will be part of your routine tasks, as will generating reports and dashboards to monitor data pipeline performance and data quality metrics. Collaboration with data teams across Asia and Africa to understand data requirements and deliver solutions is also required.

Overall, your role at Prudential will involve designing, building, and maintaining data pipelines, ensuring data integrity, implementing data quality rules, and collaborating with various teams to deliver effective data solutions.
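Access control around Databricks Unity Catalog, as this posting describes, is typically expressed as SQL GRANT statements. A minimal sketch follows; the catalog, schema, and group names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grant an analyst group read access to one schema; the group must
# exist at the account level, and the caller needs owner/admin rights.
spark.sql("GRANT USE CATALOG ON CATALOG prod TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA prod.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON SCHEMA prod.sales TO `data_analysts`")
```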

Posted 6 days ago

Apply

6.0 - 11.0 years

15 - 22 Lacs

Bengaluru

Work from Office

Dear Candidate,

Hope you are doing well. Greetings from NAM Info Inc.

NAM Info Inc. is a technology-forward talent management organization dedicated to bridging the gap between industry leaders and exceptional human resources. They pride themselves on delivering quality candidates, deep industry coverage, and knowledge-based training for consultants. Their commitment to long-term partnerships, rooted in ethical practices and trust, positions them as a preferred partner for many industries. Learn more about their vision, achievements, and services at www.nam-it.com.

We have an open position for a Data Engineer role for our Bangalore, Pune, and Mumbai locations.

Job Description
Position: Sr/Lead Data Engineer
Location: Bangalore, Pune, and Mumbai
Experience: 5+ years
Required Skills: Azure, data warehousing, Python, Spark, PySpark, Snowflake/Databricks, any RDBMS, any ETL tool, SQL, Unix scripting, GitHub; strong experience in Azure/AWS/GCP
Employment: Permanent with NAM Info Pvt Ltd
Work Location: Bangalore, Pune, and Mumbai
Working Time: 12 PM to 9 PM or 2 PM to 11 PM; 5 days work from office, Monday to Friday
Interviews: L1 virtual; L2 face-to-face at the Banashankari office (for Bangalore candidates)
Notice Period: immediate to 15 days

If the above job details suit you, please share your resume with ananya.das@nam-it.com.

Regards,
Recruitment Team
NAM Info Inc.

Posted 1 week ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

To Apply: It is mandatory to submit details via the Google Form - https://forms.gle/cCa1WfCcidgiSTgh8

Position: Senior Data Engineer - 8+ years total experience, with 6+ relevant years in Databricks, AWS, Apache Spark, and Informatica (required skills).

As a Senior Data Engineer on our team, you'll build and nurture positive working relationships with teams and clients, with the intention of exceeding client expectations. We are seeking an experienced data engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.

Role & Responsibilities:
- Technical leadership: guide and mentor teams in designing and implementing Databricks solutions.
- Architecture & design: develop scalable data pipelines and architectures using the Databricks Lakehouse.
- Data engineering: lead the ingestion and transformation of batch and streaming data.
- Performance optimization: ensure efficient resource utilization and troubleshoot performance bottlenecks.
- Security & compliance: implement best practices for data governance, access control, and compliance.
- Collaboration: work closely with data engineers, analysts, and business stakeholders.
- Cloud integration: manage Databricks environments on Azure, AWS, or GCP.
- Monitoring & automation: set up monitoring tools and automate workflows for efficiency.

Qualifications:
- 6+ years of experience in Databricks and AWS, and 4+ years in Apache Spark and Informatica.
- Excellent problem-solving and leadership skills.

Good to have:
1. Design and implement scalable, high-performance data pipelines using AWS services.
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda.
3. Build and maintain data lakes using S3 and Delta Lake.
4. Create and manage analytics solutions using Amazon Athena and Redshift (see the sketch after this posting).
5. Design and implement database solutions using Aurora, RDS, and DynamoDB.
6. Develop serverless workflows using AWS Step Functions.
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL.
8. Ensure data quality, security, and compliance with industry standards.
9. Collaborate with data scientists and analysts to support their data needs.
10. Optimize data architecture for performance and cost-efficiency.
11. Troubleshoot and resolve data pipeline and infrastructure issues.

Preferred candidate profile (good to have):
1. Bachelor's degree in Computer Science, Information Technology, or a related field.
2. Relevant years of experience as a Data Engineer, with at least 60% of that experience focused on AWS.
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3.
4. Experience with data lake technologies, particularly Delta Lake.
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL.
6. Proficiency in Python and PySpark programming.
7. Strong SQL skills and experience with PostgreSQL.
8. Experience with AWS Step Functions for workflow orchestration.

Technical skills (good to have):
- AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data warehousing and analytics
- ETL/ELT processes
- Data lake architectures
- Version control: Git
- Agile methodologies
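To ground the Athena item in the lists above, here is a minimal boto3 sketch that kicks off a query against a data-lake table. The region, database, table, and S3 results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Kick off a query against a hypothetical data-lake table; Athena
# writes results to the S3 output location asynchronously.
resp = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders "
                "FROM lake.orders GROUP BY region",
    QueryExecutionContext={"Database": "lake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll get_query_execution with this id to track completion.
print(resp["QueryExecutionId"])
```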

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Azure Data Engineer within our team, you will play a crucial role in enhancing and supporting existing Data & Analytics solutions using Azure data engineering technologies. Your primary focus will be on developing, maintaining, and deploying IT products and solutions that serve various business users, with a strong emphasis on performance, scalability, and reliability.

Your responsibilities will include incident classification and prioritization, log analysis, coordination with SMEs, escalation of complex issues, root cause analysis, stakeholder communication, code reviews, bug fixing, enhancements, and performance tuning. You will design, develop, and support data pipelines using Azure services, implement ETL techniques, cleanse and transform datasets, orchestrate workflows, and collaborate with both business and technical teams.

To excel in this role, you should have 3 to 6 years of experience in IT and Azure data engineering technologies, with a strong command of Azure Databricks, Azure Synapse, ADLS Gen2, Python, PySpark, SQL, JSON, Parquet, Teradata, Snowflake, Azure DevOps, and CI/CD pipeline deployments. Knowledge of data warehousing concepts and data modeling best practices, and familiarity with ServiceNow (SNOW), will be advantageous.

In addition to technical skills, you should demonstrate the ability to work independently and in virtual teams, strong analytical and problem-solving abilities, experience with Agile practices, effective task and time management, and clear communication and documentation skills. Experience with business intelligence tools, particularly Power BI, and the DP-203 certification (Azure Data Engineer Associate) will be considered a plus. Join us in Chennai, Tamil Nadu, India, and be part of our dynamic team working in the FMCG/Foods/Beverage domain.

Posted 1 week ago

Apply