
1265 Azure Databricks Jobs - Page 39

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 8.0 years

15 - 19 Lacs

Pune

Work from Office

Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark, Spark SQL, and optionally Scala. Develop and manage data solutions on the Databricks platform, utilizing Workspace, Jobs, Delta Live Tables (DLT), Repos, and Unity Catalog.
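For context on what such a pipeline involves, here is a minimal PySpark/Spark SQL sketch of the kind of Databricks job described above; the paths, column names, and table layout are illustrative assumptions, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch of a Databricks-style pipeline; all paths/columns are placeholders.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.format("json").load("/mnt/raw/events")

cleaned = (
    raw.filter(F.col("event_id").isNotNull())          # drop records without a key
       .withColumn("event_date", F.to_date("event_ts"))
)

# Mix PySpark and Spark SQL, as the posting suggests.
cleaned.createOrReplaceTempView("events_clean")
daily = spark.sql(
    "SELECT event_date, COUNT(*) AS n_events "
    "FROM events_clean GROUP BY event_date"
)

daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_events")
```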

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Job Title: Azure Data Engineer
Location: PAN India
Qualification: Graduation/Post-Graduation
Experience: 6+ years
Required Skills:
• Must have experience in Azure Data Factory and Databricks
• Must have good experience in SQL
• Must have experience in Spark or PySpark
• Good problem-solving skills
• Good communication skills

Posted 2 months ago

Apply

7.0 - 11.0 years

20 - 27 Lacs

Noida, Greater Noida

Work from Office

Description - Primary Responsibilities:
• Design, develop, and implement scalable data pipelines using Azure Databricks
• Develop PySpark-based data transformations and integrate structured and unstructured data from various sources
• Optimize Databricks clusters for performance, scalability, and cost efficiency within the Azure ecosystem
• Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads
• Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps
• Collaborate effectively with the architecture team in designing solutions, and with product owners in validating implementations
• Implement best practices to enable data quality, monitoring, logging, and alerting on failure scenarios and exception handling
• Document step-by-step processes for troubleshooting potential issues and deliver cost-optimized cloud solutions
• Provide technical leadership, mentorship, and best practices for junior data engineers
• Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or reassignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
• B.Tech or equivalent
• 7+ years of overall experience in the IT industry and 6+ years of experience in data engineering, with 3+ years of hands-on experience in Azure Databricks
• Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning
• Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git)
• Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks
• Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing
• Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions
• Proficiency in PySpark, Python, and SQL for data processing in Databricks
• Proven excellent written and verbal communication skills
• Proven excellent problem-solving skills and ability to work independently
• Proven ability to balance multiple competing priorities and execute accordingly
• Highly self-motivated, with excellent interpersonal and collaborative skills
• Proven ability to anticipate risks and obstacles and develop mitigation plans
• Proven excellent documentation experience and skills

Preferred Qualifications:
• Azure certifications (DP-203, AZ-304, etc.)
• Experience in infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts
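The tuning items above (partitioning, caching, cost optimization) map to a few recurring PySpark patterns. A minimal sketch follows, assuming Delta tables on mounted ADLS storage; the paths and the partition column are placeholders chosen for the example.

```python
from pyspark.sql import SparkSession

# Minimal performance-tuning sketch; paths and columns are placeholders.
spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("/mnt/silver/events")

# Partition the output by a low-cardinality column so downstream
# queries can prune whole directories instead of scanning everything.
(
    events.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/gold/events")
)

# Cache a hot dimension that several downstream joins reuse,
# paying the scan cost once instead of per-query.
dim = spark.read.format("delta").load("/mnt/silver/dim_customer").cache()
dim.count()  # materialize the cache before the join-heavy stages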

Posted 2 months ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Bengaluru

Work from Office

Job Role & Responsibilities:
• Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
• Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services
• Responsible for design, implementation, development, and migration; migrate data from traditional database systems to the cloud environment
• Architect and implement ETL and data movement solutions

Technical Skills, Qualifications & Experience Required:
• 4.5-7 years of experience in Data Engineering, Azure Cloud Data Engineering, Azure Databricks, Data Factory, PySpark, SQL, Python
• Hands-on experience in Azure Databricks, Data Factory, PySpark, and SQL
• Proficient in cloud services (Azure)
• Strong hands-on experience working with streaming datasets
• Hands-on expertise in data refinement using PySpark and Spark SQL
• Familiarity with building datasets using Scala
• Familiarity with tools such as Jira and GitHub
• Experience leading agile scrum, sprint planning, and review sessions
• Good communication and interpersonal skills
• Comfortable working in a multidisciplinary team within a fast-paced environment
• Immediate joiners will be preferred

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Remote

Minimum 5 years of experience in a relevant field. Deep understanding of private and public cloud architectures. Experience with Azure services, including Azure Data Factory (ADF) for orchestrating complex workflows. Hands-on expertise with Databricks and PySpark for big data transformation and advanced analytics. Strong emphasis on, and deep experience working with, relational databases (RDBMS) including SQL Server, PostgreSQL, and MySQL, with advanced SQL skills.

Posted 2 months ago

Apply

8.0 - 11.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Role: MLOps Engineer
Location: Coimbatore
Mode of Interview: In Person

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, model monitoring, model retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift, model drift); experiment tracking; MLOps architecture; REST API publishing

Job Responsibilities:
• Research and implement MLOps tools, frameworks, and platforms for our Data Science projects
• Work on a backlog of activities to raise MLOps maturity in the organization
• Proactively introduce a modern, agile, and automated approach to Data Science
• Conduct internal training and presentations about MLOps tools' benefits and usage

Required Experience and Qualifications:
• Wide experience with Kubernetes
• Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube)
• Good understanding of ML and AI concepts
• Hands-on experience in ML model development
• Proficiency in Python, used both for ML and automation tasks
• Good knowledge of Bash and the Unix command-line toolkit
• Experience implementing CI/CD/CT pipelines
• Experience with cloud platforms, preferably AWS, would be an advantage
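Since the posting centres on experiment tracking with MLflow, here is a minimal tracking sketch; it assumes an MLflow tracking backend is already configured (as it is by default on Databricks), and the experiment path, parameter, metric, and tag values are illustrative only.

```python
import mlflow

# Placeholder experiment path; created if it does not exist.
mlflow.set_experiment("/Shared/churn-model")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters and evaluation metrics for later comparison.
    mlflow.log_param("max_depth", 5)
    mlflow.log_metric("auc", 0.87)
    # Tag the run so monitoring/retraining jobs can find candidates.
    mlflow.set_tag("stage", "experiment")
```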

Posted 2 months ago

Apply

7.0 - 12.0 years

18 - 33 Lacs

Navi Mumbai

Work from Office

About Us: Celebal Technologies is a leading solutions and services company that provides services in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
• Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming
• Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers
• Implement efficient ingestion using Databricks Autoloader for high-throughput data loads
• Work with large volumes of structured and unstructured data, ensuring high availability and performance
• Apply performance-tuning techniques such as partitioning, caching, and cluster resource optimization
• Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions
• Establish best practices for code versioning, deployment automation, and data governance

Required Technical Skills:
• Strong expertise in Azure Databricks and Spark Structured Streaming, including processing and output modes (append, update, complete) and checkpointing and state management
• Experience with Kafka integration for real-time data pipelines
• Deep understanding of Medallion Architecture
• Proficiency with Databricks Autoloader and schema evolution
• Deep understanding of Unity Catalog and foreign catalogs
• Strong knowledge of Spark SQL, Delta Lake, and DataFrames
• Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
• Must have data management strategies; excellent with governance and access management
• Strong in data modelling, data warehousing concepts, and Databricks as a platform
• Solid understanding of window functions
• Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios
• Industry expertise in any one of Retail, Telecom, or Energy
• Real-time use case execution and data modelling
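As a concrete reference for the Kafka-plus-Structured-Streaming skill set listed above, here is a minimal Bronze-ingestion sketch; the broker address, topic, and storage paths are illustrative assumptions, and a checkpoint location is included because the posting calls out checkpointing explicitly.

```python
from pyspark.sql import SparkSession

# Minimal Kafka-to-Bronze streaming sketch; broker, topic, and paths are placeholders.
spark = SparkSession.builder.getOrCreate()

bronze_stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
)

# Append raw records to the Bronze Delta layer; the checkpoint makes the
# stream restartable with exactly-once sink semantics.
(
    bronze_stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/bronze/orders/_checkpoint")
    .start("/mnt/bronze/orders")
)
```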

Posted 2 months ago

Apply

5.0 - 9.0 years

0 - 3 Lacs

Hyderabad, Pune, Chennai

Work from Office

Position: Azure Data Engineer
Locations: Bangalore, Pune, Hyderabad, Chennai & Coimbatore
Key Skills: Azure Databricks, Azure Data Factory, Hadoop
Relevant Experience: ADF, ADLS, Databricks - 4 years; Hadoop - 3 to 3.5 years
Overall Experience: 5 years

Must-have skills:
• Cloud certified in one of these categories: Azure Data Engineer, Azure Data Factory, Azure Databricks
• Spark (PySpark or Scala), SQL, data ingestion, curation
• Semantic modelling/optimization of the data model to work within Rahona
• Experience in Azure ingestion from on-prem sources, e.g., mainframe, SQL Server, Oracle
• Experience in Sqoop/Hadoop
• Microsoft Excel (for metadata files with ingestion requirements)
• Any other Azure/AWS/GCP certificate and hands-on cloud data engineering experience
• Strong programming skills in at least one of Python, Scala, or Java
• Strong SQL skills (T-SQL or PL/SQL)
• Data file movement via mailbox
• Source-code versioning/promotion tools, e.g., Git/Jenkins
• Orchestration tools, e.g., Autosys, Oozie

Nice-to-have skills:
• Experience working with mainframe files
• Experience in an Agile environment and with JIRA/Confluence tools

Posted 2 months ago

Apply

7.0 - 12.0 years

3 - 7 Lacs

Gurugram

Work from Office

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or in multi-cloud enterprise architectures. The Data Engineer will work on a variety of data projects, including orchestrating pipelines using modern data engineering tools and architectures, as well as designing and integrating existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.

Responsibilities:
• Build, operationalize, and monitor data processing systems
• Create robust, automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and the cloud-native toolset
• Implement custom applications using tools such as Event Hubs, ADF, and other cloud-native tools as required to address streaming use cases
• Engineer and maintain ELT processes for loading the data lake (Cloud Storage, Data Lake Gen2)
• Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
• Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges
• Work with other scrum team members to estimate and deliver work inside of a sprint
• Research data questions, identify root causes, and interact closely with business users and technical resources
• Possess the ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams
• Must have experience raising tickets with Microsoft and engaging with them to address any service or tool outages in production

Qualifications:
• 7+ years of professional technical experience
• 5+ years of hands-on data architecture and data modelling at SME level
• 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, and Python
• 5+ years of experience working in cloud environments (AWS and/or Azure)
• 3+ years with programming languages such as Python, Spark, and Spark SQL
• Strong knowledge of the architecture of ADF and Databricks
• Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments
• Strong client-facing communication and facilitation skills
• Strong sense of urgency, with the ability to set priorities and perform the job with little guidance
• Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels
• Strong interpersonal and communication skills (written and oral) required
• Should be able to work in shifts
• Should have knowledge of Azure DevOps processes

Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark Streaming

Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA Employment Benefits include: Medical, Dental, and Vision Insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; and more. See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.

Posted 2 months ago

Apply

5.0 - 6.0 years

12 - 18 Lacs

Indore, Hyderabad, Pune

Hybrid

Minimum 5 years of experience, with good work experience in the banking domain. Strong experience in Azure Databricks and PySpark; both skills are mandatory.

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Kochi, Hyderabad, Bengaluru

Work from Office

Design, build, and maintain scalable and efficient data pipelines using Azure services such as Azure Data Factory (ADF), Azure Databricks, and Azure Synapse Analytics. Develop and optimize ETL/ELT workflows for ingestion, cleansing, and transformation. Required candidate profile: 5-10 years of overall experience in Data Engineering, with a minimum of 5 years in Azure-based data projects. Strong understanding of data warehouse architecture, data lakes, and big data frameworks.

Posted 2 months ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

Job Description
Notice Period: 0 to 30 days only
Experience: 5 to 12 years
Interview Mode: 2 rounds (one round is face-to-face)
Work Mode: Hybrid (2-3 days WFO)

Brief Description of Role
Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers)
• Architect and implement data governance frameworks using Unity Catalog and related tools
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes
• Optimize queries and data structures for performance and cost efficiency
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines
• Maintain comprehensive documentation of architecture, processes, and workflows

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
• Proven experience as a Data Architect or Senior Data Engineer
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL
• Hands-on experience with data governance, security frameworks, and catalog management
• Proficiency in cloud platforms (preferably Azure)
• Experience with CI/CD tools and version control systems like GitHub
• Strong communication and collaboration skills
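To illustrate the Medallion (Bronze/Silver/Gold) layering named above, here is a minimal Bronze-to-Silver refinement sketch in PySpark; the table paths, key column, and timestamp column are placeholders assumed for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal Bronze-to-Silver sketch; paths and columns are placeholders.
spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/bronze/orders")

# Silver = cleansed, conformed data: dedupe, type-cast, drop bad keys.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")
```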

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad

Remote

Role & Responsibilities:
• Design and build scalable data pipelines in Azure Databricks and Palantir Foundry
• Integrate business models and build templated analyses and reports
• Manage ontologies, object types, and permissions in Foundry
• Develop applications using Foundry Workshop with writeback and custom actions
• Ensure data governance, protection, and access control
• Collaborate with Product Managers and cross-functional teams to deliver business solutions
• Provide operational support for Azure-Palantir data integration

Preferred Candidate Profile:
• Strong proficiency in Python, SQL, PySpark
• Azure experience: Databricks, Data Factory
• Hands-on expertise in Palantir Foundry: Pipeline Builder, Ontology Manager, Workshop
• Knowledge of Mesa (Palantir's proprietary language) is a plus
• Foundry certifications preferred (Data Engineering, Foundational)
• Experience in Oil & Gas or engineering domains
• Excellent communication and stakeholder engagement skills

Posted 2 months ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Bengaluru

Remote

Role & Responsibilities
Mandatory skills: ADB and Unity Catalog

Job Summary: We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.

Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces
• Develop Delta Lake architectures for optimized data storage and retrieval
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog
• Optimize data lakehouse architecture for performance and cost efficiency
• Collaborate with data scientists, engineers, and business teams to support analytical workloads
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management
• Implement data quality frameworks and observability solutions to maintain high data integrity
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions

Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions
• Strong hands-on experience with Databricks and Unity Catalog
• Expertise in PySpark, Scala, or SQL for data processing
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC)
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus
• Excellent problem-solving, communication, and collaboration skills
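For the Unity Catalog access-control work described above, a minimal governance sketch follows; it assumes a Unity Catalog-enabled Databricks workspace, and the catalog, schema, and group names are placeholders.

```python
from pyspark.sql import SparkSession

# Minimal Unity Catalog governance sketch; names are placeholders.
spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

# Grant read-only access on the schema to an account-level group.
# USE CATALOG / USE SCHEMA are required for SELECT to be usable.
spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-readers`")
spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data-readers`")
spark.sql("GRANT SELECT ON SCHEMA analytics.sales TO `data-readers`")
```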

Posted 2 months ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Hyderabad, Pune, Greater Noida

Work from Office

Responsibilities:
• Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse engineering, Fabric notebooks, PySpark, Scala, Python, etc.)
• Define and implement reference architectures and architecture blueprinting
• Demonstrated experience with, and ability to speak to, a wide variety of data engineering tools and architectures across cloud providers, especially on the Azure platform
• Experience building data products, data processing frameworks, metadata-driven ETL pipelines, data security, data standardization, data quality, and data reconciliation workflows
• Vast experience building data products on the Microsoft Azure/Fabric platform: Azure Managed Instance, Microsoft Fabric, Lakehouse, Synapse engineering, Microsoft OneLake

Requirements:
• 10+ years of experience in Data Warehousing and Azure cloud technologies
• Strong hands-on experience with Microsoft Fabric, Synapse, ADF, SQL, and Python/PySpark
• Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF, and MS Fabric notebooks
• Exposure to Azure DevOps and Business Intelligence
• Solid understanding of data governance, data security, and compliance
• Excellent communication and collaboration skills

Posted 2 months ago

Apply

7.0 - 8.0 years

0 - 1 Lacs

Hyderabad

Work from Office

Title: Databricks Platform Administrator
Location: Hyderabad
Duration: Full Time (C2H)
Experience: 7-8 Years
Environment: AWS/Azure

We are seeking a Databricks Platform Administrator to manage and support the day-to-day operations, configuration, and performance of the Databricks Lakehouse Platform. The ideal candidate will be responsible for user provisioning, workspace and cluster management, job scheduling, monitoring, and ensuring platform stability and security. This role requires hands-on experience with Databricks on AWS or Azure, integration with data pipelines, and managing platform-level configurations, libraries, and access controls (Unity Catalog, SCIM, IAM, etc.). The candidate should be familiar with DevOps practices and automation scripting, and comfortable collaborating with data engineering and infrastructure teams to support enterprise data initiatives.

Posted 2 months ago

Apply

5.0 - 10.0 years

0 - 2 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Role: Azure Data Engineer
Experience: 5 to 12 years
Joining Time: Immediate to 45 days
Location: Hyderabad, Bangalore, Chennai, Mumbai, Pune

If the above criteria match your profile, please share your profile with allen.prashanth@ltimindtree.com, including: total experience, relevant experience in ADF, relevant experience in ADB, current CTC, expected CTC, current location, preferred location, offer in hand (if any), PAN card number, and notice period/how soon you can join.

Job Description:
• 5+ years of experience using Azure
• Strong proficiency in Databricks
• Experience with PySpark
• Proficiency in SQL or T-SQL
• Experience with Azure service components such as Azure Data Factory, Azure Data Lake, Databricks, SQL DB, and SQL Server
• Databricks Jobs for efficient data processing, ETL tasks, and report generation
• Hands-on experience with scripting languages such as Python for data processing and manipulation

Key Responsibilities:
• Leverage Databricks to set up scalable data pipelines that integrate with a variety of data sources and cloud platforms
• Participate in code and design reviews to maintain high development standards
• Optimize data-querying layers to enhance performance and support analytical requirements
• Develop end-to-end automations in the Azure stack for ETL workflows and data quality validations

Posted 2 months ago

Apply

4.0 - 8.0 years

22 - 25 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Role & Responsibilities:
• Develop and maintain data lake architecture using Azure Data Lake Storage Gen2 and Delta Lake
• Build end-to-end solutions to ingest, transform, and model data from various sources for analytics and reporting
• Work with stakeholders to gather requirements and translate them into scalable data solutions
• Optimize data processing workflows and ensure high performance for large-scale data sets
• Collaborate with data analysts, BI developers, and data scientists to support advanced analytics use cases
• Implement data quality checks, logging, and monitoring of data pipelines (see the sketch after this list)
• Ensure compliance with data security, privacy, and governance standards

Preferred Candidate Profile:
• 4-6 years of strong hands-on experience with Azure Databricks (PySpark, Spark SQL, Delta Lake)
• Solid understanding of Azure Data Factory (ADF): building pipelines, triggers, linked services, datasets
• Familiarity with Microsoft Fabric, including OneLake, Dataflows, and Lakehouses
• Proficiency in SQL, Python, and PySpark
• Experience working with Azure Synapse Analytics, Azure SQL, and Azure Blob/Data Lake Storage
• Strong knowledge of data warehousing, data modeling, and performance tuning
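The data-quality responsibility above often reduces to a few assertion-style checks at the end of a pipeline run. A minimal sketch, assuming a Delta table keyed by an order_id column; the path, key column, and failure condition are illustrative placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal data-quality check sketch; path and key column are placeholders.
spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("/mnt/silver/orders")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

if total == 0 or null_keys > 0 or dupes > 0:
    # Fail loudly so orchestration-level monitoring/alerting picks it up.
    raise ValueError(
        f"DQ check failed: rows={total}, null_keys={null_keys}, dupes={dupes}"
    )
```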

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Data Engineer (Python, Azure Databricks) - 7 Years - Bangalore

Location - Bangalore

Are you a seasoned Data Engineer passionate about turning complex datasets into scalable insights? Here's your chance to build robust data platforms and pipelines that support global decision-making at scale, within a forward-thinking organization that champions innovation and excellence.

Your Future Employer - A global enterprise delivering high-impact technology and operational services to Fortune-level clients. Known for fostering a culture of innovation, agility, and collaboration.

Responsibilities -
1. Architect and implement data models and infrastructure to support analytics, reporting, and data science.
2. Build high-performance ETL pipelines and manage data integration from multiple sources.
3. Maintain data quality, governance, and security standards.
4. Collaborate with cross-functional teams to translate business needs into technical solutions.
5. Troubleshoot, optimize, and document scalable data workflows.

Requirements -
1. 7+ years of experience as a Data Engineer, with at least 4 years in cloud ecosystems.
2. Strong expertise in Azure (ADF, Data Lake Gen2, Databricks) or AWS.
3. Proficiency in Python and SQL; experience with Spark, Kafka, or Hadoop is a plus.
4. Deep understanding of data warehousing, OLAP, and data modelling.
5. Familiarity with visualization tools like Power BI, Tableau, or Looker.

What is in it for you - High-visibility projects with real-world impact. Access to cutting-edge cloud and big data technologies. Flexible hybrid work environment in Bangalore. Dynamic and collaborative global work culture.

Reach us: If you think this role aligns with your career, kindly write to me along with your updated CV at parul.arora@crescendogroup.in for a confidential discussion.

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with a memorable job search and leadership hiring experience. We do not discriminate based on race, religion, gender, or any other protected status.

Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted if you don't hear back from us within a week. Your patience is highly appreciated.

Profile Keywords - Data Engineer Bangalore, Azure Data Factory, Azure Data Lake, Azure Databricks, ETL Developer, Big Data Engineer, Python Data Engineer, SQL Developer, Data Pipeline Developer, Cloud Data Engineering, Data Warehousing, Data Modelling, Spark Developer, Kafka Engineer, Hadoop Jobs, Power BI Developer, Tableau Analyst, CI/CD for Data, Streaming Data Engineer, DataOps

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Gurugram

Work from Office

We Are Hiring! Sr. Azure Data Engineer at GSPANN Technologies

Location: Pune, Hyderabad, Gurgaon, Noida
Experience: 5+ years

Key Skills & Experience:
• Azure Synapse Analytics
• Azure Data Factory (ADF)
• PySpark
• Databricks
• Expertise in developing and maintaining stored procedures
• Proven experience in designing and implementing scalable data solutions in Azure

Preferred Qualifications:
• Minimum 6 years of hands-on experience working with Azure Data Services
• Strong analytical and problem-solving skills
• Excellent communication skills, both verbal and written
• Ability to collaborate effectively in a fast-paced, cross-functional environment

Immediate Joiners Only: We are looking for professionals who can join immediately and contribute to dynamic projects.

Application Process: If you are ready to take the next step in your career and be part of a leading IT services company, please send your updated CV to heena.ruchwani@gspann.com. Join GSPANN Technologies and accelerate your career with exciting opportunities in data engineering!

Posted 2 months ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle, fostering a collaborative environment that encourages creativity and problem-solving.

Roles & Responsibilities:
• Expected to be an SME
• Collaborate with and manage the team to perform
• Responsible for team decisions
• Engage with multiple teams and contribute to key decisions
• Provide solutions to problems for the immediate team and across multiple teams
• Facilitate knowledge-sharing sessions to enhance team capabilities
• Monitor project progress and implement necessary adjustments to meet deadlines

Professional & Technical Skills:
• Must-have: Proficiency in Microsoft Azure Databricks
• Good-to-have: Experience with cloud computing platforms
• Strong understanding of application development methodologies
• Experience in managing cross-functional teams
• Familiarity with Agile and DevOps practices

Additional Information:
• The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks
• This position is based at our Pune office
• 15 years of full-time education is required

Posted 2 months ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Mumbai

Work from Office

Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities:
• Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming
• Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers
• Implement efficient ingestion using Databricks Autoloader for high-throughput data loads
• Work with large volumes of structured and unstructured data, ensuring high availability and performance
• Apply performance-tuning techniques such as partitioning, caching, and cluster resource optimization
• Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions
• Establish best practices for code versioning, deployment automation, and data governance

Required Technical Skills:
• Strong expertise in Azure Databricks and Spark Structured Streaming, including processing and output modes (append, update, complete) and checkpointing and state management
• Experience with Kafka integration for real-time data pipelines
• Deep understanding of Medallion Architecture
• Proficiency with Databricks Autoloader and schema evolution
• Deep understanding of Unity Catalog and foreign catalogs
• Strong knowledge of Spark SQL, Delta Lake, and DataFrames
• Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
• Must have data management strategies; excellent with governance and access management
• Strong in data modelling, data warehousing concepts, and Databricks as a platform
• Solid understanding of window functions
• Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios (see the sketch below)
• Industry expertise in any one of Retail, Telecom, or Energy
• Real-time use case execution and data modelling

Location: Mumbai
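Because this posting (like the similar Navi Mumbai listing above) asks specifically for merge/upsert logic and SCD Type 2, here is a minimal Delta Lake MERGE sketch; the paths and columns are placeholders, and it takes the coarse two-pass approach of closing the previous current row for every incoming key before appending the new version. It assumes the incoming change set matches the dimension's schema once the SCD bookkeeping columns are added.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Placeholder paths: an existing SCD2 dimension and an incoming change set.
target = DeltaTable.forPath(spark, "/mnt/silver/dim_customer")
updates = spark.read.format("delta").load("/mnt/bronze/customer_changes")

# Pass 1: close out the currently-active row for every incoming key.
(
    target.alias("t")
    .merge(updates.alias("u"),
           "t.customer_id = u.customer_id AND t.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false",
                            "end_date": "current_date()"})
    .execute()
)

# Pass 2: append each incoming record as the new current version.
new_rows = (
    updates.withColumn("is_current", F.lit(True))
           .withColumn("start_date", F.current_date())
           .withColumn("end_date", F.lit(None).cast("date"))
)
new_rows.write.format("delta").mode("append").save("/mnt/silver/dim_customer")
```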

Posted 2 months ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in driving innovation and efficiency within the application development lifecycle.

Roles & Responsibilities:
• Expected to be an SME
• Collaborate with and manage the team to perform
• Responsible for team decisions
• Engage with multiple teams and contribute to key decisions
• Provide solutions to problems for the immediate team and across multiple teams
• Facilitate training and development opportunities for team members to enhance their skills
• Monitor project progress and implement necessary adjustments to meet deadlines

Professional & Technical Skills:
• Must-have: Proficiency in Microsoft Azure Databricks
• Strong understanding of cloud computing principles and architecture
• Experience with data integration and ETL processes
• Familiarity with programming languages such as Python or Scala
• Knowledge of data analytics and visualization tools

Additional Information:
• The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Databricks
• This position is based at our Pune office
• 15 years of full-time education is required

Posted 2 months ago

Apply

7.0 - 9.0 years

20 - 22 Lacs

Chennai

Remote

Role & Responsibilities: The developer will be responsible for development on the Postgres platform hosted in Azure, as well as in Azure Databricks. Good data engineering, data modeling, and SQL knowledge is a must, along with a Postgres and Azure programming background. The developer will be responsible for providing design and development solutions for applications in the Postgres (EDB) and Azure space.

Required: Postgres or any other DBA/application development certification, and Azure Databricks development experience.

Essential Job Functions:
• Understand requirements and engage with the team to design and deliver projects
• Design and implement Postgres/Azure projects in CMS
• Design and develop the application lifecycle utilizing Postgres/Azure technologies
• Participate in design, planning, and necessary documentation
• Participate in Agile ceremonies, including daily standups, scrum, retrospectives, demos, and code reviews
• Hands-on with PSQL/SQL development, Python, and Unix scripting
• Engage with the team to develop and deliver cross-functional products

Key Skills:
a. Data engineering, SQL, and ETL
b. Python
c. Azure Databricks
d. Unix scripting
e. Postgres
f. DBMS
g. Data transfer methodologies
h. CI/CD
i. Strong communication

Preferred Candidate Profile:
• 7 years of hands-on experience in designing and developing DB solutions
• 5 years of hands-on experience in Oracle or Postgres DBMS
• 2 years of hands-on experience in Azure Databricks
• 5 years of hands-on experience in Unix scripting, SQL, object-oriented programming, ETL, and unit testing
• Experience with Azure DevOps and CI/CD, as well as agile tools and processes, including JIRA and Confluence

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Designation: Azure Data Engineer
Experience: 5+ years
Location: Chennai, Bangalore, Pune, Mumbai
Notice Period: Immediate joiners/serving notice period
Shift Timing: 3:30 PM IST to 12:30 AM IST

Job Description:
• Must have: Azure Databricks, Azure Data Factory, and Spark SQL with analytical knowledge
• 6-7 years of development experience in data engineering skills
• Strong experience in Spark
• Understand complex data systems by working closely with engineering and product teams
• Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop Data Lake, or other data storage locations

Sincerely,
Sonia
HR Recruiter
Talent Sketchers

Posted 2 months ago

Apply