
648 Azure Synapse Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft Azure Data Services, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Additionally, you will monitor and optimize data workflows to enhance performance and reliability, ensuring that data is accessible and actionable for stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and maintain robust data pipelines to support data processing and analytics.
- Collaborate with data architects and analysts to design data models that meet business requirements.

Must have:
- Proficiency in programming languages such as Python and SQL, and experience with big data technology (Spark).
- Experience with cloud platforms, mainly Microsoft Azure.
- Experience with Microsoft Azure Databricks and Azure Data Factory.
- Experience with CI/CD processes and tools, including Azure DevOps, Jenkins, and Git, to ensure smooth and efficient deployment of data solutions.
- Familiarity with APIs to push and pull data from data systems and platforms.
- Familiarity with software architecture and high-level design documents, and with translating them into development tasks.
- Familiarity with the Microsoft data stack, such as Azure Data Factory, Azure Synapse, Databricks, Azure DevOps, and Fabric / Power BI.

Nice to have:
- Experience with machine learning and AI technologies
- Data modelling & architecture
- ETL pipeline design
- Azure DevOps
- Logging and monitoring using Azure / Databricks services
- Apache Kafka

Qualification: 15 years of full-time education
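The ETL (extract, transform, load) process this role centers on follows a simple shape that a toy sketch makes concrete. The code below is plain Python with no Azure dependency; all data, function names, and the in-memory "warehouse" are invented for illustration, where a real pipeline would use Azure Data Factory or Databricks connectors instead:

```python
# Toy ETL flow: extract from a "source", transform, load into a "target".
# Everything here is invented for illustration.

def extract():
    """Pull raw records from the source system."""
    return [{"id": 1, "name": "  Ada "}, {"id": 2, "name": "Grace"}]

def transform(records):
    """Standardize fields; real jobs also validate and deduplicate here."""
    return [{"id": r["id"], "name": r["name"].strip().title()} for r in records]

def load(records, target):
    """Write transformed records to the target store; return rows written."""
    target.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["name"])  # 2 Ada
```

The same three stages map onto ADF activities (copy, data flow, sink) in practice.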

Posted today

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Job Summary
We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities
- Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS).
- Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
- Develop ETL/ELT processes and integrate data from multiple sources.
- Monitor, debug, and optimize workflows for performance and cost-efficiency.
- Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills
- 4+ years of total experience in data engineering.
- 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake).
- Strong experience with Azure Data Factory, Azure SQL, and ADLS.
- Proficient in writing SQL queries and Python/Scala scripting.
- Understanding of CI/CD pipelines and version control systems (e.g., Git).
- Solid grasp of data modeling and warehousing concepts.

Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python

Mandatory Key Skills: Azure Databricks, Azure Data Factory, Azure SQL, ADLS, ETL, ELT, SQL, Data Engineering

Posted 2 hours ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Bengaluru

Remote

Position: Sr. Data Engineer / Data Architect

Your team
You will be a key member of the consulting team, helping clients re-invent their corporate finance function by leveraging advanced analytics. You will work directly with senior client stakeholders to design and implement data strategy in the finance space, covering multiple use cases such as controllership, FP&A, and GPO. You will be responsible for developing technical solutions that deliver scalable analytics leveraging cloud and big data technologies. You will also collaborate with business consultants and product owners to design and implement technical solutions. Communication and organisation skills are key for this position.

Responsibilities:
• Design and drive end-to-end data and analytics solution architecture from concept to delivery
• Design, develop, and support conceptual/logical/physical data models for analytics solutions
• Ensure that industry-accepted data architecture principles, standards, guidelines, and concepts are integrated with those of allied disciplines, and that coordinated roll-out and adoption strategies are in place
• Drive the design, sizing, and setup of Azure environments and related services
• Provide mentoring on data architecture design and requirements to development and business teams
• Review solution requirements and architecture to ensure selection of appropriate technology, efficient use of resources, and integration of multiple systems and technologies
• Advise on new technology trends and possible adoption to maintain competitive advantage
• Participate in pre-sales activities and publish thought leadership
• Work closely with the founders to drive the technology strategy for the organisation
• Help and lead technology team recruitment in various areas of data analytics

Experience Needed:
• Demonstrated experience delivering multiple data solutions
• Demonstrated experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies
• Demonstrated in-depth skills with SQL Server, Azure Synapse, Azure Databricks, HDInsight, and Azure Data Lake, with the ability to configure and administer all aspects of SQL Server
• Demonstrated experience with different data models, including normalised, denormalised, star, and snowflake models; experience with transactional, temporal, time series, and structured and unstructured data
• Data quality management (Microsoft DQS and other data quality and governance tools) and data architecture standardisation experience
• Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premises and in the cloud (VMs, networking, VPN (ExpressRoute), Active Directory, storage (Blob, etc.), Windows/Linux)
• Advanced study/knowledge in the field of computer science or software engineering, along with advanced knowledge of software development and methodologies (Microsoft development lifecycle, including OOP principles, Visual Studio, SDKs, PowerShell, CLI)
• Familiarity with the principles and practices involved in the development, maintenance, and service delivery of software solutions and architectures (Microsoft and Azure DevOps, Azure Automation)
• Strong technical background; stays current with technology and industry developments

Aays Analytics | www.aaysanalytics.com

Posted 16 hours ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Work from Office

We are looking for a skilled and experienced Data Engineer with hands-on expertise in Azure Data Services to join our growing team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and enterprise-grade data solutions using modern Azure tools and technologies.

Job Title: Azure Data Engineer
Location: Pune
Experience: 8+ years

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake Storage (ADLS).
- Implement data ingestion, transformation, and integration processes from various sources (on-premises/cloud).
- Create and manage Azure resources for data solutions, including storage accounts, databases, and compute services.
- Develop and optimize SQL scripts, stored procedures, and views for data transformation and reporting.
- Ensure data quality, governance, and security standards are met using tools like Azure Purview, Azure Key Vault, and Role-Based Access Control (RBAC).
- Collaborate with Data Scientists, BI Developers, and other stakeholders to deliver enterprise-grade data solutions.
- Monitor and troubleshoot data pipeline failures and performance issues.
- Document technical solutions and maintain best practices.

Technical Skills Required:
- Azure Data Factory (ADF): expertise in building pipelines, triggers, linked services, etc.
- Azure Synapse Analytics / SQL Data Warehouse
- Azure Databricks / Spark / PySpark
- Azure Data Lake (Gen2)
- Azure SQL / Cosmos DB / SQL Server
- Strong knowledge of SQL, T-SQL, and performance tuning
- Good understanding of ETL/ELT frameworks and data modeling concepts (star/snowflake schema)
- Experience with CI/CD pipelines using Azure DevOps
- Familiarity with tools like Git, ARM templates, Terraform (optional)
- Knowledge of Power BI integration is a plus

Soft Skills & Additional Qualifications:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Ability to work independently and lead junior team members
- Azure certifications (e.g., DP-203) are preferred

Apply now to be a part of a dynamic and forward-thinking data team at the mail id below: kiran.ghorpade@neutrinotechlabs.com
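For context on the star-schema modeling concept these postings repeatedly ask for: facts (measurable events) are stored separately from dimensions (descriptive lookups) and joined at query time. A minimal illustration in plain Python, with all table and column names invented for the example:

```python
# Minimal star-schema illustration: one fact table keyed to two dimensions.
# Table and column names are invented for this example.

dim_date = {1: {"date": "2024-01-01", "month": "2024-01"}}
dim_product = {10: {"name": "Widget", "category": "Hardware"}}

fact_sales = [
    {"date_key": 1, "product_key": 10, "qty": 3, "amount": 30.0},
    {"date_key": 1, "product_key": 10, "qty": 2, "amount": 20.0},
]

def sales_by_category(facts, products):
    """Aggregate fact rows after resolving the product dimension."""
    totals = {}
    for row in facts:
        category = products[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(sales_by_category(fact_sales, dim_product))  # {'Hardware': 50.0}
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-tables.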

Posted 2 days ago

Apply

5.0 - 9.0 years

16 - 20 Lacs

Noida

Remote

Develop, maintain, and optimize data pipelines using Microsoft Fabric, Azure Data Factory (ADF), and Azure Synapse Analytics. Integrate data from various sources such as Azure SQL Database, Data Lake, Blob Storage, and REST APIs. Design and implement scalable data models and data marts in Microsoft Fabric. Build, deploy, and manage dataflows and notebooks in Fabric. Develop robust ETL/ELT processes for structured and unstructured data. Experience with data governance, security, and monitoring within Azure ecosystems is a plus. Utilize GitHub or Azure DevOps for source control and CI/CD of Fabric assets. Collaborate with data analysts, data scientists, and business stakeholders to define data requirements. Optimize query performance and manage Fabric datasets.

Regards,
Rajan Paul
3Pillar Global

Posted 2 days ago

Apply

3.0 - 6.0 years

5 - 10 Lacs

Chennai

Work from Office

Roles & Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes using Azure Data Factory and related services.
- Work with large, complex datasets and ensure data integrity, quality, and security.
- Implement and optimize data storage solutions (Azure SQL Database, Azure Data Lake, Synapse, etc.).
- Collaborate with business and technical teams to gather requirements and deliver efficient data solutions.
- Monitor and improve performance, reliability, and cost-efficiency of data workflows.
- Ensure adherence to best practices in data engineering, security, and compliance.
- Support data scientists and analysts by providing clean, reliable, and well-structured data.

Required Skills:
- Hands-on experience with Azure Data Factory, Azure Databricks, Synapse Analytics, and Azure SQL.
- Proficiency in SQL for data transformation.
- Strong understanding of data warehousing concepts, ETL processes, and cloud-based data architecture.
- Good problem-solving and communication skills.

Posted 2 days ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Gurugram

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/e1003d62-f76d-4ee1-b787-da40ce6f717f

Job Summary:
You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders. You will design and implement big data and analytics solutions. Communication and organisational skills are key for this position.

Key Responsibilities
- Develop data solutions within Big Data Azure and/or other cloud environments
- Work with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
- Build and design data architectures using Azure Data Factory, Databricks, Data Lake, and Synapse
- Liaise with the CTO, Product Owners, and other Operations teams to deliver engineering roadmaps
- Perform data mapping activities to describe source data, target data, and the high-level or detailed transformations
- Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau
- Data integration, transformation, and modelling
- Maintain all relevant documentation and knowledge bases
- Research and suggest new database products, services, and protocols

Skills & Requirements
Must-Have Skills
- Technical expertise with emerging Big Data technologies, such as Python, Spark, Git, and SQL
- Experience with cloud, container, and microservice infrastructures
- Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams
- Hands-on experience with data modelling, query techniques, and complexity analysis
- Hands-on experience with Azure and Databricks
- Experience with CI/CD and DevOps

Posted 2 days ago

Apply

5.0 - 8.0 years

8 - 16 Lacs

Gurugram, Bengaluru

Work from Office

Responsibilities:
• Design, build, and maintain scalable and reliable data pipelines using Azure Data Factory, Azure Synapse, and Databricks.
• Develop and optimize large-scale data processing jobs using PySpark and Spark SQL on Azure Databricks or Synapse Spark pools.
• Manage and work with large datasets stored in data lakes (ADLS Gen2) and integrate with enterprise data warehouses (e.g., SQL Server, Synapse).
• Implement robust data transformation, cleansing, and aggregation logic to support analytics and reporting use cases.
• Collaborate with BI developers, analysts, and data scientists to provision clean, reliable, and timely datasets.
• Optimize data flows for performance and cost efficiency on Azure cloud platforms.
• Implement data governance, lineage, and security practices across the data architecture.
• Troubleshoot and resolve data pipeline failures and ensure high availability and fault-tolerant design.
• Participate in code reviews, adhere to version control practices, and maintain high coding standards.

Mandatory skill sets:
• Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks
• Apache Spark, PySpark
• SQL Server / T-SQL / Synapse SQL
• Azure Data Lake Storage Gen2
• Big Data ecosystem knowledge (Parquet, Delta Lake, etc.)
• Git, DevOps pipelines for data engineering
• Performance tuning of Spark jobs and SQL queries

Preferred skill sets:
• Python for data engineering workflows
• Azure Monitor, Log Analytics for pipeline observability
• Power BI (data modeling, DAX)
• Delta Lake
• Experience with CI/CD for data pipelines (YAML pipelines, ADF integration)
• Knowledge of data quality tools and frameworks
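The "transformation, cleansing, and aggregation logic" such roles implement usually follows a cleanse-then-group pattern. A toy sketch of that pattern in plain Python, mimicking the shape of a PySpark job without the Spark dependency (all field names and data are invented):

```python
# Toy cleanse-then-aggregate step. Field names are invented for the example.

raw_rows = [
    {"customer": " alice ", "amount": "10.5"},
    {"customer": "BOB", "amount": "bad-value"},   # dropped by cleansing
    {"customer": "alice", "amount": "4.5"},
]

def cleanse(rows):
    """Normalize keys and drop rows whose amount fails to parse."""
    out = []
    for r in rows:
        try:
            out.append({"customer": r["customer"].strip().lower(),
                        "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would route this row to a reject store
    return out

def aggregate(rows):
    """Sum amounts per customer, analogous to a groupBy().sum() in Spark SQL."""
    totals = {}
    for r in rows:
        totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]
    return totals

print(aggregate(cleanse(raw_rows)))  # {'alice': 15.0}
```

In Databricks the same logic would run distributed over DataFrames; the separation of cleansing from aggregation is what keeps such jobs testable.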

Posted 2 days ago

Apply

10.0 - 20.0 years

45 - 55 Lacs

Pune, Bengaluru

Hybrid

Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values guide us in achieving our goals not only as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Role Description
Own the end-to-end design of modern data platforms on Microsoft Azure. Provide architectural leadership with hands-on engineering skills, and guide the data engineering team to build a secure, scalable data platform consisting of a data lake, data lakehouse, or data warehouse. Turn raw data into analytics-ready assets. Act as a liaison between business and technology stakeholders (Cloud Infrastructure, App Development, Security and Compliance) to define data strategy, standards, and governance while optimising cost, performance, and compliance across the Azure ecosystem.

Responsibilities
- Design and document data architectures (data lake, warehouse, lakehouse, MDM, streaming) on Azure Synapse Analytics, Data Lake Storage Gen2, Microsoft Fabric, and Cosmos DB.
- Lead migration of on-prem workloads to Azure with appropriate IaaS, PaaS, or SaaS solutions, right-sizing for cost and performance.
- Guide development of data pipelines using Azure Data Factory, Synapse Pipelines, and dbt, ensuring orchestration, monitoring, and CI/CD via Azure DevOps.
- Model conceptual, logical, and physical data structures; enforce naming standards, data lineage, and master-data management practices.
- Implement robust security (RBAC, managed identities, Key Vault) and data privacy and regulatory controls such as GDPR or HIPAA.
- Define data governance policies, metadata management, and catalogue strategies using Microsoft Purview or equivalent tools.
- Provide technical leadership to data engineers, analysts, and BI developers; lead code/design review meetings and mentor on Azure best practices.
- Collaborate with enterprise architects, product owners, and business SMEs to translate analytical use cases into scalable cloud data designs and a feature roadmap.
- Establish patterns to monitor platform health and automate cost optimisation and capacity planning via Azure features.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.

Posted 2 days ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Mumbai

Work from Office

Position Overview:
We are seeking a highly capable and motivated Microsoft Cloud Data Engineer to design, build, and maintain cloud-based data solutions using Microsoft Azure technologies. This role is key to developing robust, scalable, and secure data pipelines and supporting analytics workloads that power business insights and data-driven decision-making.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage.
- Develop and manage data integration workflows to ingest data from multiple sources, including APIs, on-prem systems, and cloud services.
- Optimize and maintain SQL-based data models, views, and stored procedures in Azure SQL, SQL MI, or Synapse SQL Pools.
- Collaborate with analysts, data scientists, and business teams to gather data requirements and deliver reliable, high-quality datasets.
- Ensure data quality, governance, and security by implementing robust validation, monitoring, and encryption mechanisms.
- Support infrastructure automation using Azure DevOps, ARM templates, or Terraform for resource provisioning and deployment.
- Participate in troubleshooting, performance tuning, and continuous improvement of the data platform.

Qualifications:
Education: Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
Experience: 3+ years of experience in data engineering with a focus on Microsoft Azure data services. Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.
Skills: Strong proficiency in SQL and data modeling. Experience with Python, PySpark, or .NET for data processing. Understanding of data warehousing, data lakes, and ETL/ELT best practices. Familiarity with DevOps tools and practices in an Azure environment. Knowledge of Power BI or similar visualization tools.
Certifications: Microsoft Certified: Azure Data Engineer Associate or equivalent.

Posted 2 days ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Hyderabad, Bengaluru

Work from Office

About Client
Hiring for one of the most prestigious multinational corporations.

Job Title: Azure Databricks
Qualification: Any Graduate or above
Experience: 6 to 16 yrs
Location: Pan India
Skills required: Azure Databricks, Azure Data Factory, Azure SQL with Databricks. A professional certificate is a must.

Key Responsibilities:
- Design and build scalable and robust data pipelines using Azure Databricks, PySpark, and Spark SQL.
- Integrate data from various structured and unstructured data sources using Azure Data Factory, ADLS, Azure Synapse, etc.
- Develop and maintain ETL/ELT processes for ingestion, transformation, and storage of data.
- Collaborate with data scientists, analysts, and other engineers to deliver data products and solutions.
- Monitor, troubleshoot, and optimize existing pipelines for performance and reliability.
- Ensure data quality, governance, and security compliance in all solutions.
- Participate in architectural decisions and cloud data solutioning.

Required Skills:
- 5+ years of experience in data engineering or related fields.
- Strong hands-on experience with Azure Databricks and Apache Spark.
- Proficiency in Python (PySpark), SQL, and performance tuning techniques.
- Experience with Azure Data Factory, Azure Data Lake Storage (ADLS), and Azure Synapse Analytics.
- Solid understanding of data modeling, data warehousing, and data lakes.
- Familiarity with DevOps practices, CI/CD pipelines, and version control (e.g., Git).

Notice period: Immediate or serving notice
Mode of Work: Hybrid
Mode of Interview: Virtual

--
Thanks & Regards
Bhavana B
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Direct Number: 8067432454
bhavana.b@blackwhite.in | www.blackwhite.in

Posted 2 days ago

Apply

5.0 - 8.0 years

9 - 18 Lacs

Bengaluru

Work from Office

Design, build, and optimize ADF pipelines; integrate diverse data sources; ensure quality and performance; collaborate with teams; support deployments. Skills: ADF, SQL, ETL, Azure Data Lake, Synapse, Azure DevOps, pipeline optimization, performance tuning. Benefits: Provident fund, health insurance.

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will be responsible for testing ETL pipelines with a focus on source-to-target validation. Your key skills will include:
- Proficiency in SSMS (SQL Server Management Studio)
- Familiarity with Azure Synapse, including understanding of medallion architecture and PySpark notebooks
- Strong understanding of data warehousing
- Excellent communication skills
- Extensive experience working in Agile

Qualifications required for this role:
- Minimum 5 years of experience as an ETL Test Engineer
- Proficiency in SSMS, Azure Synapse, and testing ETL pipelines
- Strong understanding of data warehousing concepts
- Excellent communication skills
- Experience working in Agile environments

The company offers benefits such as health insurance, internet reimbursement, life insurance, and Provident Fund. This is a full-time position based in person at the work location.
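Source-to-target validation, the core of the role above, typically compares row counts, key sets, and per-column aggregates between the source extract and the loaded target. A minimal sketch of those checks in plain Python (the data is invented; in practice the values would come from SQL queries run through SSMS or Synapse):

```python
# Minimal source-to-target reconciliation: compare row count, key set,
# and an amount checksum between source and target. Data is invented.

source_rows = [(1, 100.0), (2, 250.5), (3, 75.25)]
target_rows = [(1, 100.0), (2, 250.5), (3, 75.25)]

def reconcile(source, target, amount_idx=1):
    """Return a dict of check results plus an overall pass/fail flag."""
    checks = {
        "row_count_match": len(source) == len(target),
        "key_set_match": {r[0] for r in source} == {r[0] for r in target},
        "amount_sum_match": abs(sum(r[amount_idx] for r in source)
                                - sum(r[amount_idx] for r in target)) < 1e-9,
    }
    checks["passed"] = all(checks.values())
    return checks

print(reconcile(source_rows, target_rows)["passed"])  # True
```

Dropping a row from the target makes every check that depends on it fail, which is exactly the regression such a test suite is meant to catch.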

Posted 3 days ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Chandigarh, Bengaluru, Delhi / NCR

Work from Office

Educational Requirements: Bachelor of Engineering, BSc, BCA, MCA, MSc, MTech

Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion:
• As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain.
• You will gather the requirements and specifications to understand client requirements in a detailed manner and translate them into system requirements.
• You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers.
• You will be a key contributor to building efficient programs/systems.
If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
• Knowledge of design principles and fundamentals of architecture
• Understanding of performance engineering
• Knowledge of quality processes and estimation techniques
• Basic understanding of the project domain
• Ability to translate functional/nonfunctional requirements into system requirements
• Ability to design and code complex programs
• Ability to write test cases and scenarios based on the specifications
• Good understanding of SDLC and agile methodologies
• Awareness of the latest technologies and trends
• Logical thinking and problem-solving skills, along with an ability to collaborate

Technical and Professional Requirements:
• Primary skills: Technology -> Cloud Platform -> Azure Development & Solution Architecting

Preferred Skills:
Technology -> Cloud Platform -> Azure Development & Solution Architecting

Posted 3 days ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Pune, Bengaluru

Hybrid

Overview of 66degrees
66degrees is a leading Google Cloud Premier Partner, though this specific role focuses on our Microsoft Azure projects. We believe that great engineering takes heart, and we are dedicated to building innovative data solutions that guide our clients on their digital transformation journey.

Role Description
As an Azure Data Engineer, you will be responsible for building and maintaining robust, scalable data platforms. You will work with a variety of Azure services to deliver raw data into analytics-ready assets and support a secure, efficient data environment. This role requires strong hands-on engineering skills to implement data pipelines, manage data integrity, and ensure compliance with security and governance standards.

Responsibilities
- Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines to ingest, transform, and load data from various sources into the Azure data platform. This includes creating and managing jobs, dataflows, and triggers in Azure Data Factory and Azure Synapse Pipelines.
- Data Modeling and Warehousing: Develop and implement conceptual, logical, and physical data models for data warehouses and data lakes. Work with star schema, snowflake schema, and other data modeling techniques to optimize data for analytics and reporting.
- Data Lake Management: Manage data storage and organization within Azure Data Lake Storage Gen2. Ensure proper folder structures, security, and data lifecycle management for raw, processed, and curated data zones.
- Data Transformation: Utilize tools like dbt (data build tool) within the Azure ecosystem to transform raw data into analytics-ready datasets. Write complex SQL queries and stored procedures to cleanse, enrich, and aggregate data.
- Azure Service Integration: Integrate and work with a variety of Azure services, including Azure Synapse Analytics, Azure Databricks, Azure Functions, and Cosmos DB, to build a comprehensive data solution.
- ETL/ELT Processes: Implement and optimize Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) processes to ensure data is moved efficiently and accurately.
- Security and Governance: Implement and maintain data security best practices. Apply Role-Based Access Control (RBAC), managed identities, and Azure Key Vault to protect sensitive data. Work with Microsoft Purview for data cataloging and governance.
- Performance Optimization: Monitor and optimize the performance of data pipelines, queries, and data warehouse solutions. Fine-tune data structures and processing logic to improve efficiency and reduce costs.

Qualifications
- 4+ years of proven experience with key Azure data services, including Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, and Azure Databricks.
- Strong proficiency in SQL and at least one programming language such as Python or Spark (PySpark, Scala).
- Solid understanding of data warehousing principles, data modeling (dimensional modeling, etc.), and ETL/ELT concepts.
- CI/CD: Experience with version control systems like Git and Continuous Integration/Continuous Deployment (CI/CD) practices using Azure DevOps.
- Ability to work effectively with cross-functional teams, including data scientists, business analysts, and product owners, to understand requirements and deliver data solutions.
- Excellent analytical and problem-solving skills with a focus on delivering high-quality, reliable data solutions.
- Strong verbal and written communication skills to articulate technical concepts to both technical and non-technical stakeholders.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status, or other legally protected class.

Posted 3 days ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Pune

Work from Office

Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve the issues, escalate them to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform clients about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: Azure Synapse Analytics
Experience: 5-8 Years

Posted 3 days ago

Apply

5.0 - 10.0 years

22 - 32 Lacs

pune, chennai

Hybrid

We're Hiring: Senior Technical Engineer - Azure Data Engineering & Customer Support

Location: Pune & Chennai | Experience: 5-10 Years

Are you a seasoned data engineer with a passion for solving complex technical challenges and delivering exceptional customer support? We're looking for a Senior Technical Engineer with deep expertise in Azure Databricks, Azure Data Factory, and Python/Spark to join our growing team.

Key Responsibilities:
- Leverage your expertise in Data Scale, ADF, and Python/Spark to troubleshoot and resolve technical issues
- Provide timely, effective, and empathetic customer support to ensure high satisfaction
- Identify and implement process improvements to boost support team efficiency
- Collaborate with product development, engineering, and cross-functional teams for seamless issue resolution

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field
- 7-8 years of hands-on experience in data engineering and support roles
- Strong command of Azure Data Factory, Azure Databricks, Python/Spark, and Data Scale
- Excellent communication skills to bridge technical and non-technical conversations
- Proven customer-first mindset with a commitment to quality support

Preferred Qualifications:
- Experience with other Azure services and cloud platforms
- Azure certifications or related credentials
- Background in a product-based company

If you're ready to make an impact in a fast-paced, tech-driven environment, we'd love to hear from you! Apply now or DM for more details: swati.pawar@citiustech.com

Posted 3 days ago

Apply

4.0 - 8.0 years

17 - 25 Lacs

chennai

Work from Office

Job Title: Data Engineer (Azure)
Location: Chennai
Shift Timings: 4:30 PM - 2:00 AM IST
Experience: 4+ years

About the Role
We are seeking a highly skilled and experienced Azure Data Engineer to join our team, focusing on building and maintaining scalable data pipelines and infrastructure for our U.S. healthcare payer operations. The ideal candidate will be proficient in modern data engineering practices, cloud-based platforms, and healthcare data management. This role involves designing, developing, and optimizing data workflows to ensure secure, reliable, and high-quality data delivery across the organization.

Key Responsibilities
- Design, develop, and optimize scalable ETL/ELT pipelines to ingest, transform, and load healthcare data from multiple sources into data warehouses/lakes.
- Build and maintain robust data models (e.g., star schema, snowflake schema) to support reporting, analytics, and ML initiatives.
- Write optimized SQL queries and Python scripts for data cleansing, validation, transformation, and automation.
- Ensure data integrity, quality, and governance, maintaining compliance with HIPAA and other regulatory requirements.
- Collaborate with data scientists, ML engineers, and business stakeholders to understand data requirements and deliver effective solutions.
- Monitor and troubleshoot pipeline performance, implementing enhancements for scalability and reliability.
- Automate workflows using orchestration tools and integrate with CI/CD pipelines.
- Contribute to evaluating and implementing new data tools and technologies.
- Maintain thorough documentation of data pipelines, models, and flows.

Required Technical Skills
- Programming: Strong proficiency in Python (data processing, automation).
- Databases: Advanced SQL expertise, with hands-on experience in relational and NoSQL databases.
- ETL/ELT: Proven experience designing and implementing scalable pipelines.
- Azure Cloud: Hands-on experience with Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and related services.
- Containerization: Familiarity with Docker for deploying data applications.
- Version Control: Proficient with Git/GitHub for code management.
- Testing: Experience using unittest/Pytest frameworks for pipeline validation.
- CI/CD: Working knowledge of continuous integration and deployment in Azure environments.
- Data Modeling: Strong understanding of data warehousing concepts and schema design.

Posted 3 days ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

bengaluru

Remote

Key Responsibilities
- Design, build, and maintain ETL pipelines using Azure Data Factory (preferably Fabric Data Factory) and SQL.
- Write and optimize complex SQL logic to ensure performance and scalability across large datasets.
- Ensure data quality, monitoring, and observability with restartability, idempotency, and debugging principles in mind.
- Enhance ETL processes with Python scripting where applicable.
- Collaborate with business unit partners to translate requirements into effective data solutions.
- Document workflows, standards, and best practices; mentor junior team members.
- Implement version control (GitHub) and CI/CD practices across SQL and ETL processes.
- Work with Azure components such as Blob Storage and integrate with orchestration tools.
- Apply troubleshooting and performance-tuning techniques to improve data pipelines.

Required Skills & Experience
- Strong hands-on SQL development with a focus on integration, optimization, and performance tuning.
- Proven experience with Azure Data Factory (ADF), with a preference for Fabric Data Factory.
- Exposure to ETL/orchestration tools such as Matillion (preferred but not mandatory).
- Proficiency in Python for ETL enhancements and automation.
- Understanding of cloud platforms, particularly Microsoft Azure services.
- Familiarity with version control (GitHub) and CI/CD in data environments.
- Excellent communication and technical writing skills to engage with stakeholders.
- Advanced Azure certifications would be a plus.

Technology & Skill Areas
- Core: Azure Data Factory / Fabric Data Factory, SQL, Python
- Secondary: Matillion, Azure Blob Storage
- Skill Areas: Data Integration, Data Quality, Performance Optimization, Cloud Data Engineering
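The restartability and idempotency principles this posting calls out come down to one idea: re-running a failed load must not duplicate data. A minimal Python sketch of that idea (a toy illustration with hypothetical names, not any specific vendor API):

```python
def load_idempotent(target, records, key="id"):
    """Upsert each record by its business key. Replaying the same
    batch after a failure leaves the target unchanged, which is what
    makes the pipeline safely restartable."""
    for rec in records:
        target[rec[key]] = rec  # replace-by-key rather than append
    return target

target = {}                     # stand-in for a warehouse table
batch = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
load_idempotent(target, batch)
load_idempotent(target, batch)  # simulated restart: the same batch replayed
print(len(target))              # 2 (no duplicate rows after the retry)
```

An append-only load would have left four rows after the retry; keying writes by a stable identifier (or using a MERGE/upsert in SQL) is the usual way to get the same guarantee in ADF or Databricks pipelines.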

Posted 4 days ago

Apply

10.0 - 17.0 years

0 Lacs

chennai, coimbatore, bengaluru

Hybrid

Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 13th Sep (Saturday) 2025 - Azure Databricks / Data Factory / SQL & PySpark or Spark / Synapse / MS Fabric

Dear Candidate,

I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer/Architect. We are hosting an Open Walk-in Drive in Chennai, Tamil Nadu on 13th Sep (Saturday) 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark or Spark align perfectly with what we are seeking.

Total years of experience: 6 to 20 years
Relevant experience: 3+ years

Details of the Walk-in Drive:
Date: 13th Sep (Saturday) 2025
Time: 9:30 AM to 4:00 PM
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Work Location: Chennai, Bangalore, Coimbatore, Mumbai, Pune & Noida

Key Skills and Experience: We are looking for Azure Data Engineer candidates with expertise in:
- Databricks
- Data Factory
- SQL
- PySpark/Spark

Roles and Responsibilities: As part of our dynamic team, you will be responsible for:
- Designing, implementing, and maintaining data pipelines
- Collaborating with cross-functional teams to understand data requirements
- Optimizing and troubleshooting data processes
- Leveraging Azure data services to build scalable solutions

What to Bring:
1. Updated resume
2. Photo ID and a passport-size photo

How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies.

If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com or +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

chennai, coimbatore, bengaluru

Hybrid

Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 13th Sep (Saturday) 2025 - Azure Databricks / Data Factory / SQL & PySpark or Spark / Synapse / MS Fabric

Dear Candidate,

I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer/Architect. We are hosting an Open Walk-in Drive in Chennai, Tamil Nadu on 13th Sep (Saturday) 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark or Spark align perfectly with what we are seeking.

Total years of experience: 6 to 20 years
Relevant experience: 3+ years

Details of the Walk-in Drive:
Date: 13th Sep (Saturday) 2025
Time: 9:30 AM to 4:00 PM
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Work Location: Chennai, Bangalore, Coimbatore, Mumbai, Pune & Noida

Key Skills and Experience: We are looking for Azure Data Engineer candidates with expertise in:
- Databricks
- Data Factory
- SQL
- PySpark/Spark

Roles and Responsibilities: As part of our dynamic team, you will be responsible for:
- Designing, implementing, and maintaining data pipelines
- Collaborating with cross-functional teams to understand data requirements
- Optimizing and troubleshooting data processes
- Leveraging Azure data services to build scalable solutions

What to Bring:
1. Updated resume
2. Photo ID and a passport-size photo

How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies.

If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com or +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Posted 4 days ago

Apply

12.0 - 22.0 years

45 - 60 Lacs

chennai, chennai - india, only chennai

Work from Office

Skills: SAP Data & Analytics Tower Lead
Position: Lead Consultant / Technical Specialist / Senior Technical Specialist / Team Leader / Manager / Senior Manager / Architect / Senior Architect
Work Experience: 8.00 to 25.00 years
Work Location: Only Chennai
Job Type: Permanent Employee (Direct Payroll)
This opening is with a CMM Level 5 Indian MNC (direct payroll), Chennai location only.

Have you applied before? Yes/No

All of the details below are mandatory - please send:
- Current Location
- Preferred Location
- Total Experience
- Relevant Experience
- Primary Active Personal Email ID
- Alternate Active Personal Email ID
- Primary Contact Number
- Alternate Contact Number
- Current CTC
- Expected CTC
- Notice Period
- Last Working Date
- Current Payroll Company Name (Contract / Permanent)
- DOB & Place of Birth

Mandatory JD - SAP Data & Analytics Tower Lead
Expertise in SAP/SAP BW, Snowflake, AWS Data Analytics Services, BI tools, and advanced analytics.

Team Leadership: Provide leadership and guidance on the design and management of data for data applications; formulate best practices and organize processes for data management, governance, and evolution. Build processes and tools to maintain high data availability, quality, and maintainability. Lead and mentor a team of data professionals, including data analysts, data engineers, and database administrators.
Strategic Planning: Become a trusted analytical leader and partner to the functional areas of support, and proactively identify improvement opportunities through analytics.
Data Management: Oversee the collection, storage, and maintenance of data to ensure efficiency and security.
Data Quality: Implement processes to ensure data accuracy, consistency, and reliability.
Data Integration: Coordinate the integration of data from various sources to provide a unified view.
Data Governance: Establish and enforce policies and procedures for data usage, privacy, and compliance.
Stakeholder Collaboration: Work closely with other departments to understand their data needs and provide tailored solutions.

Experience requirements:
- 5+ years of experience leading teams in the implementation of data analytics applications
- 5+ years of hands-on design and development experience implementing a wide range of data analytics applications on AWS/Snowflake/Azure OpenAI, using data from SAP, Salesforce, and other data sources
- Experience with AWS services such as S3, Glue, AWS Airflow, and Snowflake
- Proven experience in analytics, data management, data integration, and data governance
- Excellent understanding of data quality principles and practices; proficiency in data management tools and technologies
- Strong analytical and problem-solving skills; excellent communication and interpersonal skills

Domain expertise: manufacturing industry, chemical processing, supply chain and logistics; SDLC and project experience.

- At least 3+ years of experience implementing the Amazon Web Services listed above
- At least 3+ years of experience as an SAP or SAP BW developer
- At least 3+ years of experience in Snowflake (or Redshift, Google BigQuery, or Azure Synapse)
- At least 3+ years of experience as a data integration developer with Fivetran/HVR/DBT and Boomi (or Talend/Informatica)
- At least 2+ years of experience with Azure OpenAI, Azure AI Services, Microsoft Copilot Studio, Power BI, and Power Automate
- Visualization tools: proficiency in data visualization tools such as Tableau, Power BI, or Looker
- Experience implementing a wide range of Gen AI use cases
- Hands-on experience in the end-to-end implementation of data analytics applications on AWS
- Hands-on experience in the end-to-end implementation of SAP BW applications for FICO, Sales & Distribution, and Materials Management
- Hands-on experience with Fivetran/HVR/Boomi in developing data integration services with data from SAP, Salesforce, Workday, and other SaaS applications
- Hands-on experience implementing Gen AI use cases using Azure services
- Hands-on experience implementing advanced analytics use cases using Python/R
- Certifications in project management (e.g., PMP, PRINCE2)
- AWS Certified Data Analytics - Specialty

Warm Regards,
Sanjay Mandavkar
Recruitment Manager | Think People Solutions Pvt. Ltd.
Empowering People. Enabling Growth.
Email: sanjay@thinkpeople.in | www.thinkpeople.in

Posted 4 days ago

Apply

5.0 - 9.0 years

11 Lacs

hyderabad, pune, chennai

Work from Office

Data Quality specialist with 5+ years of hands-on experience in data quality testing within large-scale data analytics projects. The ideal candidate will have strong expertise in Azure Cloud, Databricks, PySpark, and Informatica Data Marketplace.

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

In this role, you will be part of Kimberly-Clark's International Family Care and Professional (IFP) business, which is focused on creating Better Care for a Better World. As a Senior Data Engineer, you will play a critical role in advancing the data modernization journey, enabling scalable analytics, and driving business insights across global markets.

Your main responsibilities will include:
- Designing, developing, and optimizing scalable data pipelines using Azure Data Factory and related services.
- Integrating and transforming data from SAP ECC/S4, Snowflake, and other enterprise systems to support analytics and reporting needs.
- Building and maintaining dimensional data models and data marts in Snowflake to support self-service BI and advanced analytics.
- Collaborating with cross-functional teams to understand data requirements and deliver fit-for-purpose solutions.
- Implementing data quality checks, monitoring, and governance practices to ensure data reliability and compliance.
- Supporting the development of analytics products including dashboards, predictive models, and machine learning pipelines.
- Contributing to the evolution of the enterprise data architecture and adopting new technologies and best practices.

About Us: Kimberly-Clark is known for iconic brands like Huggies, Kleenex, Cottonelle, and more. With a legacy of over 150 years, Kimberly-Clark is committed to driving innovation, growth, and impact. As part of the team, you will be empowered to explore new ideas and achieve results.

About You: You are a high-performing individual who thrives in a performance culture driven by authentic caring. You value sustainability, inclusion, wellbeing, and career development. To succeed in this role, you should have:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering roles, preferably in a global enterprise environment.
- Hands-on experience with Azure Data Factory, Azure Synapse, and Snowflake.
- Proficiency in SQL, Python, and data modeling techniques.
- Experience working with SAP data structures and integration patterns.
- Strong problem-solving skills and the ability to work independently in a fast-paced environment.
- Excellent communication and collaboration skills.
- Experience in the CPG or healthcare industry.
- Exposure to SAP S/4HANA and SAP BW.
- Knowledge of data privacy and compliance standards (e.g., GDPR).

To Be Considered: Click the Apply button and complete the online application process. A member of the recruiting team will review your application and follow up if you are a great fit for this role. Please note that the statements above are not exhaustive. Employment is subject to verification of pre-screening tests, which may include drug screening, a background check, and a DMV check.

Posted 4 days ago

Apply

7.0 - 12.0 years

6 - 11 Lacs

bengaluru

Work from Office

Key Responsibilities:
- Lead the design, development, and deployment of end-to-end data solutions on the Azure Databricks platform.
- Work with data scientists, data engineers, and business analysts to design and implement data pipelines and machine learning models.
- Develop efficient, scalable, and high-performance data processing workflows and analytics solutions using Databricks, Apache Spark, and Azure Synapse.
- Manage and optimize Databricks clusters and data pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver optimal solutions.
- Design and implement ETL processes using Databricks Notebooks, Azure Data Factory, and other Azure services.
- Ensure high availability, performance, and security of cloud-based data solutions.
- Implement best practices for data quality, security, and governance.
- Monitor system performance and troubleshoot issues related to Databricks clusters or data pipelines.
- Stay up to date with the latest advancements in cloud computing, big data, and machine learning technologies.

Posted 4 days ago

Apply

Exploring Azure Synapse Jobs in India

The Azure Synapse job market in India is currently experiencing a surge in demand as organizations increasingly adopt cloud solutions for their data analytics and business intelligence needs. With the growing reliance on data-driven decision-making, professionals with expertise in Azure Synapse are highly sought after in the job market.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Azure Synapse professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

A typical career path in Azure Synapse may include roles such as Junior Developer, Senior Developer, Tech Lead, and Architect. As professionals gain experience and expertise in the platform, they can progress to higher-level roles with more responsibilities and leadership opportunities.

Related Skills

In addition to expertise in Azure Synapse, professionals in this field are often expected to have knowledge of SQL, data warehousing concepts, ETL processes, data modeling, and cloud computing principles. Strong analytical and problem-solving skills are also essential for success in Azure Synapse roles.
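The data-warehousing and SQL skills mentioned above center on dimensional modeling: a fact table joined to dimension tables in a star schema. Here is a minimal, self-contained sketch using Python's built-in sqlite3 as a stand-in warehouse (illustrative only; the table and column names are hypothetical, and a real Synapse workload would run this T-SQL against a SQL pool instead):

```python
import sqlite3

# In-memory database standing in for a warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widgets'), (2, 'gadgets');
    INSERT INTO fact_sales  VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")

# Classic star-schema query: join the fact table to a dimension and aggregate.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
print(rows)  # [('gadgets', 7.5), ('widgets', 25.0)]
```

The same shape scales up directly: in a dedicated SQL pool the fact table would typically be hash-distributed on its join key and the small dimension replicated, so the join stays local to each distribution.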

Interview Questions

  • What is Azure Synapse Analytics and how does it differ from Azure Data Factory? (medium)
  • Can you explain the differences between a Data Warehouse and a Data Lake? (basic)
  • How do you optimize data loading and querying performance in Azure Synapse? (advanced)
  • What is PolyBase in Azure Synapse and how is it used for data integration? (medium)
  • How do you handle security and compliance considerations in Azure Synapse? (advanced)
  • Explain the concept of serverless SQL pools in Azure Synapse. (medium)
  • What are the different components of an Azure Synapse workspace? (basic)
  • How do you monitor and troubleshoot performance issues in Azure Synapse? (advanced)
  • Describe your experience with building data pipelines in Azure Synapse. (medium)
  • Can you walk us through a recent project where you used Azure Synapse for data analysis? (advanced)
  • How do you ensure data quality and integrity in Azure Synapse? (medium)
  • What are the key features of Azure Synapse Link for Azure Cosmos DB? (advanced)
  • How do you handle data partitioning and distribution in Azure Synapse? (medium)
  • Discuss a scenario where you had to optimize data storage and processing costs in Azure Synapse. (advanced)
  • What are some best practices for data security in Azure Synapse? (medium)
  • How do you automate data integration workflows in Azure Synapse? (advanced)
  • Can you explain the role of Azure Data Lake Storage Gen2 in Azure Synapse? (medium)
  • Describe a situation where you had to collaborate with cross-functional teams on a data project in Azure Synapse. (advanced)
  • How do you ensure data governance and compliance in Azure Synapse? (medium)
  • What are the advantages of using Azure Synapse over traditional data warehouses? (basic)
  • Discuss your experience with real-time analytics and streaming data processing in Azure Synapse. (advanced)
  • How do you handle schema evolution and versioning in Azure Synapse? (medium)
  • What are some common challenges you have faced while working with Azure Synapse and how did you overcome them? (advanced)
  • Explain the concept of data skew and how it can impact query performance in Azure Synapse. (medium)
  • How do you stay updated on the latest developments and best practices in Azure Synapse? (basic)
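Several of the questions above (distribution, partitioning, data skew) come down to how evenly a distribution key spreads rows. The toy Python model below illustrates the idea; the only Synapse-specific fact it relies on is that dedicated SQL pools spread hash-distributed tables across 60 distributions, and the hash function and data are stand-ins:

```python
NUM_DISTRIBUTIONS = 60  # fixed number of distributions in a dedicated SQL pool

def distribution_of(key):
    # Stand-in for the engine's deterministic hash of the distribution column.
    return hash(key) % NUM_DISTRIBUTIONS

def hottest(rows, key_fn):
    """Rows landing on the busiest distribution for a given key choice."""
    buckets = [0] * NUM_DISTRIBUTIONS
    for row in rows:
        buckets[distribution_of(key_fn(row))] += 1
    return max(buckets)

# 6000 orders: unique order_id, but 90% share one country value.
rows = [{"order_id": i, "country": "IN" if i % 10 else "US"}
        for i in range(6000)]

print(hottest(rows, lambda r: r["order_id"]))  # 100 rows per distribution (6000/60)
print(hottest(rows, lambda r: r["country"]))   # at least 5400 rows pile onto one distribution
```

A skewed key serializes the query on one hot distribution while the other 59 sit idle, which is why high-cardinality, evenly distributed columns are preferred as hash-distribution keys.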

Closing Remark

As the demand for Azure Synapse professionals continues to rise in India, now is the perfect time to upskill and prepare for exciting career opportunities in this field. By honing your expertise in Azure Synapse and related skills, you can position yourself as a valuable asset in the job market and embark on a rewarding career journey. Prepare diligently, showcase your skills confidently, and seize the numerous job opportunities waiting for you in the Azure Synapse domain. Good luck!
