
106 Synapse Analytics Jobs - Page 4

Set Up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

8 - 13 Lacs

Pune, Maharashtra, India

On-site

We are looking for a proactive and detail-oriented Junior Data Engineer with 2 to 5 years of experience to join our cloud data transformation team. The candidate will work closely with the Data Engineering Lead and Solution Architect to support data migration, pipeline development, testing, and integration efforts on the Microsoft Azure platform.

Key Responsibilities:
- Data Migration Support: Assist in migrating structured and semi-structured data from GCP storage systems to Azure Blob Storage, Azure Data Lake, or Synapse. Help validate and reconcile data post-migration to ensure completeness and accuracy.
- ETL/ELT Development: Build and maintain ETL pipelines using Azure Data Factory, Synapse Pipelines, or Microsoft Fabric. Support the development of data transformation logic (SQL/ADF/Dataflows). Ensure data pipelines are efficient, scalable, and meet defined SLAs.
- Data Modeling & Integration: Support the design of data models to enable effective reporting in Power BI. Prepare clean, structured datasets for downstream KPI reporting and analytics use cases.
- Testing & Documentation: Conduct unit and integration testing of data pipelines. Maintain documentation of data workflows, metadata, and pipeline configurations.
- Collaboration & Learning: Collaborate with the Data Engineering Lead, BI Developers, and other team members. Stay current with Azure technologies and best practices under the guidance of senior team members.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 2 to 5 years of hands-on experience in data engineering or analytics engineering roles. Exposure to at least one cloud platform (preferably Microsoft Azure).

Technical Skills Required:
- Experience with SQL and data transformation logic.
- Familiarity with Azure data services such as Azure Data Factory, Synapse Analytics, Blob Storage, Data Lake, or Microsoft Fabric.
- Basic knowledge of ETL/ELT concepts and data warehousing principles, plus familiarity with Unix shell scripting.
- Familiarity with Power BI datasets or Power Query is a plus.
- Good understanding of data quality and testing practices.
- Exposure to version control systems such as Git.

Soft Skills:
- Eagerness to learn and grow under the mentorship of experienced team members.
- Strong analytical and problem-solving skills.
- Ability to work in a collaborative, fast-paced team environment.
- Good written and verbal communication skills.

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering a family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness

Job Location: Pune, India
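The post-migration validation duty above usually amounts to reconciling row counts, key coverage, and checksum-style aggregates between source and target. A minimal PySpark sketch of such a check, assuming hypothetical container paths and an `order_id`/`amount` schema (none of these names come from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("migration-reconciliation").getOrCreate()

# Hypothetical paths: the staged GCP export vs. the migrated ADLS copy.
SOURCE_PATH = "abfss://staging@examplelake.dfs.core.windows.net/gcp_export/orders"
TARGET_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders"

source = spark.read.parquet(SOURCE_PATH)
target = spark.read.parquet(TARGET_PATH)

# Completeness: row counts must match.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Accuracy: compare a checksum-style aggregate on a numeric column.
src_sum = source.agg(F.sum("amount")).first()[0]
tgt_sum = target.agg(F.sum("amount")).first()[0]
assert src_sum == tgt_sum, f"Amount sum mismatch: {src_sum} vs {tgt_sum}"

# Keys present in source but missing from target (should be empty).
missing = source.select("order_id").subtract(target.select("order_id"))
print(f"Missing keys after migration: {missing.count()}")
```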

Posted 3 months ago

Apply

2.0 - 4.0 years

7 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore / Mumbai / Kolkata / Gurugram / Hyderabad / Pune / Chennai
EXPERIENCE: 2+ years

OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Experience in product management for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience with at least one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
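The Airflow responsibility above usually reduces to a DAG whose tasks wrap the extract/transform/load steps. A minimal sketch for Airflow 2.4+, with a hypothetical pipeline name and placeholder callables (the posting names the tool, not these specifics):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw data from the source system.
    print("extracting...")

def transform():
    # Placeholder: run the Spark/SQL transformation step.
    print("transforming...")

def load():
    # Placeholder: write curated output to the warehouse/lakehouse.
    print("loading...")

with DAG(
    dag_id="daily_sales_etl",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```

Monitoring then comes for free from the Airflow UI and task retries; the same skeleton scales to fan-out/fan-in dependencies by composing more `>>` edges.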

Posted 3 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

Job Summary: This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, and implementation of systems and applications software). He/she performs tasks within planned durations and established deadlines, collaborates with teams to ensure effective communication and support the achievement of objectives, and provides knowledge, development, maintenance, and support for applications.

Responsibilities:
- Generates application documentation.
- Contributes to systems analysis and design.
- Designs and develops moderately complex applications.
- Contributes to integration builds.
- Contributes to maintenance and support.
- Monitors emerging technologies and products.

Technical Skills:
- Cloud Platforms: Azure (Databricks, Data Factory, Data Lake Storage, Synapse Analytics)
- Data Processing: Databricks (PySpark, Spark SQL), Apache Spark
- Programming Languages: Python, SQL
- Data Engineering Tools: Delta Lake, Azure Data Factory, Apache Airflow
- Other: Git, CI/CD

Professional Experience:
- Design and implementation of a scalable data lakehouse on Azure Databricks, optimizing data ingestion, processing, and analysis for improved business insights.
- Develop and maintain efficient data pipelines using PySpark and Spark SQL for extracting, transforming, and loading (ETL) data from diverse sources (Azure and GCP).
- Develop SQL stored procedures for data integrity; ensure data accuracy and consistency across all layers.
- Implement Delta Lake for ACID transactions and data versioning, ensuring data quality and reliability.
- Create frameworks using Databricks and Data Factory to process incremental data for external vendors and applications.
- Implement Azure Functions to trigger and manage data processing workflows.
- Design and implement data pipelines to integrate various data sources, and manage Databricks workflows for efficient data processing.
- Conduct performance tuning and optimization of data processing workflows.
- Provide technical support and troubleshooting for data processing issues.
- Experience with successful migrations from legacy data infrastructure to Azure Databricks, improving scalability and cost savings.
- Collaborate with data scientists and analysts to build interactive dashboards and visualizations on Databricks for data exploration and analysis.
- Effective oral and written management communication skills.

Qualifications:
- Minimum 5 years of relevant experience.
- Bachelor's degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field.
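The Delta Lake item above (ACID handling of incremental data) is commonly implemented as a MERGE. A minimal PySpark sketch, assuming a Databricks-style environment with the delta package available; the table path and `customer_id` key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

TARGET = "/mnt/curated/customers"  # hypothetical Delta table path

# Incremental batch arriving from an upstream source (hypothetical path).
updates = spark.read.parquet("/mnt/landing/customers_incremental")

# MERGE gives an atomic (ACID) upsert: matched rows are updated, new rows
# are inserted, and the whole operation is committed as one table version,
# so it can be inspected or rolled back via time travel.
(
    DeltaTable.forPath(spark, TARGET)
    .alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Versioning: every commit is recorded in the table history.
DeltaTable.forPath(spark, TARGET).history(5).show(truncate=False)
```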

Posted 3 months ago

Apply

6.0 - 11.0 years

30 - 40 Lacs

Chennai

Work from Office

Role & responsibilities:
- Data Engineer with experience working on data migration projects.
- Experience with the Azure data stack, including Data Lake Storage, Synapse Analytics, ADF, Azure Databricks, and Azure ML.
- Solid knowledge of Python, PySpark, and other Python packages.
- Familiarity with ML workflows and collaboration with data science teams.
- Strong understanding of data governance, security, and compliance in financial domains.
- Experience with CI/CD tools and version control systems (e.g., Azure DevOps, Git).
- Experience modularizing and migrating ML logic.

Note: We encourage interested candidates to submit their updated CVs to mohan.kumar@changepond.com

Posted 3 months ago

Apply

11.0 - 17.0 years

20 - 35 Lacs

Indore, Hyderabad

Work from Office

Greetings of the day! We have a job opening for Microsoft Fabric + ADF with one of our clients. If you are interested in this position, please share your updated resume at shaswati.m@bct-consulting.com.

Primary Skill: Microsoft Fabric. Secondary Skill: Azure Data Factory (ADF).
- 12+ years of experience in Microsoft Azure data engineering for analytical projects.
- Proven expertise in designing, developing, and deploying high-volume, end-to-end ETL pipelines for complex models, including batch and real-time data integration frameworks, using Azure, Microsoft Fabric, and Databricks.
- Extensive hands-on experience with Azure Data Factory, Databricks (with Unity Catalog), Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database for managing and processing large-scale data integrations.
- Experience in Databricks cluster optimization and workflow management to ensure cost-effective and high-performance processing.
- Sound knowledge of data modelling, data governance, data quality management, and data modernization processes.
- Develop architecture blueprints and technical design documentation for Azure-based data solutions.
- Provide technical leadership and guidance on cloud architecture best practices, ensuring scalable and secure solutions.
- Keep abreast of emerging Azure technologies and recommend enhancements to existing systems.
- Lead proofs of concept (PoCs) and adopt agile delivery methodologies for solution development and delivery.

Posted 3 months ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Pune

Remote

Join our innovative team and architect the future of data solutions on Azure, Synapse, and Databricks!

Senior Data Engineer (Data Architect)
Additional Details: Notice period: 30 days (maximum). Location: Remote.

About the Role: Design and implement scalable data pipelines, data warehouses, and data lakes that drive business growth. Collaborate with stakeholders to deliver data-driven insights and shape the data landscape.

Requirements:
- 8+ years of experience in data engineering and data architecture
- Strong expertise in Azure services (Synapse Analytics, Databricks, Storage, Active Directory)
- Proven experience in designing and implementing data pipelines, data warehouses, and data lakes
- Strong understanding of data governance, data quality, and data security
- Experience with infrastructure design and implementation, including DevOps practices and tools

Posted 3 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune

Work from Office

We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 3 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune, Gurugram

Work from Office

We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 3 months ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Pune

Work from Office

Experience in designing, developing, implementing, and optimizing data solutions on Microsoft Azure. Proven expertise in leveraging Azure services for ETL processes, data warehousing and analytics, ensuring optimal performance and scalability.

Posted 3 months ago

Apply

5.0 - 10.0 years

1 - 2 Lacs

Pune, Maharashtra, India

On-site

Our client is an EU subsidiary of a global financial bank working in multiple markets and asset classes. The DWH/ETL Developer will work closely with the Development Lead to design and build interfaces and integrate data from a variety of internal and external data sources into the new Enterprise Data Warehouse environment. The ETL Developer will be responsible for developing ETL primarily using Microsoft and Azure technologies, within industry-recognized ETL standards, architecture, and best practices.

Responsibilities:
- Act as a technical expert in the designing, coding, unit testing, supporting, and debugging of data warehouse software components in all aspects of the SDLC
- Apply cloud and ETL engineering skills to solve problems and design approaches
- Troubleshoot and debug ETL pipelines and create unit tests for ETL pipelines
- Assess query performance and actively contribute to optimizing the code
- Write technical documentation and specifications
- Support internal audit by submitting required evidence
- Create reports and dashboards in the BI portal
- Work with the Development Lead, DWH Architect, and QA Engineers to plan, implement, and deliver the best ETL strategies
- Work with business analysts to understand requirements and create technical design specifications, gaining a sound understanding of business processes for related applications so that integration processes fulfill end-user requirements
- Communicate effectively in a collaborative, complex, and high-performing team environment as per Agile principles

Skills:
- Proven work experience as an ETL Developer
- Advanced knowledge of relational databases and dimensional data warehouse modelling concepts
- Good understanding of physical and logical data modeling
- Very good understanding of modern SaaS/PaaS data solutions with a cost-conscious approach
- Expert-level knowledge of the Microsoft data stack
- Experience in developing and deploying data-oriented solutions in the cloud (Azure / Synapse Analytics / Fabric)
- Experience in designing and implementing data transformation and ETL layers using Data Factory and notebooks
- Experience with Power BI for report and dashboard creation; Power Query and/or DAX is an advantage
- Experience in / understanding of Azure Data Lake Storage
- Knowledge and use of CI/CD tools and principles, preferably Azure DevOps or Bamboo
- Strong SQL knowledge: able to create complex SQL queries, with a good understanding of stored procedures, views, indexes, functions, etc.
- Good working knowledge of at least one scripting language; Python is an advantage
- Experience with Git repositories and working with branches; GitHub, Azure DevOps, or Bitbucket experience is preferable
- Ability to troubleshoot and solve complex technical problems
- Good understanding of software development best practices
- Working experience in Agile projects, preferably using JIRA
- Experience working on high-priority projects, preferably with greenfield project experience
- Able to communicate complex information clearly and concisely
- Able to work independently and also to collaborate across the organization
- Highly developed problem-solving skills with minimal supervision
- Understanding of data governance and enterprise concepts, preferably in a banking environment
- Verbal and written communication skills in English are essential

Nice to have:
- Microsoft Fabric
- Snowflake
- Background in SSIS / SSAS / SSRS
- Azure DevTest Labs, ARM templates
- Azure Purview
- Banking / finance experience
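For the dimensional-modelling skills listed above, the core transformation is usually a fact load that resolves business keys to dimension surrogate keys. A minimal PySpark sketch under assumed staging and dimension tables (all names hypothetical, not from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fact-load").getOrCreate()

# Hypothetical inputs: staged transactions and a conformed customer dimension.
stg = spark.read.parquet("/dwh/staging/transactions")
dim_customer = spark.read.parquet("/dwh/dim/customer")  # customer_sk + customer_id

# Resolve the business key to the dimension surrogate key; unmatched rows
# fall back to a reserved "unknown member" key (-1) instead of being dropped.
fact = (
    stg.join(dim_customer.select("customer_id", "customer_sk"),
             on="customer_id", how="left")
       .withColumn("customer_sk", F.coalesce("customer_sk", F.lit(-1)))
       .select("customer_sk", "transaction_date", "amount", "currency")
)

# Write the fact table partitioned by date for pruning in downstream queries.
fact.write.mode("append").partitionBy("transaction_date").parquet(
    "/dwh/fact/transactions"
)
```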

Posted 3 months ago

Apply

7.0 years

7 - 17 Lacs

Gurugram

Hybrid

Position: Azure Data Engineer
Experience: 4-7 years
Location: Gurugram
Type: Full time
Notice period: Immediate to 30 days
Preferred certifications: Azure Data Engineer Associate, Databricks

About the Role: We are looking for a skilled Azure Data Engineer with 4-7 years of experience in Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks. The candidate will play a key role in developing and maintaining data solutions on Azure.

Key Responsibilities:
- Develop and implement data pipelines using Azure Data Factory and Databricks.
- Work with stakeholders to gather requirements and translate them into technical solutions.
- Migrate data from various data sources to Azure Data Lake.
- Optimize data processing workflows for performance and scalability.
- Ensure data quality and integrity throughout the data lifecycle.
- Collaborate with data architects and other team members to design and implement data solutions.

Required Skills:
- Strong experience with Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks.
- Proficiency in SQL, data transformation, and ETL processes.
- Hands-on experience with Azure Data Lake migrations and Python/PySpark.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork skills.

Preferred Qualifications:
- Azure Data Engineer Associate certification.
- Databricks certification.

Mandatory skill set: PySpark, Databricks, Python, and SQL.
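The "migrate data from various sources to Azure Data Lake" responsibility often starts as a partitioned JDBC extract landed as Parquet on ADLS Gen2. A minimal PySpark sketch, assuming a hypothetical SQL Server source and storage account, with the JDBC driver on the cluster classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sqlserver-to-adls").getOrCreate()

# Hypothetical source connection; credentials would come from a secret scope
# or Key Vault in practice, never hard-coded.
jdbc_url = "jdbc:sqlserver://example-src.database.windows.net:1433;database=sales"

orders = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.orders")
    .option("user", "etl_reader")
    .option("password", "<from-key-vault>")
    # A partitioned read parallelizes the extract across executors.
    .option("partitionColumn", "order_id")
    .option("lowerBound", "1")
    .option("upperBound", "10000000")
    .option("numPartitions", "8")
    .load()
)

# Land the raw extract on ADLS Gen2 (hypothetical container/account).
orders.write.mode("overwrite").parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/orders"
)
```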

Posted 3 months ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Databricks, Azure & Mosaic AI

Role Summary: We are seeking a Senior Data Engineer with extensive expertise in Data & Analytics platform modernization using Databricks, Azure, and Mosaic AI. This role will focus on designing and optimizing cloud-based data architectures, leveraging AI-driven automation to enhance data pipelines, governance, and processing at scale.

Key Responsibilities:
- Architect and modernize Data & Analytics platforms using Databricks on Azure.
- Design and optimize lakehouse architectures integrating Azure Data Lake, Databricks Delta Lake, and Synapse Analytics.
- Implement Mosaic AI for AI-driven automation, predictive analytics, and intelligent data engineering solutions.
- Lead the migration of legacy data platforms to a modern cloud-native Data & AI ecosystem.
- Develop high-performance ETL pipelines, integrating Databricks with Azure services such as Data Factory, Synapse, and Purview.
- Utilize MLflow and Mosaic AI for AI-enhanced data processing and decision-making.
- Establish data governance, security, lineage tracking, and metadata management across modern data platforms.
- Work collaboratively with business leaders, data scientists, and engineers to drive innovation.
- Stay at the forefront of emerging trends in AI-powered data engineering and modernization strategies.

Qualifications we seek in you!
Minimum Qualifications:
- Experience in data engineering, cloud platforms, and AI-driven automation.
- Expertise in Databricks (Apache Spark, Delta Lake, MLflow) and Azure (Data Lake, Synapse, ADF, Purview).
- Strong experience with Mosaic AI for AI-powered data engineering and automation.
- Advanced proficiency in SQL, Python, and Scala for big data processing.
- Experience in modernizing Data & Analytics platforms and migrating from on-prem to cloud.
- Knowledge of data lineage, observability, and AI-driven data governance frameworks.
- Familiarity with vector databases and Retrieval-Augmented Generation (RAG) architectures for AI-powered data analytics.
- Strong leadership, problem-solving, and stakeholder management skills.

Preferred Skills:
- Experience with knowledge graphs (Neo4j, TigerGraph) for data structuring.
- Exposure to Kubernetes, Terraform, and CI/CD for scalable cloud deployments.
- Background in streaming technologies (Kafka, Spark Streaming, Kinesis).

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
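The MLflow item above, operationalizing models alongside data pipelines, typically means tracking runs and logging artifacts from pipeline code. A minimal sketch using the open-source MLflow API with scikit-learn; the experiment name, parameters, and toy dataset are assumptions for illustration:

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment name; on Databricks this would be a workspace path.
mlflow.set_experiment("churn-model-dev")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log params, metrics, and the fitted model so the run is reproducible
    # and the artifact can later be promoted through the model registry.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```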

Posted 3 months ago

Apply

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines so that they are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.

Keywords: ETL, Data Pipeline, Data Quality, Data Analytics, Data Modeling, Azure Databricks, Synapse Analytics, Azure Data Factory, Data Validation, Data Engineering
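The data-validation bullet above can be as simple as a set of assertion-style checks run inside the pipeline before a dataset is published. A minimal PySpark sketch with hypothetical rules and column names (the posting names the practice, not the rules):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/pipelines/output/daily_orders")  # hypothetical output

failures = []

# Rule 1: required columns must not contain nulls.
for col in ["order_id", "customer_id", "order_date"]:
    nulls = df.filter(F.col(col).isNull()).count()
    if nulls:
        failures.append(f"{col}: {nulls} null values")

# Rule 2: the primary key must be unique.
dupes = df.groupBy("order_id").count().filter("count > 1").count()
if dupes:
    failures.append(f"order_id: {dupes} duplicate keys")

# Rule 3: amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()
if bad_amounts:
    failures.append(f"amount: {bad_amounts} negative values")

# Fail the pipeline run loudly rather than publishing bad data downstream.
if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```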

Posted 3 months ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Pune

Hybrid

Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for a Senior Azure Data Engineer. Domain: BFSI. Immediate joiners. Send your resumes to careers@dataceria.com.

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your Responsibilities Will Include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, dbt) to serve dynamic frontend interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

What We're Looking For:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate
Work location: Pune (hybrid)
Open positions: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com
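The "API access layer" responsibility above is often a thin REST service in front of curated data. A minimal sketch using FastAPI and pyodbc (an assumed stack; the posting names no framework), with a hypothetical table and connection string:

```python
# pip install fastapi uvicorn pyodbc  (assumed stack; the ad names no framework)
import pyodbc
from fastapi import FastAPI, HTTPException

app = FastAPI(title="curated-data-api")

# Hypothetical Synapse/SQL connection string; in practice this comes from
# Key Vault or environment configuration, never source code.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-synapse.sql.azuresynapse.net;DATABASE=curated;"
    "UID=api_reader;PWD=<from-key-vault>"
)

@app.get("/customers/{customer_id}")
def get_customer(customer_id: int) -> dict:
    """Return one curated customer record for the UI layer."""
    with pyodbc.connect(CONN_STR) as conn:
        row = conn.execute(
            "SELECT customer_id, name, segment "
            "FROM dbo.dim_customer WHERE customer_id = ?",
            customer_id,  # parameterized to avoid SQL injection
        ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return {"customer_id": row.customer_id, "name": row.name, "segment": row.segment}

# Run locally with: uvicorn app:app --reload
```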

Posted 3 months ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities:
- Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, covering system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles
- Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation
- Contribute to reusable component/asset/accelerator development to support capability development
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies
- Participate in customer PoCs to deliver the outcomes
- Participate in delivery reviews / product reviews and quality assurance, and work as a design authority

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
- Experience in data engineering and architecting data platforms
- Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure cloud experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred technical and professional experience:
- Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
- Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.

Posted 3 months ago

Apply

2.0 - 7.0 years

5 - 15 Lacs

Hyderabad, Bengaluru

Work from Office

Job Description - Data Warehouse Senior Engineer / Lead
Location: Bangalore or Hyderabad

Responsibilities:
1) Own the design and development of complex data integrations from multiple systems
2) Coordinate with onshore teams to obtain clarity on requirements, scope, etc.
3) Develop high-quality BI reports that meet the needs of the customer
4) Good communication, interpersonal skills, and team player

Qualifications Required:
1) Strong knowledge of Azure data warehousing and integration solutions such as Azure Data Factory and Synapse Analytics
2) Working knowledge of the Power Platform - Power BI, Power Apps, Dataverse, Power Automate
3) Working knowledge of Azure app integrations - Logic Apps, Function Apps
4) Good knowledge of Azure data storage solutions - Data Lake, Cosmos DB, storage accounts, SQL Database
5) Strong data modelling experience (snowflake, dimensional, etc.) and SQL expertise
6) Strong data analysis skills
7) Knowledge of Microsoft Fabric

Optional Skills:
1) Knowledge of other data integration/streaming services (Databricks, Azure data streaming services, Event Grid, Kafka, etc.) is a plus.
2) Knowledge of the Microsoft Dynamics 365 platform, including working knowledge of exporting/importing data from Dataverse, is a plus.
3) A certification in Azure data engineering is a plus.

Posted 3 months ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid

Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for a Senior Azure Data Engineer. Domain: BFSI.

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your Responsibilities Will Include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, dbt) to serve dynamic frontend interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

What We're Looking For:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate
Work location: Bangalore (hybrid), Chennai
Open positions: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Job Title: Azure Synapse Developer
Position Type: Permanent
Experience: 5+ years
Location: Hyderabad (Work From Office / Hybrid)
Shift Timings: 2 PM to 11 PM
Mode of Interview: 3 rounds (virtual/in-person)
Notice Period: Immediate to 15 days

Job Description: We are looking for an experienced Azure Synapse Developer to join our growing team. The ideal candidate should have a strong background in Azure Synapse Analytics, SSRS, and Azure Data Factory (ADF), with a solid understanding of data modeling, data movement, and integration. As an Azure Synapse Developer, you will work closely with cross-functional teams to design, implement, and manage data pipelines, ensuring the smooth flow of data across platforms. The candidate must have a deep understanding of SQL and ETL processes and, ideally, some exposure to Power BI for reporting and dashboard creation.

Key Responsibilities:
- Develop and maintain Azure Synapse Analytics solutions, ensuring scalability, security, and performance.
- Design and implement data models for efficient storage and retrieval of data in Azure Synapse.
- Utilize Azure Data Factory (ADF) for ETL processes, orchestrating data movement, and integrating data from various sources.
- Leverage SSIS/SSRS/SSAS to build, deploy, and maintain data integration and reporting solutions.
- Write and optimize SQL queries for data manipulation, extraction, and reporting.
- Collaborate with business analysts and other stakeholders to understand reporting needs and create actionable insights.
- Perform performance tuning on SQL queries, pipelines, and Synapse workloads to ensure high performance.
- Provide support for troubleshooting and resolving data integration and performance issues.
- Assist in setting up automated data processes and create reusable templates for data integration.
- Stay updated on Azure Synapse features and tools, recommending improvements to the data platform as appropriate.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer or Azure Synapse Developer.
- Strong proficiency in Azure Synapse Analytics (data warehouse, data lake, and analytics).
- Solid understanding of and experience in data modeling for large-scale data architectures.
- Expertise in SQL for writing complex queries, optimizing performance, and managing large datasets.
- Hands-on experience with Azure Data Factory (ADF) for data integration, ETL processes, and pipeline creation.
- SSRS (SQL Server Reporting Services) and SSIS (SQL Server Integration Services) expertise.
- Power BI knowledge (basic to intermediate) for reporting and data visualization.
- Familiarity with SSAS (SQL Server Analysis Services) and OLAP concepts is a plus.
- Experience in troubleshooting and optimizing complex data processing tasks.
- Strong communication and collaboration skills to work effectively in a team-oriented environment.
- Ability to quickly adapt to new tools and technologies in the Azure ecosystem.

Posted 3 months ago

Apply

5 - 10 years

8 - 14 Lacs

Kolkata

Work from Office

Role: Data Engineer - Azure Synapse Analytics
- Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects)
- Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage
- Experience implementing automated Synapse pipelines
- Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio
- Experience integrating Synapse notebooks and Data Flow
- Should be able to troubleshoot pipelines
- Strong T-SQL programming skills, or skills in any other flavor of SQL
- Experience working with high-volume data and large objects
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines
- Good understanding of data modelling for data warehouses and data marts
- Should have experience with Big Data components like Hive, Sqoop, HDFS, and Spark
- Strong verbal and written communication skills
- Ability to learn, contribute, and grow in a fast-paced environment
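The Synapse Studio work described above commonly pairs pipelines with a PySpark notebook that reads from ADLS Gen2, transforms, and writes curated output back. A minimal sketch of such a notebook cell, assuming a hypothetical storage account with authentication already configured on the Spark pool:

```python
from pyspark.sql import SparkSession, functions as F

# Inside a Synapse notebook the session already exists as `spark`;
# getOrCreate() keeps the sketch runnable standalone as well.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 paths (abfss://<container>@<account>.dfs.core.windows.net/...).
RAW = "abfss://raw@examplelake.dfs.core.windows.net/sales/2025/"
CURATED = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

# Read raw CSV drops, standardize types, and aggregate to a daily grain.
daily = (
    spark.read.option("header", True).csv(RAW)
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("order_date", F.to_date("order_date"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("orders"))
)

# Write curated output as Parquet for downstream Synapse SQL / Power BI use.
daily.write.mode("overwrite").partitionBy("order_date").parquet(CURATED)
```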

Posted 4 months ago

Apply

5 - 10 years

8 - 14 Lacs

Ahmedabad

Work from Office

Role: Data Engineer - Azure Synapse Analytics
- Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects)
- Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage
- Experience implementing automated Synapse pipelines
- Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio
- Experience integrating Synapse notebooks and Data Flow
- Should be able to troubleshoot pipelines
- Strong T-SQL programming skills, or skills in any other flavor of SQL
- Experience working with high-volume data and large objects
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines
- Good understanding of data modelling for data warehouses and data marts
- Should have experience with Big Data components like Hive, Sqoop, HDFS, and Spark
- Strong verbal and written communication skills
- Ability to learn, contribute, and grow in a fast-paced environment

Posted 4 months ago

Apply

5 - 10 years

8 - 14 Lacs

Jaipur

Work from Office

Role: Data Engineer - Azure Synapse Analytics
- Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects)
- Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage
- Experience implementing automated Synapse pipelines
- Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio
- Experience integrating Synapse notebooks and Data Flow
- Should be able to troubleshoot pipelines
- Strong T-SQL programming skills, or skills in any other flavor of SQL
- Experience working with high-volume data and large objects
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines
- Good understanding of data modelling for data warehouses and data marts
- Should have experience with Big Data components like Hive, Sqoop, HDFS, and Spark
- Strong verbal and written communication skills
- Ability to learn, contribute, and grow in a fast-paced environment

Posted 4 months ago

Apply

5 - 10 years

8 - 14 Lacs

Mumbai

Work from Office

Role: Data Engineer - Azure Synapse Analytics
- Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects)
- Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage
- Experience implementing automated Synapse pipelines
- Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio
- Experience integrating Synapse notebooks and Data Flow
- Should be able to troubleshoot pipelines
- Strong T-SQL programming skills, or skills in any other flavor of SQL
- Experience working with high-volume data and large objects
- Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines
- Good understanding of data modelling for data warehouses and data marts
- Should have experience with Big Data components like Hive, Sqoop, HDFS, and Spark
- Strong verbal and written communication skills
- Ability to learn, contribute, and grow in a fast-paced environment

Posted 4 months ago

Apply

2 - 7 years

9 - 13 Lacs

Kochi

Work from Office

We are looking for a highly skilled and experienced Azure Data Engineer with 2 to 7 years of experience to join our team. The ideal candidate should have expertise in Azure Synapse Analytics, PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL, and other relevant technologies.

### Roles and Responsibilities
- Design, develop, and implement data pipelines using Azure Data Factory or Azure Synapse Analytics.
- Develop and maintain data warehouses or data lakes using various tools and technologies.
- Work with various types of data sources, including flat files, JSON, and databases.
- Build workflows and pipelines in Azure Synapse Analytics.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Ensure data quality and integrity by implementing data validation and testing procedures.

### Job Requirements
- Hands-on experience in Azure Data Factory or Azure Synapse Analytics.
- Experience in data warehouse or data lake development.
- Strong knowledge of Spark, Python, and DWH concepts.
- Ability to build workflows and pipelines in Azure Synapse Analytics.
- Fair knowledge of Microsoft Fabric & OneLake, SSIS, ADO, and other relevant technologies.
- Strong analytical, interpersonal, and collaboration skills.

Must have: Azure Synapse Analytics with PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL.
Good to have: Azure Databricks, Microsoft Fabric & OneLake, SSIS, ADO.
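The "various types of data sources" requirement above typically means handling semi-structured inputs with explicit schemas rather than relying on inference. A minimal PySpark sketch reading JSON with a declared schema; the paths and fields are assumptions for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import (
    DoubleType, StringType, StructField, StructType, TimestampType
)

spark = SparkSession.builder.appName("json-ingest").getOrCreate()

# Declaring the schema up front avoids a costly inference pass and makes
# malformed records detectable instead of silently shifting column types.
schema = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("event_time", TimestampType(), nullable=True),
    StructField("user_id", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

events = (
    spark.read.schema(schema)
    .option("mode", "PERMISSIVE")    # keep bad rows as nulls rather than failing
    .json("/landing/events/*.json")  # hypothetical landing path
)

# Quarantine rows missing the required key so they can be inspected later.
good = events.filter("event_id IS NOT NULL")
bad = events.filter("event_id IS NULL")
print(f"ingested={good.count()} quarantined={bad.count()}")
```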

Posted 4 months ago

Apply

5 - 10 years

13 - 17 Lacs

Kochi

Work from Office

We are looking for a highly skilled and experienced Data Engineering Lead to join our team. The ideal candidate will have 5-10 years of experience in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibility
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Must-have skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps, experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming, and integration with business intelligence tools such as Power BI.

Good-to-have skills: Big Data technologies (e.g., Hadoop, Spark), data security.

General skills: Experience with Agile and DevOps methodologies and the software development lifecycle; proactive and responsible for deliverables; escalates dependencies and risks; works with most DevOps tools with limited supervision; completes assigned tasks on time and provides regular status reports; trains new team members; and builds strong relationships with project stakeholders.

### Job Requirements
- Minimum 5 years of experience in designing and implementing scalable data lake architecture and data pipelines.
- Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, and Azure DevOps.
- Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming.
- Familiarity with big data file formats like Parquet and Avro.
- Ability to work with multi-cultural global teams, including virtually.
- Knowledge of cloud solutions such as Azure or AWS; DevOps/Cloud certifications are desired.
- Proactive and responsible for deliverables; escalates dependencies and risks.
- Works with most DevOps tools with limited supervision.
- Completes assigned tasks on time and provides regular status reports.
- Trains new team members and builds strong relationships with project stakeholders.
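For the streaming-pipeline requirement above, one common pattern is Spark Structured Streaming reading from Event Hubs through its Kafka-compatible endpoint. A minimal sketch under assumed namespace and hub names, with the connection string supplied as the SASL password (the spark-sql-kafka package must be on the cluster):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("eventhub-stream").getOrCreate()

# Event Hubs exposes a Kafka-compatible endpoint on port 9093; the connection
# string (from the portal) is passed as the SASL PLAIN password.
NAMESPACE = "example-ns"  # hypothetical Event Hubs namespace
EVENT_HUB = "telemetry"   # hypothetical hub (acts as the Kafka topic)
CONN_STR = "<Event Hubs connection string, from Key Vault>"

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", f"{NAMESPACE}.servicebus.windows.net:9093")
    .option("subscribe", EVENT_HUB)
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option(
        "kafka.sasl.jaas.config",
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        f'username="$ConnectionString" password="{CONN_STR}";',
    )
    .load()
)

# Decode the payload and append it to a Parquet sink with checkpointing,
# which gives at-least-once delivery across restarts.
events = raw.select(F.col("value").cast("string").alias("body"),
                    F.col("timestamp"))

query = (
    events.writeStream.format("parquet")
    .option("path", "/datalake/streaming/telemetry")          # hypothetical sink
    .option("checkpointLocation", "/datalake/_checkpoints/telemetry")
    .start()
)
query.awaitTermination()
```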

Posted 4 months ago

Apply

8 - 10 years

13 - 17 Lacs

Kochi

Work from Office

We are looking for a skilled Data Engineering Lead with 8 to 10 years of experience, based in Bengaluru. The ideal candidate will have a strong background in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibility
- Design and implement scalable data lake architectures using Azure Data Lake services.
- Develop and maintain data pipelines to ingest data from various sources.
- Optimize data storage and retrieval processes for efficiency and performance.
- Ensure data security and compliance with industry standards.
- Collaborate with data scientists and analysts to facilitate data accessibility.
- Monitor and troubleshoot data pipeline issues to ensure reliability.
- Document data lake designs, processes, and best practices.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.
- Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming.
- Experience in integrating with business intelligence tools such as Power BI.

### Job Requirements
- Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks.
- Proficiency in Python (PySpark, NumPy), SQL, ETL, and data warehousing.
- Experience with Agile and DevOps methodologies and the software development lifecycle.
- Proactive and responsible for deliverables; escalates dependencies and risks.
- Works with most DevOps tools with limited supervision; completes assigned tasks on time with regular status reporting.
- Ability to train new team members and build strong relationships with project stakeholders.
- Knowledge of cloud solutions such as Azure or AWS; DevOps/Cloud certifications are desired.
- Ability to work with multi-cultural global teams virtually.

Posted 4 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
