5.0 - 10.0 years
10 - 20 Lacs
Nagpur, Pune
Work from Office
We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.
Posted 1 week ago
4.0 - 7.0 years
10 - 20 Lacs
Pune
Work from Office
Experience in designing, developing, implementing, and optimizing data solutions on Microsoft Azure. Proven expertise in leveraging Azure services for ETL processes, data warehousing and analytics, ensuring optimal performance and scalability.
Posted 1 week ago
5.0 - 10.0 years
1 - 2 Lacs
Pune, Maharashtra, India
On-site
Our client is an EU subsidiary of a Global Financial Bank working in multiple markets and asset classes. The DWH / ETL Developer will work closely with the Development Lead to design and build interfaces and integrate data from a variety of internal and external data sources into the new Enterprise Data Warehouse environment. The ETL Developer will be responsible for developing ETL, primarily using Microsoft and Azure technologies, within industry-recognized ETL standards, architecture, and best practices.
Responsibilities
- Act as a technical expert in the design, coding, unit testing, support, and debugging of data warehouse software components across all aspects of the SDLC
- Apply cloud and ETL engineering skills to solve problems and design approaches
- Troubleshoot and debug ETL pipelines and create unit tests for them
- Assess query performance and actively contribute to optimizing the code
- Write technical documentation and specifications
- Support internal audit by submitting required evidence
- Create reports and dashboards in the BI portal
- Work with the Development Lead, DWH Architect, and QA Engineers to plan, implement, and deliver the best ETL strategies
- Work with business analysts to understand requirements and create technical design specifications, gaining a sound understanding of business processes for related applications so that integration processes fulfill end-user requirements
- Communicate effectively in a collaborative, complex, high-performing team environment following Agile principles
Skills
- Proven work experience as an ETL Developer
- Advanced knowledge of relational databases and dimensional data warehouse modelling concepts
- Good understanding of physical and logical data modeling
- Very good understanding of modern SaaS / PaaS data solutions with a cost-conscious approach
- Expert-level knowledge of the Microsoft data stack
- Experience developing and deploying data-oriented solutions in the cloud (Azure / Synapse Analytics / Fabric)
- Experience designing and implementing data transformation and ETL layers using Data Factory and notebooks
- Experience with Power BI for report and dashboard creation; Power Query and/or DAX is an advantage
- Experience in / understanding of Azure Data Lake Storage
- Knowledge and use of CI/CD tools and principles, preferably Azure DevOps or Bamboo
- Strong SQL knowledge: able to create complex SQL queries, with a good understanding of stored procedures, views, indexes, functions, etc.
- Good working knowledge of at least one scripting language; Python is an advantage
- Experience with Git repositories and working with branches; GitHub, Azure DevOps, or Bitbucket experience is preferable
- Ability to troubleshoot and solve complex technical problems
- Good understanding of software development best practices
- Working experience on Agile projects, preferably using JIRA
- Experience working on high-priority projects, preferably including greenfield project experience
- Able to communicate complex information clearly and concisely
- Able to work independently and also collaborate across the organization
- Highly developed problem-solving skills with minimal supervision
- Understanding of data governance and enterprise concepts, preferably in a banking environment
- Verbal and written communication skills in English are essential
Nice to have: Microsoft Fabric; Snowflake; background in SSIS / SSAS / SSRS; Azure DevTest Labs, ARM templates; Azure Purview; banking / finance experience.
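The posting above leans on strong SQL combined with Data Factory-driven notebooks. As a rough, non-authoritative illustration of that combination, here is a minimal Synapse-notebook-style PySpark cell; the storage path, table names, and columns are invented for the sketch, not taken from the client's environment.

```python
# Minimal sketch of a notebook ETL step, assuming hypothetical ADLS paths and
# warehouse tables; a real pipeline would be parameterized from Data Factory.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dwh-etl-sketch").getOrCreate()

# Ingest a raw extract (path is illustrative only).
trades = (
    spark.read.option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/trades/2024-06-01/")
)

# Basic cleansing before loading the staging layer.
clean = (
    trades.dropDuplicates(["trade_id"])
    .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
    .filter(F.col("notional").isNotNull())
)
clean.write.mode("overwrite").saveAsTable("staging.trades")

# Dimensional load expressed in SQL, since the role emphasizes complex SQL.
spark.sql("""
    INSERT INTO dwh.fact_trades
    SELECT t.trade_id, d.date_key, t.notional
    FROM staging.trades t
    JOIN dwh.dim_date d ON d.calendar_date = t.trade_date
""")
```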
Posted 1 week ago
7.0 years
7 - 17 Lacs
Gurugram
Hybrid
Position: Azure Data Engineer. Experience: 4-7 Years. Location: Gurugram. Type: Full Time. Notice period: Immediate to 30 days. Preferred Certifications: Azure Data Engineer Associate, Databricks.
About the Role: We are looking for a skilled Azure Data Engineer with 4-7 years of experience in Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks. The candidate will play a key role in developing and maintaining data solutions on Azure.
Key Responsibilities: Develop and implement data pipelines using Azure Data Factory and Databricks. Work with stakeholders to gather requirements and translate them into technical solutions. Migrate data from various data sources to Azure Data Lake. Optimize data processing workflows for performance and scalability. Ensure data quality and integrity throughout the data lifecycle. Collaborate with data architects and other team members to design and implement data solutions.
Required Skills: Strong experience with Azure Data Services, including Azure Data Factory (ADF), Synapse Analytics, and Databricks. Proficiency in SQL, data transformation, and ETL processes. Hands-on experience with Azure Data Lake migrations and Python/PySpark. Strong problem-solving and analytical skills. Excellent communication and teamwork skills.
Preferred Qualifications: Azure Data Engineer Associate certification. Databricks certification. Mandatory skill set: PySpark, Databricks, Python, and SQL.
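Since the mandatory skill set is PySpark, Databricks, Python, and SQL, a brief hedged sketch of the kind of Databricks pipeline step described above may help; the storage account, container, and column names are placeholders rather than details from the posting.

```python
# Illustrative Databricks-style PySpark job: land parquet data, standardize
# types, and write partitioned Delta for downstream analytics. All paths and
# columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-migration-sketch").getOrCreate()

orders = spark.read.parquet(
    "abfss://landing@examplelake.dfs.core.windows.net/orders/"
)

# Typical transformation step: standardize types and derive a partition column.
curated = (
    orders.withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Partitioned Delta keeps large scans selective, which supports the
# performance and scalability responsibilities listed above.
(
    curated.write.format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```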
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Databricks, Azure & Mosaic AI.
Role Summary: We are seeking a Senior Data Engineer with extensive expertise in Data & Analytics platform modernization using Databricks, Azure, and Mosaic AI. This role will focus on designing and optimizing cloud-based data architectures, leveraging AI-driven automation to enhance data pipelines, governance, and processing at scale.
Key Responsibilities:
- Architect and modernize Data & Analytics platforms using Databricks on Azure.
- Design and optimize Lakehouse architectures integrating Azure Data Lake, Databricks Delta Lake, and Synapse Analytics.
- Implement Mosaic AI for AI-driven automation, predictive analytics, and intelligent data engineering solutions.
- Lead the migration of legacy data platforms to a modern cloud-native Data & AI ecosystem.
- Develop high-performance ETL pipelines, integrating Databricks with Azure services such as Data Factory, Synapse, and Purview.
- Utilize MLflow and Mosaic AI for AI-enhanced data processing and decision-making.
- Establish data governance, security, lineage tracking, and metadata management across modern data platforms.
- Work collaboratively with business leaders, data scientists, and engineers to drive innovation.
- Stay at the forefront of emerging trends in AI-powered data engineering and modernization strategies.
Qualifications we seek in you!
Minimum Qualifications:
- Experience in Data Engineering, Cloud Platforms, and AI-driven automation.
- Expertise in Databricks (Apache Spark, Delta Lake, MLflow) and Azure (Data Lake, Synapse, ADF, Purview).
- Strong experience with Mosaic AI for AI-powered data engineering and automation.
- Advanced proficiency in SQL, Python, and Scala for big data processing.
- Experience in modernizing Data & Analytics platforms, migrating from on-prem to cloud.
- Knowledge of Data Lineage, Observability, and AI-driven Data Governance frameworks.
- Familiarity with Vector Databases and Retrieval-Augmented Generation (RAG) architectures for AI-powered data analytics.
- Strong leadership, problem-solving, and stakeholder management skills.
Preferred Skills:
- Experience with Knowledge Graphs (Neo4j, TigerGraph) for data structuring.
- Exposure to Kubernetes, Terraform, and CI/CD for scalable cloud deployments.
- Background in streaming technologies (Kafka, Spark Streaming, Kinesis).
Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
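The role names MLflow among its tools. As a small, hedged example of MLflow's tracking API only (the experiment name, parameters, and metrics below are invented for illustration, not Genpact's setup):

```python
# Sketch of MLflow experiment tracking around a pipeline run; in practice the
# logged values would come from real validation results, not constants.
import mlflow

mlflow.set_experiment("/Shared/pipeline-quality-sketch")  # hypothetical path

with mlflow.start_run(run_name="daily-load"):
    mlflow.log_param("source", "orders")
    mlflow.log_param("engine", "databricks")
    mlflow.log_metric("rows_loaded", 1_250_000)
    mlflow.log_metric("null_ratio", 0.002)
```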
Posted 1 week ago
9.0 - 14.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Microsoft Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
Keywords: ETL, Data Pipeline, Data Quality, Data Analytics, Data Modeling, Azure Databricks, Synapse Analytics, Azure Data Factory, Data Validation, Data Engineering
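One responsibility above is ensuring data quality through validation techniques and frameworks. A minimal sketch of such a check in PySpark follows, assuming a hypothetical curated table and key column; dedicated frameworks (e.g., Great Expectations) would formalize this.

```python
# Fail-fast data-quality gate: run simple assertions over a curated table and
# raise so the orchestrating pipeline surfaces the failure for resolution.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()
df = spark.table("curated.orders")  # table name is illustrative

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_keys": df.count() == df.dropDuplicates(["order_id"]).count(),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```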
Posted 2 weeks ago
7.0 - 9.0 years
25 - 35 Lacs
Pune
Hybrid
Warm Greetings from Dataceria Software Solutions Pvt Ltd!
We are looking for: Senior Azure Data Engineer. Domain: BFSI. Immediate joiners. Send your resumes to careers@dataceria.com.
As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.
Your Responsibilities Will Include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic frontend interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms such as Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.
What We're Looking For:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.
Joining: Immediate. Work location: Pune (hybrid). Open positions: Senior Azure Data Engineer. If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.
Dataceria Software Solutions Pvt Ltd. Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/ Email: careers@dataceria.com
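This role pairs Azure data work with API access layers for a React.js frontend. A compact, hedged sketch of such a layer using FastAPI and pyodbc follows; the connection string, table, and endpoint shape are assumptions for illustration, not the client's actual design.

```python
# Minimal data-service sketch: expose warehouse rows over REST. The
# parameterized query is deliberate, since the role stresses data security.
from fastapi import FastAPI, HTTPException
import pyodbc

app = FastAPI()
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=example.database.windows.net;Database=dwh;..."  # placeholder
)

@app.get("/positions/{account_id}")
def get_positions(account_id: int):
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.execute(
            "SELECT instrument, quantity FROM dbo.positions WHERE account_id = ?",
            account_id,
        ).fetchall()
    if not rows:
        raise HTTPException(status_code=404, detail="account not found")
    return [{"instrument": r.instrument, "quantity": r.quantity} for r in rows]
```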
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your role and responsibilities: Create Solution Outline and Macro Design to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable component / asset / accelerator development to support capability development. Participate in customer presentations as Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and work as design authority.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.
Preferred technical and professional experience: Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
Posted 2 weeks ago
2.0 - 7.0 years
5 - 15 Lacs
Hyderabad, Bengaluru
Work from Office
Job Description - Data Warehouse Senior Engineer / Lead. Location: Bangalore or Hyderabad.
Responsibilities: 1) Own design and development of complex data integrations from multiple systems. 2) Coordinate with onshore teams to obtain clarity on requirements, scope, etc. 3) Develop high-quality BI reports that meet the needs of the customer. 4) Good communication, interpersonal skills, and team player.
Qualifications Required: 1) Strong knowledge of Azure data warehousing and integration solutions such as Azure Data Factory and Synapse Analytics. 2) Working knowledge of the Power Platform - Power BI, Power Apps, Dataverse, Power Automate. 3) Working knowledge of Azure app integrations - Logic Apps, Function Apps. 4) Good knowledge of Azure data storage solutions - Data Lake, Cosmos DB, storage accounts, SQL Database. 5) Strong data modelling experience (snowflake, dimensional, etc.) and SQL expertise. 6) Strong data analysis skills. 7) Knowledge of Microsoft Fabric.
Optional Skills: 1) Knowledge of other data integration/streaming services (Databricks, Azure data streaming services, Event Grid, Kafka, etc.) is a plus. 2) Knowledge of the Microsoft Dynamics 365 platform, including working knowledge of exporting/importing data from Dataverse, is a plus. 3) A certification in Azure data engineering is a plus.
Posted 2 weeks ago
7.0 - 9.0 years
25 - 35 Lacs
Chennai, Bengaluru
Hybrid
Warm Greetings from Dataceria Software Solutions Pvt Ltd!
We are looking for: Senior Azure Data Engineer. Domain: BFSI.
As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.
Your Responsibilities Will Include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic frontend interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms such as Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.
What We're Looking For:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.
Joining: Immediate. Work location: Bangalore (hybrid), Chennai. Open positions: Senior Azure Data Engineer. If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.
Dataceria Software Solutions Pvt Ltd. Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/ Email: careers@dataceria.com
Posted 3 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Hyderabad
Work from Office
Job Title : Azure Synapse Developer Position Type : Permanent Experience : 5+ Years Location : Hyderabad (Work From Office / Hybrid) Shift Timings : 2 PM to 11 PM Mode of Interview : 3 rounds (Virtual/In-person) Notice Period : Immediate to 15 days Job Description : We are looking for an experienced Azure Synapse Developer to join our growing team. The ideal candidate should have a strong background in Azure Synapse Analytics, SSRS, and Azure Data Factory (ADF), with a solid understanding of data modeling, data movement, and integration. As an Azure Synapse Developer, you will work closely with cross-functional teams to design, implement, and manage data pipelines, ensuring the smooth flow of data across platforms. The candidate must have a deep understanding of SQL and ETL processes, and ideally, some exposure to Power BI for reporting and dashboard creation. Key Responsibilities : - Develop and maintain Azure Synapse Analytics solutions, ensuring scalability, security, and performance. - Design and implement data models for efficient storage and retrieval of data in Azure Synapse. - Utilize Azure Data Factory (ADF) for ETL processes, orchestrating data movement, and integrating data from various sources. - Leverage SSIS/SSRS/SSAS to build, deploy, and maintain data integration and reporting solutions. - Write and optimize SQL queries for data manipulation, extraction, and reporting. - Collaborate with business analysts and other stakeholders to understand reporting needs and create actionable insights. - Perform performance tuning on SQL queries, pipelines, and Synapse workloads to ensure high performance. - Provide support for troubleshooting and resolving data integration and performance issues. - Assist in setting up automated data processes and create reusable templates for data integration. - Stay updated on Azure Synapse features and tools, recommending improvements to the data platform as appropriate. Required Skills & Qualifications : - 5+ years of experience as a Data Engineer or Azure Synapse Developer. - Strong proficiency in Azure Synapse Analytics (Data Warehouse, Data Lake, and Analytics). - Solid understanding and experience in data modeling for large-scale data architectures. - Expertise in SQL for writing complex queries, optimizing performance, and managing large datasets. - Hands-on experience with Azure Data Factory (ADF) for data integration, ETL processes, and pipeline creation. - SSRS (SQL Server Reporting Services) and SSIS (SQL Server Integration Services) expertise. - Power BI knowledge (basic to intermediate) for reporting and data visualization. - Familiarity with SSAS (SQL Server Analysis Services) and OLAP concepts is a plus. - Experience in troubleshooting and optimizing complex data processing tasks. - Strong communication and collaboration skills to work effectively in a team-oriented environment. - Ability to quickly adapt to new tools and technologies in the Azure ecosystem.
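Performance tuning of SQL queries features prominently above. One classic instance of the technique, shown from Python via pyodbc with placeholder connection details: rewriting a non-sargable predicate into a range predicate the optimizer can use.

```python
# Tuning sketch: the commented "slow" form wraps the column in a function,
# which blocks index/statistics use; the range form below is sargable.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};Server=example-synapse;..."  # placeholder
)

# Slow: SELECT sale_id, amount FROM dbo.sales WHERE YEAR(sale_date) = 2024
rows = conn.execute(
    """
    SELECT sale_id, amount
    FROM dbo.sales
    WHERE sale_date >= '2024-01-01' AND sale_date < '2025-01-01'
    """
).fetchall()
print(len(rows), "rows")
```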
Posted 3 weeks ago
5 - 10 years
8 - 14 Lacs
Kolkata
Work from Office
Role : Data Engineer - Azure Synapse Analytics - Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects) - Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage - Experience implementing automated Synapse pipelines - Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio - Experience integrating Synapse notebooks and Data Flow - Able to troubleshoot pipelines - Strong T-SQL programming skills, or with any other flavor of SQL - Experience working with high-volume data and large objects - Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines - Good understanding of data modelling for data warehouses and data marts - Experience with big data components like Hive, Sqoop, HDFS, and Spark - Strong verbal and written communication skills - Ability to learn, contribute, and grow in a fast-paced environment.
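For a sense of the Synapse Studio notebook work this role describes, here is a hedged sketch that ingests JSON from ADLS Gen2 and aggregates it with Spark SQL; the storage account, container, and schema are assumptions, and workspace linked services would handle authentication.

```python
# Synapse-notebook-style cell: read raw JSON, aggregate via SQL (T-SQL skills
# transfer naturally to Spark SQL here), and persist a curated output.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("synapse-ingest-sketch").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")
raw.createOrReplaceTempView("events")

daily = spark.sql("""
    SELECT CAST(event_ts AS DATE) AS event_date, COUNT(*) AS event_count
    FROM events
    GROUP BY CAST(event_ts AS DATE)
""")
daily.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/daily_events/"
)
```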
Posted 1 month ago
5 - 10 years
8 - 14 Lacs
Ahmedabad
Work from Office
Role : Data Engineer - Azure Synapse Analytics - Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects) - Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage - Experience implementing automated Synapse pipelines - Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio - Experience integrating Synapse notebooks and Data Flow - Able to troubleshoot pipelines - Strong T-SQL programming skills, or with any other flavor of SQL - Experience working with high-volume data and large objects - Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines - Good understanding of data modelling for data warehouses and data marts - Experience with big data components like Hive, Sqoop, HDFS, and Spark - Strong verbal and written communication skills - Ability to learn, contribute, and grow in a fast-paced environment.
Posted 1 month ago
5 - 10 years
8 - 14 Lacs
Jaipur
Work from Office
Role : Data Engineer - Azure Synapse Analytics - Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects) - Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage - Experience implementing automated Synapse pipelines - Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio - Experience integrating Synapse notebooks and Data Flow - Able to troubleshoot pipelines - Strong T-SQL programming skills, or with any other flavor of SQL - Experience working with high-volume data and large objects - Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines - Good understanding of data modelling for data warehouses and data marts - Experience with big data components like Hive, Sqoop, HDFS, and Spark - Strong verbal and written communication skills - Ability to learn, contribute, and grow in a fast-paced environment.
Posted 1 month ago
5 - 10 years
8 - 14 Lacs
Mumbai
Work from Office
Role : Data Engineer - Azure Synapse Analytics - Experience in data engineering projects using the Microsoft Azure platform (minimum 2-3 projects) - Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob Storage - Experience implementing automated Synapse pipelines - Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio - Experience integrating Synapse notebooks and Data Flow - Able to troubleshoot pipelines - Strong T-SQL programming skills, or with any other flavor of SQL - Experience working with high-volume data and large objects - Experience working in DevOps environments integrated with Git for version control and CI/CD pipelines - Good understanding of data modelling for data warehouses and data marts - Experience with big data components like Hive, Sqoop, HDFS, and Spark - Strong verbal and written communication skills - Ability to learn, contribute, and grow in a fast-paced environment.
Posted 1 month ago
2 - 7 years
9 - 13 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Azure Data Engineer with 2 to 7 years of experience to join our team. The ideal candidate should have expertise in Azure Synapse Analytics, PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL, and other relevant technologies. ### Roles and Responsibilities Design, develop, and implement data pipelines using Azure Data Factory or Azure Synapse Analytics. Develop and maintain data warehouses or data lakes using various tools and technologies. Work with various types of data sources including flat files, JSON, and databases. Build workflows and pipelines in Azure Synapse Analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Ensure data quality and integrity by implementing data validation and testing procedures. ### Job Requirements Hands-on experience in Azure Data Factory or Azure Synapse Analytics. Experience in data warehouse or data lake development. Strong knowledge of Spark, Python, and DWH concepts. Ability to build workflows and pipelines in Azure Synapse Analytics. Fair knowledge of Microsoft Fabric & One Lake, SSIS, ADO, and other relevant technologies. Strong analytical, interpersonal, and collaboration skills. Must Have: Azure Synapse Analytics with PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL. Good to have: Azure data bricks, Microsoft Fabric & One Lake, SSIS, ADO.
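The posting asks for comfort with flat files, JSON, and databases as sources. A short sketch of reading each with PySpark, using placeholder paths and JDBC details; credentials would normally come from Key Vault rather than code.

```python
# Three common source types unified into one DataFrame; every path, server,
# and column here is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sources-sketch").getOrCreate()

flat = (
    spark.read.option("header", True).option("delimiter", "|")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/flat/")
)
nested = (
    spark.read.option("multiline", True)
    .json("abfss://raw@examplelake.dfs.core.windows.net/json/")
)
db = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example.database.windows.net;database=src")
    .option("dbtable", "dbo.customers")
    .option("user", "etl_user")
    .option("password", "<from-key-vault>")  # placeholder
    .load()
)

combined = flat.select("id", "name").unionByName(nested.select("id", "name"))
```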
Posted 1 month ago
5 - 10 years
13 - 17 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Data Engineering Lead to join our team. The ideal candidate will have 5-10 years of experience in designing and implementing scalable data lake architectures and data pipelines. ### Roles and Responsibility Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Must-have skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps, experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark streaming, and integration with business intelligence tools such as Power BI. Good-to-have skills: big data technologies (e.g., Hadoop, Spark), data security. General skills: experience with Agile and DevOps methodologies and the software development lifecycle; proactive and responsible for deliverables; escalates dependencies and risks; works with most DevOps tools with limited supervision; completes assigned tasks on time and provides regular status reports; trains new team members; and builds strong relationships with project stakeholders. ### Job Requirements Minimum 5 years of experience in designing and implementing scalable data lake architecture and data pipelines. Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, and Azure DevOps. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark streaming. Familiarity with big data file formats like Parquet and Avro. Ability to work with multi-cultural global teams and virtually. Knowledge of cloud solutions such as Azure or AWS with DevOps/Cloud certifications is desired. Proactive and responsible for deliverables. Escalates dependencies and risks. Works with most DevOps tools with limited supervision. Completes assigned tasks on time and provides regular status reports. Trains new team members and builds strong relationships with project stakeholders.
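For the streaming requirement (Azure Event Hub with Spark streaming), one hedged approach is Structured Streaming against Event Hubs' Kafka-compatible endpoint; the namespace, topic, and paths below are placeholders, and the connection string would be injected from secure configuration.

```python
# Streaming sketch: consume events from Event Hubs via its Kafka surface and
# land them as Delta with checkpointing so the sink can recover after failure.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

jaas = (
    'org.apache.kafka.common.security.plain.PlainLoginModule required '
    'username="$ConnectionString" password="<event-hubs-connection-string>";'
)

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")
    .option("subscribe", "sensor-events")
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("kafka.sasl.jaas.config", jaas)
    .load()
)

parsed = stream.select(F.col("value").cast("string").alias("json"))

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "abfss://chk@examplelake.dfs.core.windows.net/sensor/")
    .start("abfss://curated@examplelake.dfs.core.windows.net/sensor/")
)
```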
Posted 1 month ago
8 - 10 years
13 - 17 Lacs
Kochi
Work from Office
We are looking for a skilled Data Engineering Lead with 8 to 10 years of experience, based in Bengaluru. The ideal candidate will have a strong background in designing and implementing scalable data lake architecture and data pipelines. ### Roles and Responsibility Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream analytics, Spark streaming. Experience in integrating with business intelligence tools such as Power BI. ### Job Requirements Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and Azure DataBricks. Proficiency in Python (PySpark, Numpy), SQL, ETL, and data warehousing. Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools, limited supervision, and completes assigned tasks on time with regular status reporting. Ability to train new team members and build strong relationships with project stakeholders. Knowledge of cloud solutions such as Azure or AWS with DevOps/Cloud certifications is desired. Ability to work with multi-cultural global teams virtually. Completion of assigned tasks on time and regular status reporting.
Posted 1 month ago
2 - 7 years
9 - 13 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Azure Data Engineer with 2 to 7 years of experience to join our team. The ideal candidate will have expertise in Azure Synapse Analytics, PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL, and other relevant technologies. ### Roles and Responsibilities Design, develop, and implement data pipelines using Azure Data Factory or Azure Synapse Analytics. Develop and maintain data warehouses or data lakes using various tools and technologies. Build workflows and pipelines in Azure Synapse Analytics to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Ensure data quality and integrity by implementing data validation and testing procedures. Troubleshoot and resolve technical issues related to data engineering and analytics. ### Job Requirements Hands-on experience in Azure Data Factory or Azure Synapse Analytics is required. Experience in handling data in datastores such as Azure SQL, T-SQL, and SQL DW is necessary. Ability to work with various types of data sources including flat files, JSON, and databases. Strong analytical, interpersonal, and collaboration skills are essential. Fair knowledge of Spark, Python, and DWH concepts is expected. Experience in CI/CD and build automations for deployment is preferred.
Posted 1 month ago
1 - 4 years
1 - 3 Lacs
Raipur
Work from Office
ROLES AND RESPONSIBILITIES:
• Serve as the primary contact for customer issues via ticketing systems, email, or phone.
• Troubleshoot and resolve technical issues, escalating when necessary.
• Analyze logs, reproduce issues, and perform root cause analysis.
• Collaborate with relevant stakeholders to resolve bugs and deploy fixes.
• Write SQL queries and perform API testing for data and integration support.
• Support deployments in staging/production environments.
• Maintain documentation and mentor junior team members.
DESIRED SKILLS:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 1 to 2 years of IT experience in software support.
• Strong working knowledge of Windows and/or Linux operating systems.
• Basic scripting or programming skills (e.g., Python or similar).
• Basic SQL knowledge.
• Familiarity with REST APIs, JSON, and tools like Postman or equivalent.
• Solid understanding of issue tracking/ticketing systems such as Jira or ServiceNow.
• Strong analytical and communication skills with a customer-first approach.
• Familiarity with QLIK or similar business intelligence tools.
• Knowledge of Azure Databricks and Synapse is an added advantage.
COMMUNICATION: • Excellent communication and interpersonal skills.
CERTIFICATIONS: • Experience with ETL tools and processes. • ITIL Foundation certification or equivalent.
EDUCATION: 15 years of full-time education.
WORK LOCATION: • Raipur (Chhattisgarh) • Willing to travel short-term to client locations.
WORKING HOURS: • Due to the nature of the work, you may be expected to work shifts or be on call, and it may be necessary to work extra hours to finish a job. • Rotational shifts of 9 hours: 8 AM to 5 PM, 4 PM to 12 AM, and 12 AM to 9 AM.
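The duties above include SQL queries and API testing with tools like Postman or equivalent; the same checks can be scripted. A tiny sketch using Python's requests library, with an invented endpoint and response contract:

```python
# Scripted API smoke test of the kind a support engineer might automate; the
# base URL and expected fields are hypothetical.
import requests

BASE = "https://example-app.internal/api"  # placeholder

resp = requests.get(f"{BASE}/tickets/1042", timeout=10)
assert resp.status_code == 200, f"unexpected status: {resp.status_code}"

body = resp.json()
# Verify the fields the integration contract promises.
assert body.get("status") in {"open", "closed", "pending"}
print("ticket", body.get("id"), "status", body.get("status"))
```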
Posted 1 month ago
4 - 6 years
12 - 14 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Strong hands-on experience with Azure Databricks, PySpark, and ADF. Advanced expertise in Azure SQL DB, Synapse Analytics, and Azure Data Lake. Familiar with Azure Analysis Services, Azure SQL, and CI/CD (Azure DevOps). Proficient in data modeling, SQL Server best practices, and BI/Data Warehousing architecture. Agile methodologies: ADO, Scrum, Kanban, Lean. Collaborate with business/technical teams to design scalable data solutions. Architect and implement data pipelines and models. Provide technical leadership, code reviews, and best-practice guidance. Support the end-to-end lifecycle: estimation, design, development, deployment. Risk/issue management and solution recommendation. Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
Posted 1 month ago
4 - 8 years
7 - 17 Lacs
Pune, Hyderabad
Work from Office
Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications. Perks and benefits: Mention available facilities and benefits the company is offering with this job.
Posted 2 months ago
8 - 13 years
13 - 23 Lacs
Hyderabad, Noida, Jaipur
Hybrid
5 to 8 years of solutions design & development experience. Experience in building data ingestion/transformation pipelines on Azure Cloud. Experience in big data tools like Spark, Delta Lake, ADLS, and Azure Synapse / Databricks. Proficient understanding of distributed computing principles.
Posted 2 months ago
4 - 6 years
8 - 14 Lacs
Ahmedabad
Work from Office
The successful applicant will be working within a highly specialised and growing team to enable delivery of data and advanced analytics system capability.
Roles and Responsibility:
- Develop and implement a reusable architecture of data pipelines to make data available for various purposes including Machine Learning (ML), analytics, and reporting
- Work collaboratively as part of a team, engaging with system architects, data scientists, and the business in a healthcare context
- Define hardware, tools, and software to enable the reusable framework for data sharing and ML model productionization
- Work comfortably with structured and unstructured data in a variety of programming languages such as SQL, R, Python, Java, etc.
- Understanding of distributed programming, and advising data scientists on how to optimally structure program code for maximum efficiency
- Build data solutions that leverage controls to ensure privacy, security, compliance, and data quality
- Understand metadata management systems and orchestration architecture in the designing of ML/AI pipelines
- Deep understanding of cutting-edge cloud technology and frameworks to enable data science
- System integration skills between business intelligence and source transactional systems
- Improving the overall production landscape as required
- Define strategies with data scientists to monitor models post-production
- Write unit tests and participate in code reviews
Skill Requirement:
- Expert in programming languages such as R, Python, Scala, and Java
- Expert database knowledge in SQL and experience with MS Azure tools such as Data Factory, Synapse Analytics, Data Lake, Databricks, Azure Stream Analytics, and Power BI
- Modern Azure data warehouse skills
- Expert Unix/Linux admin experience including shell script development
- Exposure to AI or model development
- Experience working on large and complex datasets
- Understanding and application of big data and distributed computing principles (Hadoop and MapReduce)
- ML model optimization skills in a production environment
- Production environment machine learning and AI
- DevOps/DataOps and CI/CD experience
Additional technical skills: AWS experience.
Qualification: Bachelor's or Master's degree in Computer Science or a related field.
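The responsibilities include writing unit tests and participating in code reviews. A minimal pytest-style sketch of testing a small data transformation; the function and its rules are hypothetical, not the employer's codebase.

```python
# A pure transformation plus two focused tests; run with `pytest`.
def normalize_reading(raw: dict) -> dict:
    """Clamp negative sensor values and standardize the unit label."""
    value = max(0.0, float(raw["value"]))
    return {
        "sensor": raw["sensor"],
        "value": value,
        "unit": raw.get("unit", "si").lower(),
    }

def test_negative_values_clamped():
    assert normalize_reading({"sensor": "a", "value": -3})["value"] == 0.0

def test_unit_defaults_and_lowercases():
    assert normalize_reading({"sensor": "a", "value": 1, "unit": "PSI"})["unit"] == "psi"
```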
Posted 2 months ago
5 - 8 years
18 - 19 Lacs
Chennai, Pune, Noida
Work from Office
Key Responsibilities
AI/ML Development: Design, develop, and deploy AI/ML models (e.g., PINNs, RNNs, Gradient Boosting) to simulate, predict, and optimize drilling fluid performance. Develop real-time anomaly detection systems for sensor data from drilling operations. Implement time-series analysis models for monitoring and forecasting drilling fluid properties.
Data Management: Integrate and manage large-scale data from various sources (sensors, lab results, historical data) into Azure Data Lake or Synapse Analytics. Perform data cleaning, preprocessing, and exploratory analysis to identify actionable insights.
Cloud Integration: Deploy and scale AI solutions using Azure Machine Learning, Azure Kubernetes Service (AKS), and Azure Stream Analytics. Build and automate workflows using Azure Logic Apps and IoT Hub for seamless integration with drilling systems.
Location: Pune, Noida, Chennai, Trichy
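As a hedged illustration of the real-time anomaly detection described above (not the employer's actual method), here is a rolling z-score over a drilling-fluid sensor series; the column name, window, and threshold are assumptions.

```python
# Flag readings that drift more than `threshold` rolling standard deviations
# from the rolling mean; early rows without a full window are left unflagged.
import pandas as pd

def flag_anomalies(
    readings: pd.DataFrame, window: int = 60, threshold: float = 3.0
) -> pd.DataFrame:
    mean = readings["viscosity"].rolling(window, min_periods=window).mean()
    std = readings["viscosity"].rolling(window, min_periods=window).std()
    out = readings.copy()
    out["anomaly"] = (out["viscosity"] - mean).abs() > threshold * std
    return out

# Usage sketch: df = flag_anomalies(sensor_df.sort_values("timestamp"))
```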
Posted 2 months ago