5.0 - 10.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Neudesic, an IBM Company, is home to some very smart, talented and motivated people. People who want to work for an innovative company that values their skills and keeps their passions alive with new challenges and opportunities. We have created a culture of innovation that makes Neudesic not only an industry leader but also a career destination for today's brightest technologists. You can see it in our year-over-year growth, made possible by satisfied employees dedicated to delivering the right solutions to our clients.

Must Have Skills:
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats such as JSON and Parquet
- Experience creating ADF pipelines to source and process data sets
- Experience creating Databricks notebooks to cleanse, transform, and enrich data sets
- Good understanding of SQL, databases, NoSQL databases, data warehouses, Hadoop, and the various data storage options on the cloud
- Development experience in orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience handling operations and integration with a source repository
- Good knowledge of data warehouse concepts and data warehouse modelling

Good to Have Skills:
- Familiarity with DevOps, Agile/Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical / problem-solving skills
- Strong communication skills
- Good experience with unit testing, integration testing, and UAT support
- Able to design and code reusable components and functions
- Able to review design and code, and provide review comments with justification
- Zeal to learn and adopt new tools and technologies
- Power BI and Data Catalog experience
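For illustration, the cleanse/transform/enrich notebook work this role describes reduces to record-level operations like the following plain-Python sketch (in Databricks this would be PySpark DataFrame transformations; the field names and lookup are hypothetical):

```python
def cleanse(records):
    """Drop records missing a primary key and normalize string fields."""
    cleaned = []
    for rec in records:
        if rec.get("customer_id") is None:
            continue  # reject rows without a key
        cleaned.append({
            "customer_id": rec["customer_id"],
            "name": (rec.get("name") or "").strip().title(),
            "country": (rec.get("country") or "unknown").lower(),
        })
    return cleaned

def enrich(records, region_lookup):
    """Join a region attribute onto each record (a broadcast-join analogue)."""
    return [{**rec, "region": region_lookup.get(rec["country"], "other")}
            for rec in records]

raw = [
    {"customer_id": 1, "name": "  alice  ", "country": "IN"},
    {"customer_id": None, "name": "ghost", "country": "US"},
    {"customer_id": 2, "name": "bob", "country": None},
]
result = enrich(cleanse(raw), {"in": "apac", "us": "amer"})
```

The same shape (filter bad rows, normalize, join reference data) carries over directly to PySpark `filter`/`withColumn`/`join` calls.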
Posted 2 months ago
7.0 - 9.0 years
9 - 11 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
We're Hiring: Data Governance Lead

Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India)
Primary Job Location: Mumbai / Hyderabad / Indore / Ahmedabad (Work from Office)
Compensation Range: Competitive | Based on experience and expertise
To Apply, Share Your Resume With: Current CTC, Expected CTC, Notice Period, Preferred Location

Key Responsibilities

1. Governance Strategy & Stakeholder Enablement
- Define and drive enterprise-level data governance frameworks and policies
- Align governance objectives with compliance, analytics, and business priorities
- Work with IT, Legal, Compliance, and Business teams to drive adoption
- Conduct training, workshops, and change management programs

2. Microsoft Purview Implementation & Administration
- Administer Microsoft Purview: accounts, collections, RBAC, and scanning policies
- Design scalable governance architecture for large-scale data environments (>50 TB)
- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake

3. Metadata & Data Lineage Management
- Design metadata repositories and workflows
- Ingest technical/business metadata via ADF, REST APIs, PowerShell, Logic Apps
- Validate end-to-end lineage (ADF → Synapse → Power BI), impact analysis, and remediation

4. Data Classification & Security
- Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies
- Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager
- Enforce lifecycle policies, records management, and information barriers

Also required: working knowledge of GDPR, HIPAA, SOX, CCPA, and strong communication and leadership to bridge technical and business governance.
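As a rough sketch of the sensitivity-label classification work described here, the rule-matching idea looks like this in plain Python (the regex rules are illustrative only; a real deployment relies on Purview's built-in classifiers and MIP sensitivity labels, not hand-rolled patterns):

```python
import re

# Illustrative classification rules, mapping a label to a detection pattern.
RULES = {
    "PII.Email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PII.Phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "PCI.CardNumber": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(value):
    """Return the set of sensitivity labels whose pattern matches the value."""
    return {label for label, pattern in RULES.items() if pattern.search(value)}

labels = classify("Contact alice@example.com or +91 98765 43210")
```

A scanner would apply such rules to sampled column values and attach the resulting labels to the catalog asset.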
Posted 2 months ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight)
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow)
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies
- Design and implement real-time and batch data processing solutions
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates
- Ensure data security, governance, and compliance with industry best practices
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization
- Perform unit testing and validation to ensure data integrity and reliability

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering
- Strong hands-on experience with PySpark (Apache Spark with Python)
- Expertise in SQL, the DataFrame API, and RDD transformations
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL)
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow)
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.)
- Excellent problem-solving, debugging, and performance optimization skills
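The partitioning strategy this role highlights is about routing equal keys to the same worker. A minimal pure-Python analogue of hash partitioning (Spark's HashPartitioner uses a different hash function; this is only the concept):

```python
import hashlib

def partition_key(key, num_partitions):
    """Stable hash partitioning: the same key always lands in the same bucket."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

def partition(records, key_field, num_partitions):
    """Group records into partitions so equal keys end up together."""
    buckets = {i: [] for i in range(num_partitions)}
    for rec in records:
        buckets[partition_key(rec[key_field], num_partitions)].append(rec)
    return buckets

events = [{"user": u, "n": i} for i, u in enumerate(["a", "b", "a", "c", "b"])]
buckets = partition(events, "user", 4)
```

Co-locating keys this way is what lets joins and aggregations run without a second shuffle.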
Posted 2 months ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineering roles with Databricks + Azure skill sets (and a few more), but we are specifically looking for Mumbai/Pune-based, immediately available candidates only. Skills: Azure Data Engineer, Azure Databricks, Azure Data Factory, SQL. Work mode: Hybrid. An Azure Databricks job description typically outlines a role focused on designing, developing, and deploying data solutions on the Azure cloud platform using Databricks. This involves building and optimizing data pipelines, implementing ETL processes, and working with big data technologies like Spark and Delta Lake. The role often requires strong skills in Python, PySpark, and Azure services like Data Factory and Synapse.
Posted 2 months ago
10.0 - 18.0 years
10 - 15 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities
- Experience in ADF pipeline creation and testing, from different sources to AWS S3
- Installation and configuration of new ADF non-prod and prod environments
- Experience in SSIS and Synapse
- Maintaining the health and stability of all ADF environments
- Creating and managing ADF pipelines; collaborating with development teams to deploy ADF pipelines and applications to different environments
- Proactively monitoring the performance of ADF jobs and the overall environment
- Implementing and managing backup and recovery strategies for the ADF environment
- Planning and executing software upgrades and applying patches to maintain a stable and secure environment
- Providing technical support to development teams and end users for ADF-related issues
- Creating and maintaining documentation related to the ADF environment
- 10+ years of experience with ADF pipeline creation and administration
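The proactive monitoring of ADF jobs mentioned above usually amounts to polling a pipeline run until it reaches a terminal state. A hedged sketch, where `get_status` stands in for the real ADF SDK call (e.g. `pipeline_runs.get(...).status` in `azure-mgmt-datafactory`; the callable here is a simulation):

```python
import itertools
import time

def poll_pipeline(get_status, timeout_s=300, interval_s=1):
    """Poll a pipeline-run status callable until a terminal state or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("Succeeded", "Failed", "Cancelled"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("pipeline run did not finish in time")

# Simulated run: two 'InProgress' polls, then success.
statuses = itertools.chain(["InProgress", "InProgress"],
                           itertools.repeat("Succeeded"))
outcome = poll_pipeline(lambda: next(statuses), timeout_s=10, interval_s=0.01)
```

In production the loop would also log each poll and raise alerts on `Failed`.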
Posted 2 months ago
7.0 - 11.0 years
9 - 11 Lacs
Mumbai, Indore, Hyderabad
Work from Office
We're Hiring: Data Governance Lead

Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India)
Primary Job Location: Hyderabad / Indore / Ahmedabad (Onsite Role)
Compensation Range: Competitive | Based on experience and expertise
To Apply, Share Your Resume With: Current CTC, Expected CTC, Notice Period, Preferred Location

Key Responsibilities

1. Governance Strategy & Stakeholder Enablement
- Define and drive enterprise-level data governance frameworks and policies
- Align governance objectives with compliance, analytics, and business priorities
- Work with IT, Legal, Compliance, and Business teams to drive adoption
- Conduct training, workshops, and change management programs

2. Microsoft Purview Implementation & Administration
- Administer Microsoft Purview: accounts, collections, RBAC, and scanning policies
- Design scalable governance architecture for large-scale data environments (>50 TB)
- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake

3. Metadata & Data Lineage Management
- Design metadata repositories and workflows
- Ingest technical/business metadata via ADF, REST APIs, PowerShell, Logic Apps
- Validate end-to-end lineage (ADF → Synapse → Power BI), impact analysis, and remediation

4. Data Classification & Security
- Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies
- Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager
- Enforce lifecycle policies, records management, and information barriers

Also required: working knowledge of GDPR, HIPAA, SOX, CCPA, and strong communication and leadership to bridge technical and business governance.
Posted 2 months ago
6.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
Job Title: Azure Data Factory Engineer
Location State: Maharashtra
Location City: Pune
Experience Required: 6 to 8 Years
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED

About The Client: The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries.

About The Job: A minimum of 5 years' experience with large SQL data marts and expert relational database experience. The candidate should demonstrate the ability to navigate through massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners. Experience in troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL import of large volumes of data extracted from multiple systems; and capacity planning.

Essential Job Functions: Strong knowledge of Extraction, Transformation and Loading (ETL) processes using frameworks like Azure Data Factory, Synapse, or Databricks; establishing cloud connectivity between systems like ADLS, ADF, Synapse, Databricks, etc.

Qualifications / Skill Required: Digital: PySpark, Azure Data Factory

How to Apply: Interested candidates are invited to submit their resume using the apply online button on this job post.

About VARITE: VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India.
VARITE is currently a primary and direct vendor to leading corporations in the verticals of Networking, Cloud Infrastructure, Hardware and Software, Digital Marketing and Media Solutions, Clinical Diagnostics, Utilities, Gaming and Entertainment, and Financial Services.

Equal Opportunity Employer: VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.

Unlock Rewards: Refer Candidates and Earn. If you're not available or interested in this opportunity, please pass it along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus if the referred candidate completes a three-month assignment with VARITE, on the following scale:

Experience Required - Referral Bonus
0 - 2 Yrs. - INR 5,000
2 - 6 Yrs. - INR 7,500
6+ Yrs. - INR 10,000
Posted 2 months ago
10.0 - 15.0 years
40 - 65 Lacs
Bengaluru
Work from Office
Design and lead scalable data architectures, cloud solutions, and analytics platforms using Azure. Drive data governance, pipeline optimization, and team leadership to enable business-aligned data strategies in the Oil & Gas sector.

Required Candidate Profile: An experienced data architect or leader with 10–15+ years in Azure, big data, and solution design; strong in stakeholder management, data governance, and Oil & Gas analytics.
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Anywhere in India
Timings: 1:00 pm - 10:00 pm
Work Mode: WFO (Mon-Fri)

Job Summary: We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage, including SCD Type 2 handling
- Collaborate with data scientists, analysts, backend and product teams to define data requirements and deliver impactful data solutions
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools
- Build logical and physical data models using any data modeling tool
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging
- Guarantee adherence of data systems to privacy regulations and organizational policies
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of practical experience in a data engineering or comparable role
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java)
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt)
- Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse)
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and other data tools
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning, with data modeling tools such as Erwin Data Modeler or MySQL Workbench
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach
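The SCD Type 2 handling this posting calls out can be sketched in plain Python (the key, the single tracked attribute `city`, and the date handling are illustrative; a real pipeline would express this as a Delta Lake or warehouse MERGE):

```python
from datetime import date

def scd2_apply(dimension, incoming, today):
    """Apply SCD Type 2: expire the current row on change, insert a new version.

    Dimension rows carry: key, city, valid_from, valid_to, is_current.
    Note: expired rows are mutated in place for brevity.
    """
    current = {r["key"]: r for r in dimension if r["is_current"]}
    out = list(dimension)
    for rec in incoming:
        cur = current.get(rec["key"])
        if cur and cur["city"] == rec["city"]:
            continue  # unchanged, nothing to do
        if cur:  # close out the old version
            cur["valid_to"], cur["is_current"] = today, False
        out.append({"key": rec["key"], "city": rec["city"],
                    "valid_from": today, "valid_to": None, "is_current": True})
    return out

dim = [{"key": 1, "city": "Pune", "valid_from": date(2024, 1, 1),
        "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, [{"key": 1, "city": "Chennai"}], date(2025, 6, 1))
```

After the run, the Pune row is closed with an end date and a current Chennai row exists alongside it, preserving full history.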
Posted 2 months ago
5.0 - 8.0 years
5 - 15 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Tamil Nadu
Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)

About the Role: We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage
- Collaborate with data scientists, analysts, and product teams to define data requirements and deliver impactful data solutions
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging
- Guarantee adherence of data systems to privacy regulations and organizational policies
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team

Required Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum of 5 years of practical experience in a data engineering or comparable role
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java)
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect)
- Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse)
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and the contemporary data stack
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach
Posted 2 months ago
8.0 - 12.0 years
5 - 7 Lacs
Delhi, India
On-site
Required Qualifications:
- Proven experience in administering and managing Microsoft Fabric
- Strong background in Azure Data Services (Data Lake, Synapse, Azure SQL, etc.)
- Expertise in Power BI service administration, migration, and optimization
- Knowledge of Microsoft Purview, data security, and governance frameworks is a definite plus
- Experience with Role-Based Access Control (RBAC) and data security best practices
- Strong understanding of data integration, ETL, and data warehousing concepts
- Familiarity with Azure networking, storage, and identity management (AAD, IAM, etc.)
- Scripting experience with PowerShell, Python, or other automation tools is a plus

Preferred Qualifications:
- Hands-on experience with Data Fabric and enterprise-level data migration projects
- Experience working in large-scale data environments

Soft Skills:
- Strong problem-solving and analytical skills
- Ability to communicate technical details to both technical and non-technical stakeholders
- Collaborative and team-oriented mindset
- Proactive and able to work independently on complex tasks
Posted 2 months ago
2.0 - 4.0 years
4 - 6 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Type: Contract (36 Months Project)
Location: Remote - Bengaluru, Hyderabad, Delhi/NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).
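The lineage tracking this role supports enables impact analysis: given a change to one asset, find everything downstream of it. A minimal sketch over a hypothetical lineage graph (asset names are invented; a catalog like Purview exposes equivalent edges through its lineage API):

```python
from collections import deque

# Illustrative producer -> consumer lineage edges.
LINEAGE = {
    "sql.orders": ["adf.copy_orders"],
    "adf.copy_orders": ["synapse.stg_orders"],
    "synapse.stg_orders": ["synapse.fact_orders"],
    "synapse.fact_orders": ["powerbi.sales_report"],
}

def downstream(asset):
    """Breadth-first traversal: every asset affected if `asset` changes."""
    seen, queue = set(), deque([asset])
    while queue:
        for nxt in LINEAGE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

impacted = downstream("sql.orders")
```

Changing the source table here flags the ADF copy, both Synapse tables, and the Power BI report for review.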
Posted 2 months ago
8.0 - 10.0 years
8 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Responsibilities: Collaborate with customers to create scalable and secure Azure solutions. Develop and deploy Java applications on Azure with DevOps integration. Automate infrastructure provisioning using Terraform and manage CI/CD pipelines. Ensure system security and compliance in Azure environments. Provide expert guidance on Azure services, identity management, and DevOps best practices. Design, configure, and manage Azure services, including Azure Synapse, security, DNS, databases, App Gateway, Front Door, Traffic Manager, and Azure Automation. Core Skills: Expertise in Azure services (Synapse, DNS, App Gateway, Traffic Manager, etc.). Experience with Java-based application deployment and CI/CD pipelines. Proficiency in Microsoft Entra ID, Office 365 integration, and Terraform. Strong knowledge of cloud security and DevOps best practices.
Posted 2 months ago
2.0 - 4.0 years
4 - 7 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Type: Contract (36 Months Project)
Location: Remote - Delhi/NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).
Posted 2 months ago
10.0 - 15.0 years
10 - 15 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities Design, develop, and implement end-to-end data architecture solutions. Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric. Architect scalable, secure, and high-performing data solutions. Work on data strategy, governance, and optimization. Implement and optimize Power BI dashboards and SQL-based analytics. Collaborate with cross-functional teams to deliver robust data solutions. Primary Skills Required Data Architecture & Solutioning Azure Cloud (Data Services, Storage, Synapse, etc.) Databricks & Snowflake (Data Engineering & Warehousing) Power BI (Visualization & Reporting) Microsoft Fabric (Data & AI Integration) SQL (Advanced Querying & Optimization)
Posted 2 months ago
2.0 - 4.0 years
4 - 6 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Locations: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Experience: 1-3 Years Preferred
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).
Posted 2 months ago
2.0 - 7.0 years
4 - 9 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Job Title: Microsoft Purview Specialist (Junior Level)
Type: Contract (36 Months Project)
Location: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Experience: 1-3 Years Preferred
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details: Full Name, Total Experience, Relevant Microsoft Purview Experience, Current CTC, Expected CTC, Notice Period / Availability, Current Location, Preferred Location (Remote).
Posted 3 months ago
3.0 - 7.0 years
3 - 7 Lacs
Pune, Maharashtra, India
On-site
We're searching for an experienced ADF/SSIS Developer to join our data engineering team. This role is ideal for a professional with a strong background in data warehousing and a proven track record in cloud computing, particularly within the Azure ecosystem. You'll be instrumental in migrating and developing robust ETL solutions, leveraging Azure services and various database technologies.

Key Responsibilities
- Design, develop, and maintain ETL (Extract, Transform, Load) processes using Azure Data Factory (ADF) and SQL Server Integration Services (SSIS)
- Lead and participate in the migration of conventional ETL processes (SSIS) from on-premises environments to the Azure cloud landscape
- Work extensively with and manage data in cloud databases such as Azure SQL and Azure Synapse, as well as Snowflake
- Develop and optimize complex SQL queries and stored procedures, preferably within a SQL Server environment
- Collaborate with data architects and other developers to ensure data integrity, quality, and performance across all data pipelines
- Troubleshoot and resolve data-related issues, ensuring data accuracy and system reliability

Required Skills & Experience
- Data engineering / data warehousing development and operations experience: 4+ years
- Cloud computing experience: minimum 2 years, with a focus on Azure
- ETL migration: reasonable experience migrating conventional ETL (SSIS) processes from on-premises to Azure
- Database experience: Azure SQL, Azure Synapse, and Snowflake
- SQL development: proven experience, preferably with SQL Server
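ETL migrations like the one described typically replace SSIS full loads with incremental, watermark-driven extraction in ADF. A minimal sketch of the high-watermark pattern (the timestamp column and where the watermark is persisted, e.g. a control table or ADF variable, are illustrative choices):

```python
def incremental_extract(source_rows, last_watermark):
    """High-watermark incremental load: return only rows changed since the
    last run, plus the new watermark to persist for the next run."""
    changed = [r for r in source_rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in changed),
                        default=last_watermark)
    return changed, new_watermark

# ISO-8601 timestamps compare correctly as strings.
rows = [
    {"id": 1, "modified_at": "2025-01-10T08:00:00"},
    {"id": 2, "modified_at": "2025-01-12T09:30:00"},
    {"id": 3, "modified_at": "2025-01-15T11:00:00"},
]
batch, wm = incremental_extract(rows, "2025-01-11T00:00:00")
```

In ADF the same logic is expressed as a Lookup activity reading the stored watermark, a Copy activity with a filtered source query, and a final activity writing the new watermark back.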
Posted 3 months ago
8.0 - 12.0 years
12 - 16 Lacs
Pune
Work from Office
Roles & Responsibilities: Design and develop end-to-end data solutions using PySpark, Python, SQL, and Kafka, leveraging Microsoft Fabric's capabilities.

Requirements:
- Hands-on experience with Microsoft Fabric, including Lakehouse, Data Factory, and Synapse
- Strong expertise in PySpark and Python for large-scale data processing and transformation
- Deep knowledge of Azure data services (ADLS Gen2, Azure Databricks, Synapse, ADF, Azure SQL, etc.)
- Experience in designing, implementing, and optimizing end-to-end data pipelines on Azure
- Understanding of Azure infrastructure setup (networking, security, and access management) is good to have
- Healthcare domain knowledge is a plus but not mandatory
Posted 3 months ago
3.0 - 5.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Data Engineer - Azure

This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: business owners, business analytics, data engineering teams, application development, end users, and management teams.

You Will:
- Design and build resilient and efficient data pipelines for batch and real-time streaming
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable and data-driven platforms and tools
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns, and implementation approaches
- Collaborate with enterprise architects, data architects, ETL developers and engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities
- Address aspects such as data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations
- Execute projects with an Agile mindset
- Build software frameworks to solve data problems at scale

Technical Requirements:
- 3+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse; prior experience using dbt and Power BI is a plus
- Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services (firewall, storage, key vault, etc.)
- Strong programming/scripting experience using SQL, Python, and Spark
- Knowledge of software configuration management environments and tools such as JIRA, Git, Jenkins, TFS, Shell, PowerShell, Bitbucket
- Experience with Agile development methods in data-oriented projects

Other Requirements:
- Highly motivated self-starter and team player with demonstrated success in prior roles
- Track record of success working through technical challenges within enterprise organizations
- Ability to prioritize deals, training, and initiatives through highly effective time management
- Excellent problem-solving, analytical, presentation, and whiteboarding skills
- Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems
- Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations
- Certifications in Azure Data Engineering and related technologies
Posted 3 months ago
4.0 - 8.0 years
25 - 27 Lacs
Bengaluru
Hybrid
Job Summary:
We are looking for a highly skilled Azure Data Engineer with experience building and managing scalable data pipelines using Azure Data Factory, Synapse, and Databricks. The ideal candidate should be proficient in big data tools and Azure services, with strong programming knowledge and a solid understanding of data architecture and cloud platforms.

Key Responsibilities:
- Design and deliver robust data pipelines using Azure-native tools
- Work with Azure services such as ADLS, Azure SQL DB, Cosmos DB, and Synapse
- Develop ETL/ELT solutions and collaborate in cloud-native architecture discussions
- Support real-time and batch data processing using tools like Kafka, Spark, and Stream Analytics
- Partner with global teams to develop high-performing, secure, and scalable solutions

Required Skills:
- 4 to 7 years of experience in Data Engineering on the Azure platform
- Expertise in Azure Data Factory, Synapse, Databricks, Stream Analytics, and Power BI
- Hands-on with Python, Scala, SQL, C#, and Java, and with big data tools such as Spark, Hive, Kafka, and Event Hubs
- Experience with distributed systems, data governance, and large-scale data environments

Apply now to join a cutting-edge data engineering team enabling innovation through Azure cloud solutions.
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Urgent Hiring - Azure Data Engineer with a leading Management Consulting Company @ Bangalore location.

- Strong expertise in Databricks and PySpark while dealing with batch or live (streaming) data sources
- 4+ years of relevant experience in Databricks and PySpark/Scala
- 7+ years of total experience
- Good at data modelling and design
- Has worked on real data challenges and handled high volume, velocity, and variety of data
- Excellent analytical and problem-solving skills; willingness to take ownership and resolve technical challenges
- Contributes to community-building initiatives like CoE, CoP

CTC: hike shall be considered on current/last drawn pay.
Apply: rohita.robert@adecco.com

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill
8.0 - 12.0 years
15 - 20 Lacs
Hyderabad, Pune
Work from Office
1. At least 6+ years of experience in ETL & Data Warehousing
2. Should have excellent leadership & communication skills
3. Should have strong working experience with Data Lakehouse architecture
4. Should have in-depth knowledge of the SSIS ETL tool and good working knowledge of Power BI
5. Should have worked on data sources such as SAP and Salesforce
6. Should have very good knowledge of SSIS (ETL tool), StreamSets (ETL tool), Azure Cloud, ADF, Azure Synapse Analytics & Azure Event Hubs
7. Should have built solution automations in any of the above ETL tools
8. Should have executed at least 2 Azure Cloud Data Warehousing projects
9. Should have worked on at least 2 projects using Agile/SAFe methodology
10. Should have demonstrated working knowledge of ITIL V4 concepts such as Incident Management, Problem Management, Change Management & Knowledge Management
11. Should have working experience with DevOps tools like GitHub, Jenkins, etc., and with semi-structured data formats like JSON, Parquet and/or XML files, and should have written complex SQL queries for data analysis and extraction
12. Should have an in-depth understanding of Data Warehousing, Data Analysis, Data Profiling, Data Quality & Data Mapping
13. Should have cross-global-location experience and been part of a team with at least 15+ members in a global delivery model
14. Should have experience working with product managers, project managers, business users, application development team members, DBA teams, and the Data Governance team on a daily basis to analyze requirements and design, develop, and deploy technical solutions
8.0 - 12.0 years
32 - 37 Lacs
Hyderabad
Work from Office
Job Overview:
As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities:
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Demonstrate expertise with data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production); and data in transit.
- Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
- Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications:
- 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture.
- 3+ years of experience with Data Lake infrastructure, Data Warehousing, and Data Analytics tools.
- 4+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
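Editor's illustration (not part of the posting): the data-profiling and data-quality work named above, which tools like Great Expectations and Deequ formalize, reduces to assertions over column statistics. A minimal stdlib-only Python sketch; the records, column names, and the 0.8 completeness threshold are hypothetical:

```python
# Per-column null profiling and a completeness expectation over a list of
# dict records. This is a toy sketch of the kind of rule that data-quality
# frameworks formalize; sample data and thresholds are made up.

def profile_nulls(records):
    """Count missing (None) values per column across a list of dicts."""
    columns = {key for row in records for key in row}
    return {col: sum(1 for row in records if row.get(col) is None)
            for col in columns}

def expect_completeness(records, column, min_ratio=0.8):
    """True if at least min_ratio of rows have a non-null value in column."""
    non_null = sum(1 for row in records if row.get(column) is not None)
    return non_null / len(records) >= min_ratio

records = [
    {"id": 1, "region": "EMEA", "revenue": 120.5},
    {"id": 2, "region": None,   "revenue": 98.0},
    {"id": 3, "region": "APAC", "revenue": None},
]

print(profile_nulls(records))              # {'id': 0, 'region': 1, 'revenue': 1} (key order may vary)
print(expect_completeness(records, "id"))  # True: id is fully populated
```

In production such checks would run inside a pipeline step (e.g. a Databricks notebook) against DataFrame statistics rather than Python dicts, but the assertion shape is the same.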