8.0 - 10.0 years
10 - 12 Lacs
Chennai
Hybrid
Job Title: Product Owner / Subject Matter Expert (AI & Data)
Experience Required: 10+ years
Location: The selected candidate is required to work onsite for the initial 1 to 3-month project training and execution period at either our Kovilpatti or Chennai location, which will be confirmed during the onboarding process. After the initial period, remote work opportunities will be offered.

Job Description:
The Product Owner / Subject Matter Expert (AI & Data) will lead the definition, prioritization, and successful delivery of intelligent, data-driven products by aligning business needs with AI/ML and data platform capabilities. Acting as a bridge between stakeholders, data engineering teams, and AI developers, this role ensures that business goals are translated into actionable technical requirements. The candidate will manage product backlogs, define epics and features, and guide cross-functional teams throughout the product development lifecycle. They will play a crucial role in driving innovation, ensuring data governance, and realizing value through AI-enhanced digital solutions.

Key Responsibilities:
- Define and manage the product roadmap across AI and data domains based on business strategy and stakeholder input.
- Translate business needs into technical requirements, user stories, and use cases for AI and data-driven applications.
- Collaborate with data scientists, AI engineers, and data engineers to prioritize features, define MVPs, and validate solution feasibility.
- Lead backlog refinement, sprint planning, and iteration reviews across multidisciplinary teams.
- Drive the adoption of AI models (e.g., LLMs, classification, prediction, recommendation) and data pipelines that support operational goals.
- Ensure inclusion of data governance, lineage, and compliance requirements in product development.
- Engage with business units to define KPIs and success metrics for AI and analytics products.
- Document product artifacts such as PRDs, feature definitions, data mappings, model selection criteria, and risk registers.
- Facilitate workshops, stakeholder demos, and solution walkthroughs to ensure ongoing alignment.
- Support responsible AI practices and secure data sharing standards.

Technical Skills:
- Product Management Tools: Azure DevOps, Jira, Confluence
- AI/ML Concepts: LLMs, NLP, predictive analytics, computer vision, generative AI
- AI Tools: OpenAI, Azure OpenAI, MLflow, LangChain, prompt engineering
- Data Platforms: Azure Data Factory, Databricks, Synapse Analytics, Purview, SQL, NoSQL
- Data Governance: Metadata management, data lineage, PII handling, classification standards
- Documentation: PRDs, data dictionaries, process flows, KPI dashboards
- Methodologies: Agile/Scrum, backlog management, MVP delivery

Qualification:
- Bachelor's or Master's in Computer Science, Data Science, Information Systems, or a related field.
- Preferred Certifications: Microsoft Certified (Azure AI Engineer Associate / Azure Data Fundamentals / Azure Data Engineer Associate).
- 10+ years of experience in product ownership, business analysis, or solution delivery in AI and data-centric environments.
- Proven success in delivering AI-enabled products and scalable data platforms.
- Strong communication, stakeholder facilitation, and technical documentation skills.
Posted 3 weeks ago
4.0 - 8.0 years
5 - 15 Lacs
Chennai, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Description (JD): Azure Databricks / ADF / Synapse, with strong emphasis on Python, SQL, Data Lake, and Data Warehouse

Job Title: Data Engineer - Azure (Databricks / ADF / Synapse)
Experience: 4 to 8 Years
Location: Pan India
Employment Type: Full-Time
Notice Period: Immediate to 30 Days

Job Summary:
We are looking for a skilled and experienced Data Engineer with 4 to 8 years of experience in building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise. Familiarity with Data Lake and Data Warehouse concepts and end-to-end data pipelines is essential.

Key Responsibilities:
- Requirement gathering and analysis
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage (an illustrative sketch follows this posting)
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources like relational DBs, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables

Required Skills:
- 4-8 years of experience in Data Engineering or related roles.
- Hands-on experience in Azure Databricks, ADF, or Synapse Analytics
- Proficiency in Python for data processing and scripting.
- Strong command of SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and Data Warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Good to Have:
- Experience with Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience in leading a development team.
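As an illustration of the hands-on skill set this posting asks for (Databricks, Python, ADLS, Delta), here is a minimal PySpark sketch of an ETL step that lands raw CSVs from Azure Data Lake Storage into a curated Delta table. All paths, container names, columns, and table names are hypothetical, and the target schema is assumed to already exist.

```python
# Minimal Databricks-style ETL sketch: raw CSVs in ADLS Gen2 -> curated Delta table.
# Storage account, container, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"  # hypothetical

orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

cleaned = (
    orders
    .dropDuplicates(["order_id"])                        # basic data-quality rule
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize the timestamp
    .filter(F.col("amount") > 0)                         # drop invalid rows
)

# Write as Delta, partitioned by date so downstream queries prune files.
(cleaned
 .withColumn("order_date", F.to_date("order_ts"))
 .write.format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("curated.orders"))  # assumes the 'curated' schema exists
```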
Posted 3 weeks ago
8.0 - 10.0 years
5 - 15 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Hybrid
Job Title: Azure Data Architect
Experience: 8 to 10 years
Location: Pan India
Employment Type: Full-Time
Notice Period: Immediate to 30 days
Technology: SQL, ADF, ADLS, Synapse, PySpark, Databricks, data modelling

Key Responsibilities:
- Requirement gathering and analysis
- Design of data architecture and data model to ingest data
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Hands-on experience with Azure Functions and other components like real-time streaming, etc.
- Oversee Azure billing processes, conducting analyses to ensure cost-effectiveness and efficiency in data operations
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources like relational DBs, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables
Posted 3 weeks ago
6.0 - 11.0 years
12 - 22 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!!

Job Description:
Exp: 6-12 yrs
Location: PAN India
Skill: Azure Data Factory/SSIS

Interested candidates can share your resume to sangeetha.spstaffing@gmail.com with the below inline details:
- Full Name as per PAN:
- Mobile No:
- Alt No/WhatsApp No:
- Total Exp:
- Relevant Exp in Data Factory:
- Rel Exp in Synapse:
- Rel Exp in SSIS:
- Rel Exp in Python/PySpark:
- Current CTC:
- Expected CTC:
- Notice Period (Official):
- Notice Period (Negotiable)/Reason:
- Date of Birth:
- PAN Number:
- Reason for Job Change:
- Offer in Pipeline (Current Status):
- Availability for virtual interview on weekdays between 10 AM - 4 PM (please mention time):
- Current Res Location:
- Preferred Job Location:
- Whether educational % in 10th std, 12th std, UG is all above 50%?
- Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
Posted 3 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Pune
Hybrid
Azure Data Engineer
Remote/Pune - Hybrid
Full-time, Permanent
Company: Academian

Job Description:
We are seeking a skilled Data Engineer with strong experience in Microsoft Azure Cloud services to design, build, and maintain robust data pipelines and architectures. In this role, you will design, implement, and maintain our data infrastructure, ensuring efficient data processing and availability throughout the organization.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines in Azure.
- Work with Azure services such as Azure Data Factory, Azure Data Lake, Synapse Analytics, Azure SQL, and Databricks.
- Implement and optimize data storage and retrieval solutions in the cloud.
- Ensure data quality, consistency, and governance through robust validation and monitoring.
- Develop and manage CI/CD pipelines for data workflows using tools like Azure DevOps.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Support and troubleshoot data issues and ensure high availability of data infrastructure.
- Follow best practices in data security, privacy, and compliance.
- Develop and maintain data architectures (data lakes, data warehouses).
- Integrate data from a wide variety of sources (APIs, logs, third-party platforms).
- Monitor data workflows and troubleshoot data-related issues.

Required Skills & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 5+ years of experience in data engineering or a similar role
- Strong hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse Analytics, and Databricks
- Proficiency in SQL, Python, and PySpark
- Experience with data modeling, schema design, and data warehousing
- Familiarity with CI/CD processes, version control (e.g., Git), and deployment in Azure DevOps
- Knowledge of data governance tools and practices (e.g., Azure Purview, RBAC)
- Strong SQL skills and experience with relational databases
- Proficiency with Apache Kafka and streaming data architectures
- Knowledge of ETL tools and processes
- Familiarity with DW-BI tools: Power BI
- Strong knowledge of database systems (PostgreSQL, MySQL, NoSQL)
- Understanding of distributed systems like Kafka or MSK

Preferred Skills:
- Experience with data visualization tools
- Experience with NoSQL databases
- Understanding of machine learning pipelines and workflows

Regards,
Manisha Koul
mkoul@academian.com
www.linkedin.com/in/koul-manisha
Posted 3 weeks ago
10.0 - 12.0 years
25 - 30 Lacs
Noida, Hyderabad
Work from Office
We’re hiring an Azure Data Architect with 10+ years of experience in designing end-to-end data solutions using ADF, Synapse, Databricks, Data Lake, and Python/SQL.
Posted 3 weeks ago
9.0 - 14.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & Responsibilities

Job Overview:
We are looking for a Senior Data Engineer with strong expertise in SQL, Python, Azure Synapse, Azure Data Factory, Snowflake, and Databricks. The ideal candidate should have a solid understanding of SQL (DDL, DML, query optimization) and ETL pipelines while demonstrating a learning mindset to adapt to evolving technologies.

Key Responsibilities:
- Collaborate with business and IT stakeholders to define business and functional requirements for data solutions.
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Snowflake.
- Develop detailed technical designs, data flow diagrams, and future-state data architecture.
- Evangelize modern data modelling practices, including entity-relationship models, star schema, and Kimball methodology.
- Ensure data governance, quality, and validation by working closely with quality engineering teams.
- Write, optimize, and troubleshoot complex SQL queries, including DDL, DML, and performance tuning.
- Work with Azure Synapse, Azure Data Lake, and Snowflake for large-scale data processing.
- Implement DevOps and CI/CD best practices for automated data pipeline deployments.
- Support real-time streaming data processing with Spark, Kafka, or similar technologies (see the sketch after this list).
- Provide technical mentorship and guide team members on best practices in SQL, ETL, and cloud data solutions.
- Stay up to date with emerging cloud and data engineering technologies and demonstrate a continuous learning mindset.
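The real-time streaming responsibility above is the kind of task where a small example helps. Below is a hedged Structured Streaming sketch that reads events from Kafka and appends them to a bronze Delta table; the broker address, topic, checkpoint path, and table name are all hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hedged sketch: Kafka -> Delta with Spark Structured Streaming.
# Broker, topic, checkpoint path, and table name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")                                   # needs spark-sql-kafka package
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("body"),   # raw message payload
        F.col("timestamp").alias("ingest_ts"),
    )
)

# Append raw events to a bronze table; the checkpoint enables exactly-once sinks.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/events")       # hypothetical path
    .outputMode("append")
    .toTable("bronze.events")                          # assumes schema exists
)
query.awaitTermination()
```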
Posted 1 month ago
3.0 - 6.0 years
4 - 7 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
We're Hiring: Data Governance Developer (Microsoft Purview)
Locations: Hyderabad / Indore / Ahmedabad (Work from Office)

Role Overview:
As a Data Governance Developer at Kanerika, you will be responsible for developing and managing robust metadata, lineage, and compliance frameworks using Microsoft Purview and other leading tools. You'll work closely with engineering and business teams to ensure data integrity, regulatory compliance, and operational transparency.

Key Responsibilities:
- Set up and manage Microsoft Purview: accounts, collections, RBAC, and policies.
- Integrate Purview with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.
- Schedule and monitor metadata scanning, classification, and lineage tracking jobs.
- Build ingestion workflows for technical, business, and operational metadata.
- Tag, enrich, and organize assets with glossary terms and metadata.
- Automate lineage, glossary, and scanning processes via REST APIs, PowerShell, ADF, and Logic Apps (see the sketch after this posting).
- Design and enforce classification rules for PII, PCI, PHI.
- Collaborate with domain owners for glossary and metadata quality governance.
- Generate compliance dashboards and lineage maps in Power BI.

Tools & Technologies:
- Governance Platforms: Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog
- Integration Tools: Azure Data Factory, dbt, Talend
- Automation & Scripting: PowerShell, Azure Functions, Logic Apps, REST APIs
- Compliance Areas in Purview: Sensitivity Labels, Policy Management, Auto-labeling, Data Loss Prevention (DLP), Insider Risk Mgmt, Records Management, Compliance Manager, Lifecycle Mgmt, eDiscovery, Audit, DSPM, Information Barriers, Unified Catalog

Required Qualifications:
- 4-6 years of experience in Data Governance / Data Management.
- Hands-on with Microsoft Purview, especially lineage and classification workflows.
- Strong understanding of metadata management, glossary governance, and data classification.
- Familiarity with Azure Data Factory, dbt, Talend.
- Working knowledge of data compliance regulations: GDPR, CCPA, SOX, HIPAA.
- Strong communication skills to collaborate across technical and non-technical teams.

Apply by sharing your resume with:
- Current CTC
- Expected CTC
- Notice Period
- Preferred Location

Email your profile to: navaneetha@suzva.com
Contact: +91 90329 56160
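One of the automation tasks listed above is driving Purview scans through its REST API. The sketch below shows one plausible way to submit a scan run from Python: the account, data source, and scan names are hypothetical, and the endpoint shape and api-version are assumptions based on the public Purview scanning API, so they should be checked against current Microsoft documentation before use.

```python
# Hedged sketch: trigger a Microsoft Purview scan run over REST.
# Account/source/scan names are hypothetical; endpoint shape and api-version
# are assumptions -- verify against the current Purview scanning API docs.
import uuid

import requests
from azure.identity import DefaultAzureCredential

account = "example-purview"      # hypothetical Purview account
datasource = "AdlsGen2-Raw"      # hypothetical registered data source
scan = "WeeklyScan"              # hypothetical scan definition

# DefaultAzureCredential picks up managed identity, CLI login, env vars, etc.
token = DefaultAzureCredential().get_token("https://purview.azure.net/.default")

url = (
    f"https://{account}.purview.azure.com/scan/datasources/"
    f"{datasource}/scans/{scan}/runs/{uuid.uuid4()}"
)
resp = requests.put(
    url,
    params={"api-version": "2022-02-01-preview"},  # assumption: check docs
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
print("Scan run submitted:", resp.json())
```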
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Lead Data Engineer - Data Management

Job Description

Company Overview:
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics):
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview:
Accordion is looking for a Lead Data Engineer who will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack. He/she must have an in-depth understanding of various tools and technologies in the above domain to design and implement robust and scalable solutions that address client current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer should possess strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments. He/she should have strong organizational, critical thinking, and communication skills.

What You Will Do:
- Partner with clients to understand their business and create comprehensive business requirements.
- Develop an end-to-end Business Intelligence framework based on requirements, including recommending appropriate architecture (on-premises or cloud), analytics, and reporting.
- Work closely with the business and technology teams to guide solution development and implementation.
- Work closely with the business teams to arrive at methodologies to develop KPIs and metrics.
- Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Conduct training programs and knowledge transfer sessions for junior developers when needed.
- Recommend improvements to provide optimal reporting solutions.
- Curiosity to learn new tools and technologies to provide futuristic solutions for clients.

Ideally, You Have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges preferred.
- More than 5 years of experience in a related field.
- Proven expertise in SSIS, SSAS, and SSRS (MSBI Suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and a data warehouse (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora Database, Dynamo Database, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why Explore a Career at Accordion:
- High growth environment: Semi-annual performance management and promotion cycles coupled with a strong meritocratic culture enable a fast track to leadership responsibility.
- Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employees and family members, free doctor consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings, and celebrations.
- Cab reimbursement for women employees beyond a certain time of the day.
- Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 1 month ago
5.0 - 10.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Neudesic, an IBM Company, is home to some very smart, talented, and motivated people: people who want to work for an innovative company that values their skills and keeps their passions alive with new challenges and opportunities. We have created a culture of innovation that makes Neudesic not only an industry leader but also a career destination for today's brightest technologists. You can see it in our year-over-year growth, made possible by satisfied employees dedicated to delivering the right solutions to our clients.

Must Have Skills:
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services
- Working experience in Python, Scala, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON & Parquet
- Experience in creating ADF pipelines to source and process data sets
- Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets
- Good understanding of SQL, databases, NoSQL DBs, data warehouses, Hadoop, and various data storage options on the cloud
- Development experience in orchestration of pipelines
- Experience in deployment and monitoring techniques
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources
- Experience in handling operations/integration with source repositories
- Must have good knowledge of data warehouse concepts and data warehouse modelling

Good to Have Skills:
- Familiarity with DevOps, Agile Scrum methodologies, and CI/CD
- Domain-driven development exposure
- Analytical/problem-solving skills
- Strong communication skills
- Good experience with unit, integration, and UAT support
- Able to design and code reusable components and functions
- Should be able to review designs and code, and provide review comments with justification
- Zeal to learn and adopt new tools/technologies
- Power BI and Data Catalog experience
Posted 1 month ago
7.0 - 9.0 years
9 - 11 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
We're Hiring: Data Governance Lead

Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India)
Primary Job Location: Mumbai / Hyderabad / Indore / Ahmedabad (Work from Office)
Compensation Range: Competitive | Based on experience and expertise

To Apply, Share Your Resume With:
- Current CTC
- Expected CTC
- Notice Period
- Preferred Location

What You Will Do - Key Responsibilities:

1. Governance Strategy & Stakeholder Enablement
- Define and drive enterprise-level data governance frameworks and policies
- Align governance objectives with compliance, analytics, and business priorities
- Work with IT, Legal, Compliance, and Business teams to drive adoption
- Conduct training, workshops, and change management programs

2. Microsoft Purview Implementation & Administration
- Administer Microsoft Purview: accounts, collections, RBAC, and scanning policies
- Design scalable governance architecture for large-scale data environments (>50 TB)
- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake

3. Metadata & Data Lineage Management
- Design metadata repositories and workflows
- Ingest technical/business metadata via ADF, REST APIs, PowerShell, Logic Apps
- Validate end-to-end lineage (ADF to Synapse to Power BI), impact analysis, and remediation

4. Data Classification & Security
- Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies
- Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager
- Enforce lifecycle policies, records management, and information barriers
- Working knowledge of GDPR, HIPAA, SOX, CCPA
- Strong communication and leadership to bridge technical and business governance
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies (see the sketch after this posting).
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
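Since the posting highlights performance tuning, partitioning, and caching, here is a small hedged PySpark sketch of those three techniques on a join-plus-aggregate workload. Paths, table layouts, and column names are hypothetical.

```python
# Hedged tuning sketch: broadcast join, caching a reused intermediate,
# and partition-aware writes. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.format("delta").load("/data/fact_sales")    # large table
dims = spark.read.format("delta").load("/data/dim_product")    # small table

# Broadcast the small dimension so the join avoids a full shuffle.
joined = facts.join(broadcast(dims), "product_id")

# Cache the intermediate because two aggregations below reuse it.
joined.cache()

by_category = joined.groupBy("category").agg(F.sum("amount").alias("revenue"))
by_region = joined.groupBy("region").agg(F.count("*").alias("orders"))

# Repartition by the write key so output files align with query patterns.
by_category.repartition("category").write.mode("overwrite").parquet("/out/revenue")
by_region.write.mode("overwrite").parquet("/out/orders")
```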
Posted 1 month ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineering roles with Databricks + Azure skill sets (and a few more), specifically for Mumbai/Pune-based and immediately available candidates only.

Skills:
- Azure Data Engineer
- Azure Databricks
- Azure Data Factory
- SQL

Work mode: Hybrid

An Azure Databricks job description typically outlines a role focused on designing, developing, and deploying data solutions on the Azure cloud platform using Databricks. This involves building and optimizing data pipelines, implementing ETL processes, and working with big data technologies like Spark and Delta Lake. The role often requires strong skills in Python, PySpark, and Azure services like Data Factory and Synapse.
Posted 1 month ago
10.0 - 18.0 years
10 - 15 Lacs
Hyderabad, Bengaluru
Hybrid
Role & Responsibilities
- Experience in ADF pipeline creation and testing, from different sources to AWS S3
- Installation and configuration of new ADF non-prod and prod environments
- Experience in SSIS and Synapse
- Maintaining the health and stability of all ADF environments
- Creating and managing ADF pipelines; collaborating with development teams to deploy ADF pipelines and applications to different environments (see the sketch after this list for one way to start a run programmatically)
- Proactively monitoring the performance of ADF jobs and the overall environment
- Implementing and managing backup and recovery strategies for the ADF environment
- Planning and executing software upgrades and applying patches to maintain a stable and secure environment
- Providing technical support to development teams and end-users for ADF-related issues
- Creating and maintaining documentation related to the ADF environment
- 10+ years of experience with ADF pipeline creation and administration
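For the deployment and monitoring duties above, one common building block is starting a pipeline run through the Azure management REST API. The sketch below follows the documented Microsoft.DataFactory createRun endpoint; the subscription, resource group, factory, and pipeline names are hypothetical placeholders.

```python
# Hedged sketch: trigger an ADF pipeline run via the management REST API
# (Microsoft.DataFactory createRun). All resource names are hypothetical.
import requests
from azure.identity import DefaultAzureCredential

subscription = "00000000-0000-0000-0000-000000000000"  # hypothetical
resource_group = "rg-data"                             # hypothetical
factory = "adf-prod"                                   # hypothetical
pipeline = "copy_to_s3"                                # hypothetical

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.DataFactory"
    f"/factories/{factory}/pipelines/{pipeline}/createRun"
)
resp = requests.post(
    url,
    params={"api-version": "2018-06-01"},
    headers={"Authorization": f"Bearer {token.token}"},
)
resp.raise_for_status()
print("Pipeline run id:", resp.json()["runId"])  # poll pipelineruns/{id} to monitor
```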
Posted 1 month ago
7.0 - 11.0 years
9 - 11 Lacs
Mumbai, Indore, Hyderabad
Work from Office
We're Hiring: Data Governance Lead

Locations: Offices in Austin (USA), Singapore, Hyderabad, Indore, Ahmedabad (India)
Primary Job Location: Hyderabad / Indore / Ahmedabad (Onsite Role)
Compensation Range: Competitive | Based on experience and expertise

To Apply, Share Your Resume With:
- Current CTC
- Expected CTC
- Notice Period
- Preferred Location

What You Will Do - Key Responsibilities:

1. Governance Strategy & Stakeholder Enablement
- Define and drive enterprise-level data governance frameworks and policies
- Align governance objectives with compliance, analytics, and business priorities
- Work with IT, Legal, Compliance, and Business teams to drive adoption
- Conduct training, workshops, and change management programs

2. Microsoft Purview Implementation & Administration
- Administer Microsoft Purview: accounts, collections, RBAC, and scanning policies
- Design scalable governance architecture for large-scale data environments (>50 TB)
- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, and Snowflake

3. Metadata & Data Lineage Management
- Design metadata repositories and workflows
- Ingest technical/business metadata via ADF, REST APIs, PowerShell, Logic Apps
- Validate end-to-end lineage (ADF to Synapse to Power BI), impact analysis, and remediation

4. Data Classification & Security
- Implement and govern sensitivity labels (PII, PCI, PHI) and classification policies
- Integrate with Microsoft Information Protection (MIP), DLP, Insider Risk, and Compliance Manager
- Enforce lifecycle policies, records management, and information barriers
- Working knowledge of GDPR, HIPAA, SOX, CCPA
- Strong communication and leadership to bridge technical and business governance
Posted 1 month ago
6.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
Job Title: Azure Data Factory Engineer
Location State: Maharashtra
Location City: Pune
Experience Required: 6 to 8 Year(s)
CTC Range: 7 to 11 LPA
Shift: Day Shift
Work Mode: Onsite
Position Type: C2H
Openings: 2
Company Name: VARITE INDIA PRIVATE LIMITED

About The Client:
The client is an Indian multinational technology company specializing in information technology services and consulting. Headquartered in Mumbai, it is part of the Tata Group and operates in 150 locations across 46 countries.

About The Job:
A minimum of 5 years' experience with large SQL data marts and expert relational database experience. The candidate should demonstrate the ability to navigate through massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners. Experience in troubleshooting and supporting large databases and testing activities; identifying, reporting, and managing database security issues and user access/management; designing database backup, archiving, and storage; performance tuning; ETL importing of large volumes of data extracted from multiple systems; capacity planning.

Essential Job Functions:
Strong knowledge of Extraction, Transformation, and Loading (ETL) processes using frameworks like Azure Data Factory, Synapse, or Databricks; establishing cloud connectivity between different systems like ADLS, ADF, Synapse, Databricks, etc.

Qualifications:
Skill Required: Digital: PySpark ~ Azure Data Factory

How to Apply:
Interested candidates are invited to submit their resume using the apply online button on this job post.

About VARITE:
VARITE is a global staffing and IT consulting company providing technical consulting and team augmentation services to Fortune 500 companies in the USA, UK, Canada, and India. VARITE is currently a primary and direct vendor to leading corporations in the verticals of networking, cloud infrastructure, hardware and software, digital marketing and media solutions, clinical diagnostics, utilities, gaming and entertainment, and financial services.

Equal Opportunity Employer:
VARITE is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, veteran status, or disability status.

Unlock Rewards: Refer Candidates and Earn
If you're not available or interested in this opportunity, please pass this along to anyone in your network who might be a good fit and interested in our open positions. VARITE offers a Candidate Referral program, where you'll receive a one-time referral bonus based on the following scale if the referred candidate completes a three-month assignment with VARITE:
- 0-2 yrs. experience: INR 5,000
- 2-6 yrs. experience: INR 7,500
- 6+ yrs. experience: INR 10,000
Posted 1 month ago
10.0 - 15.0 years
40 - 65 Lacs
Bengaluru
Work from Office
Design and lead scalable data architectures, cloud solutions, and analytics platforms using Azure. Drive data governance, pipeline optimization, and team leadership to enable business-aligned data strategies in the Oil & Gas sector.

Required Candidate Profile:
Experienced data architect or leader with 10-15+ years in Azure, big data, and solution design. Strong in stakeholder management, data governance, and Oil & Gas analytics.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Anywhere in India
Timings: 1:00 pm to 10:00 pm
Work Mode: WFO (Mon-Fri)

Job Summary:
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines.
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage, including the concept of SCD Type 2 (a hedged sketch follows this posting).
- Collaborate with data scientists, analysts, backend and product teams to define data requirements and deliver impactful data solutions.
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
- Build logical and physical data models using any data modeling tool.
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging.
- Guarantee adherence of data systems to privacy regulations and organizational policies.
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of practical experience in a data engineering or comparable role.
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt).
- Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and other data tools.
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning; experience with modeling tools (e.g., Erwin Data Modeler, MySQL Workbench).
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
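Because the posting calls out SCD Type 2 explicitly, here is a hedged two-step sketch of the pattern with Delta Lake: close out current dimension rows whose tracked attributes changed, then append the incoming records as new current versions. Table and column names are hypothetical, and a production job would first filter the staging data down to new or changed keys before the append.

```python
# Hedged SCD Type 2 sketch with Delta Lake (two steps: expire, then append).
# Table and column names are hypothetical; requires the delta-spark package.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

updates = spark.read.format("delta").load("/staging/customers")  # hypothetical
dim = DeltaTable.forName(spark, "dim_customer")                  # hypothetical

# Step 1: expire current rows whose tracked attribute changed.
(dim.alias("t")
 .merge(updates.alias("s"),
        "t.customer_id = s.customer_id AND t.is_current = true")
 .whenMatchedUpdate(
     condition="t.address <> s.address",  # the tracked attribute
     set={"is_current": "false", "end_date": "current_date()"})
 .execute())

# Step 2: append incoming records as open (current) versions.
(updates
 .withColumn("is_current", F.lit(True))
 .withColumn("start_date", F.current_date())
 .withColumn("end_date", F.lit(None).cast("date"))
 .write.format("delta").mode("append").saveAsTable("dim_customer"))
```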
Posted 1 month ago
5.0 - 8.0 years
5 - 15 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Profile source: Tamil Nadu
Timings: 1:00 pm to 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)

About the Role:
We are looking for an experienced and highly skilled Senior Data Engineer to lead the design and development of our data infrastructure and pipelines. As a key member of the Data & Analytics team, you will play a pivotal role in scaling our data ecosystem, driving data engineering best practices, and mentoring junior engineers. This role is ideal for someone who thrives on solving complex data challenges and building systems that power business intelligence, analytics, and advanced data products.

Key Responsibilities:
- Design and build robust, scalable, and secure data pipelines.
- Lead the complete lifecycle of ETL/ELT processes, encompassing data intake, transformation, and storage.
- Collaborate with data scientists, analysts, and product teams to define data requirements and deliver impactful data solutions.
- Maintain and oversee the data infrastructure, including cloud storage, processing frameworks, and orchestration tools.
- Champion data governance practices, focusing on data quality, lineage tracking, and cataloging.
- Guarantee adherence of data systems to privacy regulations and organizational policies.
- Guide junior engineers, conduct code reviews, and foster knowledge sharing and technical best practices within the team.

Required Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of practical experience in a data engineering or comparable role.
- Demonstrated expertise in SQL and Python (or similar languages such as Scala/Java).
- Extensive experience with data pipeline orchestration tools (e.g., Airflow, dbt, Prefect).
- Proficiency in cloud data platforms, including AWS (Redshift, S3, Glue), GCP (BigQuery, Dataflow), or Azure (Data Factory, Synapse).
- Familiarity with big data technologies (e.g., Spark, Kafka, Hive) and contemporary data stack tools.
- Solid grasp of data warehousing principles, data modeling techniques, and performance tuning.
- Exceptional problem-solving abilities coupled with a proactive and team-oriented approach.
Posted 1 month ago
8.0 - 12.0 years
5 - 7 Lacs
Delhi, India
On-site
Required Qualifications:
- Proven experience in administering and managing Microsoft Fabric.
- Strong background in Azure Data Services (Data Lake, Synapse, Azure SQL, etc.).
- Expertise in Power BI service administration, migration, and optimization.
- Knowledge of Microsoft Purview, data security, and governance frameworks is a definite plus.
- Experience with Role-Based Access Control (RBAC) and data security best practices.
- Strong understanding of data integration, ETL, and data warehousing concepts.
- Familiarity with Azure networking, storage, and identity management (AAD, IAM, etc.).
- Scripting experience with PowerShell, Python, or other automation tools is a plus.

Preferred Qualifications:
- Hands-on experience with Data Fabric and enterprise-level data migration projects.
- Experience working in large-scale data environments.

Soft Skills:
- Strong problem-solving and analytical skills.
- Ability to communicate technical details to both technical and non-technical stakeholders.
- Collaborative and team-oriented mindset.
- Proactive and able to work independently on complex tasks.
Posted 1 month ago
2.0 - 4.0 years
4 - 6 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Type: Contract (36 Months Project)
Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Availability: Immediate Joiners Preferred

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details:
- Full Name:
- Total Experience:
- Relevant Microsoft Purview Experience:
- Current CTC:
- Expected CTC:
- Notice Period / Availability:
- Current Location:
- Preferred Location (Remote):
Posted 1 month ago
8.0 - 10.0 years
8 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Responsibilities:
- Collaborate with customers to create scalable and secure Azure solutions.
- Develop and deploy Java applications on Azure with DevOps integration.
- Automate infrastructure provisioning using Terraform and manage CI/CD pipelines.
- Ensure system security and compliance in Azure environments.
- Provide expert guidance on Azure services, identity management, and DevOps best practices.
- Design, configure, and manage Azure services, including Azure Synapse, security, DNS, databases, App Gateway, Front Door, Traffic Manager, and Azure Automation.

Core Skills:
- Expertise in Azure services (Synapse, DNS, App Gateway, Traffic Manager, etc.).
- Experience with Java-based application deployment and CI/CD pipelines.
- Proficiency in Microsoft Entra ID, Office 365 integration, and Terraform.
- Strong knowledge of cloud security and DevOps best practices.
Posted 1 month ago
2.0 - 4.0 years
4 - 7 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Type: Contract (36 Months Project)
Availability: Immediate Joiners Preferred
Location: Remote - Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

We're looking for a Junior Microsoft Purview Specialist to support our data cataloging and governance initiatives in a fast-paced remote setup.

Key Responsibilities:
- Assist in the configuration and management of Microsoft Purview
- Support data cataloging, classification, and lineage tracking
- Work with data owners to ensure proper tagging and metadata management
- Help implement data governance policies
- Assist in integrating Purview with Azure and on-premises sources
- Document governance processes and resolve Purview-related issues
- Collaborate with project teams for timely delivery

Primary Skills Required:
- Microsoft Purview
- Data Cataloging & Classification
- Metadata Management
- Understanding of Data Governance
- Azure Data Services (basic knowledge is a plus)
- Strong communication and collaboration skills

Preferred Qualifications:
- Certification/training in Microsoft Purview or related tools
- Exposure to the Azure ecosystem: Data Factory, Synapse, Data Lake
- Ability to work independently in a remote environment

If interested, please share your profile with the following details:
- Full Name:
- Total Experience:
- Relevant Microsoft Purview Experience:
- Current CTC:
- Expected CTC:
- Notice Period / Availability:
- Current Location:
- Preferred Location (Remote):
Posted 1 month ago
10.0 - 15.0 years
10 - 15 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
- Design, develop, and implement end-to-end data architecture solutions.
- Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric.
- Architect scalable, secure, and high-performing data solutions.
- Work on data strategy, governance, and optimization.
- Implement and optimize Power BI dashboards and SQL-based analytics.
- Collaborate with cross-functional teams to deliver robust data solutions.

Primary Skills Required:
- Data Architecture & Solutioning
- Azure Cloud (Data Services, Storage, Synapse, etc.)
- Databricks & Snowflake (Data Engineering & Warehousing)
- Power BI (Visualization & Reporting)
- Microsoft Fabric (Data & AI Integration)
- SQL (Advanced Querying & Optimization)
Posted 1 month ago