2.0 - 4.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.

- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile, high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
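The "self-healing" and automated-remediation capabilities this role mentions often start with something as simple as retrying transient pipeline failures with backoff before escalating to incident response. A minimal plain-Python sketch of that idea; the function names, attempt counts, and delays are illustrative assumptions, not anything from this posting:

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=0.01):
    """Retry a flaky pipeline task with exponential backoff before escalating.

    `task` is any zero-argument callable; the defaults here are illustrative,
    not values taken from the job description.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts:
                # Self-healing exhausted: surface the failure for incident response.
                raise RuntimeError(f"task failed after {attempt} attempts") from exc
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

# A simulated transient failure that succeeds on the third call.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("source unavailable")
    return ["row1", "row2"]

result = run_with_retries(flaky_extract)
```

Production DataOps frameworks layer alerting and root-cause capture on top of this loop, but the retry-then-escalate shape is the same.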
Posted 3 weeks ago
9.0 - 14.0 years
25 - 40 Lacs
Noida, Bengaluru
Hybrid
Role & Responsibilities

We are seeking an experienced and visionary Technical Expert (Architect) with deep expertise in Microsoft technologies and a strong focus on Microsoft analytics solutions. The ideal candidate will design, implement, and optimize end-to-end analytics architectures, enabling organizations to derive actionable insights from their data. This role requires a blend of technical prowess, strategic thinking, and leadership capabilities to guide teams and stakeholders toward innovative solutions.

Key Responsibilities

- Architectural Design: Lead the design and development of scalable and secure data analytics architectures using Microsoft technologies (e.g., Power BI, Azure Synapse Analytics, SQL Server). Define the data architecture, integration strategies, and frameworks to meet organizational goals.
- Technical Leadership: Serve as the technical authority on Microsoft analytics solutions, ensuring best practices in performance, scalability, and reliability. Guide cross-functional teams in implementing analytics platforms and solutions.
- Solution Development: Oversee the development of data models, dashboards, and reports using Power BI and Azure Data Services. Implement data pipelines leveraging Azure Data Factory, Data Lake, and other Microsoft technologies.
- Stakeholder Engagement: Collaborate with business leaders to understand requirements and translate them into robust technical solutions. Present architectural designs, roadmaps, and innovations to technical and non-technical audiences.
- Continuous Optimization: Monitor and optimize analytics solutions for performance and cost-effectiveness. Stay updated on the latest Microsoft technologies and analytics trends to ensure the organization remains competitive.
- Mentorship and Training: Mentor junior team members and provide technical guidance on analytics projects. Conduct training sessions to enhance the technical capabilities of internal teams.

Required Skills and Qualifications

- Experience: 9+ years of experience working with Microsoft analytics and related technologies; proven track record of designing and implementing analytics architectures.
- Technical Expertise: Deep knowledge of Power BI, Azure Synapse Analytics, Azure Data Factory, SQL Server, Azure Data Lake, and Fabric. Proficiency in data modeling, ETL processes, and performance tuning.
- Soft Skills: Strong problem-solving and analytical abilities; excellent communication and interpersonal skills for stakeholder management.
- Certifications (Preferred): Microsoft Certified: Azure Solutions Architect Expert; Microsoft Certified: Data Analyst Associate; Microsoft Certified: Azure Data Engineer Associate.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Microsoft Fabric Professional at YASH Technologies, you will work with cutting-edge technologies to bring about real positive change in an increasingly virtual world. You will have the opportunity to contribute to business transformation by leveraging your experience in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure Storage Services, Azure SQL, ETL, Azure Cosmos DB, Event Hub, Azure Data Catalog, Azure Functions, and Azure Purview.

With 5-8 years of experience in Microsoft Cloud solutions, you will:

- Create pipelines, datasets, dataflows, and integration runtimes, and monitor pipelines.
- Extract, transform, and load data from source systems using Azure Databricks.
- Prepare DB design documents based on client requirements.
- Collaborate with the development team to create database structures, queries, and triggers.
- Work on SQL scripts and Synapse pipelines for data migration to Azure SQL.
- Build data migration pipelines for on-prem SQL Server data to the Azure cloud, including database migration from on-prem SQL Server to the Azure dev environment.
- Implement data governance in Azure and make use of the Azure Data Catalog.
- Apply experience in big data batch processing, interactive processing, and real-time processing solutions.

Mandatory certifications are required to excel in this role.

At YASH Technologies, you will have the opportunity to create a career path tailored to your aspirations within an inclusive team environment. Our Hyperlearning workplace is built on principles of flexible work arrangements, free spirit, emotional positivity, agile self-determination, trust, transparency, open collaboration, support for the realization of business goals, stable employment, and an ethical corporate culture. Join us to embark on a journey of continuous learning, unlearning, and relearning in a dynamic and evolving technology landscape.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Hybrid
Job Overview:

We are looking for a Microsoft Fabric Data Engineer to join our dynamic team. In this role, you will design, develop, and optimize cutting-edge data solutions using Microsoft Fabric and related technologies. The role involves hands-on development of data pipelines, working with technical teams, ensuring best practices, and driving innovation in data engineering.

Key Responsibilities:
- Work with large datasets to solve complex analytical problems.
- Conduct end-to-end data analyses, including collection, processing, and visualization.
- Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders, to develop data-driven solutions.
- Implement data pipelines, ETL processes, and data models using Microsoft Fabric.
- Develop data lakehouses, Delta Lake tables, and data warehouses.
- Optimize data storage, query performance, and cost efficiency.
- Provide technical knowledge, collaboration, and guidance to other data engineers.
- Implement data security and governance, and ensure compliance with industry standards.

Required Technical Skills:
- 3-5 years of experience in data engineering with Microsoft Cloud solutions.
- Expertise in Microsoft Fabric, Azure Data Services, and data integration.
- Experience with Azure Data Factory, Synapse Analytics, Databricks, and DevOps practices.
- Knowledge of data governance, compliance frameworks, and CI/CD pipelines.
- Strong proficiency in SQL, Python, and Power BI.
- Experience with data warehousing, ETL processes, data modelling, and dimensional schemas.
- Knowledge of cloud platforms, particularly Microsoft Azure and Fabric.
- Microsoft certifications (e.g., DP-600, DP-203) are highly desirable.
- Excellent communication and problem-solving skills.

Preferred Qualifications:
- Microsoft Fabric Data Engineer certification, Fabric Analytics Engineer certification, or similar.
- Bachelor's or Master's degree in computing, engineering, or a relevant area.
- Knowledge of CI/CD pipelines for data engineering workflows.
- Experience using DevOps for development and deployment.
- Experience with Agile methodologies such as Scrum or XP.
- Strong analytical and problem-solving skills.
- Experience in AI and ML model deployment is a plus.
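The lakehouse work this posting describes (Delta Lake tables, data quality, cost-efficient storage) usually follows the medallion pattern: raw "bronze" records are cleansed and deduplicated into a "silver" layer. A minimal plain-Python sketch of that promotion step; the field names (`order_id`, `amount`, `country`) are illustrative assumptions, not from the posting:

```python
def promote_to_silver(bronze_rows):
    """Cleanse raw bronze records into a silver layer: drop rows missing the
    business key, normalize types, and deduplicate on the key (last write wins)."""
    silver = {}
    for row in bronze_rows:
        if not row.get("order_id"):          # reject records without a key
            continue
        silver[row["order_id"]] = {
            "order_id": row["order_id"],
            "amount": float(row.get("amount", 0) or 0),          # normalize to float
            "country": (row.get("country") or "unknown").strip().lower(),
        }
    return list(silver.values())

bronze = [
    {"order_id": "A1", "amount": "10.5", "country": " IN "},
    {"order_id": None, "amount": "3.0", "country": "US"},   # dropped: no key
    {"order_id": "A1", "amount": "12.0", "country": "IN"},  # dedup: supersedes first A1
]
silver = promote_to_silver(bronze)
```

In Fabric or Databricks the same logic would run over Delta Lake tables with Spark, but the cleanse-normalize-deduplicate shape is identical.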
Posted 3 weeks ago
5.0 - 8.0 years
6 - 10 Lacs
Telangana
Work from Office
Key Responsibilities:

Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.

Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up-to-date with Azure data technologies and recommend improvements to enhance data processing capabilities.

Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.

Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.

Collaboration & Communication:
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Team Lead or Manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).

Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.
Posted 3 weeks ago
8.0 - 13.0 years
8 - 13 Lacs
Telangana
Work from Office
Key Responsibilities:

Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.

Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up-to-date with Azure data technologies and recommend improvements to enhance data processing capabilities.

Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.

Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.

Collaboration & Communication:
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Team Lead or Manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).

Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.
Posted 3 weeks ago
6.0 - 11.0 years
11 - 21 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We are looking for a Data Engineer with 6-8 years of experience in Data Engineering or a related field.

- Proficiency in PySpark, Python, and Spark SQL for data processing and transformation.
- Strong expertise in Azure technologies, including:
  - Azure Data Lake Storage (ADLS) Gen2
  - Azure Synapse Analytics
  - Azure Data Factory (ADF)
  - Azure Databricks
  - Azure DevOps (CI/CD, pipelines, automation)
- Hands-on experience with PL/SQL for database development and optimization.
- Experience in data modeling, data warehousing concepts, and ETL development.
- Strong understanding of big data processing, performance tuning, and cloud architecture.
- Experience with Git for version control and collaboration.
- Knowledge of data security, encryption, and access control in cloud environments.
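A staple of the ETL and ADF work listed above is incremental (watermark-based) loading: each run pulls only rows newer than the last recorded high-water mark instead of reprocessing the whole source. A small plain-Python sketch under an assumed audit column name (`updated_at`):

```python
def incremental_load(source_rows, watermark):
    """Return rows newer than the watermark, plus the new watermark value.

    `source_rows` stands in for a source table; `updated_at` is an assumed
    audit column, as in a typical ADF incremental-copy pattern. ISO-format
    date strings compare correctly lexicographically.
    """
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-01-05"},
    {"id": 3, "updated_at": "2024-01-09"},
]
# First run picks up everything after Jan 2; a second run sees nothing new.
batch, wm = incremental_load(source, "2024-01-02")
batch2, wm2 = incremental_load(source, wm)
```

In ADF the watermark would persist in a control table between pipeline runs; the comparison logic is the same.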
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
kerala
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

As part of our GDS Consulting team, you will join the NCLC team delivering specifically to the Microsoft account. You will work on the latest Microsoft BI technologies and collaborate with other teams within Consulting services.

The opportunity

We're looking for resources with expertise in Microsoft BI, Power BI, Azure Data Factory, and Databricks to join our Data Insights team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of our service offering.

Your Key Responsibilities

- Manage multiple client engagements.
- Understand and analyse business requirements by working with various stakeholders, and create the appropriate information architecture, taxonomy, and solution approach.
- Work independently on requirements gathering and the cleansing, extraction, and loading of data.
- Translate business and analyst requirements into technical code.
- Create interactive and insightful dashboards and reports using Power BI, connecting to various data sources and implementing DAX calculations.
- Design and build complete ETL/Azure Data Factory processes moving and transforming data for ODS, staging, and data warehousing.
- Design and develop solutions in Databricks, Scala, Spark, and SQL to process and analyze large datasets, perform data transformations, and build data models.
- Design SQL schemas, database schemas, stored procedures, functions, and T-SQL queries.

Skills and Attributes for Success

- Collaborate with other members of the engagement team to plan the engagement and develop work program timelines, risk assessments, and other documents/templates.
- Able to manage senior stakeholders.
- Experience leading teams to execute high-quality deliverables within stipulated timelines.
- Skills in Power BI, Azure Data Factory, Databricks, Azure Synapse, data modelling, DAX, Power Query, and Microsoft Fabric.
- Strong proficiency in Power BI, including data modelling, DAX, and creating interactive visualizations.
- Solid experience with Azure Databricks, including working with Spark, PySpark (or Scala), and optimizing big data processing.
- Good understanding of Azure services relevant to data engineering, such as Azure Blob Storage, ADLS Gen2, and Azure SQL Database/Synapse Analytics.
- Strong SQL skills and experience with one of the following: Oracle, SQL, Azure SQL.
- Good to have: experience in SSAS or Azure SSAS, and Agile project management.
- Basic knowledge of Azure Machine Learning services.
- Excellent written and verbal communication skills and the ability to deliver technical demonstrations.
- Quick learner with a can-do attitude.
- Strong project management skills, inspiring teamwork and responsibility among engagement team members.

To qualify for the role, you must have

- A bachelor's or master's degree.
- A minimum of 4-7 years of experience, preferably with a background in a professional services firm.
- Excellent communication skills; consulting experience preferred.

Ideally, you'll also have

- The analytical ability to manage multiple projects and prioritize tasks into manageable work products.
- The ability to operate independently or with minimum supervision.

What Working at EY Offers

At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

- Support, coaching, and feedback from some of the most engaging colleagues around.
- Opportunities to develop new skills and progress your career.
- The freedom and flexibility to handle your role in a way that's right for you.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at our company, you will be responsible for ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. Your primary role will involve collaborating with clients and stakeholders to understand data requirements, designing efficient data models, and optimizing existing data pipelines for performance and scalability. Ensuring data quality and integrity throughout the data pipeline will be a key aspect of your responsibilities. You will also document technical designs, processes, and procedures, stay updated on emerging technologies and best practices in data engineering, and build CI/CD pipelines using GitHub.

To qualify for this role, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with at least 3 years of experience in data engineering or a similar role. A strong understanding of ETL concepts and best practices is essential, as is proficiency in Azure Synapse, Microsoft Fabric, and other data processing technologies. Experience with cloud-based data platforms such as Azure or AWS, knowledge of data warehousing concepts and methodologies, and proficiency in Python, PySpark, and SQL for data manipulation and scripting are also required.

Nice-to-have qualifications include experience with data lake concepts, familiarity with data visualization tools such as Power BI or Tableau, and certifications in relevant technologies such as Microsoft Certified: Azure Data Engineer Associate.

In addition to a challenging and rewarding work environment, we offer company benefits including group medical insurance, a cab facility, meals/snacks, and a continuous learning program.

Stratacent is a global IT consulting and services firm headquartered in Jersey City, NJ, with global delivery centers in Pune and Gurugram and offices in the USA, London, Canada, and South Africa. Specializing in Financial Services, Insurance, Healthcare, and Life Sciences, we assist our customers in their transformation journey with services focused on Information Security, Cloud Services, Data and AI, Automation, Application Development, and IT Operations. Learn more about us at http://stratacent.com.
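"Ensuring data quality and integrity throughout the pipeline," as this role requires, is commonly implemented as rule-based checks that run before a batch is published downstream. A hedged plain-Python sketch; the rule set (required fields, null-ratio threshold) and sample field names are illustrative assumptions:

```python
def run_quality_checks(rows, required_fields, max_null_ratio=0.1):
    """Validate a batch: every required field's null/empty ratio must stay
    below the threshold. Returns (passed, list of failure messages)."""
    failures = []
    if not rows:
        return False, ["empty batch"]
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        ratio = nulls / len(rows)
        if ratio > max_null_ratio:
            failures.append(f"{field}: null ratio {ratio:.0%} exceeds {max_null_ratio:.0%}")
    return not failures, failures

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},                  # one missing email out of four
    {"id": 3, "email": "c@example.com"},
    {"id": 4, "email": "d@example.com"},
]
ok, problems = run_quality_checks(rows, ["id", "email"], max_null_ratio=0.2)
```

A failing check would typically halt the pipeline or quarantine the batch rather than let bad data reach reports.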
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Candescent is the largest non-core digital banking provider, specializing in transformative technologies that connect account opening, digital banking, and branch solutions for banks and credit unions of all sizes. Our Candescent solutions are the driving force behind the top three U.S. mobile banking apps, trusted by financial institutions nationwide.

We offer an extensive portfolio of industry-leading products and services, along with an ecosystem of out-of-the-box and integrated partner solutions. Our API-first architecture and developer tools empower financial institutions to enhance their capabilities by seamlessly integrating custom-built or third-party solutions. Our commitment to connected experiences across in-person, remote, and digital channels revolutionizes customer service.

As part of the team, your essential duties and responsibilities will include:

- Data Lake Organization: Structuring data lake assets using medallion architecture with a domain-driven and source-driven approach.
- Data Pipeline Design and Development: Developing, deploying, and orchestrating data pipelines using Data Factory and PySpark/SQL notebooks to ensure smooth data flow.
- Design and Build Data Systems: Creating and maintaining Candescent's data systems, databases, and data warehouses for managing large volumes of data.
- Data Compliance and Security: Ensuring data systems comply with security standards to protect sensitive information.
- Collaboration: Working closely with the Data Management team to implement approved data solutions.
- Troubleshooting and Optimization: Identifying and resolving data-related issues while continuously optimizing data systems for better performance.

To excel in this role, you must meet the following requirements:

- 8+ years of IT experience implementing design patterns for data systems.
- Extensive experience building API-based data pipelines using the Azure ecosystem.
- Proficiency in ETL/ELT technologies, with a focus on the Microsoft Fabric stack (ADF, Spark, SQL).
- Expertise in building data warehouse models using Azure Synapse and an Azure Delta lakehouse.
- Programming experience in data processing languages such as SQL/T-SQL, Python, or Scala.
- Experience with code management using GitHub as the primary repository.
- Familiarity with DevOps practices, configuration frameworks, and CI/CD automation tooling.

At Candescent, we value collaboration with report developers/analysts and business teams to enhance data models feeding BI tools and improve data accessibility. Offers of employment are subject to the successful completion of applicable screening criteria. Candescent is an equal opportunity employer committed to diversity and inclusion. We do not accept unsolicited resumes from recruitment agencies not on our preferred supplier list.
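The "Data Lake Organization" duty above — medallion layers with a domain-driven and source-driven layout — in practice reduces to a deterministic path convention that every pipeline follows. A small Python sketch; the specific layer, domain, and partition names are illustrative assumptions, not Candescent's actual convention:

```python
def lake_path(layer, domain, source, dataset, run_date):
    """Build a medallion-style lake path from layer (bronze/silver/gold),
    business domain, originating source system, dataset, and load date.
    The naming scheme itself is a hypothetical example of the pattern."""
    layers = {"bronze", "silver", "gold"}
    if layer not in layers:
        raise ValueError(f"unknown layer: {layer}")
    return f"{layer}/{domain}/{source}/{dataset}/load_date={run_date}"

path = lake_path("bronze", "payments", "core_banking", "transactions", "2024-06-01")
```

Encoding the convention in one function keeps every notebook and pipeline writing to (and reading from) the same, predictable locations.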
Posted 3 weeks ago
6.0 - 10.0 years
0 - 2 Lacs
Hyderabad
Work from Office
Job Title: Senior Data Engineer - Azure Databricks & Azure Stack
Location: [Onsite - Hyderabad]
Experience: 6-8 years
Employment Type: Full-Time

Job Summary:

TechNavitas is seeking a highly skilled Senior Data Engineer with 6-8 years of experience in designing and implementing modern data engineering solutions on Azure Cloud. The ideal candidate will have deep expertise in Azure Databricks, Azure Stack, and building data dashboards using Databricks. You will play a critical role in developing scalable, secure, and high-performance data pipelines that power advanced analytics and machine learning workloads.

Key Responsibilities:
- Design and Develop Data Pipelines: Build and optimize robust ETL/ELT workflows using Azure Databricks to process large-scale datasets from diverse sources.
- Azure Stack Integration: Implement and manage data workflows within Azure Stack environments for hybrid cloud scenarios.
- Dashboards & Visualization: Develop interactive dashboards and visualizations in Databricks for business and technical stakeholders.
- Performance Optimization: Tune Spark jobs for performance and cost efficiency, leveraging Delta Lake, Parquet, and advanced caching strategies.
- Data Modeling: Design and maintain logical and physical data models that support structured and unstructured data needs.
- Collaboration: Work closely with data scientists, analysts, and business teams to understand requirements and deliver data solutions that enable insights.
- Security & Compliance: Ensure adherence to enterprise data security, privacy, and governance standards, especially in hybrid Azure environments.
- Automation & CI/CD: Implement CI/CD pipelines for Databricks workflows using Azure DevOps or similar tools.

Required Skills and Experience:

Technical Skills:
- A minimum of 6-8 years of data engineering experience with a strong focus on the Azure ecosystem.
- Deep expertise in Azure Databricks (PySpark/Scala/Spark SQL) for big data processing.
- Solid understanding of Azure Stack Hub/Edge for hybrid cloud architecture.
- Hands-on experience with Delta Lake, data lakes, and data lakehouse architectures.
- Proficiency in developing dashboards within Databricks SQL and integrating with BI tools such as Power BI or Tableau.
- Strong knowledge of data modeling, data warehousing (e.g., Synapse Analytics), and ELT/ETL best practices.
- Experience with event-driven architectures and streaming data pipelines using Azure Event Hubs, Kafka, or Databricks Structured Streaming.
- Familiarity with Git, Azure DevOps, and CI/CD automation for data workflows.

Soft Skills:
- Strong problem-solving and analytical thinking.
- Ability to communicate technical concepts effectively to non-technical stakeholders.
- Proven track record of working in Agile/Scrum teams.

Preferred Qualifications:
- Experience working with hybrid or multi-cloud environments (Azure Stack + Azure public cloud).
- Knowledge of the ML lifecycle and MLOps practices for data pipelines feeding ML models.
- Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert.

Why Join Us?
- Work on cutting-edge data engineering projects across hybrid cloud environments.
- Be part of a dynamic team driving innovation in big data and advanced analytics.
- Competitive compensation and professional growth opportunities.
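The streaming-pipeline experience this role asks for (Event Hubs, Kafka, Structured Streaming) almost always involves windowed aggregation. A plain-Python sketch of a tumbling-window count, engine-independent and with an assumed event-time representation (epoch seconds):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Assign each event to a fixed, non-overlapping window by event time and
    count events per window — the core of a tumbling-window aggregation."""
    counts = defaultdict(int)
    for ts in events:  # ts: event time as epoch seconds
        window_start = (ts // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)

# Events at 10s, 30s, and 70s fall into the [0, 60) and [60, 120) windows.
result = tumbling_window_counts([10, 30, 70], window_seconds=60)
```

In Structured Streaming the same grouping is expressed with a `window()` column and handles late data via watermarks; the window-assignment arithmetic is what's shown here.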
Posted 3 weeks ago
5.0 - 7.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Key Responsibilities: Design, develop, and optimize scalable data pipelines using Databricks (PySpark, Scala, SQL). Implement ETL/ELT workflows for large-scale data integration across cloud and on-premise environments. Leverage Microsoft Fabric (Data Factory, OneLake, Lakehouse, DirectLake, etc.) to build unified data solutions. Collaborate with data architects, analysts, and stakeholders to deliver business-critical data models and pipelines. Monitor and troubleshoot performance issues in data pipelines. Ensure data governance, quality, and security across all data assets. Work with Delta Lake, Unity Catalog, and other modern data lakehouse components. Automate and orchestrate workflows using Azure Data Factory, Databricks Workflows, or Microsoft Fabric pipelines. Participate in code reviews, CI/CD practices, and agile ceremonies. Required Skills: 5–7 years of experience in data engineering, with strong exposure to Databricks. Proficient in PySpark, SQL, and performance tuning of Spark jobs. Hands-on experience with Microsoft Fabric components. Experience with Azure Synapse, Data Factory, and Azure Data Lake. Understanding of Lakehouse architecture and modern data mesh principles. Familiarity with Power BI integration and semantic modeling (preferred). Knowledge of DevOps, CI/CD for data pipelines (e.g., using GitHub Actions, Azure DevOps). Excellent problem-solving, communication, and collaboration skills.
Posted 3 weeks ago
5.0 - 10.0 years
19 - 25 Lacs
Hyderabad
Work from Office
Overview Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps. Responsibilities Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. 
Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. 
Utilize technical expertise in cloud and data operations to support service reliability and scalability. Qualifications 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
Posted 3 weeks ago
12.0 - 17.0 years
19 - 22 Lacs
Hyderabad
Work from Office
Overview Seeking a Manager, Data Operations, to support our growing data organization. In this role, you will play a key role in maintaining data pipelines and corresponding platforms (on-prem and cloud) while collaborating with global teams on DataOps initiatives. Manage the day-to-day operations of data pipelines, ensuring governance, reliability, and performance optimization on Microsoft Azure. This role requires hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, real-time streaming architectures, and DataOps methodologies. Ensure availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Support DataOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Contribute to the development of governance models and execution roadmaps to optimize efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to enhance enterprise-wide data operations. Collaborate on building and supporting next-generation Data & Analytics platforms while fostering an agile and high-performing DataOps team. Support the adoption of Data & Analytics technology transformations, ensuring full sustainment capabilities and automation for proactive issue identification and resolution. Partner with cross-functional teams to drive process improvements, best practices, and operational excellence within DataOps. Responsibilities Support the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. 
Assist in managing end-to-end data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Ensure seamless batch, real-time, and streaming data processing while focusing on high availability and fault tolerance. Contribute to DataOps automation initiatives, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps, Terraform, and Infrastructure-as-Code (IaC). Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to enable data-driven decision-making. Work with IT, data stewards, and compliance teams to align DataOps practices with regulatory and security requirements. Support data operations and sustainment efforts, including testing and monitoring processes to support global products and projects. Assist in data capture, storage, integration, governance, and analytics initiatives, collaborating with cross-functional teams. Manage day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to align data platform capabilities with business needs. Participate in the Agile work intake and management process to support execution excellence for data platform teams. Collaborate with cross-functional teams to troubleshoot and resolve issues related to cloud infrastructure and data services. Assist in developing and automating operational policies and procedures to improve efficiency and service resilience. Support incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric environment, advocating for operational excellence and continuous service improvements. Contribute to building a collaborative, high-performing team culture focused on automation and efficiency in DataOps. 
Adapt to shifting priorities and support cross-functional teams in maintaining productivity while meeting business goals. Leverage technical expertise in cloud and data operations to improve service reliability and scalability. Qualifications 12+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 12+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 8+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. 5+ years of experience in a management or lead role, with a focus on DataOps execution and delivery. Hands-on experience with Azure Data Factory (ADF) for orchestrating data pipelines and ETL workflows. Proficiency in Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Azure SQL Database. Familiarity with Azure Databricks for large-scale data processing (basic troubleshooting or support scope is sufficient if not engineering-focused). Exposure to cloud environments (AWS, Azure, GCP) and understanding of CI/CD pipelines for data operations. Knowledge of structured and semi-structured data storage formats (e.g., Parquet, JSON, Delta). Excellent communication skills, with the ability to empathize with stakeholders and articulate technical concepts to non-technical audiences. Strong problem-solving abilities, prioritizing customer needs and advocating for operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational excellence. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience in supporting mission-critical solutions in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) practices, such as automated issue remediation and scalability improvements. 
Experience driving operational excellence in complex, high-availability data environments. Ability to collaborate across teams, fostering strong relationships with business and IT stakeholders. Experience in data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong analytical and strategic thinking skills, with the ability to execute plans effectively and drive results. Proven ability to work in a fast-changing, complex environment, adapting to shifting priorities while maintaining productivity.
Posted 3 weeks ago
5.0 - 10.0 years
17 - 20 Lacs
Hyderabad
Work from Office
Overview Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps. Responsibilities Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. 
Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. 
Utilize technical expertise in cloud and data operations to support service reliability and scalability. Qualifications 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
Posted 3 weeks ago
6.0 - 8.0 years
9 - 13 Lacs
Chennai
Work from Office
Job Title: Data Engineering Lead Experience: 6-8 Years Location: Chennai Skills: ADF (Azure Data Factory), Azure Databricks, Azure Synapse, strong ETL experience, Power BI
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad/Secunderabad
Hybrid
Job Objective We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes. If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you! Required Qualifications Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI, etc.) Preferred Qualifications & Skills: Must-Have Skills: Expertise in relational & multi-dimensional database architectures Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint Strong experience in Power BI, MDX, SSAS, SSIS, SSRS, Tabular & DAX queries Deep understanding of SQL Server Tabular Model & multidimensional database design Excellent SQL-based data analysis skills Strong hands-on experience with Azure Data Factory, Databricks, PySpark/Python Nice-to-Have Skills: Exposure to AWS or GCP Experience with Lakehouse Architecture, Real-time Streaming (Kafka/Event Hubs), Infrastructure as Code (Terraform/ARM) Familiarity with Cognos, Qlik, Tableau, MDM, DQ, Data Migration MS BI, Power BI, or Azure Certifications
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at our company, you will be responsible for handling ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. You will collaborate with clients and stakeholders to comprehend data requirements and devise efficient data models and solutions. Additionally, optimizing and tuning existing data pipelines for enhanced performance and scalability will be a crucial part of your role. Ensuring data quality and integrity throughout the data pipeline and documenting technical designs, processes, and procedures will also be part of your responsibilities. It is essential to stay updated on emerging technologies and best practices in data engineering and contribute to building CI/CD pipelines using GitHub. To qualify for this role, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with a minimum of 3 years of experience in data engineering or a similar role. A strong understanding of ETL concepts and best practices is required, as well as proficiency in Azure Synapse, Microsoft Fabric, and other data processing technologies. Experience with cloud-based data platforms such as Azure or AWS, knowledge of data warehousing concepts and methodologies, and proficiency in Python, PySpark, and SQL programming languages for data manipulation and scripting are also essential. Desirable qualifications include experience with data lake concepts, familiarity with data visualization tools like Power BI or Tableau, and certifications in relevant technologies such as Microsoft Certified: Azure Data Engineer Associate. Our company offers various benefits including group medical insurance, cab facility, meals/snacks, and a continuous learning program. Stratacent is a Global IT Consulting and Services firm with headquarters in Jersey City, NJ, and global delivery centers in Pune and Gurugram, along with offices in the USA, London, Canada, and South Africa.
Specializing in Financial Services, Insurance, Healthcare, and Life Sciences, we assist our customers in their transformation journey by providing services in Information Security, Cloud Services, Data and AI, Automation, Application Development, and IT Operations. For more information, you can visit our website at http://stratacent.com.
Posted 3 weeks ago
12.0 - 22.0 years
35 - 100 Lacs
Noida, Hyderabad, Jaipur
Hybrid
Databricks Data Architect Experience: 12+ Years Location: Mumbai Onsite Salary: best in industry! Immediate Joiners We are seeking an experienced Databricks Data Architect with a strong background in designing scalable data platforms in the manufacturing or energy sector. The ideal candidate will have over 10 years of experience in designing and implementing enterprise-grade data solutions, with strong proficiency in Azure Databricks and big data technologies. Key Responsibilities: Architect and deliver scalable, cloud-native data solutions to support both real-time and batch processing needs. Work closely with business and technical stakeholders to understand business requirements, define data strategy, governance, and architecture standards. Ensure data quality, integrity, and security across platforms and systems. Define data models, data integration patterns, and governance frameworks to support analytics use cases. Collaborate with DevOps and Engineering teams to ensure robust CI/CD pipelines and deliver production-grade deployments. Define and enforce data architecture standards, frameworks, and best practices across data engineering and analytics teams. Implement data governance, security, and compliance measures, including data cataloguing, access controls, and regulatory adherence. Lead capacity planning and performance tuning efforts to optimize data processing and query performance. Create and maintain architecture documentation, including data flow diagrams, data models, entity-relationship diagrams, system interfaces, etc. Design clear and impactful visualizations to support key analytical objectives. Required Skills and Experience: Strong proficiency in Azure Databricks and big data technologies (Apache Spark, Kafka, Event Hub). Deep understanding of data modeling, data lakes, batch and real-time/streaming data processing. Proven experience with high-volume data pipeline orchestration and ETL/ELT workflows.
Experience designing and implementing data lakes, data warehouses, and lakehouse architectures. Proven experience in designing and implementing data visualization solutions for actionable insights. Strong understanding of data integration patterns, APIs, and message streaming (e.g., Event Hub, Kafka). Experience with metadata management and data quality frameworks. Excellent problem-solving skills and the ability to translate business needs into technical solutions. Experience with structured and unstructured data ingestion, transformation, and processing at scale. Excellent communication, documentation, and stakeholder management skills. Preferred Qualifications: Familiarity with lakehouse architectures using Delta Lake. Knowledge of manufacturing/energy domain-specific standards and protocols. Experience with IoT data and time-series analysis. Knowledge of data governance, security, and compliance best practices.
Posted 3 weeks ago
12.0 - 22.0 years
35 - 100 Lacs
Navi Mumbai, Pune, Bengaluru
Hybrid
Databricks Data Architect Experience: 12+ Years Location: Mumbai Onsite Salary: best in industry! Immediate Joiners We are seeking an experienced Databricks Data Architect with a strong background in designing scalable data platforms in the manufacturing or energy sector. The ideal candidate will have over 10 years of experience in designing and implementing enterprise-grade data solutions, with strong proficiency in Azure Databricks and big data technologies. Key Responsibilities: Architect and deliver scalable, cloud-native data solutions to support both real-time and batch processing needs. Work closely with business and technical stakeholders to understand business requirements, define data strategy, governance, and architecture standards. Ensure data quality, integrity, and security across platforms and systems. Define data models, data integration patterns, and governance frameworks to support analytics use cases. Collaborate with DevOps and Engineering teams to ensure robust CI/CD pipelines and deliver production-grade deployments. Define and enforce data architecture standards, frameworks, and best practices across data engineering and analytics teams. Implement data governance, security, and compliance measures, including data cataloguing, access controls, and regulatory adherence. Lead capacity planning and performance tuning efforts to optimize data processing and query performance. Create and maintain architecture documentation, including data flow diagrams, data models, entity-relationship diagrams, system interfaces, etc. Design clear and impactful visualizations to support key analytical objectives. Required Skills and Experience: Strong proficiency in Azure Databricks and big data technologies (Apache Spark, Kafka, Event Hub). Deep understanding of data modeling, data lakes, batch and real-time/streaming data processing. Proven experience with high-volume data pipeline orchestration and ETL/ELT workflows.
Experience designing and implementing data lakes, data warehouses, and lakehouse architectures. Proven experience in designing and implementing data visualization solutions for actionable insights. Strong understanding of data integration patterns, APIs, and message streaming (e.g., Event Hub, Kafka). Experience with metadata management and data quality frameworks. Excellent problem-solving skills and the ability to translate business needs into technical solutions. Experience with structured and unstructured data ingestion, transformation, and processing at scale. Excellent communication, documentation, and stakeholder management skills. Preferred Qualifications: Familiarity with lakehouse architectures using Delta Lake. Knowledge of manufacturing/energy domain-specific standards and protocols. Experience with IoT data and time-series analysis. Knowledge of data governance, security, and compliance best practices.
Posted 3 weeks ago
5.0 - 8.0 years
15 - 20 Lacs
Pune
Work from Office
Critical Skills to Possess: Expertise in data ingestion, data processing and analytical pipelines for big data, relational databases, and data warehouse solutions Hands-on experience with Agile software development Experience in designing and hands-on development in cloud-based analytics solutions. Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. Designing and building of data pipelines using API ingestion and streaming ingestion methods. Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential. Thorough understanding of Azure and AWS cloud infrastructure offerings. Expertise in Azure Databricks, Azure Stream Analytics, Power BI is desirable Knowledge of SAP and BW/BPC is desirable. Expertise in Python, Scala, SQL is desirable Experience developing security models. Preferred Qualifications: BS degree in Computer Science or Engineering or equivalent experience Roles and Responsibilities: Design, develop, and deploy data pipelines and ETL processes using Azure Data Factory. Implement data integration solutions, ensuring data flows efficiently and reliably between various data sources and destinations. Collaborate with data architects and analysts to understand data requirements and translate them into technical specifications. Build and maintain scalable and optimized data storage solutions using Azure Data Lake Storage, Azure SQL Data Warehouse, and other relevant Azure services. Develop and manage data transformation and cleansing processes to ensure data quality and accuracy. Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner. Optimize data pipelines for performance, cost, and scalability.
Posted 3 weeks ago
5.0 - 7.0 years
15 - 20 Lacs
Pune
Work from Office
Critical Skills to Possess:
- Advanced working knowledge of and experience with relational and non-relational databases.
- Advanced working knowledge of and experience with API data providers.
- Experience building and optimizing big data pipelines, architectures, and datasets.
- Strong analytic skills for working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks, using Spark to develop ETL pipelines.
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools such as Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services.
- Azure DevOps experience deploying data pipelines through CI/CD.

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience.

Roles and Responsibilities:
- Review and analyze structured, semi-structured, and unstructured data sources for quality, completeness, and business value, with strong attention to detail.
- Design, architect, implement, and test rapid prototypes that demonstrate the value of the data, and present them to diverse audiences.
- Participate in early-stage design and feature-definition activities.
- Implement robust, reusable, and scalable data pipelines using the Microsoft and Databricks stack.
- Collaborate with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- Communicate complex data insights effectively to non-technical stakeholders.
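The semi-structured-data work described above often comes down to flattening nested records into tabular rows. Here is a minimal, stdlib-only Python sketch; the event schema (`device_id`, `readings`) is hypothetical, and in a Databricks pipeline the same flattening would typically be expressed with Spark transformations instead:

```python
import json

def flatten_events(payload: str) -> list:
    """Flatten nested JSON events into one tabular row per reading."""
    rows = []
    for rec in json.loads(payload):
        base = {"device_id": rec["device_id"]}
        for reading in rec.get("readings", []):  # one output row per nested reading
            rows.append({**base, "ts": reading["ts"], "value": reading["value"]})
    return rows

payload = json.dumps([
    {"device_id": "d1", "readings": [{"ts": 1, "value": 0.5}, {"ts": 2, "value": 0.7}]},
    {"device_id": "d2", "readings": []},  # devices with no readings produce no rows
])
rows = flatten_events(payload)
```

Reviewing a source for "quality, completeness, and business value" usually starts exactly here: flatten it, count what survives, and see whether the rows support the intended analysis.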
Posted 3 weeks ago
7.0 - 10.0 years
15 - 30 Lacs
Pune, Chennai
Work from Office
Experience: 7-10 years. Skills: SSIS, ETL, SQL, Azure Synapse, ADF. Locations: Pune, Chennai.
Posted 3 weeks ago
8.0 - 10.0 years
10 - 20 Lacs
Pune
Remote
Job Summary: We are seeking an experienced Azure Data Governance Specialist to design, implement, and manage data governance frameworks and infrastructure across Azure-based platforms. The ideal candidate will ensure enterprise data is high-quality, secure, compliant, and aligned with business and regulatory requirements. This role combines deep technical expertise in Azure with a strong understanding of data governance principles, MDM, and data quality management.

Key Responsibilities:
- Data Governance & Compliance: Design and enforce data governance policies, standards, and frameworks aligned with enterprise objectives and compliance requirements (e.g., GDPR, HIPAA).
- Master Data Management (MDM): Implement and manage MDM strategies and solutions within the Azure ecosystem to ensure consistency, accuracy, and accountability of key business data.
- Azure Data Architecture: Develop and maintain scalable data architecture on Azure (e.g., Azure Data Lake, Synapse, Purview, Alation, Anomalo) to support governance needs.
- Tooling & Automation: Deploy and manage Azure-native data governance tools such as Azure Purview, Microsoft Fabric, and Data Factory to classify, catalog, and monitor data assets, along with third-party tools such as Alation.
- Data Quality (DQ): Lead and contribute to data quality forums, establish DQ metrics, and integrate DQ checks and dashboards within Azure platforms.
- Security & Access Management: Collaborate with security teams to implement data security measures, role-based access controls, and data encryption in accordance with Azure best practices.
- Technical Leadership: Guide teams in best practices for designing data pipelines, metadata management, and lineage tracking with Azure tooling.
- Continuous Improvement: Drive improvements in data management processes and tooling to enhance governance efficiency and compliance posture.
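The DQ metrics mentioned above, such as completeness and uniqueness, can be sketched in plain Python. The column names are illustrative; a production version would typically run as a pipeline step feeding a DQ dashboard rather than ad-hoc code:

```python
def dq_metrics(rows: list, key: str) -> dict:
    """Compute simple data-quality metrics: per-column completeness and key uniqueness."""
    total = len(rows)
    columns = {c for row in rows for c in row}
    # Completeness: fraction of rows where the column is present and non-empty.
    completeness = {
        c: sum(1 for r in rows if r.get(c) not in (None, "")) / total
        for c in columns
    }
    # Uniqueness of the business key: 1.0 means no duplicate keys.
    keys = [r.get(key) for r in rows]
    uniqueness = len(set(keys)) / total
    return {"completeness": completeness, "uniqueness": uniqueness}

sample = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": ""},   # empty value lowers 'name' completeness
    {"id": 2, "name": "c"},  # duplicate key lowers uniqueness
]
metrics = dq_metrics(sample, key="id")
```

Thresholds on metrics like these (e.g., completeness below 95% blocks promotion to the curated zone) are a common way to turn governance policy into an enforceable pipeline gate.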
- Mentorship & Collaboration: Provide technical mentorship to data engineers and analysts, promoting data stewardship and governance awareness across the organization.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience: 8+ years in data infrastructure and governance, with 3+ years focused on Azure data services and tools.
- Technical Skills: Proficiency with data governance tools (Alation, Purview, Synapse, Data Factory, Azure SQL, etc.); strong understanding of data modeling (conceptual, logical, and physical models); experience with programming languages such as Python, C#, or Java; in-depth knowledge of SQL and metadata management.
- Leadership: Proven experience leading or influencing cross-functional teams in data governance and architecture initiatives.
- Certifications (preferred): Azure Data Engineer Associate, Azure Solutions Architect Expert, or Azure Purview-related certifications.
Posted 4 weeks ago
4.0 - 8.0 years
0 - 1 Lacs
Hyderabad, Navi Mumbai, Pune
Work from Office
Key Responsibilities:
- Design, develop, and deploy interactive dashboards and visualizations using TIBCO Spotfire.
- Work with stakeholders to gather business requirements and translate them into scalable BI solutions.
- Optimize Spotfire performance and apply best practices in visualization and data storytelling.
- Integrate data from multiple sources such as SQL databases, APIs, Excel, SAP, or cloud platforms.
- Implement advanced analytics using IronPython scripting, data functions, and R/statistical integration.
- Conduct data profiling, cleansing, and validation to ensure accuracy and consistency.
- Support end users with training, troubleshooting, and dashboard enhancements.

Must-Have Skills:
- 5-8 years of experience in BI and data visualization.
- Minimum 4 years of hands-on experience with TIBCO Spotfire, including custom expressions and calculated columns.
- Strong knowledge of data modeling, ETL processes, and SQL scripting.
- Expertise in IronPython scripting for interactivity and automation within Spotfire.
- Experience working with large datasets and performance-tuning visualizations.

Good to Have:
- Experience with R, Python, or Statistica for advanced analytics in Spotfire.
- Familiarity with cloud-based data platforms (AWS Redshift, Snowflake, Azure Synapse).
- Understanding of data governance, metadata management, and access controls.
- Exposure to other BI tools such as Tableau, Power BI, or QlikView.
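The data-profiling and validation duty listed above can be illustrated with a small, self-contained Python sketch of the kind of logic a data function might wrap; the function name and the sample column are hypothetical, and inside a BI tool the input would be mapped to an actual document column:

```python
def profile_column(values: list) -> dict:
    """Profile one column: row count, nulls, distinct values, and numeric range."""
    non_null = [v for v in values if v is not None]
    numeric = [v for v in non_null if isinstance(v, (int, float))]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),        # missing entries to investigate
        "distinct": len(set(non_null)),              # cardinality check
        "min": min(numeric) if numeric else None,    # numeric range, if applicable
        "max": max(numeric) if numeric else None,
    }

profile = profile_column([10, 20, None, 20, 35])
```

Running a profile like this before building a dashboard surfaces nulls, duplicates, and out-of-range values early, which is usually cheaper than debugging a misleading visualization afterward.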
Posted 4 weeks ago