
1265 Azure Databricks Jobs - Page 12

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 15.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Overview

As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, driving a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered leader of a team of platform engineers who build platform products for platform and cost optimization, build tools for Platform Ops and DataOps on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As leader of the Platform Engineering team, you will help manage the platform governance team, which builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities

Active contributor to cost optimization of platforms and services. Manage and scale Azure data platforms to support new product launches and drive platform stability and observability across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platforms for cost and performance. Responsible for implementing best practices around systems integration, security, performance, and platform management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for platforms and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications

10+ years of overall technology experience, including at least 4+ years of hands-on software development, program management, and advanced analytics. 4+ years of experience with Power BI, Tableau, data warehousing, and data analytics tools. 4+ years of experience in platform optimization and performance tuning. Experience in managing multiple teams and coordinating with different stakeholders to implement the team's vision. Fluent with Azure cloud services; Azure certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building semantic models. Proficient in DAX queries, Copilot, and AI skills. Experience building/operating highly available, distributed systems for data visualization. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Knowledge of Azure Data Factory and Azure Databricks. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with augmented analytics tools (such as ThoughtSpot, Tellius) is a plus.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

25 - 27 Lacs

Hyderabad

Work from Office

Overview

We are seeking a highly skilled and experienced Azure Data Engineer to join our dynamic team. In this critical role, you will be responsible for designing, developing, and maintaining robust and scalable data solutions on the Microsoft Azure platform. You will work closely with data scientists, analysts, and business stakeholders to translate business requirements into effective data pipelines and data models.

Responsibilities

Design, develop, and implement data pipelines and ETL/ELT processes using Azure Data Factory, Azure Databricks, and other relevant Azure services (a minimal sketch follows below). Develop and maintain data lakes and data warehouses on Azure, including Azure Data Lake Storage Gen2 and Azure Synapse Analytics. Build and optimize data models for data warehousing, data marts, and data lakes. Develop and implement data quality checks and data governance processes. Troubleshoot and resolve data-related issues. Collaborate with data scientists and analysts to support data exploration and analysis. Stay current with the latest advancements in cloud computing and data engineering technologies. Participate in all phases of the software development lifecycle, from requirements gathering to deployment and maintenance.

Qualifications

6+ years of experience in data engineering, with at least 3 years of experience working with Azure data services. Strong proficiency in SQL, Python, and other relevant programming languages. Experience with data warehousing and data lake architectures. Experience with ETL/ELT tools and technologies, such as Azure Data Factory, Azure Databricks, and Apache Spark. Experience with data modeling and data warehousing concepts. Experience with data quality and data governance best practices. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Experience with Agile development methodologies. Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred). Relevant Azure certifications (e.g., Azure Data Engineer Associate) are a plus.
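To illustrate the kind of Databricks ELT step this posting describes, here is a minimal PySpark sketch; it is not from the employer, and the storage account, container, and table names are hypothetical.

# Minimal ELT sketch: read raw CSV from ADLS Gen2, cleanse, write a Delta table.
# Storage account, container, and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2024/")
)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .filter(F.col("amount").isNotNull())
)

# Delta is the default table format on Databricks
cleaned.write.mode("overwrite").saveAsTable("curated.sales_orders")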

Posted 2 weeks ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Overview

PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation, unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. The team also increases awareness about available data and democratizes access to it across the company.

As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Ideally, the candidate is flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending on the coverage requirements of the job. The candidate can work with their immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements.

Responsibilities

Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential. Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications

8+ years of overall technology experience, including at least 4+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services; Azure certification is a plus. Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (a hand-rolled example of such checks follows below). Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). BA/BS in Computer Science, Math, Physics, or other technical fields. The candidate must be flexible to work an alternative work schedule, either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending on the product and project coverage requirements of the job. Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor.

Skills, Abilities, Knowledge

Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager, comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment.
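As an illustration of the data-quality checks that tools like Deequ and Great Expectations formalize, here is a hand-rolled PySpark sketch; it is not the employer's code, and the table and column names are hypothetical.

# Hand-rolled data-quality checks in the spirit of Deequ / Great Expectations:
# completeness, uniqueness, and range assertions on a hypothetical table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.sales_orders")

total = df.count()
checks = {
    "order_id_not_null": df.filter(F.col("order_id").isNull()).count() == 0,
    "order_id_unique": df.select("order_id").distinct().count() == total,
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data-quality checks failed: {failed}")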

Posted 2 weeks ago

Apply

5.0 - 10.0 years

19 - 25 Lacs

Hyderabad

Work from Office

Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency (a freshness-check sketch follows below). Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
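One concrete form the "real-time data observability" duty can take is a table-freshness check against an SLA. A minimal sketch, assuming a hypothetical curated table with a load_ts column and a 4-hour SLA:

# Freshness (SLA) check sketch: alert when a table's newest row is too old.
# Table name, timestamp column, and the 4-hour SLA are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# age of the newest row in seconds, computed inside Spark to avoid
# timezone mismatches between the driver and the cluster
age_sec = (
    spark.table("curated.sales_orders")
    .agg(
        (F.unix_timestamp(F.current_timestamp())
         - F.unix_timestamp(F.max("load_ts"))).alias("age_sec")
    )
    .collect()[0]["age_sec"]
)

SLA_SECONDS = 4 * 3600  # hypothetical 4-hour freshness SLA
if age_sec is None or age_sec > SLA_SECONDS:
    # in production this would raise an alert or incident, not just an exception
    raise RuntimeError(f"Freshness SLA breached: newest row is {age_sec} seconds old")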

Posted 2 weeks ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Overview

We are seeking a skilled and proactive business analyst with expertise in Azure Data Engineering to join our dynamic team. In this role, you will bridge the gap between business needs and technical solutions, leveraging your analytical skills and Azure platform knowledge to design and implement robust data solutions. You will collaborate closely with stakeholders to gather and translate requirements, develop data pipelines, and ensure data quality and governance. This position requires a strong understanding of Azure services, data modeling, and ETL processes, along with the ability to thrive in a fast-paced, evolving environment.

Responsibilities

Collaborate with stakeholders to understand business needs and translate them into technical requirements. Design, develop, and implement data solutions using Azure Data Engineering technologies. Analyze complex data sets to identify trends, patterns, and insights that drive business decisions. Create and maintain detailed documentation of business requirements, data models, and data flows. Work in an environment where requirements are not always clearly defined, demonstrating flexibility and adaptability. Conduct data quality assessments and implement data governance practices. Provide training and support to end-users on data tools and solutions. Continuously monitor and optimize data processes for efficiency and performance.

Qualifications

Minimum of 2-4 years of experience as a data analyst with hands-on experience in Azure Data Engineering. Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services. Strong analytical and problem-solving skills with the ability to work in a fast-paced, ambiguous environment. Excellent communication and interpersonal skills to effectively collaborate with cross-functional teams. Experience with data modeling, ETL processes, and data warehousing. Knowledge of data governance and data quality best practices. Ability to manage multiple projects and priorities simultaneously.

Preferred Skills

Experience with other cloud platforms and data engineering tools. Certification in Azure Data Engineering or related fields.

Posted 2 weeks ago

Apply

12.0 - 17.0 years

19 - 22 Lacs

Hyderabad

Work from Office

Overview

Seeking a Manager, Data Operations, to support our growing data organization. In this role, you will play a key role in maintaining data pipelines and corresponding platforms (on-prem and cloud) while collaborating with global teams on DataOps initiatives. Manage the day-to-day operations of data pipelines, ensuring governance, reliability, and performance optimization on Microsoft Azure. This role requires hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, real-time streaming architectures, and DataOps methodologies. Ensure availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Support DataOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Contribute to the development of governance models and execution roadmaps to optimize efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to enhance enterprise-wide data operations. Collaborate on building and supporting next-generation Data & Analytics platforms while fostering an agile and high-performing DataOps team. Support the adoption of Data & Analytics technology transformations, ensuring full sustainment capabilities and automation for proactive issue identification and resolution. Partner with cross-functional teams to drive process improvements, best practices, and operational excellence within DataOps.

Responsibilities

Support the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Assist in managing end-to-end data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Ensure seamless batch, real-time, and streaming data processing while focusing on high availability and fault tolerance. Contribute to DataOps automation initiatives, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps, Terraform, and Infrastructure-as-Code (IaC); a unit-test sketch follows below. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to enable data-driven decision-making. Work with IT, data stewards, and compliance teams to align DataOps practices with regulatory and security requirements. Support data operations and sustainment efforts, including testing and monitoring processes to support global products and projects. Assist in data capture, storage, integration, governance, and analytics initiatives, collaborating with cross-functional teams. Manage day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to align data platform capabilities with business needs. Participate in the Agile work intake and management process to support execution excellence for data platform teams. Collaborate with cross-functional teams to troubleshoot and resolve issues related to cloud infrastructure and data services. Assist in developing and automating operational policies and procedures to improve efficiency and service resilience. Support incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric environment, advocating for operational excellence and continuous service improvements. Contribute to building a collaborative, high-performing team culture focused on automation and efficiency in DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity while meeting business goals. Leverage technical expertise in cloud and data operations to improve service reliability and scalability.

Qualifications

12+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 12+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 8+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. 5+ years of experience in a management or lead role, with a focus on DataOps execution and delivery. Hands-on experience with Azure Data Factory (ADF) for orchestrating data pipelines and ETL workflows. Proficiency in Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Azure SQL Database. Familiarity with Azure Databricks for large-scale data processing (basic troubleshooting or support scope is sufficient if not engineering-focused). Exposure to cloud environments (AWS, Azure, GCP) and understanding of CI/CD pipelines for data operations. Knowledge of structured and semi-structured data storage formats (e.g., Parquet, JSON, Delta). Excellent communication skills, with the ability to empathize with stakeholders and articulate technical concepts to non-technical audiences. Strong problem-solving abilities, prioritizing customer needs and advocating for operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational excellence. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience in supporting mission-critical solutions in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) practices, such as automated issue remediation and scalability improvements. Experience driving operational excellence in complex, high-availability data environments. Ability to collaborate across teams, fostering strong relationships with business and IT stakeholders. Experience in data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong analytical and strategic thinking skills, with the ability to execute plans effectively and drive results. Proven ability to work in a fast-changing, complex environment, adapting to shifting priorities while maintaining productivity.
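The "CI/CD for data pipelines, automated testing" item typically means unit-testing transformations so a build pipeline can gate deployments. A minimal pytest sketch, with a hypothetical transform and schema:

# Unit test for a data transformation, runnable in a CI pipeline.
# The transform function and schema are hypothetical.
import pytest
from pyspark.sql import SparkSession

def dedupe_orders(df):
    """Example transform under test: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "a"), (1, "a"), (2, "b")], ["order_id", "payload"]
    )
    assert dedupe_orders(df).count() == 2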

Posted 2 weeks ago

Apply

6.0 - 11.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Overview

As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities

Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.

Qualifications

6+ years of overall technology experience, including at least 4+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience in SQL optimization and performance tuning (a broadcast-join sketch follows below), and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services; Azure certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). BA/BS in Computer Science, Math, Physics, or other technical fields.
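On the Spark side, performance tuning of the kind this posting asks for often comes down to join strategy. A small sketch of one common technique, broadcasting a small lookup table to avoid a shuffle join; the table names are hypothetical:

# Broadcast-join sketch: ship a small dimension table to every executor,
# turning a costly shuffle (sort-merge) join into a local hash join.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.table("curated.sales_orders")        # large fact table
regions = spark.table("reference.region_lookup")   # small lookup table

enriched = facts.join(broadcast(regions), on="region_id", how="left")
enriched.explain()  # verify a BroadcastHashJoin appears in the physical plan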

Posted 2 weeks ago

Apply

5.0 - 10.0 years

17 - 20 Lacs

Hyderabad

Work from Office

Overview

Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives. Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred. Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence. Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy. Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency. Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments. Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes. Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture. Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution. Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities

Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics. Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability. Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance. Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform. Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making. Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams. Support data operations and sustainment activities, including testing and monitoring processes for global products and projects. Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams. Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements. Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs. Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams. Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams. Support the development and automation of operational policies and procedures, improving efficiency and resilience. Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies. Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery. Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps. Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals. Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications

5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred. 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance. 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams. Experience in a lead or senior support role, with a focus on DataOps execution and delivery. Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences. Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements. Customer-focused mindset, ensuring high-quality service delivery and operational efficiency. Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment. Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation. Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements. Understanding of operational excellence in complex, high-availability data environments. Ability to collaborate across teams, building strong relationships with business and IT stakeholders. Basic understanding of data management concepts, including master data management, data governance, and analytics. Knowledge of data acquisition, data catalogs, data standards, and data management tools. Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results. Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 2 weeks ago

Apply

5.0 - 6.0 years

12 - 15 Lacs

Ahmedabad

Work from Office

We are hiring a Sr. Azure Data Engineer with 5 to 6 years of experience! You will handle full project delivery, client communication, and short-term travel to Nairobi. Bonus + travel perks; Provident Fund included. Ready to lead and explore global projects? Apply now!

Posted 2 weeks ago

Apply

6.0 - 8.0 years

9 - 13 Lacs

Chennai

Work from Office

Job Title: Data Engineering Lead. Experience: 6-8 years. Location: Chennai. Skills: ADF (Azure Data Factory), Azure Databricks, Azure Synapse, strong ETL experience, Power BI.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad/Secunderabad

Hybrid

Job Objective

We're looking for a skilled and passionate Data Engineer to build robust, scalable data platforms using cutting-edge technologies. If you have expertise in Databricks, Python, PySpark, Azure Data Factory, Azure Synapse, SQL Server, and a deep understanding of data modeling, orchestration, and pipeline development, this is your opportunity to make a real impact. You'll thrive in our cloud-first, innovation-driven environment, designing and optimizing end-to-end data workflows that drive meaningful business outcomes (a Delta Lake upsert sketch follows below). If you're committed to high performance, clean data architecture, and continuous learning, we want to hear from you!

Required Qualifications

Education: BE, ME/MTech, MCA, MSc, MBA, or equivalent industry experience. Experience: 5 to 10 years working with data engineering technologies (Databricks, Azure, Python, SQL Server, PySpark, Azure Data Factory, Synapse, Delta Lake, Git, CI/CD tech stack, MSBI, etc.).

Preferred Qualifications & Skills

Must-Have Skills: Expertise in relational & multi-dimensional database architectures. Proficiency in Microsoft BI tools (SQL Server SSRS, SSAS, SSIS), Power BI, and SharePoint. Strong experience in Power BI, MDX, SSAS, SSIS, SSRS, tabular models & DAX queries. Deep understanding of SQL Server tabular model & multidimensional database design. Excellent SQL-based data analysis skills. Strong hands-on experience with Azure Data Factory, Databricks, and PySpark/Python.

Nice-to-Have Skills: Exposure to AWS or GCP. Experience with lakehouse architecture, real-time streaming (Kafka/Event Hubs), and Infrastructure as Code (Terraform/ARM). Familiarity with Cognos, Qlik, Tableau, MDM, DQ, and data migration. MSBI, Power BI, or Azure certifications.
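A minimal sketch of the Delta Lake upsert (MERGE) pattern central to the Databricks pipeline work described here; the paths, table, and column names are hypothetical.

# Delta Lake MERGE (upsert) sketch: apply a batch of changed records to a
# curated table. Paths, table, and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("parquet").load("/mnt/landing/customers/")
target = DeltaTable.forName(spark, "curated.customers")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()     # update existing customers
    .whenNotMatchedInsertAll()  # insert new ones
    .execute()
)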

Posted 2 weeks ago

Apply

8.0 - 10.0 years

32 - 35 Lacs

Hyderabad

Work from Office

Position Summary

MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future-ready. The center is integral to Global Technology and Operations, with a focus on protecting and building MetLife IP, promoting reusability, and driving experimentation and innovation. The Data & Analytics team in India mirrors the global D&A team, with an objective to drive business value through trusted data, scaled capabilities, and actionable insights.

Role Value Proposition

MetLife Global Capability Center (MGCC) is looking for a Senior Cloud Data Engineer responsible for building ETL/ELT, data warehousing, and reusable components using Azure, Databricks, and Spark. He/she will collaborate with business systems analysts, technical leads, project managers, and business/operations teams in building data enablement solutions across different LOBs and use cases.

Job Responsibilities

Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL) processes. Develop metadata- and configuration-based reusable frameworks to reduce development effort (a config-driven sketch follows below). Develop quality code with integral performance optimizations in place right at the development stage. Collaborate with the global team in driving the delivery of projects and recommend development and performance improvements. Extensive experience with various database types and the knowledge to leverage the right one for the need. Strong understanding of data tools and the ability to leverage them to understand the data and generate insights. Hands-on experience in building/designing at-scale data lakes, data warehouses, and data stores for analytics consumption, on-prem and cloud (real-time as well as batch use cases). Ability to interact with business analysts and functional analysts in gathering requirements and implementing ETL solutions.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor's degree in computer science, engineering, or a related discipline. Experience (in years): 8 to 10 years of working experience on Azure Cloud using Databricks or Synapse. Technical skills: Experience in transforming data using Python, Spark, or Scala. Technical depth in cloud architecture frameworks, lakehouse architecture, and OneLake solutions. Experience in implementing data ingestion and curation processes on Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python), and Databricks. Experience writing cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos DB SQL API loading and consumption optimizations. Scripting experience, primarily in shell/bash/PowerShell, would be desirable. Experience in writing SQL and performing data analysis for data anomaly detection and data quality assurance. Other preferred skills: Expertise in Python and experience writing Azure Functions using Python/Node.js. Experience using Event Hubs for data integrations. Required working knowledge of Azure DevOps pipelines. Self-starter with the ability to adapt to changing business needs.
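A toy sketch of the metadata/configuration-driven framework idea mentioned above: one generic PySpark loader driven by a config list instead of per-source code. The config entries, paths, and table names are hypothetical.

# Config-driven ingestion sketch: the same loader handles every source,
# so adding a source means adding a config entry, not new code.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

sources = [
    {"name": "claims",   "format": "csv",  "path": "/mnt/raw/claims/",   "target": "bronze.claims"},
    {"name": "policies", "format": "json", "path": "/mnt/raw/policies/", "target": "bronze.policies"},
]

for src in sources:
    df = (
        spark.read.format(src["format"])
        .option("header", "true")  # used by the CSV reader, ignored by others
        .load(src["path"])
    )
    df.write.mode("append").saveAsTable(src["target"])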

Posted 2 weeks ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Chennai

Remote

Role & Responsibilities

Role: Azure Databricks Data Engineer. Location: Remote (1 week onboarding in Chennai). Experience: 5-6 years (5 years relevant in data engineering).

Role Summary: The offshore technical resource will support ongoing development and maintenance activities by delivering high-quality technical solutions.

Key Responsibilities: Develop, test, and deploy technical components as per the specifications provided by the onshore team. Provide timely resolution of technical issues and production support tickets. Participate in code reviews, ensuring adherence to coding standards and best practices. Contribute to system integrations, data migrations, and configuration tasks as needed. Document technical specifications, procedures, and support guides. Collaborate with QA teams to support testing activities and defect resolution. Maintain effective communication with onshore leads to align on priorities and deliverables.

Qualifications: Proficiency in Azure Databricks (should be very strong), Spark, SQL, and Python for data engineering and remediation tasks. Strong problem-solving and debugging skills. Good verbal and written communication skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

17 - 32 Lacs

Kochi

Hybrid

We are conducting a weekday walk-in drive in Kochi from 15th July to 21st July 2025 (weekdays only). Venue: Neudesic, an IBM Company, 3rd Floor, Block A, Prestige Cyber Green Phase 1, Smart City, Kakkanad, Ernakulam, Kerala 682030. Time: 2 PM - 6 PM. Date: 28 June 2025, Saturday. Experience: 5+ years. Mode of interview: in-person. Only for candidates who can join within 30 days. Azure Data Engineer skills required: SQL, Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Azure Databricks, Azure Synapse, NoSQL DBs, data warehouses, GenAI (desirable). Strong data engineering skills in data cleansing, transformation, enrichment, semantic analytics, real-time analytics, ML/DL (desirable), streaming, data modeling, and data management.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Noida

Work from Office

8+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in: Amazon S3 for scalable data storage; AWS Glue for ETL and serverless data integration; DataSync and EMR; Redshift for data warehousing and analytics. Proficiency in SQL, Python, or PySpark for data processing. Experience with data modeling, partitioning strategies, and performance optimization (a partitioned-write sketch follows below). Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows. Strong understanding of data lake and data warehouse architectures. Excellent problem-solving and communication skills.

Mandatory Competencies: Behavioral - Communication; ETL - AWS Glue; Big Data - PySpark; Cloud - AWS (S3, S3 Glacier, EBS); Cloud - AWS (TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift); Database Programming - SQL; Programming Language - Python (Python Shell); Cloud - Azure (Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight).
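A brief sketch of the partitioning strategy this posting mentions: writing Parquet to S3 partitioned by a date column so downstream Glue, Athena, or Redshift Spectrum queries can prune partitions. The bucket and column names are hypothetical.

# Partitioned Parquet write sketch: one S3 prefix per day enables
# partition pruning in downstream queries. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.json("s3://example-raw-bucket/events/")

(
    events
    .withColumn("dt", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("dt")  # produces .../events/dt=2024-01-01/ style prefixes
    .parquet("s3://example-curated-bucket/events/")
)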

Posted 2 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Remote

Proven experience as a Data Engineer, BI Analyst, or similar role. Strong proficiency in SQL and experience with database management systems (e.g., Fabric, Synapse, SQL Server). Experience with ETL tools and data integration platforms.

Required candidate profile: Proficiency in data visualization tools (Power BI). Knowledge of cloud platforms (Azure), data warehousing concepts (semantic modeling), and technologies (e.g., Snowflake, Redshift).

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

The project is expected to last for 6 months with a monthly rate of 1.60 Lac. The ideal candidate should have 4-7 years of experience; the work location will be Bangalore, with hybrid working options available. As a candidate, you are required to have strong proficiency in Python, LLMs, LangChain, prompt engineering, and related GenAI technologies. Additionally, you should be proficient with Azure Databricks and possess strong analytical, problem-solving, and stakeholder communication skills. A solid understanding of data governance frameworks, compliance, and internal controls is essential. Your experience should include data quality rule development, profiling, and implementation, as well as familiarity with Azure Data Services such as Data Lake, Synapse, and Blob Storage. Preferred qualifications for this role include experience supporting AI/ML pipelines, particularly with GenAI or LLM-based models. Proficiency in Python, PySpark, SQL, and Delta Lake architecture is desired, along with hands-on experience in Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics. A background in data engineering with strong expertise in Databricks would be beneficial for this position.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The Data Analytics Engineer role at Rightpoint involves being a crucial part of client projects to develop and deliver decisioning intelligence solutions. Working collaboratively with other team members and various business and technical entities on the client side is a key aspect of this role. As a member of a modern data team, your primary responsibility will be to bridge the gap between enterprise data engineers and business-focused data and visualization analysts. This involves transforming raw data into clean, organized, and reusable datasets to facilitate effective analysis and decisioning intelligence data products.

Key Responsibilities:
- Design, develop, and maintain clean, scalable data models to support analytics and business intelligence needs. Define rules and requirements for the data to serve business analysis objectives.
- Collaborate with data analysts and business stakeholders to define data requirements, ensure data consistency across platforms, and promote self-service analytics.
- Build, optimize, and document transformed pipelines into visualization and analysis environments to ensure high data quality and integrity.
- Implement data transformation best practices using modern tools like dbt, SQL, and cloud data warehouses (e.g., Azure Synapse, BigQuery, Azure Databricks).
- Monitor and troubleshoot data quality issues, ensuring accuracy, completeness, and reliability.
- Define and maintain data quality metrics and data formats, and adopt automated methods to cleanse and improve data quality.
- Optimize data performance to ensure query efficiency for large datasets.
- Establish and maintain analytics platform best practices for the team, including version control, data unit testing, CI/CD, and documentation.
- Collaborate with other team members, including data engineers, business and visualization analysts, and data scientists, to align data assets with business analysis objectives.
- Work closely with data engineering teams to integrate new data sources into the data lake and optimize performance.
- Act as a consultant within cross-functional teams to understand business needs and develop appropriate data solutions.
- Demonstrate strong communication skills, both written and verbal, and exhibit professionalism, conciseness, and effectiveness.
- Take initiative, be proactive, anticipate needs, and complete projects comprehensively.
- Exhibit a willingness to continuously learn, problem-solve, and assist others.

Desired Qualifications:
- Strong knowledge of SQL and Python.
- Familiarity with cloud platforms like Azure, Azure Databricks, and Google BigQuery.
- Understanding of schema design and data modeling methodologies.
- Hands-on experience with dbt for data transformation and modeling.
- Experience with version control systems like Git and CI/CD workflows.
- Passion for continuous improvement, learning, and applying new technologies to everyday activities.
- Ability to translate technical concepts for non-technical stakeholders.
- Analytical mindset to address business challenges through data design.
- Bachelor's or master's degree in computer science, data science, engineering, or a related field.
- Strong problem-solving skills and attention to detail.

By joining Rightpoint, you will have the opportunity to work with cutting-edge business and data technologies in a collaborative and innovative environment. A competitive salary and benefits package, along with career growth opportunities in a data-driven organization, are some of the perks of working at Rightpoint.

If you are passionate about data and enjoy creating efficient, scalable data solutions, we would love to hear from you! Benefits and perks at Rightpoint include 30 paid leaves, public holidays, a casual and open office environment, a flexible work schedule, family medical insurance, life insurance, accidental insurance, regular cultural & social events, and continuous training, certifications, and learning opportunities. Rightpoint is committed to bringing people together from diverse backgrounds and experiences to create phenomenal work, making it an inclusive and welcoming workplace for all.

EEO Statement: Rightpoint is an equal opportunity employer and is committed to providing a workplace that is free from any form of discrimination.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Engineer specializing in supply chain applications, you will play a crucial role in the Supply Chain Analytics team at NovintiX, based in Coimbatore, India. Your primary responsibility will be to design, develop, and optimize scalable data solutions that support various aspects of logistics, procurement, and demand planning.

Your key responsibilities will include building and enhancing data pipelines for inventory, shipping, and procurement data; integrating data from ERP, PLM, and third-party sources; and creating APIs to facilitate seamless data exchange. Additionally, you will be tasked with designing and maintaining enterprise-grade data lakes and warehouses while ensuring high standards of data quality, integrity, and security. Collaborating with stakeholders, you will be involved in developing reporting dashboards using tools like Power BI, Tableau, or QlikSense to support supply chain decision-making through data-driven insights. You will also work on building data models and algorithms for demand forecasting and logistics optimization, leveraging ML libraries and concepts for predictive analysis. Your role will involve cross-functional collaboration with supply chain, logistics, and IT teams, translating complex technical solutions into business language to drive operational efficiency. Implementing robust data governance frameworks and ensuring data compliance and audit readiness will be essential aspects of your job.

To qualify for this position, you should have at least 7 years of experience in data engineering, a Bachelor's degree in Computer Science/IT or a related field, and expertise in technologies such as Python, Java, SQL, Spark SQL, Hadoop, PySpark, NoSQL, Power BI, Tableau, QlikSense, Azure Data Factory, Azure Databricks, and AWS. Strong collaboration and communication skills and experience in fast-paced, agile environments are also desired.

This is a full-time position based in Coimbatore, Tamil Nadu, requiring in-person work. If you are passionate about leveraging data to drive supply chain efficiency and are ready to take on this exciting challenge, please send your resume to shanmathi.saravanan@novintix.com before the application deadline on 13/07/2025.

Posted 2 weeks ago

Apply

12.0 - 22.0 years

35 - 100 Lacs

Noida, Hyderabad, Jaipur

Hybrid

Databricks Data Architect. Experience: 12+ years. Location: Mumbai (onsite). Salary: best in industry. Immediate joiners.

We are seeking an experienced Databricks Data Architect with a strong background in designing scalable data platforms in the manufacturing or energy sector. The ideal candidate will have over 10 years of experience in designing and implementing enterprise-grade data solutions, with strong proficiency in Azure Databricks and big data technologies.

Key Responsibilities: Architect and deliver scalable, cloud-native data solutions to support both real-time and batch processing needs (a streaming sketch follows below). Work closely with business and technical stakeholders to understand business requirements and define data strategy, governance, and architecture standards. Ensure data quality, integrity, and security across platforms and systems. Define data models, data integration patterns, and governance frameworks to support analytics use cases. Collaborate with DevOps and engineering teams to ensure robust CI/CD pipelines and deliver production-grade deployments. Define and enforce data architecture standards, frameworks, and best practices across data engineering and analytics teams. Implement data governance, security, and compliance measures, including data cataloguing, access controls, and regulatory adherence. Lead capacity planning and performance tuning efforts to optimize data processing and query performance. Create and maintain architecture documentation, including data flow diagrams, data models, entity-relationship diagrams, system interfaces, etc. Design clear and impactful visualizations to support key analytical objectives.

Required Skills and Experience: Strong proficiency in Azure Databricks and big data technologies (Apache Spark, Kafka, Event Hub). Deep understanding of data modeling, data lakes, and batch and real-time/streaming data processing. Proven experience with high-volume data pipeline orchestration and ETL/ELT workflows. Experience designing and implementing data lakes, data warehouses, and lakehouse architectures. Proven experience in designing and implementing data visualization solutions for actionable insights. Strong understanding of data integration patterns, APIs, and message streaming (e.g., Event Hub, Kafka). Experience with metadata management and data quality frameworks. Excellent problem-solving skills and the ability to translate business needs into technical solutions. Experience with structured and unstructured data ingestion, transformation, and processing at scale. Excellent communication, documentation, and stakeholder management skills.

Preferred Qualifications: Familiarity with lakehouse architectures using Delta Lake. Knowledge of manufacturing/energy domain-specific standards and protocols. Experience with IoT data and time-series analysis. Knowledge of data governance, security, and compliance best practices.
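A minimal Structured Streaming sketch of the real-time pattern described here: Kafka (or Event Hubs via its Kafka-compatible endpoint) into a Delta table. The broker address, topic, and paths are hypothetical.

# Structured Streaming sketch: Kafka source -> Delta sink with checkpointing.
# Broker address, topic, table, and checkpoint path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "telemetry")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before landing
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

(
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")
    .outputMode("append")
    .toTable("bronze.telemetry")
)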

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 45 Lacs

Noida, Gurugram, Bengaluru

Hybrid

9+ years of implementation experience on time-critical production projects following key software development practices. 8+ years of programming experience in Python/Scala. 6+ years of hands-on programming experience in Spark using Scala/Python. 4+ years of hands-on working experience with Azure services such as Azure Databricks, Azure Data Factory, Azure Functions, and Azure App Service. Good knowledge of writing SQL queries. Good knowledge of building REST APIs. Good knowledge of tools like Azure DevOps and GitHub. Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization. Ability to learn modern technologies and be part of fast-paced teams. Proven, excellent analytical and communication skills (both verbal and written). Proficiency with AI-powered development tools such as GitHub Copilot, Amazon CodeWhisperer, Google's Codey (Duet AI), or similar is expected. Candidates should be adept at integrating these tools into their workflows to accelerate development, improve code quality, and enhance delivery velocity, and are expected to proactively leverage AI tools throughout the software development lifecycle to drive faster iteration, reduce manual effort, and boost overall engineering productivity.
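A brief sketch of the PySpark-plus-SQL workflow these requirements describe; the sales.orders table and its columns are hypothetical.

    # Illustrative PySpark + Spark SQL mix; table and columns are made up.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders-analysis").getOrCreate()

    orders = spark.read.table("sales.orders")   # hypothetical existing table
    orders.createOrReplaceTempView("orders")

    top_skus = spark.sql("""
        SELECT sku, SUM(quantity) AS total_qty
        FROM orders
        WHERE order_date >= date_sub(current_date(), 30)
        GROUP BY sku
        ORDER BY total_qty DESC
        LIMIT 10
    """)
    top_skus.show()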

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Pune

Work from Office

Critical Skills to Possess: Expertise in data ingestion, data processing, and analytical pipelines for big data, relational databases, and data warehouse solutions. Hands-on experience with Agile software development. Experience in designing and hands-on development of cloud-based analytics solutions. An expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required. Experience designing and building data pipelines using API ingestion and streaming ingestion methods. Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential. Thorough understanding of Azure and AWS cloud infrastructure offerings. Expertise in Azure Databricks, Azure Stream Analytics, and Power BI is desirable. Knowledge of SAP and BW/BPC is desirable. Expertise in Python, Scala, and SQL is desirable. Experience developing security models.

Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.

Roles and Responsibilities: Design, develop, and deploy data pipelines and ETL processes using Azure Data Factory. Implement data integration solutions, ensuring data flows efficiently and reliably between various data sources and destinations. Collaborate with data architects and analysts to understand data requirements and translate them into technical specifications. Build and maintain scalable, optimized data storage solutions using Azure Data Lake Storage, Azure SQL Data Warehouse, and other relevant Azure services. Develop and manage data transformation and cleansing processes to ensure data quality and accuracy. Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner. Optimize data pipelines for performance, cost, and scalability.
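The "API ingestion" method mentioned above might look like the following minimal Python sketch, which pulls JSON from a hypothetical REST endpoint and lands it in the data lake for downstream processing; the endpoint and lake path are assumptions.

    # Hedged sketch of API ingestion: endpoint and lake path are assumptions.
    import requests
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("api-ingestion").getOrCreate()

    resp = requests.get("https://api.example.com/v1/shipments", timeout=30)
    resp.raise_for_status()
    records = resp.json()                   # assumes the endpoint returns a JSON array

    df = spark.createDataFrame(records)     # schema inferred from the payload
    df.write.mode("append").json("/mnt/raw/shipments/")  # landed for ADF/Synapse to pick up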

Posted 2 weeks ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office

Critical Skills to Possess: Advanced working knowledge of and experience with relational and non-relational databases. Advanced working knowledge of and experience with API data providers. Experience building and optimizing big data pipelines, architectures, and datasets. Strong analytic skills for working with structured and unstructured datasets. Hands-on experience in Azure Databricks, using Spark to develop ETL pipelines. Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages. Strong experience with Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse. Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services. Azure DevOps experience for deploying data pipelines through CI/CD.

Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.

Roles and Responsibilities: You are detail-oriented in reviewing and analyzing structured, semi-structured, and unstructured data sources for quality, completeness, and business value. You design, architect, implement, and test rapid prototypes that demonstrate the value of the data and present them to diverse audiences. You participate in early-stage design and feature definition activities. You are responsible for implementing robust data pipelines using the Microsoft and Databricks stack, and for creating reusable and scalable data pipelines. You are a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products. You have strong communication skills to effectively convey complex data insights to non-technical stakeholders.
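As a small illustration of the data-quality review work described above, the sketch below profiles per-column null counts over a raw JSON feed; the input path is a placeholder.

    # Small profiling sketch: per-column null counts; the input path is a placeholder.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("data-profiling").getOrCreate()

    df = spark.read.option("multiLine", True).json("/mnt/raw/vendor_feed/")

    null_counts = df.select([
        F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls")  # rows where column c is null
        for c in df.columns
    ])
    null_counts.show(truncate=False)
    print(f"rows={df.count()}, columns={len(df.columns)}")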

Posted 2 weeks ago

Apply

7.0 - 10.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office

Experience: 7-10 years. Key skills: SSIS, ETL, SQL, Azure Synapse, ADF. Locations: Pune, Chennai.

Posted 2 weeks ago

Apply