
1265 Azure Databricks Jobs - Page 36

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

DS Key Responsibilities: Combine expertise in mathematics, statistics, computer science, and domain knowledge to create AI/ML models that solve business challenges. Collaborate closely with the AI Technical Manager, GCC petrotechnical professionals, and data engineers to integrate models into the business framework. Identify and frame opportunities to apply advanced analytics, modeling, and related technologies to data to help businesses gain insight and improve decision-making, workflow, and automation. Understand and communicate the value of proposed opportunities with team members and other stakeholders. Identify the data and technology needed to solve identified business challenges. Clean data, then develop and test models (see the sketch below). Establish the lifecycle management process for models. Provide technical mentoring in modeling and analytics technologies, the specifics of the modeling process, and general consulting skills. Drive innovation in AI/ML to enhance capabilities in data-driven decision-making. Align with the team on shared goals and outcomes; recognize others' contributions, work collaboratively, and seek diverse perspectives. Take action to develop self and others beyond the existing skillset. Encourage innovative ideas and adapt to change and changing technologies. Understand and communicate data insights and model behaviors to stakeholders with varying levels of technical expertise. Required Qualifications: Minimum 5 years of experience (5 to 9 years overall) in designing and developing AI/ML models and/or various optimization algorithms. Solid foundation in mathematics, probability, and statistics, with demonstrated depth of knowledge and experience in advanced analytics and data science methodologies (e.g., supervised and unsupervised learning, statistics, data science model development). Proficiency in Python and working knowledge of cloud AI/ML services; Azure Machine Learning and Databricks preferred. Domain knowledge relevant to the energy sector and working knowledge of the Oil and Gas value chain (e.g., upstream, midstream, or downstream) and associated business workflows. Proven ability to frame data science opportunities, leverage standard foundational tools and Azure services to perform exploratory data analysis for data cleaning and discovery, visualize data, and identify actions to reach needed results. Ability to quickly assess current state and apply technical concepts across cross-functional business workflows. Experience driving successful execution of deliverables and accountabilities to meet quality and schedule goals. Ability to translate complex data into actionable insights that drive business value. Demonstrated ability to engage and establish collaborative relationships both inside and outside the immediate workgroup, at various organizational levels, and across functional and geographic boundaries to achieve desired outcomes. Demonstrated ability to adjust behavior based on feedback and to provide feedback to others. Team-oriented mindset with effective communication skills and the ability to work collaboratively. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills.
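
For illustration only, a minimal supervised-learning sketch in Python of the develop-and-test loop this role describes; the synthetic dataset and model choice (scikit-learn's RandomForestClassifier) are stand-ins, not anything the employer specifies:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a cleaned business dataset.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Develop the model, then test it on a held-out split.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```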

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Mumbai

Work from Office

Roles & Responsibilities: Resource must have 5+ years of hands-on experience in Azure cloud development (ADF + Databricks) - mandatory. Strong in Azure SQL; knowledge of Synapse/Analytics is good to have. Experience working on Agile projects and familiarity with Scrum/SAFe ceremonies. Good communication skills, written and verbal; can work directly with the customer. Ready to work in 2nd shift; flexible. Defines, designs, develops, and tests software components/applications using Microsoft Azure: Databricks, ADF, ADL, Hive, Python, Spark SQL, PySpark. Expertise in Azure Databricks, ADF, ADL, Hive, Python, Spark, PySpark. Strong T-SQL skills with experience in Azure SQL DW. Experience handling structured and unstructured datasets. Experience in data modeling and advanced SQL techniques. Experience implementing Azure Data Factory pipelines using the latest technologies and techniques. Good exposure to application development. The candidate should work independently with minimal supervision.

Posted 1 month ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Pune, Bengaluru

Hybrid

Job Role & Responsibilities: Understanding operational needs by collaborating with specialized teams and supporting key business operations; this involves architecting, designing, building, and deploying data systems, pipelines, etc. Designing and implementing agile, scalable, and cost-efficient solutions on cloud data services. Leading a team of developers and running sprint planning and execution to ensure timely deliveries. Technical Skills, Qualification & Experience Required: 9-11 years of experience in cloud data engineering. Experience in Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python. Hands-on experience as a data engineer with Azure Databricks, Data Factory, PySpark, and SQL. Proficient in Azure cloud services; architect and implement ETL and data movement solutions. Bachelor's/Master's degree in Computer Science or a related field. Design and implement data solutions using the medallion architecture, ensuring effective organization and flow of data through the bronze, silver, and gold layers (see the sketch below). Optimize data storage and processing strategies to enhance performance and data accessibility across the stages of the medallion architecture. Collaborate with data engineers and analysts to define data access patterns and establish efficient data pipelines. Develop and oversee data flow strategies to ensure seamless data movement and transformation across environments and stages of the data lifecycle. Migrate data from traditional database systems to the cloud. Strong hands-on experience working with streaming datasets. Building complex notebooks in Databricks to achieve business transformations. Hands-on expertise in data refinement using PySpark and Spark SQL. Familiarity with building datasets using Scala. Familiarity with tools such as Jira and GitHub. Experience leading agile scrum, sprint planning, and review sessions. Good communication and interpersonal skills. Immediate joiners preferred.
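
By way of illustration, a minimal PySpark sketch of the bronze/silver/gold medallion flow this listing describes; all paths and column names (order_id, order_ts, amount) are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw source data as-is.
raw = spark.read.json("/mnt/landing/orders/")
raw.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleanse and conform the bronze data.
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: aggregate into a business-ready table.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_revenue")
```

Each layer is persisted as Delta so downstream consumers read a consistent, queryable table rather than raw files.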

Posted 1 month ago

Apply

4.0 - 9.0 years

0 Lacs

Bengaluru

Work from Office

Required skillset: Experience in Data Platform Support, DevOps, or Operations role. Experience in Tableau, Snowflake, AWS, Informatica Cloud. Familiarity with ITSM practices Proficiency with Jira, CI/CD workflows, and monitoring tools.

Posted 1 month ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Overview: PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation, unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities: Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications: 4+ years of overall technology experience, including at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services; Azure certification is a plus. Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (a minimal quality gate sketch follows below). Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Experience with version control systems like GitHub and deployment & CI tools. Working knowledge of agile development, including DevOps and DataOps concepts. B Tech/BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Strong change manager; comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals.
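
As a rough illustration of the kind of pipeline quality gate that tools like Deequ or Great Expectations formalize, here is a bare-bones PySpark version; the table path and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table to validate before publishing.
df = spark.read.format("delta").load("/mnt/silver/orders")

# Two basic expectations: no null keys, no negative amounts.
null_keys = df.filter(F.col("order_id").isNull()).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

if null_keys or bad_amounts:
    raise ValueError(
        f"Quality gate failed: {null_keys} null keys, "
        f"{bad_amounts} negative amounts"
    )
```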

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark. Required candidate profile: 5+ years of experience in data engineering and big data technologies. Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.). Databricks certification (mandatory).

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

The data architect is responsible for designing, creating, and managing an organization's data architecture. This role is critical in establishing a solid foundation for data management within an organization, ensuring that data is organized, accessible, secure, and aligned with business objectives. The data architect designs data models, warehouses, file systems, and databases, and defines how data will be collected and organized. Responsibilities: Interprets and delivers impactful strategic plans improving data integration, data quality, and data delivery in support of business initiatives and roadmaps. Designs the structure and layout of data systems, including databases, warehouses, and lakes. Selects and designs database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures. Defines and implements the long-term technology strategy and innovation roadmaps across analytics, data engineering, and data platforms. Designs ETL processes from various sources into the organization's data systems. Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards. Manages senior business stakeholders to secure strong engagement and ensures that project delivery aligns with longer-term strategic roadmaps. Simplifies the existing data architecture, delivering reusable services and cost-saving opportunities in line with company policies and standards. Leads and participates in the peer review and quality assurance of project architectural artifacts across the EA group through governance forums. Defines and manages standards, guidelines, and processes to ensure data quality. Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions. Evaluates and recommends emerging technologies for data management, storage, and analytics. Design, create, and implement logical and physical data models for both IT and business solutions to capture the structure, relationships, and constraints of relevant datasets. Build and operationalize complex data solutions, correct problems, apply transformations, and recommend data cleansing/quality solutions. Effectively collaborate and communicate with various stakeholders to understand data and business requirements and translate them into data models. Create entity-relationship diagrams (ERDs), data flow diagrams, and other visualizations to represent data models. Collaborate with database administrators and software engineers to implement and maintain data models in databases, data warehouses, and data lakes. Develop data modeling best practices, and use these standards to identify and resolve data modeling issues and conflicts. Conduct performance tuning and optimization of data models for efficient data access and retrieval. Incorporate core data management competencies, including data governance, data security, and data quality. Job Requirements Education: A bachelor's degree in computer science, data science, engineering, or a related field. Experience: At least five years of relevant experience in the design and implementation of data models for enterprise data warehouse initiatives. Experience leading projects involving data warehousing, data modeling, and data analysis. Design experience in Azure Databricks, PySpark, and Power BI/Tableau. Skills: Proficiency in programming languages such as Java, Python, and C/C++. Proficiency in data science languages/tools such as SQL, R, SAS, or Excel. Proficiency in the design and implementation of modern data architectures and concepts such as cloud services (AWS, Azure, GCP), real-time data distribution (Kafka, Dataflow), and modern data warehouse tools (Snowflake, Databricks). Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata. Understanding of entity-relationship modeling, metadata systems, and data quality tools and techniques. Ability to think strategically and relate architectural decisions and recommendations to business needs and client culture. Ability to assess traditional and modern data architecture components based on business needs. Experience with business intelligence tools and technologies such as ETL, Power BI, and Tableau. Ability to regularly learn and adopt new technology, especially in the ML/AI realm. Strong analytical and problem-solving skills. Ability to synthesize and clearly communicate large volumes of complex information to senior management across various levels of technical understanding. Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders. Ability to guide solution design and architecture to meet business needs. Expert knowledge of data modeling concepts, methodologies, and best practices. Proficiency in data modeling tools such as Erwin or ER/Studio. Knowledge of relational databases and database design principles. Familiarity with dimensional modeling and data warehousing concepts (see the sketch below). Strong SQL skills for data querying, manipulation, and optimization, and knowledge of other data science languages, including JavaScript, Python, and R. Ability to collaborate with cross-functional teams and stakeholders to gather requirements and align on data models. Excellent analytical and problem-solving skills to identify and resolve data modeling issues. Strong communication and documentation skills to effectively convey complex data modeling concepts to technical and business stakeholders.
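
A minimal sketch of the dimensional modeling these requirements mention: a star schema with one dimension and one fact table, expressed as Spark SQL DDL (Delta syntax as used on Databricks); all table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Dimension table: one row per customer.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_name STRING,
        region STRING
    ) USING DELTA
""")

# Fact table: one row per order line, keyed to the dimension.
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        customer_key BIGINT,
        order_date DATE,
        quantity INT,
        amount DECIMAL(18, 2)
    ) USING DELTA
""")
```

Queries then join the narrow fact table to the descriptive dimension on customer_key, the classic star-schema access pattern.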

Posted 1 month ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

With 4-9 years of experience: Build ETL/ELT pipelines with Azure Data Factory, Azure Databricks (Spark), Azure Data Lake, Azure SQL Database, and Synapse. Minimum 3 years of hands-on development experience. Solid knowledge of data modeling, relational databases, and BI and data warehousing. Demonstrated expertise in SQL. Good to have: experience with CI/CD, cloud architectures, NoSQL databases, Azure Analysis Services, and Power BI. Working knowledge of or experience in Agile and DevOps. Good written and verbal communication skills in English. Ability to work with geographically diverse teams via collaborative technologies.

Posted 1 month ago

Apply

2.0 - 8.0 years

6 - 10 Lacs

Kolkata, Mumbai, Hyderabad

Work from Office

ROLE 1: - Power BI and AAS expert (Strong SC or Specialist Senior) - Should have hands-on experience of data modeling in Azure SQL Data Warehouse and Azure Analysis Services - Should be able to write and test DAX queries - Should be able to generate paginated reports in Power BI - Should have a minimum of 3 years' working experience delivering projects in Power BI ROLE 2: - Databricks expert (Strong SC or Specialist Senior) - Should have a minimum of 3 years' working experience writing code in Spark and Scala ROLE 3: - One Azure backend expert (Strong SC or Specialist Senior) - Should have hands-on experience working with ADLS, ADF, and Azure SQL DW - Should have a minimum of 3 years' working experience delivering Azure projects

Posted 1 month ago

Apply

10.0 - 20.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Maddisoft has the following immediate opportunity; let us know if you or someone you know would be interested. Send in your resume ASAP, along with your LinkedIn profile, without which applications will not be considered. Call us now! Job Title: Solution Architect Job Location: Hyderabad, India Responsibilities Interprets and delivers impactful strategic plans improving data integration, data quality, and data delivery in support of business initiatives and roadmaps. Designs the structure and layout of data systems, including databases, warehouses, and lakes. Selects and implements database management systems that meet the organization's needs by defining data schemas, optimizing data storage, and establishing data access controls and security measures. Defines and implements the long-term technology strategy and innovation roadmaps across analytics, data engineering, and data platforms. Designs and implements ETL processes from various sources into the organization's data systems. Translates high-level business requirements into data models and appropriate metadata, test data, and data quality standards. Manages senior business stakeholders to secure strong engagement and ensures that project delivery aligns with longer-term strategic roadmaps. Simplifies the existing solution architecture, delivering reusable services and cost-saving opportunities in line with company policies and standards. Leads and participates in the peer review and quality assurance of project architectural artifacts across the EA group through governance forums. Defines and manages standards, guidelines, and processes to ensure data quality. Works with IT teams, business analysts, and data analytics teams to understand data consumers' needs and develop solutions. Evaluates and recommends emerging technologies for data management, storage, and analytics. Job Requirements Bachelor's degree in Computer Science, Information Sciences, or a related discipline and 5-8 years of relevant experience (e.g., IT solutions architecture, enterprise architecture, and systems & application design), or 12-15 years of related experience. Broad technical expertise in at least one area, such as application development, enterprise applications, or IT systems engineering. Excellent communication skills - able to effectively communicate highly technical information in non-technical terminology (written and verbal). Expert in change management principles associated with new technology implementations. Deep understanding of project management principles. Preferred Qualifications Strong understanding of Azure cloud services. Develop and maintain strong relationships with various business areas and IT teams to understand their needs and challenges. Proactively identify opportunities for collaboration and engagement across IT teams. At least five years of relevant experience in the design and implementation of data models for enterprise data warehouse initiatives. Experience leading projects involving data warehousing, data modeling, and data analysis. Design experience in Azure Databricks, PySpark, and Power BI/Tableau. Strong proficiency in programming languages such as Java, Python, and C/C++. Proficiency in data science languages/tools such as SQL, R, SAS, or Excel. Proficiency in the design and implementation of modern solution architectures and concepts such as cloud services (AWS, Azure, GCP), real-time data distribution (Kafka, Dataflow), and modern data warehouse tools (Snowflake, Databricks). Experience with database technologies such as SQL, NoSQL, Oracle, Hadoop, or Teradata. Understanding of entity-relationship modeling, metadata systems, and data quality tools and techniques. Ability to think strategically and relate architectural decisions and recommendations to business needs and client culture. Ability to assess traditional and modern solution architecture components based on business needs. Experience with business intelligence tools and technologies such as ETL, Power BI, and Tableau. Ability to regularly learn and adopt new technology, especially in the ML/AI realm. Strong analytical and problem-solving skills. Ability to synthesize and clearly communicate large volumes of complex information to senior management across various levels of technical understanding. Ability to collaborate and excel in complex, cross-functional teams involving data scientists, business analysts, and stakeholders. Ability to guide solution design and architecture to meet business needs.

Posted 1 month ago

Apply

8.0 - 12.0 years

25 - 30 Lacs

Chennai

Work from Office

Job description Job Title: Manager, Data Engineer - Azure Location: Chennai (On-site) Experience: 8-12 years Employment Type: Full-Time About the Role We are seeking a highly skilled senior Azure data solutions architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions. Key Responsibilities Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB). Design robust and scalable data models, including relational, dimensional, and NoSQL schemas. Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg (see the sketch below). Integrate data governance, quality, and security best practices into all architecture designs. Support analytics and machine learning initiatives through structured data pipelines and platforms. Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs. Drive CI/CD integration with Databricks using Azure DevOps and tools like dbt. Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability. Stay current with Azure platform advancements and recommend improvements. Required Skills & Experience • Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse. • Expertise in data modeling and design (relational, dimensional, NoSQL). • Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures. • Proficiency in Python, SQL, Scala, and/or Java. • Strong knowledge of data governance, security, and compliance frameworks. • Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates). • Familiarity with BI and analytics tools such as Power BI or Tableau. • Excellent communication, collaboration, and stakeholder management skills. • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field. Preferred Qualifications • Experience in regulated industries (finance, healthcare, etc.). • Familiarity with data cataloging, metadata management, and machine learning integration. • Leadership experience guiding teams and presenting architectural strategies to leadership. Why Join Us? • Work on cutting-edge cloud data platforms in a collaborative, innovative environment. • Lead strategic data initiatives that impact enterprise-wide decision-making. • Competitive compensation and opportunities for professional growth.
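
As an illustration of the open-table-format point above, a short PySpark sketch of two things Delta provides out of the box: schema evolution on append and time travel. The path and columns are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Version 0: write an initial Delta table.
spark.range(5).withColumnRenamed("id", "customer_id") \
    .write.format("delta").mode("overwrite").save("/mnt/demo/customers")

# Version 1: append rows with a new column, evolving the schema.
spark.range(5, 10).withColumnRenamed("id", "customer_id") \
    .selectExpr("customer_id", "'IN' AS country") \
    .write.format("delta").mode("append") \
    .option("mergeSchema", "true").save("/mnt/demo/customers")

# Time travel: read the table as of its first version.
v0 = spark.read.format("delta").option("versionAsOf", 0) \
    .load("/mnt/demo/customers")
```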

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 18 Lacs

Chennai

Work from Office

Job description Job Title: Lead Data Engineer - Azure | GeakMinds | Chennai Location: Chennai (On-site) Experience: 5-8 years Employment Type: Full-Time About the Role We are seeking a highly skilled senior Azure data solutions architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions. Key Responsibilities Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB). Design robust and scalable data models, including relational, dimensional, and NoSQL schemas. Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg. Integrate data governance, quality, and security best practices into all architecture designs. Support analytics and machine learning initiatives through structured data pipelines and platforms. Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs. Drive CI/CD integration with Databricks using Azure DevOps and tools like dbt. Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability. Stay current with Azure platform advancements and recommend improvements. Required Skills & Experience • Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse. • Expertise in data modeling and design (relational, dimensional, NoSQL). • Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures. • Proficiency in Python, SQL, Scala, and/or Java. • Strong knowledge of data governance, security, and compliance frameworks. • Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates). • Familiarity with BI and analytics tools such as Power BI or Tableau. • Excellent communication, collaboration, and stakeholder management skills. • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field. Preferred Qualifications • Experience in regulated industries (finance, healthcare, etc.). • Familiarity with data cataloging, metadata management, and machine learning integration. • Leadership experience guiding teams and presenting architectural strategies to leadership. Why Join Us? • Work on cutting-edge cloud data platforms in a collaborative, innovative environment. • Lead strategic data initiatives that impact enterprise-wide decision-making. • Competitive compensation and opportunities for professional growth.

Posted 1 month ago

Apply

9.0 - 14.0 years

10 - 16 Lacs

Chennai

Work from Office

Azure Databricks, Data Factory, PySpark, SQL. If you are interested in this position, send your CV to muniswamyinfyjob@gmail.com.

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Pune

Hybrid

At Ecolab, you can help take on some of the world's most meaningful challenges, delivering critical insights and innovative solutions to help our customers achieve clean water, safe food, abundant energy, and healthy environments. With our worldwide reach and ambitious growth plans, you will have the opportunity to own your future and impact what matters. Are you ready to make an impact? The AI Operations Analyst is responsible for managing and optimizing the adoption and performance of AI systems within GBS+. This role involves designing and executing model training processes, monitoring daily AI operational performance, and ensuring the accuracy, reliability, and functioning of AI models and applications. The AI Operations Analyst will work with cross-functional teams to ensure AI models are optimized for performance and scalability. What's in it for You: You will join a growth company offering a competitive salary and benefits. The ability to make an impact and shape your career with a company that is passionate about growth. The support of an organization that believes it is vital to include and engage diverse people, perspectives, and ideas to achieve our best. Feel proud each day to work for a company that provides clean water, safe food, abundant energy, and healthy environments. What You Will Do: Perform AI model training activities such as generating/loading large datasets, document samples, process documentation, and prompts to support rapid and complete development of high-impact models. Execute daily monitoring of AI and process performance. Identify, troubleshoot, and resolve issues with AI-based process performance in collaboration with users and various stakeholders. Identify and drive implementation of improvements in process, AI prompts, and model accuracy and completeness in conjunction with the Ecolab Digital AI team. Support objectives to ensure AI performance meets business value objectives. Ensure compliance with established responsible AI policies. Maintain documentation on AI processes. Minimum Qualifications: Bachelor's degree in Computer Science, Data Science, or a related field; Master's degree preferred. Process domain expertise. Experience with AI/ML operations and monitoring tools. Strong problem-solving and analytical skills. Knowledge of AI governance and ethical guidelines. Excellent communication and collaboration skills. Knowledge of machine learning frameworks and libraries.

Posted 1 month ago

Apply

5.0 - 9.0 years

11 - 12 Lacs

Bengaluru

Work from Office

5 to 9 years of experience. Nice to have: worked in the HP ecosystem (FDL architecture). Databricks + SQL combination is a must. EXPERIENCE: 6-8 years. SKILLS: Primary Skill: Data Engineering. Sub Skill(s): Data Engineering. Additional Skill(s): Databricks, SQL.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, and the Middle East, with development centers in India (Hyderabad, Pune & Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai Work Mode: Hybrid (2-3 days in office per week) Job Description: 5-14 years of experience in Big Data and related technologies. Expert-level understanding of distributed computing principles. Expert-level knowledge of and experience in Apache Spark. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the sketch below). Good understanding of Big Data querying tools such as Hive and Impala. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience with native cloud data services on Azure. Ability to lead a team efficiently. Experience with designing and implementing Big Data solutions. Practitioner of the AGILE methodology. WE OFFER: Opportunity to work on technical challenges that may impact across geographies. Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, and learning through external certifications. Opportunity to share your ideas on international platforms. Sponsored Tech Talks & Hackathons. Possibility to relocate to any EPAM office for short- and long-term projects. Focused individual development. Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits. Forums to explore passions beyond work (CSR, photography, painting, sports, etc.).
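
A minimal Spark Structured Streaming sketch of the stream-processing pattern named above, with a watermark to bound late-arriving events; the source path, schema, and sink locations are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source: JSON events landing in cloud storage.
events = (
    spark.readStream.format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, amount DOUBLE")
    .load("/mnt/landing/events/")
)

# Per-minute totals; the watermark lets Spark discard state
# for events more than 10 minutes late.
per_minute = (
    events.withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "1 minute"))
    .agg(F.sum("amount").alias("total"))
)

query = (
    per_minute.writeStream.outputMode("append")
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/events")
    .start("/mnt/gold/events_per_minute")
)
```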

Posted 1 month ago

Apply

2.0 - 5.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Overview As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like customer orders, sales transformation, revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Data pipeline development end-to-end, spanning data modeling, testing, scalability, operability, and ongoing metrics. Data integration with data science and application teams. Collaborate in architecture discussions and architectural decision-making as part of continually improving and expanding these platforms. Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks. Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers. Develop software in short iterations to quickly add business value. Ensure that we build high-quality software by reviewing peer code check-ins. Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing tools and frameworks in-house to improve workflow and quality for all data engineers. Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, debugging production issues. Help attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates. Qualifications 7-10 years of overall technology experience, including at least 6+ years of hands-on experience in software development, data engineering, and systems architecture. 6+ years of experience in SQL performance tuning and optimization, including execution plan analysis, indexing strategies, and query refactoring. Strong experience in data modeling, data warehousing, and designing high-volume ETL/ELT pipelines using industry best practices. Proven ability to build and operate highly available, distributed systems for data ingestion, transformation, and processing of large-scale datasets (structured and semi-structured). 3+ years of experience working with Azure Databricks, including Delta Lake, Unity Catalog, and collaborative notebook development for data pipelines (see the Delta Lake upsert sketch below). Must have an understanding of medallion architecture. Proficiency with Apache Spark (PySpark/Scala), including tuning for performance, job optimization, and large-scale batch/stream processing. Experience integrating with Azure ecosystem services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure Data Factory, Azure Event Hub, Azure Data Explorer, Azure SQL, and Azure Key Vault. Solid understanding of DevOps practices, including CI/CD pipelines for data engineering (e.g., using Azure DevOps or GitHub Actions). Experience participating in or leading architecture discussions and technical decision-making, especially in cloud-native data platform designs. Demonstrated experience collaborating with data science and application engineering teams to ensure seamless data integration and delivery. Proficiency in code review processes and version control (Git), and a record of promoting a culture of clean, maintainable, and well-documented code. Education B.Tech/BE/M.Sc. in Computer Science, IT, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager; comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the data products released. Places the user at the center of decision-making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams.
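
By way of illustration, a minimal Delta Lake upsert using the MERGE API available on Databricks; the table paths and join key are hypothetical:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incremental batch of orders to fold into the target.
updates = spark.read.format("delta").load("/mnt/bronze/orders_increment")

target = DeltaTable.forPath(spark, "/mnt/silver/orders")

# Upsert: update matching orders, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```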

Posted 1 month ago

Apply

6.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Your Role As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will play a key role in: Strong customer orientation, decision making, problem solving, communication, and presentation skills. Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions. Very good collaboration skills and the ability to interact with multicultural and multifunctional teams spread across geographies. Strong executive presence and entrepreneurial spirit. Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority. Your Profile Experience with Azure Databricks and Data Factory. Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics. Experience in Python/PySpark/Scala/Hive programming. Experience with Azure Databricks/ADB is a must have. Experience with building CI/CD pipelines in data environments.

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Your Role As a senior software engineer with Capgemini, you should have 4+ years of experience as an Azure Data Engineer with a strong project track record. In this role you will play a key role in: Strong customer orientation, decision making, problem solving, communication, and presentation skills. Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions. Very good collaboration skills and the ability to interact with multicultural and multifunctional teams spread across geographies. Strong executive presence and entrepreneurial spirit. Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority. Your Profile Experience with Azure Databricks and Data Factory. Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics. Experience in Python/PySpark/Scala/Hive programming. Experience with Azure Databricks/ADB. Experience with building CI/CD pipelines in data environments. Primary Skills ADF (Azure Data Factory) OR ADB (Azure Databricks) Secondary Skills Excellent verbal and written communication and interpersonal skills Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball methodology, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad, Coimbatore

Work from Office

MKS Vision Pvt Ltd About us: MKS Vision is a full-spectrum Information Technology and engineering services provider. We exist to provide increased efficiency and flexibility that accelerate business performance by adopting the latest cutting-edge technologies for our customers. Our services bring tangible benefits to our customers, and MKS Vision will assist you in adopting global services. Website: https://www.mksvision.com/ Job Location: Coimbatore Below are the role and associated skills we are looking for: Risk Data Analyst Knowledge of lending data systems and data structures (credit applications, loan origination, collections, payments, dialers, credit bureau data). Proficient in SQL for ETL. Strong knowledge of Power BI. Experience using tools for ETL scheduling/automation. Proficient in data standardization and cleanup to ensure data integrity and diagnostics. Knowledge of database schema design and normalization. Knowledge of cloud-based systems (Azure) for data warehouse building. Intermediate knowledge of Python for data cleanup and transformation. Knowledge of data documentation and exception/error handling and remediation. Ability to build/test/deploy APIs to ingest data from internal and external databases (see the sketch below). Proficient in JSON, XML, and text formats to ingest, parse, transform, and load databases. Preferred: minimum 7+ years of experience and a BS degree in computer science, management information systems, statistics, data science, etc., or similar experience.
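
A hedged sketch of the ingest-parse-load flow this role describes, using only the Python standard library; the API endpoint and field names are invented, and sqlite3 stands in for whatever database the team actually targets:

```python
import json
import sqlite3
import urllib.request

# Hypothetical endpoint returning loan records as JSON.
with urllib.request.urlopen("https://example.com/api/loans") as resp:
    records = json.load(resp)

conn = sqlite3.connect("loans.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS loans (loan_id TEXT, balance REAL, status TEXT)"
)

# Standardize and load: skip rows missing the key field,
# and default the rest so bad records surface as 'unknown' rather than crash.
rows = [
    (r["loan_id"], float(r.get("balance", 0)), r.get("status", "unknown"))
    for r in records
    if r.get("loan_id")
]
conn.executemany("INSERT INTO loans VALUES (?, ?, ?)", rows)
conn.commit()
```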

Posted 1 month ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Job Description Looking for a Data Engineer with expertise in Azure SQL, data solutions, and data migrations to design, develop, and optimize data pipelines and integration processes. You will be responsible for moving data between systems, ensuring data integrity, accuracy, and efficiency in migrations and conversions. This role involves working closely with stakeholders to understand data requirements, implement scalable solutions, and optimize database performance. Required Skills & Experience: 4-6 years of experience in data engineering and migrations. Strong expertise in Azure SQL, SQL Server, and cloud-based databases. Hands-on experience with ETL/ELT processes and data integration. Knowledge of Azure Data Factory, Synapse Analytics, and Data Lake. Experience in moving data between systems, data conversions, and migrations. Proficiency in Python, PowerShell, or SQL scripting for data manipulation. Understanding of data modeling, indexing, and performance optimization. Preferred Skills: Experience with NoSQL databases (Cosmos DB, MongoDB). Familiarity with Kafka, Event Hubs, or real-time data streaming. Knowledge of Power BI, Databricks, or other analytical tools. Exposure to Azure DevOps, Git, and CI/CD pipelines for data workflows. Key Responsibilities: Data Engineering & Development Design and implement ETL/ELT pipelines for data movement across systems. Develop, optimize, and manage Azure SQL databases and other cloud-based data solutions. Ensure data integrity and consistency during migrations and conversions. Implement data transformation, cleansing, and validation processes. Data Migration & Integration Design and execute data migration strategies between different platforms. Extract, transform, and load data from structured and unstructured sources. Work with APIs, batch processing, and real-time data movement. Support cross-system data integration for analytics, reporting, and operational needs. Cloud & DevOps Utilize Azure Data Factory, Synapse, and Data Lake for scalable data processing. Implement monitoring, logging, and performance tuning for data solutions. Work with CI/CD pipelines for automated data deployments and version control. Collaboration & Best Practices Work closely with data analysts, developers, and business teams to understand requirements. Ensure compliance with data governance, security, and privacy standards. Document data workflows, architecture, and technical decisions.
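
As a rough illustration of one validation step in a migration like this, a PySpark sketch comparing row counts between source and target over JDBC; the connection strings and table name are placeholders, and a real migration would add checksum and column-level checks:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def row_count(url: str, table: str) -> int:
    """Count rows in a table reachable over JDBC."""
    return (
        spark.read.format("jdbc")
        .option("url", url)
        .option("dbtable", table)
        .load()
        .count()
    )

# Placeholder connection strings for the legacy and Azure SQL databases.
src = row_count("jdbc:sqlserver://legacy-host;databaseName=ops", "dbo.orders")
dst = row_count("jdbc:sqlserver://azure-host;databaseName=ops", "dbo.orders")

assert src == dst, f"Row count mismatch after migration: {src} vs {dst}"
```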

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Job Description Looking for a Data Engineer with expertise in Azure SQL, data solutions, and data migrations to design, develop, and optimize data pipelines and integration processes. You will be responsible for moving data between systems, ensuring data integrity, accuracy, and efficiency in migrations and conversions. This role involves working closely with stakeholders to understand data requirements, implement scalable solutions, and optimize database performance. Required Skills & Experience: 4-6 years of experience in data engineering and migrations. Strong expertise in Azure SQL, SQL Server, and cloud-based databases. Hands-on experience with ETL/ELT processes and data integration. Knowledge of Azure Data Factory, Synapse Analytics, and Data Lake. Experience in moving data between systems, data conversions, and migrations. Proficiency in Python, PowerShell, or SQL scripting for data manipulation. Understanding of data modeling, indexing, and performance optimization. Preferred Skills: Experience with NoSQL databases (Cosmos DB, MongoDB). Familiarity with Kafka, Event Hubs, or real-time data streaming. Knowledge of Power BI, Databricks, or other analytical tools. Exposure to Azure DevOps, Git, and CI/CD pipelines for data workflows. Key Responsibilities: Data Engineering & Development Design and implement ETL/ELT pipelines for data movement across systems. Develop, optimize, and manage Azure SQL databases and other cloud-based data solutions. Ensure data integrity and consistency during migrations and conversions. Implement data transformation, cleansing, and validation processes. Data Migration & Integration Design and execute data migration strategies between different platforms. Extract, transform, and load data from structured and unstructured sources. Work with APIs, batch processing, and real-time data movement. Support cross-system data integration for analytics, reporting, and operational needs. Cloud & DevOps Utilize Azure Data Factory, Synapse, and Data Lake for scalable data processing. Implement monitoring, logging, and performance tuning for data solutions. Work with CI/CD pipelines for automated data deployments and version control. Collaboration & Best Practices Work closely with data analysts, developers, and business teams to understand requirements. Ensure compliance with data governance, security, and privacy standards. Document data workflows, architecture, and technical decisions.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Remote

Role & responsibilities Develop, maintain, and enhance new data sources and tables, contributing to data engineering efforts to ensure a comprehensive and efficient data architecture. Serve as the liaison between the data engineering team and the airport operations teams, developing new data sources and overseeing enhancements to the existing database; be one of the main contact points for data requests, metadata, and statistical analysis. Migrate all existing Hive Metastore tables to Unity Catalog, addressing access issues and ensuring a smooth transition of jobs and tables (see the sketch below). Collaborate with IT teams to validate package (gold-level data) table outputs during the production deployment of developed notebooks. Develop and implement data quality alerting systems and Tableau alerting mechanisms for dashboards, setting up notifications for various thresholds. Create and maintain standard reports and dashboards to provide insights into airport performance, helping guide stations to optimize operations and improve performance. Preferred candidate profile Master's degree/UG. Minimum 5-10 years of experience. Databricks (Azure). Good communication. Experience developing solutions on a Big Data platform utilizing tools such as Impala and Spark. Advanced knowledge/experience with Azure Databricks, PySpark, (Teradata)/Databricks SQL. Advanced knowledge/experience in Python along with associated development environments (e.g., JupyterHub, PyCharm, etc.). Advanced knowledge/experience in building Tableau dashboards/QlikView/Power BI. Basic knowledge of HTML and JavaScript. Immediate joiner. Skills, Licenses & Certifications Strong project management skills. Proficient with Microsoft Office applications (MS Excel, Access, and PowerPoint); advanced knowledge of Microsoft Excel. Advanced aptitude in problem-solving, including the ability to logically structure an appropriate analytical framework. Proficient in SharePoint and PowerApps, with the ability to use the Graph API.
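
A minimal sketch, assuming a Databricks workspace with Unity Catalog enabled, of upgrading one managed Hive Metastore table via CTAS and re-granting access; the catalog, schema, table, and group names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Copy a managed Hive Metastore table into Unity Catalog.
spark.sql("""
    CREATE TABLE IF NOT EXISTS main.ops.flight_events
    AS SELECT * FROM hive_metastore.default.flight_events
""")

# Re-grant access in Unity Catalog so downstream jobs keep working.
spark.sql("GRANT SELECT ON TABLE main.ops.flight_events TO `data_analysts`")
```

External tables can instead be upgraded in place with Databricks' SYNC command, which avoids copying data.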

Posted 1 month ago

Apply

8.0 - 12.0 years

0 - 20 Lacs

Hyderabad, Bengaluru

Work from Office

Tech Mahindra is hiring for Azure Data Engineer. Roles and Responsibilities: Design, develop, test, deploy, and maintain Azure Data Factory (ADF) pipelines for data integration and migration projects. Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs. Develop complex SQL queries to extract insights from large datasets using PySpark on Azure Databricks. Troubleshoot issues related to ADF pipeline failures and optimize performance for improved efficiency. Job Requirements: Experience in the IT Services & Consulting industry with expertise in ADF development. Strong understanding of Azure Data Lake Storage, Azure Data Factory, Azure Databricks, the Python programming language, and SQL querying concepts. Experience working with big data technologies such as Hadoop ecosystem components, including Hive, Pig, etc.

Posted 1 month ago

Apply

5.0 - 9.0 years

10 - 17 Lacs

Pune

Work from Office

Interested candidates, contact: 7207997185

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
