
100 SQL Optimization Jobs - Page 4

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Overview
As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company.

Responsibilities
Act as a subject matter expert across different digital projects.
Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
Implement best practices around systems integration, security, performance, and data management.
Empower the business by creating value through increased adoption of data, data science, and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models.
Define and manage SLAs for data products and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.

Qualifications
7+ years of overall technology experience, including at least 5 years of hands-on software development, data engineering, and systems architecture.
4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
4+ years of experience in SQL optimization and performance tuning, plus development experience in programming languages such as Python, PySpark, or Scala.
2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure certification is a plus.
Experience with Azure Log Analytics.
Experience integrating multi-cloud services with on-premises technologies.
Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools such as Apache Griffin, Deequ, and Great Expectations.
Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
Experience running and scaling applications on cloud infrastructure and containerized services such as Kubernetes.
Experience with version control systems such as GitHub and with deployment and CI tools.
Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
Experience with statistical/ML techniques is a plus.
Experience building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools such as Power BI.
B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.
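
As a rough illustration of the SQL optimization and performance tuning work this posting describes, the sketch below rewrites a correlated subquery as a pre-aggregated join, a common tuning pattern. All table and column names are hypothetical and not taken from the posting.

-- Before: the correlated subquery is evaluated once per outer row (hypothetical names).
SELECT o.order_id, o.order_total
FROM orders o
WHERE o.order_total > (SELECT AVG(i.order_total)
                       FROM orders i
                       WHERE i.customer_id = o.customer_id);

-- After: compute each customer's average once, then join.
WITH customer_avg AS (
    SELECT customer_id, AVG(order_total) AS avg_total
    FROM orders
    GROUP BY customer_id
)
SELECT o.order_id, o.order_total
FROM orders o
JOIN customer_avg c ON c.customer_id = o.customer_id
WHERE o.order_total > c.avg_total;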

Posted 2 months ago

Apply

5.0 - 15.0 years

6 - 19 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Description
We are seeking an experienced SQL Developer to join our team in India. The ideal candidate will have 5-15 years of experience in SQL development, with a strong focus on optimizing database performance and ensuring data integrity.

Responsibilities
Develop and maintain SQL queries and procedures to support business applications.
Optimize SQL queries for performance improvements and efficiency.
Create and manage database schemas, tables, and relationships.
Collaborate with software developers and data analysts to gather requirements and deliver data solutions.
Troubleshoot database issues and provide support to end users as needed.
Ensure data integrity and security across all databases.

Skills and Qualifications
Proficient in SQL and PL/SQL.
Strong understanding of database design and normalization principles.
Experience with SQL Server, MySQL, or Oracle databases.
Ability to write complex queries, stored procedures, and triggers.
Knowledge of data warehousing concepts and ETL processes.
Familiarity with database performance tuning and optimization techniques.
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.
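
For illustration only, the snippet below shows one performance-tuning technique of the kind this role involves: a covering index that lets a frequent lookup be answered from the index alone. Syntax is SQL Server style; all object names are hypothetical.

-- Hypothetical covering index: the filter columns lead the key and the
-- selected measure is INCLUDEd, so the query below avoids touching the base table.
CREATE NONCLUSTERED INDEX ix_orders_customer_date
    ON dbo.orders (customer_id, order_date)
    INCLUDE (order_total);

SELECT customer_id, order_date, order_total
FROM dbo.orders
WHERE customer_id = 42
  AND order_date >= '2024-01-01';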

Posted 2 months ago

Apply

5.0 - 8.0 years

8 - 15 Lacs

Pune

Work from Office

Core Technical Skills
Design and develop robust backend solutions using Java 11+/17 and Spring Boot.
Build, test, and maintain scalable microservices in a cloud environment (AWS).
Work with Kafka or other messaging systems for event-driven architecture.
Write clean, maintainable code with high test coverage.

Tools & Reporting
Java 11+/17, Spring Boot, AWS, Kafka.

Soft Skills
Strong communication and coordination with app teams.
Analytical thinking and problem-solving.
Ability to work independently or collaboratively.

Additional Skills
Snowflake architecture and performance tuning, Oracle DB, SQL optimization, data governance, RBAC, data replication, Time Travel and Cloning, dynamic data masking, OEM and AWR reports, Apps DBA experience.

Posted 2 months ago

Apply

4.0 - 8.0 years

18 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

Job Type: C2H (Long Term)

Required Skill Set

Core Technical Skills
Snowflake database design, architecture, and performance tuning.
Strong experience in Oracle DB and SQL query optimization.
Expertise in DDL/DML operations, data replication, and failover handling.
Knowledge of Time Travel, Cloning, and RBAC (Role-Based Access Control).
Experience with dynamic data masking, secure views, and data governance.

Tools & Reporting
Familiarity with OEM, Tuning Advisor, and AWR reports.

Soft Skills
Strong communication and coordination with app teams.
Analytical thinking and problem-solving.
Ability to work independently or collaboratively.

Additional Experience
Previous role as Apps DBA or similar.
Exposure to agile methodologies.
Hands-on with Snowflake admin best practices, load optimization, and secure data sharing.
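
The Snowflake features named above (Time Travel, zero-copy Cloning, and RBAC-driven dynamic data masking) can be sketched roughly as follows; this is illustrative only, and all object, schema, and role names are hypothetical.

-- Time Travel: read a table as it looked one hour ago.
SELECT * FROM sales.orders AT (OFFSET => -3600);

-- Zero-copy cloning, e.g. to spin up a test environment.
CREATE DATABASE sales_dev CLONE sales;

-- Dynamic data masking tied to RBAC roles.
CREATE MASKING POLICY mask_email AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '***MASKED***' END;

ALTER TABLE sales.customers MODIFY COLUMN email SET MASKING POLICY mask_email;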

Posted 2 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt.
Develop complex SQL queries, stored procedures, and materialized views for data transformations.
Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
Implement data partitioning, clustering, and query tuning strategies for optimal performance.
Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
Automate data ingestion, transformations, and warehouse maintenance tasks.

Required Skills & Qualifications:
6+ years of experience in data warehousing, ETL, and data engineering.
Strong hands-on experience with Amazon Redshift and AWS data services.
Expertise in SQL performance tuning, indexing, and query optimization.
Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
Familiarity with data lake architectures and the modern data stack.
Proficiency in Python, Shell scripting, or PySpark for automation.
Experience working in Agile/DevOps environments with CI/CD pipelines.
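
As a hedged illustration of the Redshift tuning levers mentioned above (distribution styles, sort keys, and column compression), a hypothetical table definition might look like the following; names and encoding choices are illustrative, not prescriptive.

-- DISTKEY co-locates rows that join on customer_id,
-- SORTKEY speeds range filters on order_date,
-- ENCODE az64 applies a compact numeric/date compression.
CREATE TABLE sales.orders (
    order_id     BIGINT        ENCODE az64,
    customer_id  BIGINT        ENCODE az64,
    order_date   DATE          ENCODE az64,
    order_total  DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (order_date);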

Posted 2 months ago

Apply

5 - 10 years

9 - 13 Lacs

Hyderabad

Work from Office

Overview
As a member of the data engineering team, you will be a key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Be a founding member of the data engineering team.
Help attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates.
Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability, and ongoing metrics.
Ensure that we build high-quality software by reviewing peer code check-ins.
Define best practices for product development, engineering, and coding as part of a world-class engineering team.
Collaborate in architecture discussions and architectural decision-making as part of continually improving and expanding these platforms.
Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks.
Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers.
Develop software in short iterations to quickly add business value.
Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers.
Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, debugging production issues.

Qualifications
6-9 years of overall technology experience, including at least 5 years of hands-on software development, data engineering, and systems architecture.
4+ years of experience in SQL optimization and performance tuning.
Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, or Great Expectations.
Current skills in the following technologies:
Python
Orchestration platforms: Airflow, Luigi, Databricks, or similar
Relational databases: Postgres, MySQL, or equivalents
MPP data systems: Snowflake, Redshift, Synapse, or similar
Cloud platforms: AWS, Azure, or similar
Version control (e.g., GitHub) and familiarity with deployment and CI/CD tools.
Fluent with Agile processes and tools such as Jira or Pivotal Tracker.
Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
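
For context, the kinds of checks that data quality tools such as Apache Griffin, Deequ, or Great Expectations automate can also be expressed as plain SQL assertions. The sketch below is illustrative only, with hypothetical table and column names; each query should return no rows on a healthy load.

-- Completeness: no NULL business keys.
SELECT COUNT(*) AS null_keys
FROM staging.orders
WHERE order_id IS NULL
HAVING COUNT(*) > 0;

-- Uniqueness: no duplicate order_id values.
SELECT order_id, COUNT(*) AS dup_count
FROM staging.orders
GROUP BY order_id
HAVING COUNT(*) > 1;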

Posted 2 months ago

Apply

11 - 14 years

35 - 40 Lacs

Hyderabad

Work from Office

What PepsiCo Data Management and Operations does:
Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company.
Take responsibility for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset.
Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders.
Increase awareness about available data and democratize access to it across the company.

As a Data Engineering Associate Manager, you will be the key technical expert overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities
Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential.
Act as a subject matter expert across different digital projects.
Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
Implement best practices around systems integration, security, performance, and data management.
Empower the business by creating value through increased adoption of data, data science, and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models.
Define and manage SLAs for data products and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.

Qualifications
B.Tech in Computer Science, Math, Physics, or other technical fields.
11+ years of overall technology experience, including at least 5 years of hands-on software development, data engineering, and systems architecture.
4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
4+ years of experience in SQL optimization and performance tuning, plus development experience in programming languages such as Python, PySpark, or Scala.
2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure certification is a plus.
Experience with Azure Log Analytics.
Experience integrating multi-cloud services with on-premises technologies.
Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools such as Apache Griffin, Deequ, and Great Expectations.
Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
Experience running and scaling applications on cloud infrastructure and containerized services such as Kubernetes.
Experience with version control systems such as GitHub and with deployment and CI tools.
Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
Experience with statistical/ML techniques is a plus.
Experience building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools such as Power BI.

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Vadodara

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).
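
As an illustration of the T-SQL and data warehousing skills listed above, the sketch below runs window functions over a hypothetical star schema; fact_sales and dim_store are made-up names, not from the posting.

SELECT s.store_name,
       f.sale_date,
       f.sale_amount,
       SUM(f.sale_amount) OVER (PARTITION BY f.store_id
                                ORDER BY f.sale_date
                                ROWS UNBOUNDED PRECEDING) AS running_total,
       RANK() OVER (PARTITION BY f.store_id
                    ORDER BY f.sale_amount DESC) AS amount_rank
FROM dbo.fact_sales AS f
JOIN dbo.dim_store  AS s ON s.store_id = f.store_id;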

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Visakhapatnam

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Chandigarh

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Thiruvananthapuram

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Coimbatore

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Lucknow

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Nagpur

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Kanpur

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Surat

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Hyderabad

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Kolkata

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Pune

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Chennai

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Bengaluru

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Ahmedabad

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

4 - 6 years

16 - 18 Lacs

Mumbai

Work from Office

SQL querying and T-SQL (functions and packages).
SQL optimization and performance improvement.
Knowledge of data warehousing concepts (star schema, fact and dimension tables).
Experience with SQL Server and SSIS, including SSIS development and data loads.
SSIS package configuration and optimization.
ETL transformations, error handling, data flow components, script components, and debugging.
Input file formats: XML, CSV, flat file, JSON.
SQL Server installation, backup, database settings, and configuration.
Strong SQL skills (joins, subqueries, aggregations, window functions).

Posted 2 months ago

Apply

5 - 10 years

7 - 12 Lacs

Mumbai

Work from Office

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-Have Skills: Microsoft SQL Server Analysis Services (SSAS)
Good-to-Have Skills: Microsoft SQL Server Integration Services (SSIS), Microsoft SQL Server Reporting Services
Function Specialty: None
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary:
As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients, and perform maintenance, enhancement, and/or development work. Your typical day will involve analyzing requirements, designing solutions, writing code, and conducting testing to ensure the quality of the application. You will collaborate with team members, participate in code reviews, and contribute to the overall success of the project.

Roles & Responsibilities:
Act as an SME; collaborate with and manage the team to perform, and take responsibility for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Analyze requirements and design solutions for application components.
Write high-quality code following coding standards and best practices.
Conduct unit testing and participate in code reviews to ensure code quality.
Collaborate with team members to troubleshoot and resolve issues.
Contribute to the overall success of the project by meeting project milestones and deliverables.

Professional & Technical Skills:
Must have: Proficiency in Microsoft SQL Server Analysis Services (SSAS), Microsoft SQL Server Integration Services (SSIS), and Microsoft SQL Server Reporting Services.
Strong understanding of database concepts and SQL query optimization.
Experience designing and developing data models using Microsoft SQL Server Analysis Services (SSAS).
Hands-on experience writing complex SQL queries and stored procedures.
Experience in performance tuning and optimization of SQL queries.
Good to have: Experience with data visualization tools such as Tableau or Power BI.
Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
The candidate should have a minimum of 5 years of experience in Microsoft SQL Server Analysis Services (SSAS).
This position is based in Mumbai.
15 years of full-time education is required.
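
For illustration, a simple T-SQL stored procedure of the kind this role involves, written with a sargable date-range filter so the optimizer can use an index on the date column; all object and parameter names are hypothetical.

CREATE PROCEDURE dbo.usp_get_monthly_sales
    @Year  INT,
    @Month INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Half-open date range keeps the predicate index-friendly.
    SELECT s.store_name,
           SUM(f.sale_amount) AS total_sales
    FROM dbo.fact_sales AS f
    JOIN dbo.dim_store  AS s ON s.store_id = f.store_id
    WHERE f.sale_date >= DATEFROMPARTS(@Year, @Month, 1)
      AND f.sale_date <  DATEADD(MONTH, 1, DATEFROMPARTS(@Year, @Month, 1))
    GROUP BY s.store_name;
END;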

Posted 2 months ago

Apply

12 - 18 years

14 - 24 Lacs

Hyderabad

Work from Office

Overview
Deputy Director - Data Engineering. PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences, and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.

What PepsiCo Data Management and Operations does:
Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company.
Take responsibility for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset.
Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders.
Increase awareness about available data and democratize access to it across the company.

As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

This is the data engineering lead role for D&Ai data modernization (MDIP). Ideally, the candidate is flexible to work an alternative schedule: a traditional Monday-to-Friday work week, Tuesday to Saturday, or Sunday to Thursday, depending on the coverage requirements of the job. The candidate can work with the immediate supervisor to change the work schedule on a rotational basis depending on product and project requirements.

Responsibilities
Manage a team of data engineers and data analysts by delegating project responsibilities, managing their flow of work, and empowering them to realize their full potential.
Design, structure, and store data into unified data models and link them together to make the data reusable for downstream products.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL.
Enable and accelerate standards-based development, prioritizing reuse of code and adopting test-driven development, unit testing, and test automation with end-to-end observability of data.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance, and cost.
Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application, following well-architected design standards.
Define and manage SLAs for data products and processes running in production.
Create documentation for learnings and knowledge transfer to internal associates.

Qualifications
12+ years of overall technology and data management experience, including at least 5 years of hands-on software development, data engineering, and systems architecture.
8+ years of experience with data lakehouse, data warehousing, and data analytics tools.
6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL, or any other popular RDBMS.
6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks.
4+ years of cloud data engineering experience in Azure or AWS; fluent with Azure cloud services. Azure Data Engineering certification is a plus.
Experience integrating multi-cloud services with on-premises technologies.
Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools like Great Expectations.
Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one business intelligence tool such as Power BI or Tableau.
Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
Experience with version control systems like ADO and GitHub, and with CI/CD tools for DevOps automation and deployments.
Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
Experience with statistical/ML techniques is a plus.
Experience building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
BA/BS in Computer Science, Math, Physics, or other technical fields.
The candidate must be flexible to work an alternative schedule (traditional Monday to Friday, Tuesday to Saturday, or Sunday to Thursday) depending on product and project coverage requirements, is expected to be in the office at the assigned location at least three days a week, and should coordinate working days with the immediate supervisor.

Skills, Abilities, Knowledge:
Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
Proven track record of leading and mentoring data teams.
Strong change manager; comfortable with change, especially that which arises through company growth.
Ability to understand and translate business requirements into data and technical requirements.
High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
Foster a team culture of accountability, communication, and self-management.
Proactively drive impact and engagement while bringing others along.
Consistently attain or exceed individual and team goals.
Ability to lead others without direct authority in a matrixed environment.
Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations.
Domain knowledge in the CPG industry with a supply chain/GTM background is preferred.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
