
100 SQL Optimization Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 9.0 years

3 - 12 Lacs

Pune, Maharashtra, India

On-site

Technologies and Profiles: The resources must have architectural hands-on experience in managing the listed middleware platforms for on-premises and cloud (preferably GCP) environments. Design, implement, and maintain middleware solutions to meet business needs. Collaborate with development teams to define and integrate middleware architecture. Monitor, troubleshoot, and optimize middleware performance. Conduct system audits and apply necessary patches and upgrades for security and compliance. Develop and maintain documentation for middleware configurations and processes. Engage with stakeholders to gather requirements and align middleware strategy with business goals. Evaluate and select middleware technologies and tools suitable for projects. Coordinate with network, storage, and database teams for effective system architecture. Perform capacity planning and scalability assessments. Implement best practices for middleware governance, lifecycle management, and disaster recovery. Automate routine processes to enhance operational efficiency. Conduct root cause analysis and implement corrective actions for technical issues. Participate in architecture review boards and technical committees offering middleware expertise. Conduct proofs of concept for new middleware technologies to assess their impact.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Madurai, Tamil Nadu

On-site

You are an experienced Java architect responsible for designing and implementing sophisticated Java-based software solutions. Your role involves overseeing system architecture, selecting appropriate technologies, ensuring scalability and performance, collaborating with cross-functional teams, mentoring junior developers, and staying updated on emerging Java technologies, focusing on areas such as microservices, cloud computing, and high-availability systems.

**Key Responsibilities:**

**Architecture Design:**
- Define the overall system architecture for large-scale Java applications, including component design, data flow, and integration patterns.
- Select appropriate Java frameworks and libraries based on project requirements.
- Design for scalability, performance, and security.
- Implement microservices architecture where applicable.

**Technology Evaluation and Selection:**
- Research and evaluate new Java technologies, frameworks, and tools.
- Stay updated on cloud platforms like AWS, Azure, and GCP for potential integration.
- Make informed technology decisions based on project needs.

**Development Leadership:**
- Guide development teams on technical best practices and design patterns.
- Provide code reviews and mentor junior developers.
- Troubleshoot complex technical issues and design flaws.

**Collaboration and Stakeholder Management:**
- Work closely with product managers, business analysts, and other stakeholders to understand requirements.
- Communicate technical concepts effectively to non-technical audiences.
- Collaborate with DevOps teams to ensure smooth deployment and monitoring.

**Performance Optimization:**
- Identify performance bottlenecks and implement optimization strategies.
- Monitor system health and performance metrics.

**Essential skills for a Java architect:**
- Deep expertise in Java core concepts: object-oriented programming, collections, concurrency, JVM internals.
- Advanced Java frameworks: Spring Boot, Spring MVC, Hibernate, JPA.
- Architectural patterns: microservices, event-driven architecture, RESTful APIs.
- Database design and SQL: proficiency in relational databases and SQL optimization; proficiency in NoSQL (Elasticsearch/OpenSearch).
- Cloud computing knowledge: AWS, Azure, GCP.
- Hands-on experience in ETL and ELT.
- Knowledge of Python and PySpark would be an added advantage.
- Strong communication and leadership skills.

**Minimum Qualifications:**
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Deep expertise in Java core concepts, advanced Java frameworks, architectural patterns, database design and SQL, and cloud computing; hands-on experience in ETL and ELT; knowledge of Python and PySpark.
- Strong communication and leadership skills.

This is a full-time Principal Consultant position based in Madurai, India. If you possess the required qualifications and skills, we invite you to apply for this role.

Posted 4 days ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Guwahati

Work from Office

The PostgreSQL DBA will need at least 3 years of experience with development and customer support, providing technical and operational support for database servers, including logical and physical database design support, troubleshooting, performance monitoring, tuning, and optimizing. PostgreSQL database cloning using pg_dump and pg_restore. Review all PostgreSQL logs and problems. Experience with Postgres connection pooling (PgBouncer). Hands-on operational DBA expertise in the areas of replication, high availability, and backup/recovery. Advanced performance-tuning skills to review performance data and prepare reports. SQL and Linux scripting skills are required. Strong communication skills are a requirement for this position. The ability to manage customer or in-house escalation calls is required. Experience working on an application/development team or experience as an application DBA. Responsible for managing Development, QA, Staging, and Production databases and applications and implementing future enhancements to meet the business needs of the end users. Monitor database health: availability, reliability, performance, and capacity planning. Upgrading PostgreSQL databases. Responsible for the monitoring and uptime of all production databases. Estimate PostgreSQL database capacities; develop methods for monitoring database capacity and usage. Lead efforts to develop and improve procedures for automated monitoring and proactive intervention, reducing unneeded downtime. Responsible for developer SQL code review to ensure queries are optimized and tuned to perform efficiently prior to production release. Responsible for regular maintenance on databases (e.g., vacuum, reindexing, archiving). Responsible for proactive remediation of database operational problems. Participate in a 24/7 support rotation. Support complex web-based applications. Automatic switchover and failover.
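
To make the maintenance duties above concrete, here is a minimal sketch of routine PostgreSQL upkeep of the kind this listing describes; the table and index names are hypothetical placeholders, not from the posting.

```sql
-- Reclaim dead tuples and refresh planner statistics for a busy table.
VACUUM (VERBOSE, ANALYZE) orders;

-- Rebuild a bloated index without blocking writes (PostgreSQL 12+).
REINDEX INDEX CONCURRENTLY orders_customer_id_idx;

-- Spot tables whose dead-tuple counts suggest vacuum or bloat problems.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```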

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

BestEx Research is a financial technology and research firm that specializes in developing sophisticated execution algorithms and transaction cost modeling tools across various asset classes. The company offers high-performance algorithmic execution services to hedge funds, CTAs, asset managers, and banks through both a traditional electronic broker and a broker-neutral Software as a Service (SaaS) model. The firm's cloud-based platform, Algo Management System (AMS), is a comprehensive end-to-end algorithmic trading solution designed for equities and futures. It encompasses various features such as transaction cost analysis (TCA), a tool for algorithm customization called Strategy Studio, a trading dashboard, and pre-trade analytics, all within a single platform. Currently, the platform is operational for U.S., European, and Canadian equities as well as global futures trading. BestEx Research aims to revolutionize the $100 billion industry by challenging conventional black-box solutions from banks and offering innovative execution algorithms that prioritize performance enhancement, transparency, and customization. Leveraging cutting-edge technology, the company supports its low-latency and highly scalable research and trading systems with backend operations in C++, research libraries in C++/Python and R, and web-based technologies for front-end platforms. The mission of BestEx Research is to establish itself as the leader in automation and measurement of execution across global asset classes while significantly reducing transaction costs for clients. The company's Bangalore office serves as a core engineering and research hub, tackling complex challenges in trading, systems, and data science alongside the U.S. team. Joining BestEx Research offers a unique opportunity to work in a collaborative environment with minimal bureaucracy, allowing engineers to directly engage with traders, researchers, and the CEO. Employees benefit from direct ownership and visibility on production systems, daily learning opportunities from industry pioneers, and a high-trust culture where performance takes precedence over hierarchy. The company offers competitive compensation in India, including equity and cash bonuses, along with a structured 5-week training program covering various aspects of market microstructure, algorithmic execution, exchange simulators, and more. As a member of the engineering team at BestEx Research, you will be involved in building ultra-low-latency trading systems, real-time exchange simulators, execution algorithms, and alpha forecasting models. The role requires ownership of the technology stack, ranging from C++ infrastructure to Python-based research platforms. Ideal candidates are those who excel at the intersection of research and engineering, comfortable working on systems, testing hypotheses, and analyzing market-moving data. Key responsibilities include architecting and implementing execution algorithms across global markets, developing exchange simulators and backtesting frameworks, designing smart order routing systems, creating models for market impact and price prediction, optimizing high-performance trading systems for throughput and latency, and establishing core infrastructure to support new asset classes and global markets. Collaboration with global quants, traders, and senior engineers is essential to analyze system performance across various layers.
Qualifications for this role include a Bachelor's or Master's degree from a top-tier CS, Math, or Engineering program (IIT/NIT/BITS preferred but not required), at least 5 years of hands-on C++ experience in performance-sensitive applications, a strong foundation in data structures, OS, networks, and concurrency, and a passion for learning about markets, systems, and modeling. Additional experience with Python, real-time systems, and trading platforms is advantageous. Working at BestEx Research offers exposure to real-time trading systems, mentorship from industry veterans, a blend of research, systems design, and algorithm development, equity and cash compensation, and a culture free of red tape and outsourcing mentality.

Posted 1 week ago

Apply

8.0 - 10.0 years

4 - 14 Lacs

Pune, Maharashtra, India

On-site

Description: We are seeking an experienced Oracle PostgreSQL Developer to join our dynamic team in India. The ideal candidate will have extensive experience in developing and managing PostgreSQL databases, ensuring optimal performance and reliability to support our business applications.

Responsibilities: Design, develop, and maintain PostgreSQL database systems. Optimize database performance through indexing, query optimization, and schema design. Work with application developers to integrate PostgreSQL with various applications. Implement backup and recovery strategies for PostgreSQL databases. Monitor database health and performance, ensuring high availability and reliability. Write complex SQL queries and stored procedures to support business requirements. Perform database upgrades and migrations with minimal downtime. Ensure data security and compliance with data protection regulations.

Skills and Qualifications: 8-10 years of experience in PostgreSQL database development and administration. Strong knowledge of SQL and PL/pgSQL. Experience with database performance tuning and optimization techniques. Proficient in database backup and recovery strategies. Hands-on experience with data modeling and schema design. Familiarity with database migration and upgrade processes. Understanding of data security best practices. Experience with Linux/Unix environments and shell scripting. Ability to work collaboratively in a team environment and communicate effectively with stakeholders.
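
For context, a minimal PostgreSQL tuning workflow of the kind this role involves (the schema and predicate are assumptions for illustration):

```sql
-- 1. Inspect the actual execution plan and buffer usage of a slow query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.total
FROM orders o
WHERE o.customer_id = 42 AND o.status = 'OPEN';

-- 2. If the plan shows a sequential scan on a selective filter,
--    add a composite index matching the filter columns.
CREATE INDEX CONCURRENTLY idx_orders_customer_status
    ON orders (customer_id, status);

-- 3. Refresh statistics so the planner can make good use of the new index.
ANALYZE orders;
```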

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Noida, Pune, Bengaluru

Work from Office

Description: We are hiring an experienced Azure SQL & MS SQL DBA with a solid background in SQL development. The role is 70% focused on database administration and 30% on SQL coding and optimization. The ideal candidate should be capable of managing both on-prem and cloud SQL environments efficiently.

Requirements:
• Strong experience with SQL Server (2016–2022) and Azure SQL.
• Expertise in query tuning, index strategies, statistics, and data analysis.
• Hands-on with PowerShell, T-SQL, Azure CLI, and CI/CD pipelines.
• Familiarity with SQL migrations, DevOps, and database documentation.

Job Responsibilities:
Database Administration (70%)
• Install, configure & maintain MS SQL Server (2016/2019/2022) and Azure SQL.
• Manage system database roles, capacity planning, and monitoring.
• Design and implement backup/recovery strategies, including Azure restore procedures.
• Ensure data consistency, apply index management techniques, and maintain high availability.
• Use PowerShell scripting and Azure Automation for daily DBA tasks.
• Manage security and access control, and ensure compliance (GDPR, HIPAA).
• Utilize Azure tools (Monitor, Log Analytics, DevOps CI/CD) for proactive management.

Performance Tuning & Optimization
• Optimize SQL queries, stored procedures, and execution plans.
• Apply best practices in indexing, statistics management, and SQL Server performance tuning.
• Lead Azure SQL database optimization efforts in collaboration with development teams.

SQL Development (30%)
• Develop and optimize T-SQL scripts, procedures, and triggers.
• Support schema design, data transformation, and application logic.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can enjoy coffee or tea with your colleagues over a game of table tennis, plus discounts at popular stores and restaurants!
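
As a rough illustration of the query-tuning and statistics duties in this listing, a T-SQL sketch using SQL Server's missing-index DMVs (the table name is hypothetical):

```sql
-- Surface the costliest missing-index suggestions recorded by the optimizer.
SELECT TOP (10)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;

-- Keep optimizer statistics current on a heavily updated table.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

Treat the DMV output as a hint rather than a mandate: suggested indexes should be weighed against write overhead and consolidated with existing ones.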

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 15 Lacs

Hyderabad, Telangana, India

On-site

Job Summary / Responsibilities: Someone with good experience working as a SQL DBA, including: experience in database server security patch management; experience in database access management; experience with SQL Server Standard Edition, Redgate monitoring, and SQL Server instances on AWS (any one of New Relic or Ivanti is fine); experience in MS SQL Server administration; and experience with Performance Tuning and Optimization (PTO).

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 8 Lacs

Noida

Work from Office

Location: Noida. Experience: 4-6 years. Job Summary: We are seeking an experienced and detail-oriented Oracle Developer with strong expertise in PL/SQL, data migration, and SQL optimization. The ideal candidate will be responsible for managing and supporting data migration projects, especially transitioning from Oracle to PostgreSQL, as well as providing ongoing application support. Key Responsibilities: Design, develop, and maintain PL/SQL procedures, packages, triggers, and functions in Oracle. Lead and execute data migration strategies from Oracle to PostgreSQL databases. Develop SQL scripts for data transformation, validation, and performance optimization. Troubleshoot database and application issues, ensuring high availability and performance. Collaborate with application support teams to resolve incidents and implement enhancements. Document technical solutions, data mapping, and process flows clearly and effectively. Work closely with stakeholders to understand business needs and translate them into technical solutions. Required Skills: Expert-level proficiency in PL/SQL and SQL. Proven experience in data migration projects, especially Oracle to PostgreSQL. Excellent problem-solving and analytical thinking abilities.
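
To illustrate the Oracle-to-PostgreSQL porting work this role centers on, a toy procedure in both dialects (object names are invented for the example):

```sql
-- Oracle PL/SQL original:
CREATE OR REPLACE PROCEDURE log_order (p_order_id IN NUMBER) IS
BEGIN
    INSERT INTO order_log (order_id, logged_at)
    VALUES (p_order_id, SYSDATE);
END log_order;
/

-- PostgreSQL (PL/pgSQL) equivalent:
CREATE OR REPLACE PROCEDURE log_order (p_order_id integer)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO order_log (order_id, logged_at)
    VALUES (p_order_id, now());
END;
$$;
```

Even in a case this small, the migration touches types (NUMBER vs. integer), date functions (SYSDATE vs. now()), and procedure syntax, which is why such projects lean heavily on validation scripts.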

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Vadodara, Gujarat

On-site

As a Data Solutions Developer, you will collaborate closely with the Head of Data to create and implement scalable data solutions that consolidate company-wide data into a controlled data warehouse environment, potentially utilizing Azure Data Lake and SQL Server. Your primary focus will be on constructing efficient ELT pipelines, APIs, and database architectures to support the high-speed ingestion and storage of data from various sources into a unified repository. This repository will act as the foundation for business reporting and operational decision-making processes. Your responsibilities will include designing and implementing scalable database and data warehouse solutions, establishing and managing data ingestion pipelines for both batch and real-time integration, defining and maintaining relational schemas and metadata frameworks, and supporting the development of a centralized data repository for unified reporting and analysis. Furthermore, you will optimize database structures and queries for performance and scalability, while also documenting and version-controlling all data pipeline and architecture components. In addition to your focus on centralizing data infrastructure, you will address ad hoc data solution requirements to ensure the continuity of business operations. You will develop and maintain robust APIs for internal and external data exchange, facilitate efficient data flow between operational systems and reporting platforms, and support system integrations with various tools such as CRM, finance, operations, and marketing. You will collaborate with IT and compliance personnel to enforce access control, encryption, and PII protection across all data solutions, as well as ensure compliance with data protection regulations such as GDPR. Promoting and upholding data quality, governance, and security standards will be an integral part of your role. Acting as a subject matter expert, you will provide guidance on best practices in data engineering and cloud architecture, offer ongoing support to internal teams on performance optimization and scalable solution design, and take on DBA responsibilities for the SBR database. To be successful in this role, you should possess at least 5 years of experience as a SQL Developer or in a similar role, demonstrate a strong understanding of system design and software architecture, and be capable of designing and building performant and scalable data infrastructure solutions. Proficiency in SQL and its variations among popular databases, expertise in optimizing complex SQL statements, and solid knowledge of APIs and data ingestion pipeline creation are essential skills for this position.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Providing technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases is a crucial aspect of your responsibilities. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle, including training, deployment, and monitoring, will be part of your day-to-day tasks. Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. Your role will require a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficiency in SQL, Python, and PySpark and a solid understanding of AI/ML workflows and tools are necessary. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements. As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.

Posted 2 weeks ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Responsibilities Data engineering lead role for D&Ai data modernization (MDIP). Ideally, the candidate must be flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the coverage requirements of the job. The candidate can work with the immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements. Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work as well as empowering them to realize their full potential. Design, structure and store data into unified data models and link them together to make the data reusable for downstream products. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL. Enable and accelerate standards-based development, prioritizing reuse of code; adopt test-driven development, unit testing, and test automation with end-to-end observability of data. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance and cost. Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application following well-architected design standards. Define and manage SLAs for data products and processes running in production. Create documentation for learnings and knowledge transfer to internal associates. Qualifications 12+ years of overall technology, engineering, and data management experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture. 8+ years of experience with Data Lakehouse, Data Warehousing, and Data Analytics tools. 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL or any other popular RDBMS. 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks. 4+ years of cloud data engineering experience in Azure or AWS. Fluent with Azure cloud services. Azure Data Engineering certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one business intelligence tool such as Power BI or Tableau. Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like ADO, GitHub and CI/CD tools for DevOps automation and deployments. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools. Experience with Statistical/ML techniques is a plus.
Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. BA/BS in Computer Science, Math, Physics, or other technical fields. The candidate must be flexible to work an alternative work schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon product and project coverage requirements of the job. Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager; comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations. Domain knowledge in the CPG industry with a supply chain/GTM background is preferred.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 20 Lacs

Ahmedabad, Gujarat, India

On-site

Description: We are seeking an experienced Oracle Database Administrator to join our team in India. The ideal candidate will have a strong background in Oracle database management and will be responsible for ensuring the performance, integrity, and security of our database systems.

Responsibilities: Develop, implement, and maintain Oracle database systems. Monitor database performance and optimize queries. Ensure data integrity and security of databases. Collaborate with development teams to design database solutions. Provide support for database-related issues and troubleshooting. Perform regular database backups and recovery procedures. Write and maintain SQL scripts and stored procedures.

Skills and Qualifications: 5-10 years of experience in Oracle database administration. Strong knowledge of SQL and PL/SQL programming. Experience with Oracle database tuning and optimization techniques. Familiarity with Oracle database backup and recovery strategies. Understanding of data modeling and database design principles. Ability to work collaboratively in a team environment. Excellent problem-solving skills and attention to detail.
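
For context, a minimal sketch of the query-tuning loop an Oracle DBA typically runs (the query and schema are hypothetical):

```sql
-- Capture the estimated execution plan for a suspect query.
EXPLAIN PLAN FOR
SELECT e.employee_id, e.salary
FROM employees e
WHERE e.department_id = 50;

-- Render the plan captured above.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```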

Posted 2 weeks ago

Apply

5.0 - 7.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

SQL & PL/SQL Expertise: Demonstrated proficiency in SQL and PL/SQL, with a deep understanding of complex queries and optimization. Data Modeling: Strong experience in data modeling, including designing table structures and efficiently joining multiple tables. Ability to extract and manipulate data across different tables, utilizing common columns effectively. Advanced PL/SQL Concepts: Expertise in developing and managing PL/SQL packages, cursors, procedures, and functions. In-depth knowledge of advanced PL/SQL features to enhance performance and maintainability. Qualifications: 5 to 7 years of hands-on experience in SQL and PL/SQL development. Proven track record of implementing data solutions and optimizing database performance. Strong problem-solving skills and attention to detail.
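
As a compact illustration of the constructs this listing names (a package spec, a cursor, and a procedure), here is a hypothetical PL/SQL sketch; the tables and business rule are invented:

```sql
CREATE OR REPLACE PACKAGE order_pkg IS
    PROCEDURE close_stale_orders (p_days IN NUMBER);
END order_pkg;
/

CREATE OR REPLACE PACKAGE BODY order_pkg IS
    PROCEDURE close_stale_orders (p_days IN NUMBER) IS
        -- Cursor over orders left open longer than p_days.
        CURSOR c_stale IS
            SELECT order_id
            FROM orders
            WHERE status = 'OPEN'
              AND created_at < SYSDATE - p_days;
    BEGIN
        FOR r IN c_stale LOOP
            UPDATE orders
            SET status = 'CLOSED'
            WHERE order_id = r.order_id;
        END LOOP;
        COMMIT;
    END close_stale_orders;
END order_pkg;
/
```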

Posted 2 weeks ago

Apply

2.0 - 5.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

Total Experience: 3 to 4 years. Relevant Experience on Mandatory Skills: 3 years. The resource will work on PL/SQL and should have hands-on experience understanding and writing complex SQL queries and stored procedures, along with working experience on a SQL project. Mandatory Skills: PL/SQL. Good-to-Have Skills: Control-M or SSRS.

Posted 2 weeks ago

Apply

3.0 - 4.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

Total Experience: 3 to 4 years. Relevant Experience on Mandatory Skills: 3 years. The resource will work on PL/SQL and should have hands-on experience understanding and writing complex SQL queries and stored procedures, along with working experience on a SQL project. Mandatory Skills: PL/SQL. Good-to-Have Skills: Control-M or SSRS.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

chennai, tamil nadu

On-site

You are a Database Performance & Data Modeling Specialist with a primary focus on optimizing schema structures, tuning SQL queries, and ensuring that data models are well-prepared for high-volume, real-time systems. Your responsibilities include designing data models that balance performance, flexibility, and scalability, conducting performance benchmarking to identify bottlenecks and propose improvements, analyzing slow queries to recommend indexing, denormalization, or schema revisions, monitoring query plans, memory usage, and caching strategies for cloud databases, and collaborating with developers and analysts to optimize application-to-database workflows. You must possess strong experience in database performance tuning, especially on GCP platforms like BigQuery, Cloud SQL, and AlloyDB. Proficiency in schema refactoring, partitioning, clustering, and sharding techniques is essential. Familiarity with profiling tools, slow query logs, and GCP monitoring solutions is required, along with SQL optimization skills including query rewriting and execution plan analysis. Preferred skills include a background in mutual fund or high-frequency financial data modeling, and hands-on experience with relational databases like PostgreSQL and MySQL, distributed caching, materialized views, and hybrid model structures. Soft skills that are crucial for this role include being precision-driven with an analytical mindset, being a clear communicator with attention to detail, and possessing strong problem-solving and troubleshooting abilities. By joining this role, you will have the opportunity to shape high-performance data systems from the ground up, play a critical role in system scalability and responsiveness, and work with high-volume data in a cloud-native enterprise setting.
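
To ground the partitioning and clustering skills above, a hedged BigQuery example in the mutual fund domain the listing mentions (the dataset, table, and columns are assumptions):

```sql
CREATE TABLE fund_data.nav_history (
    fund_id   STRING,
    nav_date  DATE,
    nav_value NUMERIC
)
PARTITION BY nav_date   -- prune scans to only the dates queried
CLUSTER BY fund_id;     -- co-locate rows for selective fund filters

-- A query filtering on the partition and clustering columns
-- scans only the matching partitions instead of the full table.
SELECT fund_id, nav_value
FROM fund_data.nav_history
WHERE nav_date BETWEEN '2024-01-01' AND '2024-01-31'
  AND fund_id = 'FND001';
```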

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

NTT DATA is looking for a .NET Lead Engineer to join their team in Pune, Maharashtra, India. As a .NET Senior Manager, you will be responsible for leading the architecture definition, design, and development of web-based applications using .NET C# technology. You will work with a team to deliver high-quality software solutions for clients, focusing on creating fast, resilient, and scalable applications. In this role, you will lead a team of 5-8 people, architect modern cloud-native and microservice applications, and have experience in building distributed services in Azure. You will be involved in creating component designs, producing technical documentation, and ensuring the quality delivery of enterprise solutions. Additionally, you will collaborate with product management, development teams, and internal IT departments to meet business requirements and drive innovation. To be successful in this position, you should have at least 10 years of experience in architecting .NET C# web-based applications, 5+ years leading .NET application architecture, and proficiency in web service development, reporting, and analytics. Experience with Visual Studio, ASP.NET, C#, and SQL Server is required, along with a strong understanding of Object-Oriented Design and Service-Oriented Architecture. The ideal candidate is a lifelong learner, a team player, and an effective communicator. NTT DATA offers a supportive environment for professional growth and provides opportunities for skills development and career advancement. If you are passionate about technology and eager to contribute to a dynamic team, apply now and be a part of NTT DATA's innovative and forward-thinking organization.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Data Warehouse Engineer will be responsible for managing and optimizing data processes in an Azure environment using Snowflake. The ideal candidate should have solid SQL skills and a basic understanding of data modeling. Experience with CI/CD processes and Azure ADF is preferred. Additionally, expertise in ETL/ELT frameworks and ER/Studio would be a plus. As a Senior Data Warehouse Engineer, in addition to the core requirements, you will oversee other engineers while also being actively involved in data modeling and Snowflake SQL optimization. You will be responsible for conducting design reviews, code reviews, and deployment reviews with the engineering team. Familiarity with medallion architecture and experience in the healthcare or life sciences industry will be highly advantageous. At Myridius, we are committed to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we aim to drive organizations through the rapidly evolving landscapes of technology and business. Our integration of cutting-edge technology with deep domain knowledge enables businesses to seize new opportunities, drive significant growth, and maintain a competitive edge in the global market. We go beyond typical service delivery to craft transformative outcomes that help businesses not just adapt, but thrive in a world of continuous change. Discover how Myridius can elevate your business to new heights of innovation by visiting us at www.myridius.com and start leading the change.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

2 - 6 Lacs

Hyderabad

Work from Office

Key Skills: PL/SQL, SQL Optimization, Data Migration, Stored Procedures, Data Integrity, Performance Tuning, Production Support. Roles & Responsibilities: Develop, test, and maintain PL/SQL procedures, packages, functions, and triggers. Work closely with business analysts and end-users to understand data requirements and deliver scalable solutions. Perform performance tuning and optimization of SQL queries. Participate in data migration and transformation activities. Create and manage complex stored procedures for reporting and integration purposes. Work with large data sets and ensure data quality and integrity. Collaborate with application developers to implement database changes and improvements. Handle production support and resolve database-related issues. Experience Requirement: 4-8 years of experience in PL/SQL development with a strong understanding of databases. Experience in performance tuning and query optimization. Familiarity with data migration and transformation processes. Hands-on experience in creating stored procedures for reporting and integration. Knowledge of data quality and integrity management. Education: M.B.A., B.Tech M.Tech (Dual), MCA, B.E., B.Tech, M.Tech, B.Sc.
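
One classic PL/SQL tuning pattern relevant to the responsibilities above is replacing row-by-row DML with bulk binds; a sketch with invented table and column names:

```sql
DECLARE
    TYPE t_ids IS TABLE OF orders.order_id%TYPE;
    l_ids t_ids;
BEGIN
    -- Fetch all pending order ids in one round trip.
    SELECT order_id BULK COLLECT INTO l_ids
    FROM orders
    WHERE status = 'PENDING';

    -- One SQL context switch for the whole batch instead of one per row.
    FORALL i IN 1 .. l_ids.COUNT
        UPDATE orders
        SET status = 'PROCESSED'
        WHERE order_id = l_ids(i);

    COMMIT;
END;
/
```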

Posted 2 weeks ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. Increase awareness about available data and democratize access to it across the company. As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Ideally, the candidate must be flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the coverage requirements of the job. The candidate can work with the immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements. Responsibilities Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential. Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications 8+ years of overall technology experience that includes at least 4+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services. Azure Certification is a plus. Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). BA/BS in Computer Science, Math, Physics, or other technical fields. Candidates must be flexible to work an alternative work schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon product and project coverage requirements of the job. Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager; comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along.
Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics, and new product development. PepsiCo's Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a data engineer, you will be the key technical expert building PepsiCo's data products to drive a strong vision. You'll be empowered to create data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help develop very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications 4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services. Azure Certification is a plus. Experience in Azure Log Analytics.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

27 - 32 Lacs

Hyderabad

Work from Office

Overview Job Title: Data Engineer L10. PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo's global business scale to enable business insights, advanced analytics and new product development. PepsiCo's Enterprise Data Operations (EDO) team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations, and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Enterprise Data Operations (EDO) does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. Responsibilities As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. The candidate should have at least 5+ years of experience working on cloud platforms; 2+ years of experience in Azure is needed. S/he will have to lead technical discussions with leads of different business sectors. S/he will have to be on top of all support issues pertaining to t Qualifications Bachelor's in Computer Science, Engineering, or a related field. Skills, Abilities, Knowledge: Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Positive and flexible attitude to adjust to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along.
Consistently attain/exceed individual and team goals. Ability to learn quickly and adapt to new skills. Certification in Azure Fundamentals is preferred. Below is nice-to-have experience for the candidate:
- Proficiency with Git and understanding of DevOps pipelines
- Data quality frameworks using the Great Expectations suite
- Understanding of cloud networking (VNETs, RBACs, etc.)
- Fair understanding of web applications
Qualifications 14+ years of overall technology experience that includes at least 8+ years of hands-on software development and data engineering. 8+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Azure Certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications like Kubernetes is a plus. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). BE/BTech/MCA (Regular) in Computer Science, Math, Physics, or other technical fields. The candidate must have thorough knowledge of Spark, SQL, Python, Databricks, and Azure:
- Spark (joins, upserts, deletes, aggregates, repartitioning, optimizations, working with structured and unstructured data, framework design, etc.)
- SQL (joins, merges, aggregates, indexing, clustering, functions, stored procedures, optimizations, etc.)
- Python (functions, modules, classes, tuples, lists, dictionaries, error handling, multi-threading, etc.)
- Azure (Azure Data Factory, Service Bus, Log Analytics, Event Grid, Event Hub, Logic Apps, App Services, etc.)
- Databricks (clusters, pools, workflows, authorization, APIs, DBRs, AQE, optimizations, etc.)
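
As an illustration of the Spark SQL upsert and merge work listed above, a minimal Databricks-style MERGE sketch (the tables are placeholders, not from the posting):

```sql
MERGE INTO sales.orders AS tgt
USING staging.orders_updates AS src
    ON tgt.order_id = src.order_id
WHEN MATCHED THEN
    UPDATE SET tgt.status = src.status,
               tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN
    INSERT (order_id, status, updated_at)
    VALUES (src.order_id, src.status, src.updated_at);
```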

Posted 2 weeks ago

Apply

6.0 - 11.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Overview As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance and data management. Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications 6+ years of overall technology experience that includes at least 4+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services. Azure Certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes.
Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). BA/BS in Computer Science, Math, Physics, or other technical fields.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Warehouse Engineer at Myridius, you will need solid SQL language skills and basic knowledge of data modeling. Your role will involve working with Snowflake on Azure and a CI/CD process using any tooling. Additionally, familiarity with Azure ADF and ETL/ELT frameworks would be beneficial for this position. It would be advantageous to have experience in ER/Studio and a good understanding of the healthcare/life sciences industry. Knowledge of GxP processes will be a plus in this role. For a Senior Data Warehouse Engineer position, you will be overseeing engineers while actively engaging in the same tasks. Your responsibilities will include conducting design reviews, code reviews, and deployment reviews with engineers. You should have expertise in solid data modeling, preferably using ER/Studio or an equivalent tool. Optimizing Snowflake SQL queries to enhance performance and familiarity with medallion architecture will be key aspects of this role. At Myridius, we are dedicated to transforming the way businesses operate by offering tailored solutions in AI, data analytics, digital engineering, and cloud innovation. With over 50 years of expertise, we drive a new vision to propel organizations through rapidly evolving technology and business landscapes. Our commitment to exceeding expectations ensures measurable impact and fosters sustainable innovation. Together with our clients, we co-create solutions that anticipate future trends and help businesses thrive in a world of continuous change. If you are passionate about driving significant growth and maintaining a competitive edge in the global market, join Myridius in crafting transformative outcomes and elevating businesses to new heights of innovation. Visit www.myridius.com to learn more about how we lead the change.
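
For a sense of the Snowflake SQL optimization mentioned above, a hedged sketch of common tuning steps (table and column names are assumptions):

```sql
-- Check how well micro-partitions are pruned for a common filter column.
SELECT SYSTEM$CLUSTERING_INFORMATION('claims', '(service_date)');

-- Define a clustering key so heavy date-range scans prune more partitions.
ALTER TABLE claims CLUSTER BY (service_date);

-- Find the most expensive recent queries to target for optimization.
SELECT query_text, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 10;
```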

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You should have a B.Tech/B.E/MSc/MCA qualification along with a minimum of 10 years of experience. As a Software Architect - Cloud, your responsibilities will include architecting and implementing AI-driven Cloud/SaaS offerings. You will be required to research and design new frameworks and features for various products, ensuring they meet high-quality standards and are designed for scale, resiliency, and efficiency. Additionally, motivating and assisting lead and senior developers in their professional and technical growth, contributing to academic outreach programs, and participating in company branding activities will be part of your role. To qualify for this position, you must have experience in designing and delivering widely used enterprise-class SaaS applications, preferably in marketing technologies. Knowledge of cloud computing infrastructure, AWS certification, hands-on experience in scalable distributed systems, AI/ML technologies, big data technologies, in-memory databases, caching systems, ETL tools, containerization solutions like Kubernetes, large-scale RDBMS deployments, SQL optimization, Agile and Scrum development processes, Java, Spring technologies, Git, and DevOps practices are essential requirements for this role.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies