Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4.0 - 5.0 years
3 - 10 Lacs
Mysore, Karnataka, India
On-site
Your role and responsibilities: As a Consultant, you are responsible for developing application designs and providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new mobile solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Mandatory Skills: LUW/Unix DB2 DBA, Shell Scripting, Linux and AIX knowledge. Secondary Skills: Physical DB2 DBA skills, PL/SQL knowledge and exposure. Expert knowledge of IBM DB2 V11.5 installation, configuration, and administration on Linux/AIX systems. Expert-level knowledge of database restores, including redirected restore and backup concepts. Excellent understanding of database performance monitoring techniques and fine tuning, and the ability to perform performance checks and query optimization.
Preferred technical and professional experience: Good knowledge of utilities like import, load, and export under high-volume conditions. Ability to tune SQL using db2advis and db2expln. Ability to troubleshoot database issues using db2diag, db2pd, db2dart, db2top, etc.
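The redirected-restore concept mentioned above boils down to a three-step command sequence; here is a minimal Python sketch that assembles it (database names, paths, and the exact clause syntax are illustrative and should be checked against the DB2 documentation for your version):

```python
def redirected_restore_cmds(db, backup_dir, timestamp, new_db, storage_paths):
    """Build the db2 CLP statements for a redirected restore (simplified sketch)."""
    paths = ", ".join(f"'{p}'" for p in storage_paths)
    return [
        # Step 1: start the restore in redirect mode under a new database name.
        f"RESTORE DATABASE {db} FROM '{backup_dir}' TAKEN AT {timestamp} "
        f"INTO {new_db} REDIRECT",
        # Step 2: point storage at the target system's paths.
        f"SET STOGROUP PATHS FOR IBMSTOGROUP ON {paths}",
        # Step 3: resume and complete the restore.
        f"RESTORE DATABASE {db} CONTINUE",
    ]

cmds = redirected_restore_cmds(
    "SALESDB", "/backups", "20240101120000", "SALESTST", ["/data1", "/data2"]
)
for c in cmds:
    print(c)
```

The same builder could feed a shell wrapper or a change-ticket template rather than being run directly.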
Posted 1 week ago
0.0 - 5.0 years
0 - 5 Lacs
Pune, Maharashtra, India
On-site
As a Data Engineer specializing in DBT, you'll be joining one of IBM Consulting's Client Innovation Centers in Hyderabad. In this role, you'll contribute your deep technical and industry expertise to a variety of public and private sector clients, driving innovation and adopting new technologies.
Your Responsibilities: Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability. Collaborate with data analysts, engineers, and business teams to align data transformations with business needs. Monitor and troubleshoot data pipelines to ensure accuracy and performance. Work with Azure-based cloud technologies to support data storage, transformation, and processing.
Required Qualifications: Education: Bachelor's Degree. Technical & Professional Expertise: Strong MS SQL and Azure Databricks experience. Ability to implement and manage data models in DBT, focusing on data transformation and alignment with business requirements. Experience ingesting raw, unstructured data into structured datasets within a cloud object store. Proficiency in using DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. Skilled in writing and optimizing SQL queries within DBT to enhance data transformation processes and improve overall performance.
Preferred Qualifications: Education: Master's Degree. Technical & Professional Expertise: Ability to establish DBT best practices that improve performance, scalability, and reliability. Experience designing, developing, and maintaining scalable data models and transformations using DBT in conjunction with Databricks. Proven interpersonal skills, contributing to team efforts and achieving results as required.
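The "raw, unstructured data into structured datasets" step can be sketched outside DBT as a plain flatten-and-type transformation; the record shape and field names below are invented for illustration:

```python
import json

raw_records = [
    '{"order_id": 1, "customer": {"name": "Asha"}, "amount": "250.50"}',
    '{"order_id": 2, "customer": {"name": "Ravi"}, "amount": "99.00"}',
]

def to_structured(rec: str) -> dict:
    """Flatten one raw JSON record into a typed, analysis-ready row."""
    d = json.loads(rec)
    return {
        "order_id": int(d["order_id"]),
        "customer_name": d["customer"]["name"],  # unnest the object
        "amount": float(d["amount"]),            # cast string to numeric
    }

rows = [to_structured(r) for r in raw_records]
print(rows)
```

In DBT itself the same unnesting and casting would live in a model's SQL, with the raw files landed in the cloud object store as a source.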
Posted 1 week ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
We are looking for a Performance Engineer with 6+ years of experience to optimize, analyze, and enhance the performance of our applications and systems. The ideal candidate will have expertise in performance testing, tuning, monitoring, and troubleshooting to ensure smooth system functionality and scalability under varying loads. Your Role and Responsibilities: Performance Testing & Analysis: Design, develop, and execute load, stress, endurance, and scalability tests using tools like JMeter, LoadRunner, or Gatling. Bottleneck Identification: Analyze performance issues related to CPU, memory, disk I/O, network latency, and application code. Optimization & Tuning: Work on code optimizations, database query tuning, JVM tuning, and configuration adjustments to improve performance. Monitoring & Profiling: Use APM tools (New Relic, Dynatrace, AppDynamics, Prometheus, Grafana) to track application performance in real time. Automation & Scripting: Develop test scripts using Python, Java, or shell scripting for continuous performance validation. Collaboration: Work closely with developers, DevOps, and infrastructure teams to troubleshoot and resolve performance-related issues. Benchmarking & Reporting: Define performance benchmarks and generate reports with detailed insights and recommendations. Required Education: Bachelor's Degree. Preferred Education: Bachelor's Degree. Required Technical and Professional Expertise: 6+ years of experience in Performance Engineering or Performance Testing. Proficiency in performance testing tools such as JMeter, LoadRunner, Gatling. Strong knowledge of profiling and monitoring tools like New Relic, Dynatrace, AppDynamics, Prometheus, Grafana. Experience with APIs, microservices performance testing, and cloud-based performance testing. Hands-on expertise in JVM tuning, SQL query optimization, GC analysis, and thread dump analysis. Proficiency in scripting languages (Python, Bash, Groovy, or PowerShell).
Familiarity with CI/CD pipelines, Kubernetes, Docker, AWS, or Azure for performance testing in cloud environments. Strong analytical and troubleshooting skills for identifying and resolving performance issues. Preferred Technical and Professional Experience Experience with chaos engineering and resilience testing . Knowledge of distributed systems, caching mechanisms (Redis, Memcached) , and message queues (Kafka, RabbitMQ) . Hands-on experience with AI-driven performance analysis tools .
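Much of the performance analysis described above reduces to percentile math over measured response times; a minimal sketch, with invented sample data:

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile: the smallest value with at least p% of samples at or below it."""
    s = sorted(samples)
    k = max(0, round(p / 100 * len(s)) - 1)
    return s[k]

# Response times in milliseconds from a hypothetical load-test run.
latencies_ms = [120, 95, 310, 101, 99, 250, 105, 98, 400, 110]

print("p50 :", percentile(latencies_ms, 50), "ms")
print("p95 :", percentile(latencies_ms, 95), "ms")
print("mean:", statistics.mean(latencies_ms), "ms")
```

Tools like JMeter and Gatling report the same statistics; reproducing them by hand is useful when validating a report or post-processing raw result logs.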
Posted 1 week ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Ready to build the future with AI? At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Principal Consultant, Automation Test Lead! Responsibilities: Understand the need behind a requirement beyond its face value and design a proper machine-executable automation solution using Python scripts. You will receive Business Rules or automation test scenarios from the business or QA team to automate using Python and SQL; you will not be responsible for writing test cases. Implement reusable solutions following best practices and deliver automation results on time. Maintain, troubleshoot, and optimize existing solutions. Collaborate with cross-disciplinary teams to align automation solutions with the broader engineering community. Document your work. Lead, coordinate, and guide the ETL manual and automation testers.
You may also get a chance to learn new cloud technologies. Tech stack (as of now): 1. Redshift 2. Aurora (PostgreSQL) 3. S3 object storage 4. EKS / ECR 5. SQS/SNS 6. Roles/Policies 7. Argo 8. Robot Framework 9. Nested JSON. Qualifications we seek in you! Minimum Qualifications: 1. Python scripting: the candidate should be strong in Python program design, Pandas, processes, and HTTP-style request protocols. 2. SQL technologies (ideally PostgreSQL): OLTP/OLAP, joins, grouping, aggregation, window functions, etc. 3. Basic command knowledge of Windows and Linux operating systems. 4. Git usage: understanding of version control concepts like branches, pull requests, commits, rebase, and merge. 5. SQL optimization knowledge is a plus. 6. Good understanding of and experience in data-structure-related work. Preferred Qualifications (good to have, as the Python code is deployed using these frameworks): 1. Docker: understanding of image/container concepts. 2. Kubernetes: understanding of k8s concepts and theory, especially pods, environments, etc. 3. Argo Workflows / Airflow. 4. Robot Framework. 5. Kafka: understanding of Kafka concepts and event-driven methods. Why join Genpact: Lead AI-first transformation, building and scaling AI solutions that redefine industries. Make an impact, driving change for global enterprises and solving business challenges that matter. Accelerate your career, gaining hands-on experience, world-class training, mentorship, and AI certifications to advance your skills. Grow with the best, learning from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
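Automating a business rule with Python and SQL, as this role describes, can be sketched as follows; the table and rule are invented for illustration, and an in-memory SQLite database stands in for the role's actual Redshift/Aurora targets:

```python
import sqlite3

# In-memory stand-in for the real warehouse (Redshift/Aurora in this role).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 250.0, "PAID"), (2, -10.0, "PAID"), (3, 99.0, "PENDING")],
)

def rule_no_negative_paid_amounts(conn):
    """Business rule: no PAID order may have a negative amount. Returns offending ids."""
    cur = conn.execute(
        "SELECT id FROM orders WHERE status = 'PAID' AND amount < 0"
    )
    return [row[0] for row in cur.fetchall()]

violations = rule_no_negative_paid_amounts(conn)
print("violations:", violations)
```

In practice each rule would become a parameterized query plus an assertion, runnable from a test runner such as Robot Framework (listed in this posting's stack).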
Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
5.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Sr Data Analyst. About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. As a Sr Data Analyst for Target's Merch Data Analytics team you'll: Support our world-class Merchandising leadership team at Target with critical data analysis that helps the Merch business team make profitable decisions. Enable faster, smarter, and more scalable decision-making to compete and win in the modern retail market. Collaborate with stakeholders and understand their priorities/roadmap to drive business strategies using data. Interface with Target business representatives to validate business requirements/requests for analysis and present final analytical results. Design, develop, and deliver analytical solutions resulting in decision support or models. Gather required data and perform data analysis to support needs.
Communicate the impact of proposed solutions to business partners. Evaluate processes, and analyze and interpret statistical data. Develop business acumen and cultivate client relationships. Present results in a manner that business partners can understand, translating scientific methodology into business terms. Document the analytical methodologies used in the execution of analytical projects. Participate in knowledge-sharing systems to support iterative model builds. Adhere to corporate information protection standards. Keep up to date on industry trends, best practices, and emerging methodologies. About You: Experience: 5-8 years overall, 3-5 years relevant. Qualification: B.Tech / B.E., or a Master's in Statistics, Econometrics, Mathematics, or equivalent. 1. Extensive exposure to Structured Query Language (SQL), SQL optimization, and DW/BI concepts. 2. Proven hands-on experience in a BI visualization tool (i.e. Tableau, Domo, MSTR10, Qlik), with the ability to learn additional vendor and proprietary visualization tools. 3. Strong knowledge of structured (i.e. Teradata, Oracle, Hive) and unstructured databases, including the Hadoop Distributed File System (HDFS); exposure and extensive hands-on work with large data sets. 4. Hands-on experience in R, Python, Hive, or other open-source languages/databases. 5. Hands-on experience in advanced analytical techniques like regression, time-series models, classification techniques, etc., and a conceptual understanding of all the techniques mentioned. 6. Git source code management and experience working in an agile environment. 7. Strong attention to detail and excellent diagnostic and problem-solving skills. 8. Highly self-motivated with a strong sense of urgency, able to work both independently and in team settings in a fast-paced environment, with the capability to manage urgent timelines. 9. Competent and curious, asking questions and learning to fill gaps, with a desire to teach and learn. 10.
Excellent communication, service orientation, and strong relationship-building skills. 11. Experience with retail, merchandising, or marketing will be a strong add-on. Useful links: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits Culture - https://india.target.com/life-at-target/belonging
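Of the analytical techniques listed above, regression is the simplest to show concretely; here is a pure-Python ordinary-least-squares fit on invented promotion data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed-form solution)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# Weeks on promotion vs. units sold (illustrative data, not Target's).
weeks = [1, 2, 3, 4, 5]
units = [12, 15, 21, 24, 28]
a, b = fit_line(weeks, units)
print(f"intercept={a:.1f}, slope={b:.1f}")
```

In day-to-day work this would be a one-liner in R or Python's statistical libraries; the closed form just makes the mechanics visible.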
Posted 1 week ago
6.0 - 10.0 years
8 - 13 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Notice - Immediate to 15 days. Job Summary
• Skills: Oracle Apex, Oracle PL/SQL, Windows batch scripts, Azure DevOps, SQL, and MSSQL.
• Develop and maintain web applications using Oracle Apex, aligned with business requirements.
• Write efficient PL/SQL procedures, packages, functions, and triggers for data processing and automation.
• Perform data analysis, debugging, and performance tuning in Oracle and MSSQL environments.
• Collaborate with cross-functional teams to deliver user stories and features within ADO.
• Source-code versioning with Git.
• Use Azure DevOps for code versioning, deployment pipelines, and work tracking.
• Ensure data integrity, application stability, and adherence to coding best practices.
• Ensure high-quality deliverables through unit testing, functional validation, and collaboration with QA teams.
• Actively contribute to sprint planning, story refinement, and on-time delivery of tasks and features.
• Work on MSSQL for data migration, transformations, and reporting tasks across environments.
• Begin learning the .NET Framework with MVC architecture, focusing on controller/view logic.
• Document technical components and share knowledge within the team.
Nice to have Skills
• Growing cloud expertise, e.g. Microsoft Azure SQL.
• Exposure to Snowflake Data Cloud for foundational knowledge of cloud-based data warehousing, SQL optimization, and data-sharing features.
• Self-learning aligned with organizational goals, in preparation for future cloud/data platform integration initiatives.
• A continuous-learning mindset toward new technology areas.
Posted 1 week ago
10.0 - 15.0 years
15 - 20 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
We are seeking an experienced SQL Developer to join our dynamic team in India. The ideal candidate will have extensive experience in SQL development, database management, and a deep understanding of data structures and optimization techniques. Responsibilities Design, develop, and maintain SQL databases and applications. Write complex SQL queries for data retrieval and analysis. Optimize SQL queries for performance improvements. Collaborate with application developers to integrate SQL databases with applications. Ensure data integrity and security in SQL databases. Monitor and troubleshoot database performance issues. Perform database backups and recovery as needed. Skills and Qualifications 10-15 years of experience in SQL development and database management. Proficient in SQL and experience with various database systems (e.g., MySQL, SQL Server, Oracle). Strong understanding of database design principles and normalization. Experience with performance tuning and optimization of SQL queries. Familiarity with database backup and recovery techniques. Ability to write efficient and effective stored procedures, triggers, and functions. Knowledge of data modeling and ETL processes. Experience with version control systems and collaborative development tools.
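The "optimize SQL queries" duty above usually starts with reading the query plan before and after adding an index; a minimal SQLite sketch (table, data, and index name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, city TEXT)")
conn.executemany(
    "INSERT INTO customers (city) VALUES (?)",
    [("Pune",), ("Delhi",), ("Pune",)],
)

def plan(sql):
    """Return the detail column of SQLite's EXPLAIN QUERY PLAN output."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM customers WHERE city = 'Pune'"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_customers_city ON customers (city)")
after = plan(query)   # index search
print(before)
print(after)
```

The vocabulary differs per engine (SQL Server's execution plans, Oracle's EXPLAIN PLAN, MySQL's EXPLAIN), but the scan-versus-seek reading is the same skill.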
Posted 1 week ago
16.0 - 17.0 years
3 - 5 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Understand the business needs of our clients and design technical solutions to meet those needs Work closely with our clients to gather requirements and design solutions Collaborate with our development team to ensure that solutions are implemented effectively and efficiently Stay up to date with the latest technologies and tools to ensure that we are using the most effective solutions Mentor and guide junior team members to help them develop their skills and knowledge
Posted 1 week ago
16.0 - 20.0 years
3 - 5 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Designing and implementing data models, data integration solutions, and data management systems that ensure data accuracy, consistency, and security Developing and maintaining data dictionaries, metadata, and data lineage documents to ensure data governance and compliance Data Architect should have a strong technical background in data architecture and management, as well as excellent communication skills Strong problem-solving skills and the ability to think critically are also essential to identify and implement solutions to complex data issues
Posted 1 week ago
7.0 - 11.0 years
7 - 17 Lacs
Noida, Hyderabad, Bengaluru
Work from Office
We are looking for an experienced and results-driven Oracle OBIEE/BIP Developer to join our team. The ideal candidate will have solid experience in Oracle Business Intelligence tools (OBIEE 12c and BIP), strong expertise in SQL and database structures, and hands-on exposure to designing and optimizing reports and dashboards. This role involves end-to-end report development, performance tuning, reconciliation validation, and production support for reporting environments. Key Responsibilities: Design, develop, and unit test OBIEE/BIP reports, dashboards, and ad hoc analyses Perform maintenance of OBIEE RPD, web catalog, and data layer components Schedule and manage reports using OBIEE/BIP tools Optimize and troubleshoot Oracle SQL queries for performance Design reconciliation models and conduct report-to-report comparison/validation Collaborate with ETL and application teams to resolve data/reporting issues Assist with report testing, data validation, and issue resolution Handle production support activities, including bug fixes and data source validation Manage scheduler/admin functions and connection configurations in OBIEE/BIP Communicate effectively with business users and stakeholders Required Skills: Proficiency in Oracle OBIEE 12c : BIP Reports, Dashboards, Analyses, RPD, Web Catalog, Scheduling Strong SQL knowledge: queries, views, functions, procedures Experience in SQL performance tuning and debugging Understanding of reconciliation models and validation techniques Strong communication skills (written and verbal) for end-user and cross-team interaction Secondary/Preferred Skills: Familiarity with OBIEE/BIP scheduler and administration tools Knowledge of connection/session management Exposure to ETL pipelines and data integration processes Work Type: Hybrid Candidate must be currently located in or willing to relocate to Bangalore, Hyderabad, or Noida
Posted 1 week ago
3.0 - 8.0 years
2 - 15 Lacs
Indore, Madhya Pradesh, India
On-site
SQL Server database administrator with 3+ years on SQL Server 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, 2019, and 2022. Experience supporting SQL Server IaaS and PaaS on the AWS and Azure clouds. Experience administering other database services like SSAS, SSIS, and SSRS. Experience installing different SQL Server versions per customer requirements. Experience upgrading and migrating databases from lower to higher versions. Experience applying the latest patches (SP/CU/GDR/hotfixes) to fix vulnerabilities, and strong troubleshooting skills for resolving patching issues. Well versed in database performance tuning, especially SQL tuning. Knowledge of Windows clustering is a plus, as is experience in MSSQL cluster environments (Active-Active or Active-Passive). Supported SQL Server on Linux and Windows, both migrations and ongoing support. Experience handling HA/DR environments (log shipping, replication/CDC, mirroring, and AOAG). Strong troubleshooting skills on SQL Server Always On Availability Group and replication issues are required. Skilled in backups and disaster recovery; knowledge of backup tools (e.g. CommVault, Symantec NetBackup, LiteSpeed) is a plus. Strong experience automating DB tasks: health checks, blocking, disk space, maintenance tasks, etc. Experience in PowerShell scripting automation is a plus. 24x7 production support experience. Knowledge of Windows Server 2003 R2 and later editions. Working knowledge of T-SQL, BCP, DTS, and SSIS development. Any cloud architecture certification is preferable. Knowledge of and hands-on expertise in Sybase, PostgreSQL, MySQL, and MongoDB would be additional value. Duties and responsibilities: SQL Server database administration. Microsoft SQL Server (versions 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, 2019). Database support, monitoring, troubleshooting, planning, and migration. Transact-SQL (T-SQL). Performance tuning. SQL Azure. SQL Server Integration Services (SSIS).
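The "automating health checks, disk space" item above can be sketched in Python, though for this role the production version would more likely be PowerShell; paths and the threshold here are illustrative:

```python
import shutil

def disk_space_alerts(paths, warn_pct=85):
    """Return (path, percent_used) for volumes at or above the warning threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        pct = 100 * usage.used / usage.total
        if pct >= warn_pct:
            alerts.append((path, round(pct, 1)))
    return alerts

# Demo with an impossible-to-trip threshold of 101% so the output
# is deterministic on any machine: no volume can exceed 100% used.
print(disk_space_alerts(["/"], warn_pct=101))
```

A real health-check job would run on a schedule, cover data/log/tempdb volumes, and raise a ticket or email instead of printing.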
Posted 2 weeks ago
5.0 - 15.0 years
9 - 19 Lacs
Noida, Uttar Pradesh, India
On-site
Description We are seeking an experienced SQL Developer to join our team in India. The ideal candidate will have a strong background in database design, SQL programming, and performance optimization to support our data-driven projects. Responsibilities Designing and implementing SQL database solutions Developing complex SQL queries and stored procedures Optimizing database performance and scalability Collaborating with developers and stakeholders to define database requirements Performing data analysis and reporting using SQL Ensuring data integrity and security within the database Troubleshooting and resolving database issues Creating and maintaining documentation for database systems Skills and Qualifications 5-15 years of experience in SQL development Strong knowledge of relational database management systems (RDBMS) such as MySQL, SQL Server, or Oracle Proficiency in writing complex SQL queries, stored procedures, and functions Experience with database performance tuning and optimization techniques Understanding of database design principles and normalization Familiarity with data modeling and ETL processes Ability to work with large datasets and perform data analysis Excellent problem-solving skills and attention to detail Strong communication and collaboration skills
Posted 3 weeks ago
10.0 - 15.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Design and maintain Azure-based APIs and integration solutions using Logic Apps, Functions, and API Management. Required Candidate profile 5–10 yrs in C#, .NET, REST APIs, Azure Integration. Team collaboration & system design experience.
Posted 3 weeks ago
2.0 - 4.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Overview: As a data engineering lead, you will be the key technical expert overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. Responsibilities: Act as a subject matter expert across different digital projects. Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Implement best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries.
Qualifications: 7+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services (Azure certification is a plus). Experience in Azure Log Analytics. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and with deployment and CI tools. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). B.Tech/BA/BS in Computer Science, Math, Physics, or other technical fields.
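A data-quality expectation of the kind the tools listed above (Great Expectations, Deequ, Apache Griffin) encode can be sketched in plain Python; the column name and threshold below are invented:

```python
def expect_column_null_fraction_below(rows, column, max_fraction):
    """Check that the fraction of nulls in `column` stays at or under `max_fraction`."""
    nulls = sum(1 for r in rows if r.get(column) is None)
    fraction = nulls / len(rows)
    return {"success": fraction <= max_fraction, "null_fraction": fraction}

rows = [
    {"sku": "A1", "price": 10.0},
    {"sku": "A2", "price": None},
    {"sku": "A3", "price": 12.5},
    {"sku": "A4", "price": 11.0},
]
result = expect_column_null_fraction_below(rows, "price", max_fraction=0.3)
print(result)
```

In a pipeline, a failed expectation would block promotion of the batch or raise an operational alert, which is exactly the monitoring-framework responsibility this role describes.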
Posted 3 weeks ago
5.0 - 15.0 years
6 - 19 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Description We are seeking an experienced SQL Developer to join our team in India. The ideal candidate will have 5-15 years of experience in SQL development, with a strong focus on optimizing database performance and ensuring data integrity. Responsibilities Develop and maintain SQL queries and procedures to support business applications. Optimize SQL queries for performance improvements and efficiency. Create and manage database schemas, tables, and relationships. Collaborate with software developers and data analysts to gather requirements and deliver data solutions. Troubleshoot database issues and provide support to end-users as needed. Ensure data integrity and security across all databases. Skills and Qualifications Proficient in SQL and PL/SQL. Strong understanding of database design and normalization principles. Experience with SQL Server, MySQL, or Oracle databases. Ability to write complex queries, stored procedures, and triggers. Knowledge of data warehousing concepts and ETL processes. Familiarity with database performance tuning and optimization techniques. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities.
Posted 3 weeks ago
5.0 - 8.0 years
8 - 15 Lacs
Pune
Work from Office
Core Technical Skills: Design and develop robust backend solutions using Java 11+/17 and Spring Boot. Build, test, and maintain scalable microservices in a cloud environment (AWS). Work with Kafka or other messaging systems for event-driven architecture. Write clean, maintainable code with high test coverage. Tools & Reporting: Java 11+/17, Spring Boot, AWS, Kafka. Soft Skills: Strong communication and coordination with app teams; analytical thinking and problem-solving; ability to work independently or collaboratively. Also: Snowflake architecture and performance tuning, Oracle DB, SQL optimization, data governance, RBAC, data replication, Time Travel and Cloning, dynamic data masking, OEM and AWR reports, Apps DBA experience.
Posted 3 weeks ago
4.0 - 8.0 years
18 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Job Type: C2H (Long Term) Required Skill Set: Core Technical Skills: Snowflake database design, architecture & performance tuning Strong experience in Oracle DB and SQL query optimization Expertise in DDL/DML operations, data replication, and failover handling Knowledge of Time Travel, Cloning, and RBAC (Role-Based Access Control) Experience with dynamic data masking, secure views, and data governance Tools & Reporting: Familiarity with OEM, Tuning Advisor, AWR reports Soft Skills: Strong communication, coordination with app teams Analytical thinking and problem-solving Ability to work independently or collaboratively Additional Experience: Previous role as Apps DBA or similar Exposure to agile methodologies Hands-on with Snowflake admin best practices, load optimization, and secure data sharing.
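Dynamic data masking, listed above, returns full values only to authorized roles; here is a minimal Python sketch of the policy logic (the role names are invented, and in Snowflake itself this would be a SQL masking policy attached to the column):

```python
def mask_email(email: str, role: str) -> str:
    """Show the full address to privileged roles; mask the local part otherwise."""
    if role in {"SECURITY_ADMIN", "PII_ANALYST"}:
        return email
    _, _, domain = email.partition("@")
    return "*****@" + domain

print(mask_email("asha@example.com", "PII_ANALYST"))    # full value
print(mask_email("asha@example.com", "REPORT_VIEWER"))  # masked
```

The key property, mirrored from Snowflake's design, is that masking happens at read time based on the querying role, so one stored copy of the data serves both audiences.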
Posted 3 weeks ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.
Key Responsibilities:
Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt.
Develop complex SQL queries, stored procedures, and materialized views for data transformations.
Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
Implement data partitioning, clustering, and query tuning strategies for optimal performance.
Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
Automate data ingestion, transformations, and warehouse maintenance tasks.
Required Skills & Qualifications:
6+ years of experience in data warehousing, ETL, and data engineering.
Strong hands-on experience with Amazon Redshift and AWS data services.
Expertise in SQL performance tuning, indexing, and query optimization.
Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
Familiarity with data lake architectures and the modern data stack.
Proficiency in Python, Shell scripting, or PySpark for automation.
Experience working in Agile/DevOps environments with CI/CD pipelines.
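The ELT pattern this role describes (raw data lands first, then a SQL transform builds the analytics table that Glue, Airflow, or dbt would orchestrate) can be sketched locally. This is a minimal illustration using SQLite as a stand-in for Redshift; the tables and columns are invented for the example:

```python
import sqlite3

# Raw rows are loaded as-is, then a CREATE TABLE AS SELECT performs
# the transform -- a materialized-view-style daily rollup per region.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_sales (sale_date TEXT, region TEXT, amount REAL);
INSERT INTO raw_sales VALUES
    ('2024-01-01', 'north', 120.0),
    ('2024-01-01', 'south',  80.0),
    ('2024-01-02', 'north',  50.0);
CREATE TABLE daily_sales AS
SELECT sale_date, region, SUM(amount) AS total
FROM raw_sales
GROUP BY sale_date, region;
""")
rows = conn.execute(
    "SELECT sale_date, region, total FROM daily_sales "
    "ORDER BY sale_date, region"
).fetchall()
print(rows)
```

On Redshift itself, the same rollup table would additionally declare DISTKEY/SORTKEY and column encodings, which is where the distribution-style and compression tuning mentioned above comes in.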
Posted 1 month ago
5 - 10 years
9 - 13 Lacs
Hyderabad
Work from Office
Overview
As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.
Responsibilities
Be a founding member of the data engineering team.
Help to attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates.
Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability, and ongoing metrics.
Ensure that we build high-quality software by reviewing peer code check-ins.
Define best practices for product development, engineering, and coding as part of a world-class engineering team.
Collaborate in architecture discussions and architectural decision-making that is part of continually improving and expanding these platforms.
Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks.
Focus on delivering high-quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers.
Develop software in short iterations to quickly add business value.
Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers.
Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, debugging production issues.
Qualifications
6-9 years of overall technology experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture.
4+ years of experience in SQL optimization and performance tuning.
Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, or Great Expectations.
Current skills in the following technologies:
Python
Orchestration platforms: Airflow, Luigi, Databricks, or similar
Relational databases: Postgres, MySQL, or equivalents
MPP data systems: Snowflake, Redshift, Synapse, or similar
Cloud platforms: AWS, Azure, or similar
Version control (e.g., GitHub) and familiarity with deployment and CI/CD tools.
Fluent with Agile processes and tools such as Jira or Pivotal Tracker.
Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
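The data-profiling and data-quality requirement above (tools like Deequ or Great Expectations) boils down to asserting expectations over each batch before it lands. This is a hedged, dependency-free sketch of that idea; the record shape and the two checks are invented for illustration, not taken from any of those tools' APIs:

```python
# A lightweight data-quality gate: validate a batch of row dicts and
# report which rows fail which rule, so the pipeline can halt or quarantine.
def run_quality_checks(records):
    """Return (passed, failures) where failures is a list of (row_index, reason)."""
    failures = []
    for i, row in enumerate(records):
        if row.get("id") is None:
            failures.append((i, "missing id"))
        amount = row.get("amount")
        if not isinstance(amount, (int, float)) or amount < 0:
            failures.append((i, "bad amount"))
    return (len(failures) == 0, failures)

batch = [
    {"id": 1, "amount": 10.5},
    {"id": None, "amount": 3.0},   # should be flagged: missing id
    {"id": 3, "amount": -2},       # should be flagged: negative amount
]
ok, problems = run_quality_checks(batch)
print(ok, problems)
```

Dedicated tools add profiling, reporting, and declarative rule catalogs on top of this pattern, but the core contract (batch in, pass/fail plus diagnostics out) is the same.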
Posted 1 month ago
11 - 14 years
35 - 40 Lacs
Hyderabad
Work from Office
What PepsiCo Data Management and Operations does:
Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company.
Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset.
Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science, or other stakeholders.
Increase awareness about available data and democratize access to it across the company.
As a Data Engineering Associate Manager, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create and lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.
Responsibilities
Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential.
Act as a subject matter expert across different digital projects.
Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers.
Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
Responsible for implementing best practices around systems integration, security, performance, and data management.
Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape.
Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
Develop and optimize procedures to productionalize data science models.
Define and manage SLAs for data products and processes running in production.
Support large-scale experimentation done by data scientists.
Prototype new approaches and build solutions at scale.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create and audit reusable packages or libraries.
Qualifications
B.Tech in Computer Science, Math, Physics, or other technical fields.
11+ years of overall technology experience that includes at least 5+ years of hands-on software development, data engineering, and systems architecture.
4+ years of experience with Data Lake infrastructure, data warehousing, and data analytics tools.
4+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, and Scala.
2+ years of cloud data engineering experience in Azure; fluent with Azure cloud services. Azure certification is a plus.
Experience in Azure Log Analytics.
Experience with integration of multi-cloud services with on-premises technologies.
Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
Experience with statistical/ML techniques is a plus.
Experience with building solutions in the retail or supply chain space is a plus.
Understanding of metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
Posted 1 month ago
4 - 6 years
16 - 18 Lacs
Visakhapatnam
Work from Office
SQL queries; T-SQL functions and packages
SQL optimization and performance improvement
Knowledge of data warehousing concepts (star schema, fact and dimension tables)
Experience in SQL Server and SSIS; SSIS development experience on SQL Server data loads
SSIS package configuration and optimization
ETL xfr, error handling, data flow components, script components, debugging
Input files: XML, CSV, flat, JSON
Installation, backup, DB settings, configurations
Strong SQL skills (joins, subqueries, aggregations, window functions)
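The "window functions" skill listed above is worth a concrete look. This is an illustrative sketch using SQLite through Python's sqlite3 module (window functions need SQLite 3.25+, bundled with recent Python builds); the sample data is invented, and on SQL Server the same query would be plain T-SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES
    ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

# RANK() OVER (PARTITION BY ...) ranks each sale within its own
# region -- something a plain GROUP BY cannot express directly.
rows = conn.execute("""
SELECT region, amount,
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM sales
ORDER BY region, rnk
""").fetchall()
print(rows)
```

The same OVER clause drives ROW_NUMBER, LAG/LEAD, and running totals, which is why window functions appear so often alongside star-schema and fact/dimension work.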
Posted 1 month ago
4 - 6 years
16 - 18 Lacs
Vadodara
Work from Office
SQL queries; T-SQL functions and packages
SQL optimization and performance improvement
Knowledge of data warehousing concepts (star schema, fact and dimension tables)
Experience in SQL Server and SSIS; SSIS development experience on SQL Server data loads
SSIS package configuration and optimization
ETL xfr, error handling, data flow components, script components, debugging
Input files: XML, CSV, flat, JSON
Installation, backup, DB settings, configurations
Strong SQL skills (joins, subqueries, aggregations, window functions)
Posted 1 month ago
4 - 6 years
16 - 18 Lacs
Thiruvananthapuram
Work from Office
SQL queries; T-SQL functions and packages
SQL optimization and performance improvement
Knowledge of data warehousing concepts (star schema, fact and dimension tables)
Experience in SQL Server and SSIS; SSIS development experience on SQL Server data loads
SSIS package configuration and optimization
ETL xfr, error handling, data flow components, script components, debugging
Input files: XML, CSV, flat, JSON
Installation, backup, DB settings, configurations
Strong SQL skills (joins, subqueries, aggregations, window functions)
Posted 1 month ago
4 - 6 years
16 - 18 Lacs
Chandigarh
Work from Office
SQL queries; T-SQL functions and packages
SQL optimization and performance improvement
Knowledge of data warehousing concepts (star schema, fact and dimension tables)
Experience in SQL Server and SSIS; SSIS development experience on SQL Server data loads
SSIS package configuration and optimization
ETL xfr, error handling, data flow components, script components, debugging
Input files: XML, CSV, flat, JSON
Installation, backup, DB settings, configurations
Strong SQL skills (joins, subqueries, aggregations, window functions)
Posted 1 month ago
4 - 6 years
16 - 18 Lacs
Coimbatore
Work from Office
SQL queries; T-SQL functions and packages
SQL optimization and performance improvement
Knowledge of data warehousing concepts (star schema, fact and dimension tables)
Experience in SQL Server and SSIS; SSIS development experience on SQL Server data loads
SSIS package configuration and optimization
ETL xfr, error handling, data flow components, script components, debugging
Input files: XML, CSV, flat, JSON
Installation, backup, DB settings, configurations
Strong SQL skills (joins, subqueries, aggregations, window functions)
Posted 1 month ago