
4432 Replication Jobs - Page 3

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

General Description: External job description for an S/4HANA Central Finance SAP Consultant with 8 to 10 years of total SAP experience, of which at least 4 years is in Central Finance implementation or support. The candidate should be well versed in traditional FICO modules along with S/4HANA experience, and should have worked on at least one full life-cycle implementation and one support project as an associate consultant on Central Finance projects.
Roles and Responsibilities: SAP S/4HANA FICO (1610/1709/1809, etc.) implementation or conversion project experience with exposure to simplification. 6 to 8 years of experience in SAP and SAP HANA Finance, with a minimum of one implementation and one support project in SAP HANA (preferably Central Finance) enterprise and mixed scenarios in a strong role. Strong exposure to account-based COPA in S/4HANA. Experience upgrading custom programs for S/4HANA and enhancing SAP standard tables with customized fields. Experience with data migration of SAP ECC master data and transactional data to SAP CFIN. Ability to assess IDoc processing from source to target system, especially for DEBMAS, CREMAS, projects, etc. Central Finance delta changes: knowledge of business partner accounts for customers and suppliers with BP roles and BP grouping; integration with Logistics, Sales and Distribution, and Production Planning; table changes; the new user experience in Fiori; simplification in master data; and Procure to Pay, Order to Cash, and inventory simplification with S/4HANA. SAP S/4HANA certification is preferred. Experience with the Fiori interface and customization is an added advantage, as is good experience with SLT.
Primary & Secondary Skillset: SAP functional configuration expertise in Finance - General Ledger (FI-GL), Accounts Receivable (AR), Accounts Payable (AP), Asset Accounting (AA), New GL, Order to Cash (O2C) process, Procure to Pay (P2P) process - and Controlling (CO) - overhead cost controlling, allocations and distributions, product costing, make-to-stock and make-to-order processes, settlements, results analysis, Profitability Analysis (PA). SAP CFIN experience: initial load and balances load, real-time replication, AIF error monitoring, reconciliations between source and target systems, cutover activities in CFIN, and mappings. General knowledge of the SLT process and replication. Ability to interact with business users and understand their requirements where interaction is required.
Soft Skills: Excellent communication and presentation skills (written and verbal). Quick adaptation to complex and sometimes highly political client environments. Proven track record of successful teamwork as part of global, multinational projects. Multicultural awareness and an open mind to working in diverse business environments. Able to work constructively under stress and pressure when faced with high workloads and deadlines.
Qualifications: Any graduate. Primary Location: IN-Karnataka-Bangalore. Schedule: Full-time. Unposting Date: Ongoing.

Posted 3 days ago

Apply

0.0 - 5.0 years

0 Lacs

Maharashtra

On-site

Job Information: Job Opening ID OTSI_2283_JOB; Date Opened 09/12/2025; Industry: IT Services; Job Type: Full time; Required Skills: DBA, database admin +1; City: NA; State/Province: Maharashtra; Country: India; Zip/Postal Code: 400071.
About Us: OTSI is a leading global technology company offering solutions, consulting, and managed services for businesses worldwide since 1999. OTSI serves clients from its 15 offices across 6 countries around the globe with a "Follow-the-Sun" model. Headquartered in Overland Park, Kansas, we have a strong presence in North America, Central America, and Asia-Pacific, with a Global Delivery Center based in India. These strategic locations offer our customers the competitive advantages of onshore, nearshore, and offshore engagement and delivery options, with 24/7 support. OTSI works with 100+ enterprise customers, many of which are Fortune-ranked, and focuses on industry segments such as Banking, Financial Services & Insurance, Healthcare & Life Sciences, Energy & Utilities, Communications & Media Entertainment, Engineering & Telecom, Retail & Consumer Services, Hi-tech, Manufacturing, Engineering, Transport & Logistics, Government, Defense & PSUs. Our focus technologies are: Data & Analytics (traditional EDW, BI, Big Data, Data Engineering, Data Management, Data Modernization, Data Insights); Digital Transformation (Cloud Computing, Mobility, Microservices, RPA, DevOps); QA & Automation (Manual Testing, Non-functional Testing, Test Automation, Digital Testing); Enterprise Applications (SAP, Java Full Stack, Microsoft, Custom Development); Disruptive Technologies (Edge Computing/IoT, Blockchain, AR/VR, Biometrics).
Job Description: Object Technology Solutions, Inc. (OTSI) has an immediate opening for a Database Administrator. Job Location: Mumbai, Chembur.
Major Responsibilities: Good knowledge of SQL Server 2016 and above. Installation and configuration of SQL Server, related services, and additional server components. Implementation of SQL Server cluster instances. Planning and execution of migrations. Creation of databases and tables. Managing and configuring SQL Server instances and databases. Managing database integrity. Monitoring database activity (sessions, blocking, resource utilization) and queries (Query Store, extended and trace events, execution plans). Troubleshooting performance problems with tools. Managing backup and restore of SQL databases.
Skills and Abilities Required: Working with indexes, DB statistics, and audit configuration. Managing SQL logins, database permissions, and roles. Patching and upgrading of SQL instances. Installation and configuration of SQL Server Reporting Services. Implementation of high-availability features such as Always On, replication, and mirroring. Must have experience in handling backups, Always On / in-memory table concepts, roles management, index fragmentation, and Reporting Services. Minimum 5 (five) years of experience in administration and maintenance of Windows SQL Server (i.e. 2008, 2012, 2016 or later) Standard/Enterprise versions.
Qualifications and Experience: B.E. / B.Tech. / M.Sc. in Computer Science, Computer Engineering, or Information Technology, or MCA, with minimum passing marks of 60%. 5 years of relevant experience in managing Microsoft SQL database servers.
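Two of the routine tasks listed above, checking for blocking sessions and taking a full backup, can be scripted. Below is a minimal sketch assuming pyodbc and ODBC Driver 17; the server, database, and backup path are hypothetical placeholders, not values from the posting.

```python
# Minimal sketch: monitor blocking sessions and take a full backup of a
# SQL Server database. Server/database/path names are hypothetical.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlprod01;DATABASE=master;Trusted_Connection=yes;"
)

def blocking_sessions(conn):
    """Return rows (session_id, blocking_session_id, wait_type) for blocked requests."""
    sql = (
        "SELECT session_id, blocking_session_id, wait_type "
        "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0;"
    )
    return conn.cursor().execute(sql).fetchall()

def full_backup(conn, db_name, backup_path):
    """Take a full backup; BACKUP DATABASE cannot run inside a transaction,
    hence the autocommit connection opened below."""
    cur = conn.cursor()
    cur.execute(
        f"BACKUP DATABASE [{db_name}] TO DISK = N'{backup_path}' "
        "WITH COMPRESSION, CHECKSUM;"
    )
    while cur.nextset():  # drain the informational messages BACKUP returns
        pass

if __name__ == "__main__":
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        for row in blocking_sessions(conn):
            print(f"session {row.session_id} blocked by "
                  f"{row.blocking_session_id} ({row.wait_type})")
        full_backup(conn, "SalesDB", r"D:\backups\SalesDB_full.bak")
    finally:
        conn.close()
```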

Posted 3 days ago

Apply

5.0 - 15.0 years

0 Lacs

Karnataka

On-site

Role Overview: As a member of the Postgres Database Team at Apple, you will be responsible for the design, configuration, and maintenance of a fleet of Postgres databases under Data Services. Your role will involve interacting with application teams to understand their requirements and providing optimal database solutions. The Postgres databases are deployed across bare metal, AWS, and Kubernetes, and you will be part of Apple's Information Systems and Technology (IS&T) division.
Key Responsibilities:
- Support Postgres databases in a high-volume, customer-facing environment (5-15 years of such experience)
- Demonstrate in-depth understanding of PostgreSQL architecture
- Set up, configure, upgrade/patch, monitor, and troubleshoot database infrastructure
- Provide detailed Root Cause Analysis (RCA) for outages
- Understand MVCC and how Postgres handles it
- Perform database upgrades and migrations with minimal downtime
- Configure High Availability and Disaster Recovery
- Configure and support replication (logical/physical/active-active)
- Optimize and tune Postgres performance, including drill-down analysis
- Measure and optimize system performance, conduct capacity planning, and manage forecasts
- Provide database architecture and design solutions
- Implement PostgreSQL security best practices
- Implement standard backup methodologies (pg_dump/restore, online backup, incremental backup)
- Thrive in a fast-paced environment with tight deadlines
- Participate in technology evaluation, design, and development of scalable distributed databases
- Conduct performance benchmarking using pgbench or other open-source tools
- Manage databases deployed in cloud infrastructure, including AWS/GCP and Kubernetes
Qualifications Required:
- Certified Kubernetes Administrator (CKA) preferred
- AWS Certified Solutions Architect - Associate preferred
- Python knowledge is a plus
Please submit your CV to apply for this exciting opportunity at Apple.
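As an illustration of the replication monitoring this role involves, here is a minimal sketch that reads streaming-replication lag from the primary via pg_stat_replication (PostgreSQL 10+). It assumes psycopg2; the DSN is hypothetical.

```python
# Minimal sketch: report streaming-replication replay lag per standby,
# as seen from the primary. The DSN below is a placeholder.
import psycopg2

DSN = "host=pg-primary dbname=postgres user=monitor password=secret"

LAG_SQL = """
    SELECT application_name,
           state,
           pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
    FROM pg_stat_replication;
"""

def report_replication_lag():
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(LAG_SQL)
        for name, state, lag_bytes in cur.fetchall():
            print(f"{name}: state={state}, replay lag={lag_bytes or 0} bytes")

if __name__ == "__main__":
    report_replication_lag()
```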

Posted 3 days ago

Apply

4.0 years

0 Lacs

Greater Chennai Area

Remote

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask for you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About The Team The Database Engineering team at Workday designs, builds, develops, maintains, and supervises database infrastructure, ensuring that all of Workday’s data related needs are met with dedication and scale, while providing high availability that our customers expect from Workday. We are a fast paced and diverse team of database specialists and software engineers responsible for designing, automating, managing, and running the databases on Private and Public Cloud Platforms. We are looking for individuals who have strong experience in backend development specializing in database as a service with deep experience in Open-Source database technologies like MySQL, PostgreSQL, CloudSQL and other Cloud Native database technologies. This role will suit someone who is adaptable, flexible, and able to succeed within an open collaborative peer environment. We would love to hear from you if you have hands-on experience in designing, developing, and managing enterprise level database systems with complex interdependencies and have a key focus on high-availability, clustering, security, performance, and scalability requirements! Our team is the driving force behind all Workday operations, providing crucial support for all Lifecycle Engineering Operations. We ensure that Workday’s maintenance and releases proceed without a hitch and are at the forefront of accelerating the transition to the Public Cloud. We enable Workday’s Customer Success- 60% of Fortune 500 companies, 8000+ customers, 55M+ Workers About The Role Are you passionate about database technologies? Do you love to solve complex, large-scale database challenges in the world today using code and as a service? If yes, then read on! This position is responsible for managing and monitoring Workday’s production Database Infrastructure. Focus on automation to improve availability and scalability in our production environments. Work with developers to improve database resiliency and improve/implement auto remediation techniques. 
Provide support for large-scale database instances across production, non-production, and development environments. Serve in a rotational on-call and weekly maintenance schedule supporting database infrastructure.
About You
Basic Qualifications: 4+ years of experience in managing and automating mission-critical production workloads on MySQL, PostgreSQL, CloudSQL, and other cloud-native databases. Hands-on experience with at least one cloud technology: AWS, GCP, and/or Azure. Experience managing clustered, highly available database services deployed on different flavors of Linux. Experience in backend development using modern programming languages (Python, Golang). Strong scripting experience in multiple languages such as Shell, Python, Ruby, etc. Bachelor’s degree in a computer-related field or equivalent work experience. Knowledge of automation tools such as Terraform, Chef, GitHub, Jira, Confluence, and Ansible. Working experience in modern DevOps technologies and container orchestration (Kubernetes, Docker), service deployment, monitoring, and scaling.
Other Qualifications: Experience with database architecture, design, replication, clustering, and HA/DR. Strong analytical, debugging, and interpersonal skills. Self-starter, highly motivated, and able to learn quickly. Excellent team player with strong collaboration, analytical, verbal, and written communication skills.
Our Approach to Flexible Work: With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
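For the MySQL side of the stack described above, a replica health check is a common automation task. A minimal sketch, assuming PyMySQL and MySQL 8.0.22+ (older servers use SHOW SLAVE STATUS and the Slave_*/Seconds_Behind_Master column names); host and credentials are hypothetical.

```python
# Minimal sketch: check MySQL replica IO/SQL threads and lag via SHOW REPLICA STATUS.
import pymysql

def replica_health(host, user, password):
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW REPLICA STATUS")
            status = cur.fetchone()
            if status is None:
                return "not a replica"
            ok = (status["Replica_IO_Running"] == "Yes"
                  and status["Replica_SQL_Running"] == "Yes")
            lag = status["Seconds_Behind_Source"]
            return f"healthy={ok}, lag={lag}s"
    finally:
        conn.close()

if __name__ == "__main__":
    print(replica_health("mysql-replica-01", "monitor", "secret"))
```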

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Greater Kolkata Area

On-site

Overview
Working at Atlassian: Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.
Responsibilities: Own and solve complex customer technical issues, using collaboration, troubleshooting best practices, and transparency with different teams, while maintaining updates on the case. Error diagnosis (code review if needed), debugging, validation, and root cause analysis. Enable replication of issues to verify product-related bugs. Contribute to case deflection initiatives, automation, and other digital self-help assets to improve the customer and engineer experience. Drive technical collaboration and engagement outside of CSS (Product Engineering teams/Services/Support/Regions). Provide technical leadership and mentoring for Support Engineers. Be the primary contact for local escalation management for the expertise acquired within a product domain, and provide resolution.
Experience: 3-5 years of experience in Technical Support, Software Services, and system administration for a large end-user community. Track record of de-escalating difficult situations with customers and working with executive levels, while working on tickets and mentoring your team. Experience mentoring other support engineers to grow their technical and troubleshooting skills. Have supported customers over email, phone, and screen-shares. Experience working in a high case volume environment. Coordinate training for new hires and conduct training using skill-gap analysis.
Qualifications - Must-have Skills: Demonstrated technical competence with database skills, with the expertise to write and update SQL queries with ease. Experience with APIs and REST calls. Usage of browser dev tools, frontend troubleshooting, and HAR file analysis. Experience working with Splunk (searching, monitoring, and analysing machine-generated data via a web-style interface) or similar tools. An understanding of network terminologies such as DNS, DHCP, SSL, proxies, and firewalls, usage of basic network troubleshooting commands, and identifying underlying network issues. Understanding of Java-based apps, and able to analyse and troubleshoot Java-based exceptions. Good understanding of OAuth-based authentication and other authentication mechanisms such as SSO/SAML. Familiarity with cloud technologies.
Benefits & Perks: Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits.
About Atlassian: At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.
All your information will be kept confidential according to EEO guidelines. To provide you with the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
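For the "APIs and REST calls" troubleshooting skill mentioned above, here is a minimal sketch of probing an endpoint and surfacing the signals a support engineer usually checks first. It assumes the requests library; the URL and credentials shown are illustrative only.

```python
# Minimal sketch: call a REST endpoint with basic auth and report status,
# latency, and common troubleshooting headers. URL/credentials are placeholders.
import requests

def probe(url, user, token):
    resp = requests.get(url, auth=(user, token), timeout=10)
    print(f"status={resp.status_code} elapsed={resp.elapsed.total_seconds():.2f}s")
    # Rate-limit and request-id headers are often the first clue in a support ticket.
    for header in ("Retry-After", "X-Request-Id"):
        if header in resp.headers:
            print(f"{header}: {resp.headers[header]}")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    probe("https://example.atlassian.net/rest/api/2/myself",
          "user@example.com", "api-token")
```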

Posted 3 days ago

Apply

10.0 years

0 Lacs

Anupgarh, Rajasthan, India

On-site

37098BR - Bangalore
Job Description: Senior SAP Data Architect. Around 10 years of experience working with SAP Data/Analytics. Knowledge of Data Warehouse Cloud (DWC) or Datasphere is a must. Experience in SAP Analytics Cloud with functional knowledge of Finance, Procurement & Billing. Experience in SAP Data Intelligence or SAP Data Services (BODS). Knowledge of third-party data warehouse tools, preferably Google BigQuery.
Position Overview: As part of our offshore resource requirement, we are looking for experienced candidates who can manage, maintain, and optimize our SAP Datasphere environment. The selected candidate will be responsible for configuring, monitoring, and troubleshooting SAP Datasphere, ensuring seamless data integration, and managing data models and flows. The role also involves collaborating with cross-functional teams to ensure data consistency, security, and performance, while supporting business intelligence and reporting needs.
Key Responsibilities:
1. Administration & Configuration: Install, configure, and manage SAP Datasphere environments, including data models, connections, and replication flows. Configure and maintain Data Integration, Data Modeling, and Data Flow components. Manage user access, roles, and security permissions to ensure data confidentiality and compliance. Set up Data Lake connections and manage data access policies.
2. Data Management & Optimization: Oversee data replication and synchronization between SAP S/4HANA, BRIM, ECC, and GCP BigQuery. Optimize data loading and extraction processes for performance and efficiency. Implement and manage ETL/ELT processes, including data cleansing, transformation, and validation. Troubleshoot and resolve data integration issues.
3. Monitoring & Troubleshooting: Perform regular monitoring of Datasphere jobs, data flows, and performance metrics. Identify and resolve data flow failures, performance bottlenecks, and replication errors. Create and maintain alerts and notifications for data flow anomalies and errors. Ensure data accuracy and consistency through regular data audits and validation.
4. Collaboration & Support: Collaborate with data architects, developers, and business teams to define and implement data strategies. Provide technical support to development teams for data integration and extraction. Work with SAP support and ECS teams to resolve system issues. Participate in code reviews and technical discussions.
5. Documentation & Reporting: Maintain comprehensive documentation of configurations, processes, and troubleshooting steps. Generate and share reports on data performance, usage, and issues with stakeholders. Ensure adherence to data governance policies and best practices.
Technical Skills Required: Proficiency in SAP Datasphere administration, configuration, and management. Strong understanding of SAP HANA, SAP S/4HANA, SAP BRIM, and GCP BigQuery integration. Hands-on experience with ETL processes, data pipelines, and data modeling. Expertise in SQL, Open SQL, and data transformation techniques. Experience with cloud platforms (GCP, AWS, Azure) and connectivity. Familiarity with SAP Data Intelligence and SAP Data Services is a plus. Knowledge of API integration and data extraction techniques. Strong troubleshooting and problem-solving skills.
Qualifications & Experience: Bachelor's degree in Computer Science, Information Technology, or a related field. 10+ years of experience in SAP data management and integration. Experience with data replication, ETL processes, and data warehousing. SAP certification in Datasphere or Data Management is a plus.
Key Competencies: Analytical skills with the ability to diagnose and resolve complex data issues. Excellent communication and collaboration skills. Ability to work independently and manage multiple priorities. Strong focus on data quality, performance, and security.
Preferred Certifications (Optional): SAP Certified Application Associate - SAP Datasphere; SAP Certified Technology Associate - SAP HANA Cloud & Data Services; Google Cloud Certified - Professional Data Engineer.
Qualifications: BE, MCA. Range of Year Experience - Min: 8 years, Max: 15 years.
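One common validation step in the replication-to-BigQuery workflow described above is a row-count reconciliation between source and target. A minimal sketch, assuming the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch: compare a target BigQuery table's row count against a count
# supplied by the source extraction. All identifiers below are placeholders.
from google.cloud import bigquery

def target_row_count(project, dataset, table):
    client = bigquery.Client(project=project)
    sql = f"SELECT COUNT(*) AS n FROM `{project}.{dataset}.{table}`"
    rows = client.query(sql).result()
    return next(iter(rows)).n

def reconcile(source_count, project, dataset, table):
    tgt = target_row_count(project, dataset, table)
    status = "OK" if tgt == source_count else "MISMATCH"
    print(f"{dataset}.{table}: source={source_count} target={tgt} -> {status}")

if __name__ == "__main__":
    reconcile(1_250_000, "analytics-prj", "sap_cfin", "acdoca_replica")
```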

Posted 3 days ago

Apply

3.0 - 6.0 years

5 - 13 Lacs

Mumbai

Work from Office

Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role: Are you ready to embark on a technical adventure and become a hero to our external and internal users? As a Resiliency Orchestration (RO) Administrator at Kyndryl, you'll be part of an elite team that provides exceptional technical assistance, enabling our clients to achieve their desired business outcomes. You will be responsible for coordinating with application team members and the respective bank team members to identify deviations and support them until closure. You will also coordinate with and support the respective Subject Matter Experts (SMEs) until closure, and manage incident management for DR activities. Additionally, you will coordinate with the RO Administration team, manage documentation for changes to be made in RO, and maintain the BCP-DR application architecture and an understanding of the customer's IT-DR for on-premise, off-premise, and hybrid infrastructure for the application. You'll be responsible for creating a comprehensive disaster recovery plan that outlines strategies, procedures, and responsibilities for recovering systems and data in various disaster scenarios, and for regularly reviewing and updating the disaster recovery plan to reflect changes in the organization's infrastructure, business processes, and technology. You will assess potential risks and vulnerabilities to the organization's IT systems and infrastructure, conduct a Business Impact Analysis to identify critical business functions, data, and systems, and determine their recovery priorities. You'll be the go-to person for our customers to define Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for different business functions and systems, establish metrics to measure the effectiveness and efficiency of the disaster recovery processes, and organize and conduct regular disaster recovery drills and tests to validate the effectiveness of the recovery plan and identify areas for improvement. With your passion for technology, you'll provide world-class support that exceeds customer expectations. As a key member of the RO team, you will continuously monitor systems for potential signs of disaster or impending failures, respond to and coordinate incident response efforts in the event of a disaster or disruptive event, and keep management and stakeholders informed about the status of disaster recovery preparedness, including risks, progress, and improvements. You will also be responsible for designing and building LLDs, HLDs, and implementation plans, as well as creating and maintaining technical reports, PPTs, and other documentation. If you're a technical wizard, a customer service superstar, and have an unquenchable thirst for knowledge, we want you to join our team.
Your Future at Kyndryl: Imagine being part of a dynamic team that values your growth and development. As Technical Support at Kyndryl, you'll receive an extensive and diverse set of technical trainings, including cloud technology, and free certifications to enhance your skills and expertise. You'll have the opportunity to pursue a career in advanced technical roles and beyond – taking your future to the next level. With Kyndryl, the sky's the limit.
Who You Are: You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Expertise: 5+ years of experience in Customer Service or Technical Support. Experience in disaster recovery management and DR tools (mandatory). Experience in scripting with Perl, Tcl, Shell, Batch, PowerShell, Expect, or similar scripting languages is a must. Knowledge of writing scripts to integrate with different technologies using CLIs/APIs. Working knowledge of Linux and any database (Oracle, MySQL). Strong understanding of data protection (backup and recovery, BCP-DR, storage replication, database-native replication, data archival and retention) for application workloads such as MS SQL, Exchange, Oracle, VMware, Hyper-V, Azure, AWS, etc. Extremely good hands-on experience with standalone and clustered UNIX (AIX/Solaris/HP-UX/RHEL, etc.) and Windows platforms.
Preferred Technical and Professional Experience: Strong knowledge of any storage replication technology across various DR scenarios. Application testing experience is an added advantage. Overall IT infrastructure understanding is an added advantage. Cyber (IT) security related experience is an added advantage.
Being You: Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Design, build, and maintain backend microservices using Java, Spring Boot, and related frameworks. Implement and optimize Redis data structures (caching, pub/sub, data persistence) to support low-latency, high-throughput use cases. Produce clean, maintainable, and well-tested code (unit, integration, and performance tests). Collaborate with DevOps to containerize services (Docker), deploy on Kubernetes, and automate CI/CD pipelines. Troubleshoot, profile, and tune application performance, memory usage, and concurrency issues. Participate in Agile ceremonies (stand-ups, sprint planning, retrospectives) and contribute to backlog grooming. Mentor junior engineers and share best practices in code reviews, design sessions, and knowledge-sharing forums. Drive technical design discussions and contribute to architectural roadmaps.
Requirements: Strong expertise in Redis (caching patterns, cluster setup, replication, persistence strategies). Hands-on experience with Spring Boot. Solid understanding of microservices architecture and RESTful API design. Proficiency in SQL and NoSQL databases (e.g., Oracle, PostgreSQL, MongoDB). Expertise in Git, build tools (Maven/Gradle), and CI/CD pipelines (Jenkins, GitLab CI, etc.). Familiarity with message-driven architectures (e.g., Kafka, RabbitMQ). Experience with containerization (Docker) and orchestration (Kubernetes).
Key Skills: Java backend, Spring Boot, SQL and NoSQL databases (e.g., Oracle, PostgreSQL, MongoDB), Redis, Docker, Kubernetes, Maven/Gradle, Jenkins, GitLab CI, Kafka, RabbitMQ.
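The cache-aside caching pattern named in the requirements is language-agnostic; a minimal sketch is shown below with redis-py for brevity (a Spring Boot service would do the same with RedisTemplate or Spring Cache). The host, key prefix, and loader function are hypothetical.

```python
# Minimal sketch of cache-aside: try Redis first, fall back to the source of
# truth on a miss, then populate the cache with a TTL.
import json
import redis

r = redis.Redis(host="redis-cluster", port=6379, decode_responses=True)
TTL_SECONDS = 300

def load_order_from_db(order_id):
    # Placeholder for the real database read.
    return {"id": order_id, "status": "CREATED"}

def get_order(order_id):
    key = f"order:{order_id}"
    cached = r.get(key)                           # 1. try the cache
    if cached is not None:
        return json.loads(cached)
    order = load_order_from_db(order_id)          # 2. miss: read the database
    r.setex(key, TTL_SECONDS, json.dumps(order))  # 3. populate with a TTL
    return order
```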

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
AWS - Databricks - Senior
Job Description: Expertise in data warehousing and ETL design and implementation. Hands-on experience with programming languages like Python, PySpark, or Scala. Good understanding of Spark architecture along with its internals. Expert in working with Databricks on AWS. Hands-on experience using AWS services such as Glue (PySpark), Lambda, S3, Athena, RDS, IAM, and Lake Formation. Hands-on experience implementing different loading strategies such as SCD1 and SCD2, table/partition refresh, insert-update, and swap partitions. Experience in consuming and writing data from and to flat files, RDBMS systems, MPPs, JSON and XML, services, streams, queues, CDC, etc. Awareness of scheduling and orchestration tools. Awareness of OS, compute, networking, and the internal workings and architecture of DB and ETL servers and their impact on ETL. Experience with RDBMS systems and concepts. Expertise in writing complex SQL queries and developing database components including views, stored procedures, triggers, etc. Create test cases and perform unit testing of ETL jobs. Analytical mind with problem-solving and debugging skills. Excellent communication skills. Awareness of replication, synchronization, and disaster management techniques. Hands-on experience in data quality management. Awareness of data governance concepts and implementation. Experience: 3-7 years of experience.
EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
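For the SCD2 loading strategy named in the description, here is a minimal sketch of one way to do it on Databricks with Delta Lake and PySpark. The table and column names (customer_id, attr_hash, load_date, is_current, end_date) are hypothetical, and it assumes a Databricks runtime where `spark` and the Delta APIs are available.

```python
# Minimal SCD2 sketch: expire changed current rows, then append fresh versions.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

def scd2_upsert(spark, updates_df, target_table="silver.dim_customer"):
    target = DeltaTable.forName(spark, target_table)

    # 1. Expire the current version of any key whose attributes changed.
    (target.alias("t")
        .merge(updates_df.alias("s"),
               "t.customer_id = s.customer_id AND t.is_current = true")
        .whenMatchedUpdate(
            condition="t.attr_hash <> s.attr_hash",
            set={"is_current": "false", "end_date": "s.load_date"})
        .execute())

    # 2. Insert a new current version for keys that are new or were just expired.
    current = (target.toDF().filter("is_current = true")
                            .select("customer_id", "attr_hash"))
    new_rows = (updates_df.alias("s")
        .join(current.alias("t"),
              F.col("s.customer_id") == F.col("t.customer_id"), "left")
        .where(F.col("t.attr_hash").isNull() |
               (F.col("t.attr_hash") != F.col("s.attr_hash")))
        .select("s.*")
        .withColumn("is_current", F.lit(True))
        .withColumn("end_date", F.lit(None).cast("date")))
    new_rows.write.format("delta").mode("append").saveAsTable(target_table)
```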

Posted 3 days ago

Apply

15.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Job Title: Database Architect
Experience: 15+ years
Role Overview: We are seeking an experienced Database Architect with deep expertise in DB2 on Linux to design, optimize, and lead enterprise-scale database solutions. The role requires hands-on technical leadership across architecture, performance engineering, security, and availability, while also supporting business-critical migration and transformation initiatives.
Key Responsibilities: Design and implement DB2 on Linux architectures, including instances, storage/IOPS, and database layouts. Define and execute HA/DR (HADR) strategies to ensure resilience and business continuity. Develop and manage backup/restore strategies for large-scale enterprise systems. Implement advanced security mechanisms including TLS, RBAC, and RCAC. Lead database migration projects using db2look/db2move, Q Replication, and CDC. Drive performance engineering initiatives (bufferpools, logging strategies, WLM optimization). Manage licensing and BYOL, and coordinate with IBM SCAD for compliance. Provide consultative leadership to cross-functional teams in pre-sales and post-sales engagements. Ensure best practices, standardization, and reusable frameworks across accounts.
Mandatory Skills: DB2 on Linux architecture (instances, storage/IOPS, layout). High Availability / Disaster Recovery (HADR) design. Backup and restore strategy implementation. Security frameworks: TLS, RBAC, RCAC. Migration methodologies: db2look/db2move, Q Replication (Q Rep), Change Data Capture (CDC). Performance optimization: bufferpools, logging, WLM. Licensing / BYOL management and IBM SCAD coordination.
Preferred Qualifications: Strong knowledge of enterprise solution architecture and integration. Experience leading large-scale database transitions in complex environments. Proven track record of thought leadership (white papers, accelerators, or reusable frameworks). Excellent client-facing, consultative, and leadership skills.
Skills: DB2, backup, restore, migration methods
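For the HADR monitoring side of this role, one option is to poll the MON_GET_HADR table function. A minimal sketch, assuming the ibm_db Python driver and a DB2 level that exposes MON_GET_HADR; the connection string is hypothetical.

```python
# Minimal sketch: report HADR role, state, and connection status from DB2.
import ibm_db

CONN_STR = ("DATABASE=PRODDB;HOSTNAME=db2-primary;PORT=50000;"
            "PROTOCOL=TCPIP;UID=monitor;PWD=secret;")

def hadr_status():
    conn = ibm_db.connect(CONN_STR, "", "")
    try:
        sql = ("SELECT HADR_ROLE, HADR_STATE, HADR_CONNECT_STATUS "
               "FROM TABLE(MON_GET_HADR(NULL))")
        stmt = ibm_db.exec_immediate(conn, sql)
        row = ibm_db.fetch_assoc(stmt)
        while row:
            print(f"role={row['HADR_ROLE']} state={row['HADR_STATE']} "
                  f"connection={row['HADR_CONNECT_STATUS']}")
            row = ibm_db.fetch_assoc(stmt)
    finally:
        ibm_db.close(conn)

if __name__ == "__main__":
    hadr_status()
```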

Posted 3 days ago

Apply

7.0 years

0 Lacs

India

Remote

Data speaks the language of progress — telling the stories behind every choice, every breakthrough, every victory. As Database Manager, you’ll lead the guardianship of that pulse, ensuring systems are fast, secure, and always ready when people need them most. Here, leadership means more than directing projects — it means inspiring a team to deliver clarity and confidence through every report, query, and platform they touch. You’ll shape the strategy that carries us from today’s SQL and reporting environments into tomorrow’s data lakes, warehouses, and AI-driven solutions. Every decision you guide will echo across the business — protecting trust, unlocking possibilities, and proving that clarity can be a competitive advantage. You’ll feel the pride of watching leaders make bold moves because of what you built. Years from now, you’ll look back knowing you created more than systems — you built the backbone of transformation, the foundation that empowered every success the company celebrates. Your next 5 minutes could land you in our top 5% shortlist — click Apply to start your application (a short, simple step to put you in the running) → https://www.surveymonkey.com/r/RHWCM9W ROLE SNAPSHOT — Title: Database Manager Location/Schedule: Remote or On-site, Full-Time Purpose: Lead the management and modernization of Mercola’s database systems, ensuring data availability, security, and performance while guiding a high-performing technical team. KEY RESPONSIBILITIES — Oversee design, configuration, and maintenance of SQL Server (on-prem, Azure SQL, AWS-native), PostgreSQL, and other platforms. Manage database performance, tuning, replication, and disaster recovery strategies. Enforce governance, security, and compliance policies across all environments. Lead, mentor, and grow a team of DBAs, engineers, and report developers. Balance urgent stakeholder needs with long-term modernization initiatives. Deliver SSRS reports and support BI platforms that drive decision-making. Evaluate and integrate modern data solutions including data lakes, warehouses, and RAG-ready platforms. Provide Tier 2/3 support and lead root cause analysis for complex incidents. MUST-HAVE QUALIFICATIONS — Bachelor’s degree in Computer Science or related field. 7+ years of database engineering/administration with 2+ years leadership. Expertise in SQL Server, PostgreSQL, SSRS, and query optimization. Strong project management, communication, and stakeholder skills. PREFERRED QUALIFICATIONS — Exposure to ERP/eCommerce environments. Familiarity with NoSQL databases and cloud-native platforms (Aurora, GCP SQL). Experience with ETL, BI, and visualization tools (Power BI, Tableau). WHAT WE OFFER — The chance to modernize and shape the future of enterprise data strategy. A collaborative team culture focused on growth, impact, and innovation. Opportunities to expand into AI, analytics, and cutting-edge data platforms. HONEST CHALLENGE — This role demands balancing mission-critical uptime with bold modernization. Stakeholder priorities can shift quickly — but success here means shaping both the present and future of how Mercola uses data to lead. READY TO JUMP IN? Click Apply to start the short application (about 5 minutes; embedded step) → https://www.surveymonkey.com/r/RHWCM9W Important: Completing the embedded application step is required to be considered. Mercola welcomes applicants of every background, identity, and life experience.

Posted 3 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description: Database Administration & Maintenance: Install, configure, and maintain MS SQL Server and Azure SQL databases. Perform regular database monitoring, health checks, and capacity planning. Ensure high availability (HA) and disaster recovery (DR) strategies are in place. Security & Compliance: Implement database security policies, user access controls, and role-based authentication. Ensure compliance with GDPR, HIPAA, and other regulatory standards. Monitor database audit logs for security breaches or unauthorized access. Backup & Recovery: Develop and test backup and recovery strategies using Full, Differential, and Transaction Log backups. Automate backup scheduling and test restore procedures. Automation & Scripting: Automate DBA tasks using PowerShell or any programming language. Implement CI/CD pipelines for database deployments. Collaboration & Documentation: Work with application developers to optimize database performance. Maintain comprehensive documentation for database configurations and processes. Cloud Database Management (Azure SQL): Deploy and manage Azure SQL Database, Managed Instances, and SQL Server on Azure VMs. Configure Azure SQL geo-replication, failover groups, and backup strategies. Implement Azure Monitor, Log Analytics, and Automation for proactive issue detection. XML to JSON conversion. Good to Have: Performance Tuning & Optimization: Analyze and improve database performance using query tuning, indexing, and partitioning. Troubleshoot slow queries and optimize stored procedures. Implement best practices for SQL Server Profiler, Execution Plans, and DMVs. Python.
The requirements and day-to-day job responsibilities mirror the description above.
What we offer: Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies.
Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
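For the "XML to JSON" task the posting mentions, here is a minimal sketch using only the Python standard library. The element-to-dict mapping is a simplified convention (attributes, text, and repeated children are flattened naively), not a full-featured converter.

```python
# Minimal sketch: convert an XML document to JSON with xml.etree + json.
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    node = dict(elem.attrib)                      # attributes become keys
    for child in elem:                            # children grouped by tag
        node.setdefault(child.tag, []).append(element_to_dict(child))
    text = (elem.text or "").strip()
    if text:
        if node:
            node["#text"] = text                  # mixed content kept under "#text"
        else:
            return text                           # leaf with only text
    return node

def xml_to_json(xml_text):
    root = ET.fromstring(xml_text)
    return json.dumps({root.tag: element_to_dict(root)}, indent=2)

if __name__ == "__main__":
    sample = "<order id='42'><item sku='A1'>2</item><item sku='B7'>1</item></order>"
    print(xml_to_json(sample))
```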

Posted 3 days ago

Apply

10.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Are you ready to write your next chapter? Make your mark at one of the biggest names in payments. We’re looking for a DevOps Specialist to join our ever-evolving DevOps Cloud and Automation team and help us unleash the potential of every business.
What You’ll Own As DevOps Specialist: The following key responsibilities relate entirely to the day-to-day engineering of the DevOps Cloud Automation team. Working on core Terraform and Ansible automation implementation for all AWS infra/DevOps tasks. Implement the necessary changes in the infrastructure leveraging the existing building blocks we have, including Kubernetes, OpenShift, and Docker, using Terraform, Ansible, and others on AWS and Azure. Work with Development, Security, and Operations teams to design and implement fully automated build and delivery pipelines for multiple technologies including J2EE, .NET, and mainframe. Understanding CI and CD and the tool sets used is key; tools include GitHub, Nexus, Artifactory, Jenkins, SonarQube, Checkmarx, and more. Documenting all work and processes, including diagrams, workflows, system requirements, installation steps and maintenance information, while communicating clearly and concisely with your team, co-workers, and customers. Core Business Hours: 12:00 PM - 9:00 PM.
What You Bring: 10+ years of overall experience and 5+ years in managing DevOps CI/CD platforms and enabling engineering teams to consume these platforms. 3+ years of team management experience leading/managing a DevOps team. Must have proficiency in the following tools, from managing and keeping these platforms up and running to enabling others to build their solutions and consume the platforms: Jenkins, Maven, GitHub, Nexus, Artifactory, Terraform, Ansible, Python, Groovy, Bash, and PowerShell/shell scripting. AWS compute and networking services, including but not limited to EC2, ECS, EKS, and Lambda setup. Kubernetes and AWS-managed K8s services. Good understanding of the networking knowledge needed for cloud services. Experience with multi-region replication and disaster management guideline implementation for any cloud. Understanding of monitoring, security, and cost-optimization approaches for cloud services. OpenShift 4.3 or above preferred. Basic understanding of monolithic and microservices-based application architectures and best practices. Strong expertise in managing and supporting Redis, Stream Processing as a Service (Kafka), NoSQL as a Service (Cassandra), and the ELK Stack. Strong working knowledge of Linux/Windows operating systems.
Nice To Have: Helm and equivalents. Exposure to working with multiple cloud platforms: AWS, Azure, and GCP. Knowledge and expertise in automated deployment and release orchestration tools, monitoring tools, and other tools like Spinnaker, XLDeploy, XLRelease, Puppet, Chef, Instana, Dynatrace, SonarQube, CheckMarx, SysDig, and ELK. Some experience with monitoring tools such as Instana and Dynatrace.
Where You'll Own It: You will own it in our vibrant office locations in our Indore/Pune/Bangalore hub.
About The Team: Become part of the DevOps Cloud and Automation team working with development, security, and operations teams to fully automate code development and product delivery in the cloud. We are a global team combined to support the entire FIS organization in our move to the cloud and full automation while working within the Agile and DevOps philosophy. This team is independent, fast-paced, and constantly adapting to new technologies.
As a team of DevOps and Cloud engineers, we maintain expertise in all aspects of CI/CD and cloud, particularly in support of microservices development in the cloud, and we possess a drive to develop and maintain robust solutions that enhance the integrity, reliability, and frequency of our product delivery.
What Makes a Worldpayer? It’s simple: Think, Act, Win. We stay curious, always asking the right questions and finding creative solutions to simplify the complex. We’re dynamic; every Worldpayer is empowered to make the right decisions for their customers. And we’re determined, always staying open and winning and failing as one. Does this sound like you? Then you sound like a Worldpayer. Apply now to write the next chapter in your career.
Privacy Statement: Worldpay is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how Worldpay protects personal information online, please see the Online Privacy Notice.
Sourcing Model: Recruitment at Worldpay works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. Worldpay does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass
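As a small illustration of the AWS automation work described above, here is a minimal boto3 sketch that lists EKS clusters in a region and reports their status; the region and the assumption of ambient AWS credentials are illustrative only.

```python
# Minimal sketch: enumerate EKS clusters and print status/version with boto3.
import boto3

def eks_cluster_report(region="ap-south-1"):
    eks = boto3.client("eks", region_name=region)
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        print(f"{name}: status={cluster['status']} version={cluster['version']}")

if __name__ == "__main__":
    eks_cluster_report()
```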

Posted 3 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Delhivery: Delhivery is India’s leading fulfillment platform for digital commerce. With a vast logistics network spanning 18,000+ pin codes and over 2,500 cities, Delhivery provides a comprehensive suite of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, we’ve fulfilled over 550 million transactions and empowered 10,000+ businesses, from startups to large enterprises.
Vision: To become the operating system for commerce in India by combining world-class infrastructure, robust logistics operations, and technology excellence.
About the Role: We’re looking for an experienced Backend Technical Lead (5–8 years experience) who can lead the design, development, and scaling of backend systems powering large-scale logistics. In this role, you’ll architect high-throughput systems, guide a team of engineers, and integrate AI tools and agentic frameworks (Model Context Protocol (MCP), Multi Agent Systems (MAS), etc.) into your development and decision-making workflows. You’ll drive engineering excellence, contribute to architectural decisions, and nurture a culture of ownership, innovation, and AI-native development. This is a hands-on leadership position where you’ll build systems, lead design reviews, and mentor a high-performing team while pushing the boundaries of AI-assisted engineering.
What You’ll Do:
Technical Leadership & Ownership: Lead the architecture, design, and delivery of scalable RESTful and gRPC APIs. Guide system design and backend workflows for microservices handling millions of transactions daily. Drive engineering best practices in code quality, testing, observability, and deployment.
Team Leadership: Mentor and coach a team of backend and frontend developers, helping them grow technically and professionally. Conduct regular design/code reviews and set high standards for system performance, reliability, and scalability. Drive sprint planning, task breakdowns, and technical execution with a bias for action and quality.
AI-Native Development: Leverage modern AI tools (e.g., Cursor AI, Copilot, Codex, Gemini, Windsurf) for prompt-based code generation, refactoring, and documentation; intelligent debugging and observability enhancements; and test case generation and validation. Experiment with and implement agentic frameworks (MCP, MAS, etc.) in building tools, automating workflows, optimization tasks, or intelligent system behavior. Contribute to and maintain internal AI prompt libraries and lead adoption of AI-assisted development practices.
Systems & Infrastructure: Own the end-to-end lifecycle of backend services — from design to rollout and production monitoring. Work with data models across PostgreSQL, DynamoDB, or MongoDB. Optimize services using metrics, logs, traces, and performance profiling tools.
Cross-functional Collaboration: Partner closely with Product, Design, QA, DevOps, and other Engineering teams to deliver cohesive product experiences. Influence product roadmaps and business decisions through a technical lens.
What We’re Looking For: 5–8 years of backend development experience with at least 1+ years as a team/tech lead. Deep expertise in Python, Go, or Java and comfort across multiple backend frameworks. Solid understanding of Data Structures, Algorithms, System Design, and SQL. Proven experience in building, scaling, and maintaining microservices.
Strong hands-on experience with REST/gRPC APIs, containerized deployments, and CI/CD pipelines. Strong database fundamentals — indexing, query optimization, replication, consistency models, and schema design. Exposure to system scalability concepts — horizontal/vertical scaling, sharding, partitioning, rate-limiting, load balancing, and caching strategies. Effective use of AI developer tools (Cursor AI, Codex, Copilot, Gemini, Windsurf) in real-world engineering workflows. Exposure to agentic frameworks and AI concepts such as MCP, Multi-Agent Systems (MAS), Belief-Desire-Intention (BDI), or NLP pipelines. Cloud proficiency (any cloud, AWS preferred), including deployment, cost optimization, and infrastructure scaling. Excellent communication and stakeholder management skills. A strong sense of ownership, problem-solving ability, and collaborative mindset.
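Rate-limiting, one of the scalability concepts listed above, is often explained with a token bucket. Here is a minimal in-process sketch in Python; a distributed service would typically keep the bucket in Redis or at the API gateway instead, so treat this as illustrative only.

```python
# Minimal token-bucket sketch: refill tokens over time, spend one per request.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

if __name__ == "__main__":
    bucket = TokenBucket(rate_per_sec=5, capacity=10)
    allowed = sum(bucket.allow() for _ in range(20))
    print(f"{allowed} of 20 burst requests allowed")  # roughly the burst capacity
```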

Posted 3 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will: Be responsible for technical delivery of PostgreSQL database support for build, estate management and business delivery. Engage with business users, analyse requirements, and plan and implement new builds, patching, migrations and upgrades of environments. Manage, monitor and secure large and complex environments across data centres using best practices, automating functions, enhancing monitoring and observability, and managing the product lifecycle (upgrades and patch sets) in highly regulated environments. Provide consultancy for new projects, environment setups, and backup and disaster recovery planning. Plan, implement and maintain recovery procedures and emergency disaster recovery plans. Investigate and provide solutions to database performance-related issues, query tuning, and analysis of environment resources and capacity. Good knowledge in Change Management, Incident Management and Root Cause Analysis. Work experience in Agile, Kanban, Jira and Confluence. Contribute to enhancement of technical competency / application knowledge of team members. Enforce HSBC Standards & Processes and adherence to compliance of all internal controls. Requirements To be successful in this role, you should meet the following requirements: Qualification: A Bachelor’s degree or equivalent experience with a major or minor in Computer Science or a related field. Good understanding of PostgreSQL – architecture, security and operational knowledge. Deep understanding of how PostgreSQL manages data, including tablespaces, data files, indexes, WAL (Write-Ahead Logging), and key processes like Postmaster and bgwriter. Proficiency in installing, configuring, and upgrading PostgreSQL instances. Expertise in identifying and resolving performance bottlenecks, including SQL query optimization, index tuning, and server parameter adjustments. Good to have expertise in automating routine DBA tasks like log management, backups, schema migration, indexing and monitoring. Strong knowledge of various backup strategies (e.g., pg_basebackup, WAL archiving) and recovery procedures to ensure data integrity and availability. Experience in applying patches and troubleshooting patch failure issues. Strong work experience in Unix/Linux operating systems and shell scripting. Experience in implementing and managing replication solutions: streaming replication (primary/standby setup) and logical replication. Experience with Postgres failover and load balancing. Understanding of user roles, privileges, authentication methods, and best practices for securing PostgreSQL databases. Ensure database security by implementing best practices, including encryption, access control and auditing. Monitor and mitigate vulnerabilities in PostgreSQL environments.
Ability to implement and utilize monitoring tools to track database health and performance metrics, and to identify potential issues. Proficiency in scripting languages (e.g., Bash, Python) for automating routine tasks, deployments, and maintenance. Advanced knowledge of SQL for querying, data manipulation, and database object management. Data migration from existing versions to new versions. Backup & restore from Prod to non-prod. Good to have experience in cross-technology migration to Postgres (i.e., from Oracle/SQL Server to Postgres). Strong analytical and problem-solving abilities to diagnose and resolve complex database issues. Effective communication skills for collaborating with development teams, system administrators, and other stakeholders. Ability to work effectively within a team environment. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
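As an illustration of the replication monitoring this posting describes, the sketch below checks standby lag from a PostgreSQL primary via pg_stat_replication. It assumes the psycopg2 driver and PostgreSQL 10+; the DSN and the 64 MB threshold are placeholders, not HSBC specifics.

```python
# Minimal sketch: report streaming-replication lag per connected standby.
# Run against the primary. Assumes psycopg2 and PostgreSQL 10+.
import psycopg2

LAG_THRESHOLD_BYTES = 64 * 1024 * 1024  # flag standbys more than 64 MB behind (arbitrary)

def check_replication_lag(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT application_name,
                   state,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
            FROM pg_stat_replication
            """
        )
        rows = cur.fetchall()
        if not rows:
            print("WARNING: no standbys connected")
        for name, state, lag in rows:
            status = "OK" if (lag or 0) < LAG_THRESHOLD_BYTES else "LAGGING"
            print(f"{status}: standby={name} state={state} replay_lag_bytes={lag}")

if __name__ == "__main__":
    # Placeholder connection string; credentials would come from a vault in practice.
    check_replication_lag("host=primary.example.internal dbname=postgres user=monitor")
```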

Posted 3 days ago

Apply

12.0 years

0 Lacs

trivandrum, kerala, india

On-site

Senior Site Reliability Engineer (SRE II) Own availability, latency, performance, and efficiency for Zafin’s SaaS on Azure. You’ll define and enforce reliability standards, lead high-impact projects, mentor engineers, and eliminate toil at scale. Reports to the Director of SRE. What you’ll do SLIs/SLOs & contracts: Define customer-centric SLIs/SLOs for Tier-0/Tier-1 services. Publish, review quarterly, and align teams to them. Error budgeting (policy & tooling): Run the error-budget policy with multi-window, multi-burn-rate alerts; clear runbooks and paging thresholds. Gate changes by budget status (freeze/relax rules) wired into CI/CD. Maintain SLO/EB dashboards (Azure Monitor, Grafana/Prometheus, App Insights). Run weekly SLO reviews with engineering/product. Drive roadmap tradeoffs when budgets are at risk; land reliability epics. Incidents without drama: Lead SEV1/SEV2, own comms, run blameless postmortems, and make corrective actions stick. Engineer reliability in: Multi-AZ/region patterns (active-active/DR), PDBs/Pod Topology Spread, HPA/VPA/KEDA, resilient rollout/rollback. AKS at scale: Harden clusters (network, identity, policy), optimize node/pod density, ingress (AGIC/Nginx); mesh optional. Observability that works: Metrics/traces/logs with Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana, OpenTelemetry. Alert on symptoms, not noise. IaC & policy: Terraform/Bicep modules, GitOps (Flux/Argo), policy-as-code (Azure Policy/OPA Gatekeeper). No snowflakes. CI/CD reliability: Azure DevOps/GitHub Actions with canary/blue-green, progressive delivery, auto-rollback, Key Vault-backed secrets. Capacity & performance: Load testing, right-sizing, autoscaling; partner with FinOps to reduce spend without hurting SLOs. DR you can trust: Define RTO/RPO, test backups/restore, run game days/chaos drills, validate ASR and multi-region failover. Secure by default: Entra ID (Azure AD), managed identities, Key Vault rotation, VNets/NSGs/Private Link, shift-left checks in CI. Reduce toil: Automate recurring ops, build self-service runbooks/chatops, publish golden paths for product teams. Customer escalations: Be the technical owner on calls; communicate tradeoffs and recovery plans with authority. Document to scale: Architectures, runbooks, postmortems, SLIs/SLOs—kept current and discoverable. (If applicable) Streaming/ETL reliability: Apply SRE practices (SLOs, backpressure, idempotency, replay) to NiFi/Flink/Kafka/Redpanda data flows. Minimum qualifications Bachelor’s in CS/Engineering (or equivalent experience). 12+ years in production ops/platform/SRE, including 5+ years on Azure . PostgreSQL (must-have): Deep operational expertise incl. HA/DR, logical/physical replication, performance tuning (indexes/EXPLAIN/ANALYZE, pg_stat_statements), autovacuum strategy, partitioning, backup/restore testing, and connection pooling (pgBouncer). Prefer experience with Azure Database for PostgreSQL – Flexible Server . Azure core: AKS (must-have) ; Front Door/App Gateway, API Management, VNets/NSGs/Private Link, Storage, Key Vault, Redis, Service Bus/Event Hubs. Observability: Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana; SLO design and error-budget operations. IaC/automation: Terraform and/or Bicep; PowerShell and Python; GitOps (Flux/Argo). Pipelines in Azure DevOps or GitHub Actions. Proven incident leadership at scale, blameless postmortems, and SLO/error-budget governance with change gating. Mentorship and crisp written/verbal communication. 
Preferred (nice to have) Apache NiFi , Apache Flink , Apache Kafka or Redpanda (self-managed on AKS or managed equivalents); schema management, exactly-once semantics, backpressure, dead-letter/replay patterns. Azure Solutions Architect Expert , CKA/CKAD. ITSM (ServiceNow), on-call tooling (PagerDuty/Opsgenie). Compliance/SecOps (SOC 2, ISO 27001), policy-as-code, workload identity. OpenTelemetry, eBPF tooling, or service mesh. Multi-tenant SaaS and cost optimization at scale.
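The multi-window, multi-burn-rate error-budget alerting referenced in this posting can be illustrated with a small Python calculation. This is a generic sketch of the pattern (thresholds follow the widely cited Google SRE Workbook example), not Zafin's actual policy; the SLI counts below are invented.

```python
# Sketch of multi-window, multi-burn-rate paging logic for a 99.9% availability SLO.
SLO = 0.999
ERROR_BUDGET = 1 - SLO

def burn_rate(good: int, total: int) -> float:
    """How many times faster than 'exactly on budget' the error budget is being spent."""
    if total == 0:
        return 0.0
    error_ratio = 1 - good / total
    return error_ratio / ERROR_BUDGET

def should_page(long_window, short_window, threshold: float = 14.4) -> bool:
    """Page only if both the long window (e.g. 1h) and the short window (e.g. 5m)
    exceed the burn-rate threshold; this suppresses pages for already-recovered blips."""
    return (burn_rate(*long_window) >= threshold and
            burn_rate(*short_window) >= threshold)

if __name__ == "__main__":
    # (good_requests, total_requests) per window -- illustrative numbers only.
    one_hour = (98_500, 100_000)   # 1.5% errors -> burn rate 15x
    five_min = (8_840, 9_000)      # ~1.8% errors -> burn rate ~17.8x
    print("page on-call:", should_page(one_hour, five_min))
```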

Posted 3 days ago

Apply

0 years

0 Lacs

greater chennai area

Remote

India| EST | Remote | Work from Home Work shift timings : EST : 5:30PM to 2:30 AM IST Why Pythian? At Pythian, we are experts in strategic database and analytics services, driving digital transformation and operational excellence. Pythian, a multinational company, was founded in 1997 and started by ensuring the reliability and performance of mission-critical databases. We quickly earned a reputation for solving tough data challenges. We were there when the industry moved from on-premises to cloud environments, and as enterprises sought more from their data, we expanded our competencies to include advanced analytics. Today, we empower organizations to embrace transformation and leverage advanced technologies, including AI, to stay competitive. We deliver innovative solutions that meet each client’s data goals and have built strong partnerships with Google Cloud, AWS, Microsoft, Oracle, SAP, and Snowflake. The powerful combination of our extensive expertise in data and cloud and our ability to keep on top of the latest bleeding edge technologies make us the perfect partner to help mid and large-sized businesses transform to stay ahead in today’s rapidly changing digital economy. Why you? As an Oracle Database Consultant you will be a part of a team to supply complete support for all aspects of managed database and application infrastructure operations to a variety of Pythian’s customers. If this is you, and you wonder what it would be like to work at Pythian, reach out to us and find out! Intrigued to see what a life is like at Pythian? Check out #pythianlife on LinkedIn and follow @loveyourdata on Instagram! Not the right job for you? Check out what other great jobs Pythian has open around the world! Pythian Careers What will you be doing? Installing, configuring and upgrading Oracle databases. Oracle Administration including: Experience with RAC, RMAN, Data Guard, Golden Gate, Exadata, Performance Tuning, WebLogic middleware - Forms and Reports, Various storage engines, Oracle customer tools, Performance tuning of Oracle databases, Oracle technical support, Oracle tools. Designing and implementing various Oracle backup/recovery strategies. Oracle replication and slave setup, coding scripts, procedures, functions, etc. Developing methods for monitoring, Linux/Unix and Shell scripting. Experience with RAC, working directly with external customers, Project managing. Coordinating, analyzing, designing, implementing and administering IT solutions. Recommending best practices for improvements to current operational processes. Administering backup procedures and disaster recovery plans. Presenting technical courses to customers. Participating in on-call coverage rotation plan. Communicating status and planning activities to customers and team members. Collaborating with remote team members. Working Conditions Participate in on-call rotation and periodic overtime. Ability to perform primary job functions while standing or sitting for extended periods of time. Dexterity of hands and fingers (or skill with adaptive devices) to operate a computer keyboard, mouse, and other computing equipment. The incumbent must spend long hours in intense concentration. Stress may be caused by the need to complete tasks within tight deadlines. What do we need from you? Interfacing with external customers, strong customer service focus with the ability to maintain customer expectations and priorities. Excellent oral and written communication. 
Self-motivated and directed, while working in a fast-paced demanding environment. Keen attention to detail. Strong analytical, evaluative, and problem-solving abilities. Very effective organizational skills. Ability to work in a team. Demonstrate sound work ethics. Understanding of current IT service standards such as ITIL. Undergraduate degree in computer science, computer engineering, information technology or related field or equivalent experience. What do you get in return? Love your career: Competitive total rewards and salary package. Blog during work hours; take a day off and volunteer for your favorite charity. Love your work/life balance: Flexibly work remotely from your home, there’s no daily travel requirement to an office! All you need is a stable internet connection. Love your coworkers: Collaborate with some of the best and brightest in the industry! Love your development: Hone your skills or learn new ones with our substantial training allowance; participate in professional development days, attend training, become certified, whatever you like! Love your workspace: We give you all the equipment you need to work from home including a laptop with your choice of OS, and an annual budget to personalize your work environment! Love yourself: Pythian cares about the health and well-being of our team. You will have an annual wellness budget to make yourself a priority (use it on gym memberships, massages, fitness and more). Additionally, you will receive a generous amount of paid vacation and sick days, as well as a day off to volunteer for your favorite charity. Disclaimer The successful applicant will need to fulfill the requirements necessary to obtain a background check. Accommodations are available upon request for candidates taking part in any aspect of the selection process.
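As a small illustration of the routine monitoring an Oracle DBA role like this typically automates, the Python sketch below reports tablespace usage with the python-oracledb driver. It is generic example code, not Pythian tooling; the connection details and 85% threshold are assumptions.

```python
# Minimal Oracle health check: flag tablespaces above a usage threshold.
# Assumes the python-oracledb driver and a monitoring account with access to
# DBA_TABLESPACE_USAGE_METRICS; connection details are placeholders.
import oracledb

USAGE_ALERT_PCT = 85.0  # arbitrary threshold for this example

def check_tablespaces(user: str, password: str, dsn: str) -> None:
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT tablespace_name, used_percent "
                "FROM dba_tablespace_usage_metrics "
                "ORDER BY used_percent DESC"
            )
            for name, used_pct in cur:
                flag = "ALERT" if used_pct >= USAGE_ALERT_PCT else "ok"
                print(f"{flag}: {name} {used_pct:.1f}% used")

if __name__ == "__main__":
    check_tablespaces("monitor", "secret", "db-host.example.internal/ORCLPDB1")
```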

Posted 3 days ago

Apply

0 years

2 - 5 Lacs

hyderābād

On-site

Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Consultant, MySQL DBA Administrator! In this role, we’re looking for a MySQL DBA Administrator with hands-on expertise in managing MySQL databases, performance tuning, and implementing data security best practices. This role requires significant experience in cloud environments, particularly with AWS, Google Cloud Platform (GCP), or Azure, to support and maintain scalable, high-performance database infrastructure. Responsibilities Database Administration: Manage MySQL database instances across development, staging, and production environments, ensuring optimal performance, scalability, and reliability. Cloud Management: Configure and manage database instances in cloud environments (AWS RDS, GCP Cloud SQL, Azure MySQL), with strong knowledge of cloud-native tools and best practices for backups, disaster recovery, and security. Performance Tuning and Optimization: Monitor and tune MySQL instances, queries, and indexes to improve database performance, leveraging tools such as MySQL Workbench, PMM, or similar. High Availability and Disaster Recovery: Implement and maintain high availability, replication, clustering, and disaster recovery solutions to ensure business continuity. Database Security: Develop and enforce database security standards, manage user roles and permissions, and ensure compliance with industry regulations. Backup and Recovery: Plan, schedule, and automate regular database backups; design and execute data recovery solutions to prevent data loss. Monitoring and Troubleshooting: Use monitoring tools (such as Prometheus, Nagios, Datadog) to track database health and performance. Troubleshoot and resolve issues to minimize downtime. Automation and Scripting: Develop scripts and automated processes for routine database maintenance and monitoring, primarily using shell scripting, Python, or other automation tools. Documentation: Create and maintain database documentation, including architecture diagrams, processes, and procedures, for efficient knowledge sharing Support 24x7 on-call rotation for critical production environments Qualifications we seek in you! Minimum Qualifications Bachelor’s degree in IS, Computer Science, MIS Management, or related field, or equivalent combination of education and experience required. 
Proficient in MySQL DBA, with significant exposure to cloud database environments. Strong knowledge of cloud databases and cloud services (AWS RDS, GCP Cloud SQL, or Azure MySQL). Proficient in MySQL tuning, backup/recovery strategies, and disaster recovery. Experience with high availability solutions (replication, clustering, failover) in MySQL. Knowledge of automation tools and scripting languages (Bash, Python, etc.). Experience with database monitoring tools like Prometheus, Datadog, or Nagios. Excellent troubleshooting skills with a proactive approach to database health and optimization. Strong communication skills, with the ability to collaborate effectively with cross-functional teams. Must be well organized, thrive in a sense-of-urgency environment, leverage best practices, and most importantly, innovate through any problem with a can-do attitude. Strong leadership skills when troubleshooting across multiple vendor platforms and working out technical issues. Preferred Qualifications/ Skills Certifications in MySQL or cloud platforms (AWS Certified Database - Specialty, Google Cloud Professional Database Engineer, etc.). Familiarity with NoSQL databases or other RDBMS like PostgreSQL or MariaDB. Experience with DevOps tools and methodologies (Terraform, Ansible, CI/CD pipelines) for database infrastructure management Why join Genpact? Lead AI-first transformation – Build and scale AI solutions that redefine industries Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career—Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills Grow with the best – Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace Committed to ethical AI – Work in an environment where governance, transparency, and security are at the core of everything we build Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Sep 10, 2025, 3:46:58 PM Unposting Date Ongoing Master Skills List Consulting Job Category Full Time
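One of the routine checks this role describes, verifying replica health, can be sketched as follows in Python. The example assumes MySQL 8.0.22+ (SHOW REPLICA STATUS) and the mysql-connector-python driver; host and credentials are placeholders, and older servers would use SHOW SLAVE STATUS with the corresponding column names instead.

```python
# Minimal MySQL replication health check for a replica host.
import mysql.connector

def check_replica(host: str, user: str, password: str) -> None:
    conn = mysql.connector.connect(host=host, user=user, password=password)
    try:
        cur = conn.cursor(dictionary=True)
        cur.execute("SHOW REPLICA STATUS")
        status = cur.fetchone()
        if status is None:
            print(f"{host}: not configured as a replica")
            return
        io_ok  = status["Replica_IO_Running"] == "Yes"
        sql_ok = status["Replica_SQL_Running"] == "Yes"
        lag    = status["Seconds_Behind_Source"]
        print(f"{host}: io_thread={io_ok} sql_thread={sql_ok} lag_seconds={lag}")
    finally:
        conn.close()

if __name__ == "__main__":
    check_replica("replica-1.example.internal", "monitor", "secret")
```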

Posted 4 days ago

Apply

170.0 years

10 - 10 Lacs

hyderābād

On-site

Country/Region: IN Requisition ID: 29499 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Snowflake Developer Description: Area(s) of responsibility About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Experience: 5+ years Location: India, Hybrid Employment Type: Full-time Job Summary We are seeking an experienced Snowflake Architect with 15+ years of expertise in data warehousing, cloud architecture, and Snowflake implementations. The ideal candidate will design, optimize, and manage large-scale Snowflake data platforms, ensuring scalability, performance, and security. This role requires deep technical knowledge of Snowflake, cloud ecosystems, and data engineering best practices. Key Responsibilities 1. Snowflake Architecture & Design Lead the design and implementation of Snowflake data warehouses, data lakes, and data marts. Define best practices for Snowflake schema design, clustering, partitioning, and optimization. Architect multi-cloud (AWS) Snowflake deployments with seamless integration. Design data sharing, replication, and failover strategies for high availability. 2. Performance Optimization & Scalability Optimize query performance using Snowflake features (warehouse sizing, caching, materialized views). Implement automated scaling strategies for dynamic workloads. Troubleshoot and resolve performance bottlenecks in large-scale Snowflake environments. 3. Data Integration & Pipeline Development Architect ETL/ELT pipelines using Snowflake, Coalesce and other tools. Integrate Snowflake with BI tools (Tableau, Power BI), ML platforms, and APIs. Implement CDC (Change Data Capture), streaming, and batch processing solutions. 4. Security, Governance & Compliance Define RBAC, data masking, row-level security, and encryption policies in Snowflake. Ensure compliance with GDPR, CCPA, HIPAA, and SOC2 regulations. Establish data lineage, cataloging, and auditing using Snowflake’s governance features. 5. Team Leadership & Strategy Mentor data engineers, analysts, and developers on Snowflake best practices. Collaborate with C-level executives to align Snowflake strategy with business goals. Evaluate emerging trends (AI/ML in Snowflake, Iceberg tables, Unistore) for innovation. Deep knowledge of Snowflake features (Time Travel, Zero-Copy Cloning, Snowpark, Streams & Tasks). Experience with cloud platforms (AWS S3, Azure Blob). Strong understanding of data modeling (star schema, data vault, 3NF). Certifications: Snowflake Advanced Architect. Preferred Skills Knowledge of DataOps, MLOps, and CI/CD pipelines. Familiarity with DBT, Airflow, SSIS & IICS
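Two of the Snowflake features named above, Zero-Copy Cloning and Time Travel, can be illustrated with a short snowflake-connector-python snippet. This is a generic sketch; the account locator, credentials, warehouse, and table names are placeholders.

```python
# Illustrative sketch of Zero-Copy Cloning and Time Travel in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="ARCHITECT",
    password="secret",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Zero-copy clone: instant, storage-free copy of a table for dev/test work.
    cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")
    # Time Travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)")
    print("row count one hour ago:", cur.fetchone()[0])
finally:
    conn.close()
```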

Posted 4 days ago

Apply

8.0 - 10.0 years

3 - 7 Lacs

hyderābād

On-site

Company Profile: We’re Hiring at CGI for our GCC - Right Here in Hyderabad! Join us at the intersection of technology, finance, and innovation. You will be working to support the PNC Financial Services Group, one of the top-tier financial institutions in the U.S. You’ll help shape digital solutions for a global enterprise—from the ground up. This is more than a job. It’s your opportunity to: Work on cutting-edge technologies, collaborate with global teams, and build a career with purpose and impact. Ready to build the future of banking? Let’s talk.
Job Title: Lead Analyst Position: Java Developer Experience: 8-10 Years Category: Software Development/Engineering Shift: General Main location: India, Telangana, Hyderabad Position ID: J0225-1964 Employment Type: Full Time
CGI is looking for a talented and motivated Java developer. The developer is one of the most critical roles on the Data Streaming Platform team. The ability to build Java applications for data pipelines using Kafka and Oracle is essential to the platform. Here are some skills required:
Core Java Skills
o Strong understanding of Java
Apache Kafka Basics
o Understanding of Kafka architecture (brokers, partitions, topics, producers, consumers) (High level)
o Experience with Kafka Producers and Consumers using the Kafka Java client
o Knowledge of Kafka topic configurations (retention, replication, partitioning) (High level)
o Understanding of the Kafka Streams Distributed Processing Concepts (Just a high level)
o Familiarity with event-driven architecture
o Knowledge of exactly-once processing vs at-least-once processing
o Understanding of stream-table duality (Kafka Streams vs. KTables)
o Schema Management
o Experience with Avro, Protobuf, or JSON for structured messages
Integration with External Systems
o Connecting Kafka Streams with databases (PostgreSQL, MongoDB, Cassandra)
o Using Kafka Connect for external data integration
o Knowledge of REST APIs and how to expose data from Kafka Streams
DevOps and Deployment
o Familiarity with Docker and Kubernetes for containerized deployment
o Using CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI)
o Logging and tracing using ELK (Elasticsearch, Logstash, Kibana) or OpenTelemetry (High level understanding)
Testing Kafka Streams Applications
o Writing unit tests with Mockito and JUnit
o Using TestContainers for integration testing with Kafka
o Validating Kafka Streams topologies using TopologyTestDriver
API developers:
o Experience building REST APIs using Spring Boot
o Experience with Spring Data/Spring Data JPA for connecting to and reading from databases via APIs
o Experience writing unit tests using JUnit/Spock
o Familiarity with CI/CD pipelines using Jenkins
o Familiarity with SQL/NoSQL databases
Nice-to-have Skills:
o Monitoring and Optimization
o Understanding of Kafka Streams metrics (through JMX, Grafana, Prometheus)
o Profiling performance and tuning configurations (buffer sizes, commit intervals)
o Handling out-of-order events and rebalancing issues
o Knowledge of Apache Flink or KSQLDB for alternative stream processing
o Knowledge of Docker, OpenShift
o Experience with tools like Dynatrace for troubleshooting
Your future duties and responsibilities: Design, develop, and optimize Oracle relational database tables, ensuring high availability, scalability, and performance. Optimize SQL queries, indexes, and execution plans for efficient data processing. Develop ETL pipelines and PL/SQL to transform and integrate data from multiple sources.
Implement job scheduling, stored procedures, data validation, and monitoring solutions. Work closely with data architecture, DA teams, and application developers to enable data-driven decision-making. Strong in creating logical and physical data models for RDBMS and NoSQL technologies. Strong expertise in PL/SQL, SQL tuning, stored procedures, and triggers. Knowledge of data modeling, data lakes, and warehousing. Familiarity with Python and shell scripting for transformation and automation. Experience with Big Data & NoSQL technologies (e.g., MongoDB, Kafka, Hadoop). Nice to have: Experience with BIAN (Banking Industry Architecture Network).
Required qualifications to be successful in this role:
Core Java Skills
o Strong understanding of Java
Apache Kafka Basics
o Understanding of Kafka architecture (brokers, partitions, topics, producers, consumers) (High level)
o Experience with Kafka Producers and Consumers using the Kafka Java client
o Knowledge of Kafka topic configurations (retention, replication, partitioning) (High level)
o Understanding of the Kafka Streams Distributed Processing Concepts (Just a high level)
o Familiarity with event-driven architecture
o Knowledge of exactly-once processing vs at-least-once processing
o Understanding of stream-table duality (Kafka Streams vs. KTables)
o Schema Management
o Experience with Avro, Protobuf, or JSON for structured messages
Integration with External Systems
o Connecting Kafka Streams with databases (PostgreSQL, MongoDB, Cassandra)
o Using Kafka Connect for external data integration
o Knowledge of REST APIs and how to expose data from Kafka Streams
DevOps and Deployment
o Familiarity with Docker and Kubernetes for containerized deployment
o Using CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI)
o Logging and tracing using ELK (Elasticsearch, Logstash, Kibana) or OpenTelemetry (High level understanding)
Testing Kafka Streams Applications
o Writing unit tests with Mockito and JUnit
o Using TestContainers for integration testing with Kafka
o Validating Kafka Streams topologies using TopologyTestDriver
API developers:
o Experience building REST APIs using Spring Boot
o Experience with Spring Data/Spring Data JPA for connecting to and reading from databases via APIs
o Experience writing unit tests using JUnit/Spock
o Familiarity with CI/CD pipelines using Jenkins
o Familiarity with SQL/NoSQL databases
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
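The Kafka skills listed above target the Java client; purely as an illustration of the same producer/consumer concepts, here is a minimal sketch using the confluent-kafka Python library. The broker address, topic, and group id are placeholders.

```python
# Minimal produce/consume round trip illustrating keyed messages and consumer groups.
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"
TOPIC = "payments.events"   # hypothetical topic

def produce_one() -> None:
    producer = Producer({"bootstrap.servers": BROKER})
    # Keyed messages land on the same partition, preserving per-key ordering.
    producer.produce(TOPIC, key="account-42", value='{"amount": 100}')
    producer.flush()  # block until delivery is acknowledged

def consume_some(max_polls: int = 10) -> None:
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "demo-consumer-group",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    try:
        for _ in range(max_polls):
            msg = consumer.poll(1.0)          # wait up to 1s for a record
            if msg is None or msg.error():
                continue
            print(msg.key(), msg.value())
    finally:
        consumer.close()

if __name__ == "__main__":
    produce_one()
    consume_some()
```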

Posted 4 days ago

Apply

5.0 - 8.0 years

4 - 6 Lacs

gurgaon

On-site

Team: Sonata Close date: Tuesday, 30 September 2025 Working pattern: Full time Contract Type: Permanent Location: Gurgaon (SEZ1) Department: 17 - 17 Service Management EMEA WM Description & Requirements: Bravura’s Commitment and Mission At Bravura Solutions, collaboration, diversity and excellence matter. We value your ideas, giving you room to be curious and innovate in an exciting, fast-paced, and flexible environment. We look for many different skills and abilities, as well as how you can add value to Bravura and our culture. As a Global FinTech market leader and ASX listed company, Bravura is a trusted partner to over 350 leading financial services clients, delivering wealth management technology and products. We invest significantly in our technology hubs and innovation labs, which inspire and drive our creative, future-focused mindset. We take pride in developing cutting-edge, digital first technology solutions that support our clients to achieve financial security and prosperity for their customers. About The Role The Sonata Application Support Senior Analyst will provide frontline support to Bravura Solutions’ clients using the Sonata Administration Platform. The role involves managing the full client issue lifecycle, including analysis, replication, and testing of reported defects, while maintaining a solution-focused approach. The analyst will also be responsible for escalating requests to the relevant development or consulting teams as needed. What You’ll Do Analyze and resolve client-reported issues, including: Identifying software defects or missing functionality (debugging code when necessary). Performing data fixes on client databases as required. Resolving issues arising from incorrect application usage. Addressing system configuration errors. Proactively manage and respond to client service requests and product defect reports. Handle client queries and incidents in line with defined SLAs. Route defects, service requests, and enhancement requests to the appropriate teams within Bravura. Assess the severity, impact, and risk of incidents, escalating to management where necessary. Monitor, track, and review incident progress while keeping clients informed. Facilitate and participate in client meetings focused on incident management and support processes. Record and maintain accurate data within the JIRA Service Desk system. Assist with client software release coordination. Perform technical administration and housekeeping tasks. Regularly review SLAs for assigned tickets to ensure timely responses. Monitor aging and on-hold tickets, ensuring proper client communication. Escalate capacity or capability challenges to the respective Squad Lead. 
Unleash your potential To be successful in this role, your background and experience will include: Education: B.E./B.Tech/MCA Experience: 5–8 years in application development, support, or consulting Technical Skills Strong knowledge of SQL (preferably Oracle) and databases Hands-on experience with Core Java , Eclipse IDE, and frameworks like Hibernate, JSP/JSF, and Web Services Ability to write/debug application code in modern object-oriented languages Familiarity with software development lifecycle and service delivery processes (Incident, Problem, Change, Configuration Management) Experience working with Helpdesk/JIRA Service Desk environment Exposure to cloud platforms (AWS or Azure) preferred Java Certification (good to have) Professional Skills Strong troubleshooting and debugging abilities Excellent oral and written communication skills ; able to interact with both business and technical stakeholders Customer-centric mindset with proven service skills Ability to multi-task, prioritize, and work under pressure with minimal supervision Strong team player with interpersonal and problem-solving skills Other Details Flexibility to work in rotational shifts (General, UK, occasional night shift) Based in Gurgaon , with occasional travel to Bravura offices or client sites Knowledge of the financial services industry (Wealth Management/Superannuation) is a plus Prior experience in Application Support roles will be advantageous So, what’s next? We make hiring decisions based on your experience, skills and passion so even if you don’t match every listed skill or tick all the boxes, we’d still love to hear from you. Please note that interviews are primarily conducted virtually and if you require any reasonable adjustments or would like to note which pronouns you use, please let us know. All final applicants for this position will be asked to consent to a criminal record and background check. Please note that people with criminal records are not automatically barred from applying for this position. Each application will be considered on its merits.
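The "data fix" responsibility described in this posting usually follows a verify-before-commit discipline; the sketch below illustrates that pattern in Python with the python-oracledb driver. The table, column, bind values, and expected row count are hypothetical, and this is not Bravura's procedure.

```python
# Sketch of a controlled data fix: commit only if the affected row count matches
# what was agreed in the support ticket; otherwise roll back.
import oracledb

EXPECTED_ROWS = 3   # row count agreed with the client in the ticket (hypothetical)

def apply_data_fix(dsn: str, user: str, password: str) -> None:
    conn = oracledb.connect(user=user, password=password, dsn=dsn)
    try:
        cur = conn.cursor()
        cur.execute(
            "UPDATE policy_holdings SET status = 'ACTIVE' "
            "WHERE status = 'SUSPENDED' AND batch_id = :batch",
            batch="BATCH-20250901",   # hypothetical identifier
        )
        if cur.rowcount != EXPECTED_ROWS:
            conn.rollback()
            raise RuntimeError(
                f"expected {EXPECTED_ROWS} rows, matched {cur.rowcount}; rolled back"
            )
        conn.commit()
        print(f"fixed {cur.rowcount} rows")
    finally:
        conn.close()
```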

Posted 4 days ago

Apply

0 years

1 Lacs

gurgaon

On-site

Overview: We are seeking an experienced hardware engineer to design and prototype a compact electromechanical dispensing mechanism. The project involves creating a reliable single-item feed/dispense system integrated with sensors and control electronics. Key Responsibilities: Design and assemble a motor-driven feed mechanism for consistent single-item dispensing. Select and integrate DC/stepper motors, optical/photoelectric sensors, and control circuitry. Develop basic control logic (Arduino/PLC) and wiring diagrams. Ensure proper isolation for 12–24 V industrial signaling. Test and iterate for speed, accuracy, and reliability. Provide documentation for replication and future scaling. Requirements: Proven experience with embedded hardware prototyping, mechatronics, or industrial automation. Strong knowledge of DC/stepper motors, relays, optocouplers, and PLC/DIO interfaces. Familiarity with Ethernet communication for PC or SCADA integration. Ability to fabricate or guide fabrication of simple frames or mounts. Strong troubleshooting and problem-solving skills. Availability for milestone-based updates and communication. Preferred: Experience with automated dispensing or similar feed mechanisms. Located in Gurgaon, Delhi or Noida or able to ship prototype hardware. Job Types: Contractual / Temporary, Freelance Contract length: 2 months Pay: From ₹10,000.00 per month Work Location: In person
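The control logic called for here would normally live on the Arduino or PLC; purely as an illustration of the dispense loop (step the feed motor until the photoelectric sensor registers one item, then stop), here is a Python/gpiozero sketch. Pin numbers, step timing, and the driver wiring are assumptions, not part of the brief.

```python
# Illustrative single-item dispense loop on Raspberry Pi-style GPIO (gpiozero).
from time import sleep
from gpiozero import OutputDevice, Button

step_pin = OutputDevice(17)              # pulse input of a stepper driver (assumed wiring)
enable_pin = OutputDevice(27)            # driver enable
item_sensor = Button(22, pull_up=True)   # photoelectric sensor, active when the beam is broken

def dispense_one(max_steps: int = 2000) -> bool:
    """Advance the feed mechanism until exactly one item breaks the sensor beam."""
    enable_pin.on()
    try:
        for _ in range(max_steps):       # hard limit so a jam cannot spin forever
            step_pin.on()
            sleep(0.001)                 # one step pulse (~1 kHz step rate)
            step_pin.off()
            sleep(0.001)
            if item_sensor.is_pressed:   # item detected at the outlet
                return True
        return False                     # report a fault for troubleshooting
    finally:
        enable_pin.off()

if __name__ == "__main__":
    print("dispensed" if dispense_one() else "jam / empty hopper")
```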

Posted 4 days ago

Apply

8.0 - 11.0 years

5 - 8 Lacs

gurgaon

On-site

Team: Sonata Close date: Thursday, 30 October 2025 Working pattern: Full time Contract Type: Permanent Location: Gurgaon (SEZ1) Department: 17 - 17 Service Management EMEA WM Description & Requirements: Bravura’s Commitment and Mission At Bravura Solutions, collaboration, diversity and excellence matter. We value your ideas, giving you room to be curious and innovate in an exciting, fast-paced, and flexible environment. We look for many different skills and abilities, as well as how you can add value to Bravura and our culture. As a Global FinTech market leader and ASX listed company, Bravura is a trusted partner to over 350 leading financial services clients, delivering wealth management technology and products. We invest significantly in our technology hubs and innovation labs, which inspire and drive our creative, future-focused mindset. We take pride in developing cutting-edge, digital first technology solutions that support our clients to achieve financial security and prosperity for their customers. About The Role The Sonata Application Support Consultant will be providing front line support for Bravura Solutions clients using the Sonata Administration Platform. In addition, the Sonata Support Consultant will have responsibility for managing the client issue lifecycle; the analysis, replication and testing of identified defects with a solution-oriented approach; and escalating requests to the appropriate development and consulting teams. Key Responsibilities The analysis and resolution of issues raised by Clients, including: Identification of faulty software and omissions in functionality (debugging code where necessary). Correction of data (Data Fix) where required on client databases. Correction of problems caused by incorrect use of application functionality. Correction of system configuration faults. Actively responding to clients relating to service requests and product defects. Respond to client queries and incidents as per defined service level agreements. Routing defects, service requests or enhancements to the appropriate teams within Bravura as required. Ensuring severity, impact and risk of incidents is understood and escalating to management if required. Monitoring, tracking and reviewing the progress of an incident, and keeping the customer informed. Facilitate and attend client meetings to discuss incident management and support processes. Look for ways to optimize the IRT (Incident Response Time) within acceptable quality parameters. Assist junior team members with blocked issues and act as a mentor. Work with SDMs and clients to facilitate issues requiring feedback. Review the SLAs on assigned tickets to ensure that a timely response is provided. Promote problem-solving techniques within the team and foster their use. Escalate capacity and capability issues to the respective Squad Lead. In the Lead’s absence, manage the Squad scrum and keep an up-to-date status of issues assigned to team members. Help Squad members with any impediments and act as the first point of escalation. Unleash your potential To be successful in this role, your background and experience will include: B.E./B.Tech/MCA. 8-11 years of experience. A good understanding of best practice application development methodology, together with: An excellent working knowledge of the SQL language. Ability to develop basic application code using a modern object-based language. Working knowledge of Microsoft Office. A basic understanding of service delivery processes, i.e.
Incident management Problem management Change and Configuration management Experience within a helpdesk/JIRA service desk environment. Knowledge of software development lifecycle. Experience in business analysis, consulting or system testing role. Experience in providing consultancy and support to clients. Whilst the role will be predominantly based in Gurgaon, the ability to travel between Bravura offices and Client sites may be required Technical Excellent working knowledge of Core Java, including the Eclipse Development Platform Excellent working knowledge of popular Java frameworks such as Hibernate, JSP/JSF and web services. Troubleshooting and debugging capabilities/techniques Proven knowledge of databases, to include solid experience of SQL preferable on Oracle Database Good to have Java Certification Cloud exposure AWS or Azure Personal Requirement of working in rotational shifts General, UK (2:30 PM – 11:30 PM) and occasionally night shift Excellent spoken English Excellent oral and written communication skills with the ability to distinguish between business and technical audiences Proven aptitude with regards to good customer service skills Ability to multi-task, prioritize workload and work under pressure Ability to work unsupervised, managing goals and deliverables Demonstrated Solution based problem solving skills Excellent team and interpersonal skills Prior knowledge of working on Application Support model will be a plus A knowledge of the financial service industry, preferably Wealth Management or Superannuation products will be a plus Working at Bravura Our people are the heart of our business. We work hard to provide a rich employee experience and a robust framework for ongoing career development. Competitive salary and employee benefits scheme 2 paid volunteering days and a range of community-based initiatives to get involved in Parental (including secondary) leave policy Free meals and transport Medical and Accident Insurance So, what’s next? We make hiring decisions based on your experience, skills and passion so even if you don’t match every listed skill or tick all the boxes, we’d still love to hear from you. Please note that interviews are primarily conducted virtually and if you require any reasonable adjustments or would like to note which pronouns you use, please let us know. All final applicants for this position will be asked to consent to a criminal record and background check. Please note that people with criminal records are not automatically barred from applying for this position. Each application will be considered on its merits.

Posted 4 days ago

Apply

0 years

0 Lacs

greater kolkata area

Remote

India| EST | Remote | Work from Home Work shift timings : EST : 5:30PM to 2:30 AM IST Why Pythian? At Pythian, we are experts in strategic database and analytics services, driving digital transformation and operational excellence. Pythian, a multinational company, was founded in 1997 and started by ensuring the reliability and performance of mission-critical databases. We quickly earned a reputation for solving tough data challenges. We were there when the industry moved from on-premises to cloud environments, and as enterprises sought more from their data, we expanded our competencies to include advanced analytics. Today, we empower organizations to embrace transformation and leverage advanced technologies, including AI, to stay competitive. We deliver innovative solutions that meet each client’s data goals and have built strong partnerships with Google Cloud, AWS, Microsoft, Oracle, SAP, and Snowflake. The powerful combination of our extensive expertise in data and cloud and our ability to keep on top of the latest bleeding edge technologies make us the perfect partner to help mid and large-sized businesses transform to stay ahead in today’s rapidly changing digital economy. Why you? As an Oracle Database Consultant you will be a part of a team to supply complete support for all aspects of managed database and application infrastructure operations to a variety of Pythian’s customers. If this is you, and you wonder what it would be like to work at Pythian, reach out to us and find out! Intrigued to see what a life is like at Pythian? Check out #pythianlife on LinkedIn and follow @loveyourdata on Instagram! Not the right job for you? Check out what other great jobs Pythian has open around the world! Pythian Careers What will you be doing? Installing, configuring and upgrading Oracle databases. Oracle Administration including: Experience with RAC, RMAN, Data Guard, Golden Gate, Exadata, Performance Tuning, WebLogic middleware - Forms and Reports, Various storage engines, Oracle customer tools, Performance tuning of Oracle databases, Oracle technical support, Oracle tools. Designing and implementing various Oracle backup/recovery strategies. Oracle replication and slave setup, coding scripts, procedures, functions, etc. Developing methods for monitoring, Linux/Unix and Shell scripting. Experience with RAC, working directly with external customers, Project managing. Coordinating, analyzing, designing, implementing and administering IT solutions. Recommending best practices for improvements to current operational processes. Administering backup procedures and disaster recovery plans. Presenting technical courses to customers. Participating in on-call coverage rotation plan. Communicating status and planning activities to customers and team members. Collaborating with remote team members. Working Conditions Participate in on-call rotation and periodic overtime. Ability to perform primary job functions while standing or sitting for extended periods of time. Dexterity of hands and fingers (or skill with adaptive devices) to operate a computer keyboard, mouse, and other computing equipment. The incumbent must spend long hours in intense concentration. Stress may be caused by the need to complete tasks within tight deadlines. What do we need from you? Interfacing with external customers, strong customer service focus with the ability to maintain customer expectations and priorities. Excellent oral and written communication. 
Self-motivated and directed, while working in a fast-paced demanding environment. Keen attention to detail. Strong analytical, evaluative, and problem-solving abilities. Very effective organizational skills. Ability to work in a team. Demonstrate sound work ethics. Understanding of current IT service standards such as ITIL. Undergraduate degree in computer science, computer engineering, information technology or related field or equivalent experience. What do you get in return? Love your career: Competitive total rewards and salary package. Blog during work hours; take a day off and volunteer for your favorite charity. Love your work/life balance: Flexibly work remotely from your home, there’s no daily travel requirement to an office! All you need is a stable internet connection. Love your coworkers: Collaborate with some of the best and brightest in the industry! Love your development: Hone your skills or learn new ones with our substantial training allowance; participate in professional development days, attend training, become certified, whatever you like! Love your workspace: We give you all the equipment you need to work from home including a laptop with your choice of OS, and an annual budget to personalize your work environment! Love yourself: Pythian cares about the health and well-being of our team. You will have an annual wellness budget to make yourself a priority (use it on gym memberships, massages, fitness and more). Additionally, you will receive a generous amount of paid vacation and sick days, as well as a day off to volunteer for your favorite charity. Disclaimer The successful applicant will need to fulfill the requirements necessary to obtain a background check. Accommodations are available upon request for candidates taking part in any aspect of the selection process.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies