0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of Responsibility
Proven experience as a Splunk Engineer with a focus on Splunk cost management, performance bottlenecks, and search and dashboard optimization. Optimize search queries and ensure efficient use of resources within the Splunk environment. Strong understanding of Splunk architecture, Search Processing Language (SPL), and data models. Proficiency in system monitoring and triaging with monitoring tools. Proficiency in scripting languages such as Python.

Roles & Responsibilities
List all Splunk dashboards across all apps and clean up unused ones.
Optimize Splunk queries for heavily used dashboards.
Track Splunk index-level access and understand the usage cost (users vs. cost).
All of this covers 80-100 Splunk indexes and directly supports a 240+ member RTS team.

Essential Job Tasks
Skills with an M/O flag are part of the Specialization: Solution Design - PL2 (Functional); Test Execution - PL3 (Functional); Help the Tribe - PL2 (Behavioural); Think Holistically - PL2 (Behavioural); Knowledge Management - PL2 (Functional); Win the Customer - PL2 (Behavioural); One Birlasoft - PL2 (Behavioural); Results Matter - PL2 (Behavioural); Get Future Ready - PL2 (Behavioural); Requirements Definition and Management - PL2 (Functional); Estimation & Scheduling - PL2 (Functional); Testing Process and Metrics (Management) - PL2 (Functional); Test Planning & Strategizing - PL2 (Functional); Test Design - PL3 (Functional); REST APIs - PL3 (Optional); Jira - PL2 (Optional); Worksoft - PL3 (Mandatory); Jenkins - PL2 (Optional); SAP HANA DB - PL3 (Mandatory); Java - PL3 (Mandatory).
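As a hedged sketch of the dashboard clean-up task above (an illustration, not the employer's actual tooling), the Python snippet below inventories dashboards per app through Splunk's REST endpoint /servicesNS/-/-/data/ui/views; the host, port, and credentials are placeholder assumptions.

import requests

SPLUNK_MGMT = "https://splunk.example.com:8089"  # assumed management endpoint
AUTH = ("admin", "changeme")                     # placeholder credentials

def list_dashboards():
    # Saved dashboards live under data/ui/views; '-' wildcards user and app.
    resp = requests.get(
        f"{SPLUNK_MGMT}/servicesNS/-/-/data/ui/views",
        params={"output_mode": "json", "count": 0},
        auth=AUTH,
        verify=False,  # lab-only shortcut; verify TLS in production
    )
    resp.raise_for_status()
    for entry in resp.json()["entry"]:
        # Print app, dashboard name, and last-updated time to spot stale views.
        print(entry["acl"]["app"], entry["name"], entry["updated"], sep="\t")

if __name__ == "__main__":
    list_dashboards()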
Posted 1 month ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Requirements

Description and Requirements

Position Summary
The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the enterprise. The Database Administrator is part of the end-to-end database delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secure, high-performing, and optimized database solutions. This Database Administrator specializes in the SQL database platform.

Job Responsibilities
Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL and Sybase databases.
Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, and conversion strategies, and performance tuning; manages job scheduling, application releases, database changes, and compliance.
Identifies and resolves problems using structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
Writes scripts to automate routine DBA tasks and documents database maintenance processing flows per standards.
Implements industry best practices while performing database administration tasks.
Works in an Agile model with an understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Able to debug production issues by analyzing logs directly and using tools like Splunk.
Begins tackling organizational impediments.
Learns new technologies based on demand and helps team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 3+ years of IT and infrastructure engineering work experience.

Experience (in years): 3+ years total IT experience and 2+ years relevant experience in SQL Server and Sybase databases.

Technical Skills
Database Management: basic knowledge of managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
Data Infrastructure & Security: basic knowledge of designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
Backup & Recovery: skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Performance Tuning & Optimization: adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
Cloud Computing & Scripting: basic knowledge of cloud computing environments and proficiency in operating-system scripting, enabling seamless integration and automation of database operations.
Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Basic analytical skills to improve application performance.
Basic knowledge of database performance tuning, backup and recovery, infrastructure as code, and observability tools (Elastic).
Strong knowledge of ITSM processes and tools (ServiceNow).
Ability to work 24x7 rotational shifts to support the database platforms.

Other Critical Requirements
Experience with automation tools and programming such as Ansible and Python is preferable.
Excellent analytical and problem-solving skills.
Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
Demonstrated ability to work independently and in a team environment.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
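Returning to the scripting responsibility above ("writes scripts to automate routine DBA tasks"), here is a hedged Python sketch that queries msdb.dbo.backupset through pyodbc to flag databases without a recent full backup; the server name, connection options, and 24-hour threshold are illustrative assumptions.

import pyodbc

# Placeholder connection string; adjust driver, server, and auth as needed.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlprod01;DATABASE=msdb;Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# msdb.dbo.backupset records every backup; type 'D' is a full database backup.
QUERY = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, last_backup in conn.cursor().execute(QUERY):
        print(f"STALE BACKUP: {name} (last full backup: {last_backup})")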
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsibilities
Develop RESTful APIs in Golang for high-throughput applications.
Work with relational databases (MySQL/PostgreSQL) to design schemas, queries, and stored procedures.
Integrate third-party services and internal systems securely.
Write clean, maintainable, and well-tested code.
Collaborate with frontend, DevOps, and QA teams to ensure seamless deployment.

Requirements
Strong hands-on experience in Golang development.
Good understanding of MySQL, including joins, indexes, and optimization.
Experience with REST API design and development.
Familiarity with JSON, HTTP, and secure data transmission (JWT, OAuth, etc.).
Knowledge of Git, CI/CD practices, and Docker is a plus.

This job was posted by Shashank Patil from Oneture Technologies.
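To make the "joins, indexes, and optimization" requirement concrete, here is a minimal sketch, written in Python for consistency with the other examples on this page (the role itself is Go-centric); the table, column, and connection details are invented for illustration.

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="shop"  # placeholders
)
cur = conn.cursor()

# EXPLAIN reveals whether the filter uses an index ('ref'/'range' access)
# or a full table scan ('ALL'), which on a large table usually means a
# missing index.
cur.execute("EXPLAIN SELECT id, total FROM orders WHERE customer_id = %s", (42,))
for row in cur.fetchall():
    print(row)

# If the plan shows a scan, an index on the filter column is the usual fix:
# CREATE INDEX idx_orders_customer ON orders (customer_id);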
Posted 1 month ago
2.0 - 5.0 years
7 - 9 Lacs
Calicut
On-site
Voix Me Technologies powers one of the leading B2C platforms in the Middle East, delivering real-time retail insights and promotional offers to millions of users. Our technology stack processes millions of product records monthly, helping both consumers and retailers make smarter decisions. We work with some of the biggest retail brands across the GCC region.

Role Overview:
We're looking for a Data Engineer to build scalable, high-performance data pipelines and help unlock the full value of Big Data across our internal BI dashboards and analytics modules. You'll transform complex datasets into optimized, reliable structures that support decision-making across the company.

Key Responsibilities:
Big Data Processing: Design scalable data workflows that handle large, high-velocity datasets across multiple countries and markets.
ETL Development: Build robust data pipelines for extracting, cleaning, and transforming structured and semi-structured data.
Data Modeling & Optimization: Design efficient schemas and indexes to support BI use cases and ensure optimal performance at scale.
Cross-System Integration: Integrate data from multiple sources and systems to support internal reporting and analytics.
Team Collaboration: Work closely with data analysts, frontend developers, and product teams to deliver clean, ready-to-use datasets.
Monitoring & Documentation: Maintain clear documentation and proactively monitor pipeline health and performance.

Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related technical field.
2-5 years of experience in data engineering, ETL development, or backend data roles.
Strong SQL skills (MySQL, MariaDB) and knowledge of relational database optimization.
Proficiency in Python for scripting, automation, and data transformation.
Understanding of Big Data principles and how to manage data at scale.

Bonus / Preferred Skills:
Experience with retail, e-commerce, or consumer-facing analytics platforms.
Familiarity with dashboard tools like Metabase, Power BI, or custom data visualization systems.
Exposure to OCR-derived data, semi-structured sources, or Elasticsearch.
Prior experience working with datasets containing millions of records and ensuring performance optimization.
Familiarity with Linux environments and Git-based version control.

Job Type: Full-time
Pay: ₹60,000.00 - ₹80,000.00 per month
Supplemental Pay: Performance bonus
Ability to commute/relocate: Kozhikode, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s):
Do you have experience supporting BI dashboards with clean, structured datasets?
Do you have experience working with retail or e-commerce products?
Education: Bachelor's (Required)
Experience:
Data Engineering: 2 years (Required)
ETL or ELT Development: 2 years (Required)
SQL (MySQL or MariaDB): 2 years (Required)
Python: 2 years (Preferred)
Work Location: In person
Application Deadline: 10/07/2025
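A hedged sketch of the ETL responsibility described above: extract raw rows, normalize a semi-structured field, and load a curated table for the BI dashboards. The connection URL, the raw_offers table, and its JSON attrs column are assumptions made for illustration.

import json
import pandas as pd
from sqlalchemy import create_engine

# Placeholder MySQL/MariaDB URL; any SQLAlchemy-compatible DSN works here.
engine = create_engine("mysql+pymysql://etl:secret@localhost/retail")

# Extract: raw offers with an assumed semi-structured JSON column.
raw = pd.read_sql("SELECT id, name, price, attrs FROM raw_offers", engine)

# Transform: trim names, drop unpriced rows, flatten one JSON attribute.
raw["name"] = raw["name"].str.strip()
raw = raw.dropna(subset=["price"])
raw["brand"] = raw["attrs"].map(lambda a: json.loads(a).get("brand") if a else None)

# Load: replace the curated table consumed by the dashboards.
raw.drop(columns=["attrs"]).to_sql("offers_clean", engine, if_exists="replace", index=False)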
Posted 1 month ago
0.0 - 2.0 years
0 - 0 Lacs
Calicut, Kerala
On-site
Voix Me Technologies powers one of the leading B2C platforms in the Middle East, delivering real-time retail insights and promotional offers to millions of users. Our technology stack processes millions of product records monthly, helping both consumers and retailers make smarter decisions. We work with some of the biggest retail brands across the GCC region.

Role Overview:
We're looking for a Data Engineer to build scalable, high-performance data pipelines and help unlock the full value of Big Data across our internal BI dashboards and analytics modules. You'll transform complex datasets into optimized, reliable structures that support decision-making across the company.

Key Responsibilities:
Big Data Processing: Design scalable data workflows that handle large, high-velocity datasets across multiple countries and markets.
ETL Development: Build robust data pipelines for extracting, cleaning, and transforming structured and semi-structured data.
Data Modeling & Optimization: Design efficient schemas and indexes to support BI use cases and ensure optimal performance at scale.
Cross-System Integration: Integrate data from multiple sources and systems to support internal reporting and analytics.
Team Collaboration: Work closely with data analysts, frontend developers, and product teams to deliver clean, ready-to-use datasets.
Monitoring & Documentation: Maintain clear documentation and proactively monitor pipeline health and performance.

Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related technical field.
2-5 years of experience in data engineering, ETL development, or backend data roles.
Strong SQL skills (MySQL, MariaDB) and knowledge of relational database optimization.
Proficiency in Python for scripting, automation, and data transformation.
Understanding of Big Data principles and how to manage data at scale.

Bonus / Preferred Skills:
Experience with retail, e-commerce, or consumer-facing analytics platforms.
Familiarity with dashboard tools like Metabase, Power BI, or custom data visualization systems.
Exposure to OCR-derived data, semi-structured sources, or Elasticsearch.
Prior experience working with datasets containing millions of records and ensuring performance optimization.
Familiarity with Linux environments and Git-based version control.

Job Type: Full-time
Pay: ₹60,000.00 - ₹80,000.00 per month
Supplemental Pay: Performance bonus
Ability to commute/relocate: Kozhikode, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s):
Do you have experience supporting BI dashboards with clean, structured datasets?
Do you have experience working with retail or e-commerce products?
Education: Bachelor's (Required)
Experience:
Data Engineering: 2 years (Required)
ETL or ELT Development: 2 years (Required)
SQL (MySQL or MariaDB): 2 years (Required)
Python: 2 years (Preferred)
Work Location: In person
Application Deadline: 10/07/2025
Posted 1 month ago
4.0 years
6 - 10 Lacs
Hyderābād
On-site
Description
At Vitech, we believe in the power of technology to simplify complex business processes. Our mission is to bring better software solutions to market, addressing the intricacies of the insurance and retirement industries. We combine deep domain expertise with the latest technological advancements to deliver innovative, user-centric solutions that future-proof and empower our clients to thrive in an ever-changing landscape. With over 1,600 talented professionals on our team, our innovative solutions are recognized by industry leaders like Gartner, Celent, Aite-Novarica, and ISG. We offer a competitive compensation package along with comprehensive benefits that support your health, well-being, and financial security.

Senior Site Reliability Engineer (SRE)
Location: Hyderabad (hybrid role)

Join Our Global Engineering Team
At Vitech, we believe that excellence in production systems starts with engineering-driven solutions to operational challenges. Our Site Reliability Engineering (SRE) team is at the heart of ensuring seamless performance for our clients, preventing potential outages, and proactively identifying and resolving issues before they arise. Our SRE team is a diverse group of talented engineers across India, the US, and Canada, with T-shaped expertise spanning application development, database management, networking, and system administration across both on-premises environments and the AWS cloud. Together, we support mission-critical client environments and drive automation to reduce manual toil, freeing our team to focus on innovation.

About the Role: Senior SRE
As an SRE, you'll be a key player in revolutionizing how we operate production systems for single- and multi-tenant environments. You'll support SRE initiatives and production, and drive infrastructure automation. Working in an Agile team environment, you'll have the opportunity to explore and implement the latest technologies, engage in on-call duties, and contribute to continuous learning as part of an ever-evolving tech landscape. If you're passionate about scalability, reliability, security, and automation of business-critical infrastructure, this role is for you.

What you will do:
Own and manage our AWS cloud-based technology stack, using native AWS services and top-tier SRE tools to support multiple client environments running Java-based applications and a microservices architecture.
Design, deploy, and manage AWS Aurora PostgreSQL clusters for high availability and scalability.
Optimize SQL queries, indexes, and database parameters for performance tuning.
Automate database operations using Terraform, Ansible, AWS Lambda, and the AWS CLI.
Manage Aurora's read replicas, auto-scaling, and failover mechanisms.
Enhance infrastructure-as-code (IaC) patterns using technologies like Terraform, CloudFormation, Ansible, Python, and SDKs.
Collaborate with DevOps teams to integrate Aurora with CI/CD pipelines.
Provide full-stack support, per the assigned schedule, for applications across technologies such as Oracle WebLogic, AWS Aurora PostgreSQL, Oracle Database, Apache Tomcat, AWS Elastic Beanstalk, Docker/ECS, EC2, and S3.
Troubleshoot database incidents, perform root cause analysis, and implement preventive measures.
Document database architecture, configurations, and operational procedures.
Ensure high availability, scalability, and performance of PostgreSQL databases on AWS Aurora.
Monitor database health, troubleshoot issues, and perform root cause analysis for incidents.
Embrace SRE principles such as Chaos Engineering, Reliability, and Reducing Toil.

What We're Looking For:
Proven hands-on experience as an SRE for critical, client-facing applications, with the ability to dive deep into daily SRE tasks, manage incidents, and oversee operational tools.
4+ years of experience managing relational databases (Oracle and/or PostgreSQL) in both cloud and on-premises environments, including SRE tasks like backup/restore, performance issues, and replication (the primary skill required for this role).
3+ years of experience hosting enterprise applications in AWS (EC2, EBS, ECS/EKS, Elastic Beanstalk, RDS, CloudWatch).
Strong understanding of AWS networking concepts (VPC, VPN/DX/endpoints, Route 53, CloudFront, load balancers, WAF).
Familiarity with tools like pgAdmin, psql, or other database management utilities.
Ability to automate routine database maintenance tasks (e.g., vacuuming, reindexing, patching).
Knowledge of backup and recovery strategies (e.g., pg_dump, PITR).
Ability to set up and maintain monitoring and alerting systems for database performance and availability (e.g., CloudWatch, Honeycomb, New Relic, Dynatrace).
Ability to work closely with development teams to optimize database schemas, queries, and application performance, and to provide database support during application deployments and migrations.
Hands-on experience with web/application layers (Oracle WebLogic, Apache Tomcat, AWS Elastic Beanstalk, SSL certificates, S3 buckets).
Experience with containerized applications (Docker, Kubernetes, ECS).
Ability to leverage AWS Aurora features (e.g., read replicas, auto-scaling, multi-region deployments) to enhance database performance and reliability.
Automation experience with infrastructure as code (Terraform, CloudFormation, Python, Jenkins, GitHub/Actions).
Knowledge of multi-region Aurora Global Databases for disaster recovery.
Scripting experience in Python, Bash, Java, JavaScript, or Node.js.
Excellent written/verbal communication and critical thinking.
Willingness to work in shifts and assist your team to resolve issues efficiently.

Join Us at Vitech!
At Vitech, we believe in empowering our teams to drive innovation through technology. If you thrive in a dynamic environment and are eager to drive innovation in SRE practices, we want to hear from you! You'll be part of a forward-thinking team that values collaboration, innovation, and continuous improvement. We provide a supportive and inclusive environment where you can grow as a leader while helping shape the future of our organization.

About Vitech
At Vitech, your expertise drives transformative change in fintech. For over 30 years, Vitech has empowered leading players in insurance, pensions, and retirement with cutting-edge, cloud-native solutions and implementation services. Our mission is clear: harness technology to simplify complex business processes and deliver intuitive, user-centric software that propels our clients' success. At Vitech, you won't just fill a position; you'll join a purpose-driven team on a mission that truly matters. Innovation is at our core, and we empower you to push boundaries, unleash creativity, and contribute to projects that make a real difference in the financial sector. Though our name may be new to you, our impact is recognized by industry leaders like Gartner, Celent, Aite-Novarica, ISG, and Everest Group.

Why Choose Us?
With Vitech, you won't just fill a position; you'll be part of a purpose-driven mission that truly matters.
We pursue innovation relentlessly, empowering you to unleash your creativity and push boundaries. Here, you’ll work on cutting-edge projects that allow you to make a real difference—driving change and improving lives. We value strong partnerships that foster mutual growth. You will collaborate with talented colleagues and industry leaders, building trust and forming relationships that drive success. Your insights and expertise will be essential as you become an integral part of our collaborative community, amplifying not just your career but the impact we have on our clients. We are committed to a focus on solutions that makes a tangible difference. In your role, you will embrace the challenge of understanding the unique pain points faced by our clients. Your analytical skills and proactive mindset will enable you to develop innovative solutions that not only meet immediate needs but also create lasting value. Here, your contributions will directly influence our success and propel your professional growth. At Vitech, we foster an actively collaborative culture where open communication and teamwork are paramount. With our “yes and” philosophy, your ideas will be welcomed and nurtured, allowing you to contribute your unique insights and perspectives. This environment will enhance your ability to work effectively within diverse teams, empowering you to lead initiatives that result in exceptional outcomes. We believe in remaining curious and promoting continuous learning. You will have access to extensive resources and professional development opportunities that will expand your knowledge and keep you at the forefront of the industry. Your curiosity will fuel innovation, and we are committed to supporting your growth every step of the way. In addition to a rewarding work environment, we offer a competitive compensation package with comprehensive benefits designed to support your health, well-being, and financial security. At Vitech, you’ll find a workplace that challenges and empowers you to make meaningful contributions, develop your skills, and grow with a team that’s dedicated to excellence. If you’re ready to make a real impact in fintech and join a forward-thinking organization, explore the incredible opportunities that await at Vitech. Apply today and be part of our journey to drive transformative change!
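Circling back to the Aurora PostgreSQL duties in the role description above, here is a hedged monitoring sketch using boto3; the region and cluster identifier are placeholder assumptions. It reports cluster status and which member is currently the writer versus a reader.

import boto3

rds = boto3.client("rds", region_name="ap-south-1")  # placeholder region

resp = rds.describe_db_clusters(DBClusterIdentifier="aurora-pg-prod")  # assumed name
for cluster in resp["DBClusters"]:
    print(f"cluster={cluster['DBClusterIdentifier']} status={cluster['Status']}")
    # Each member reports whether it is the writer or a read replica,
    # which is what failover and read-scaling checks care about.
    for member in cluster["DBClusterMembers"]:
        role = "writer" if member["IsClusterWriter"] else "reader"
        print(f"  {member['DBInstanceIdentifier']}: {role}")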
Posted 1 month ago
0 years
0 Lacs
Bengaluru
On-site
Ready to build the future with AI?
At Genpact, we don't just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant - Senior Performance DBA (Teradata)!

Responsibilities
Strong expertise in writing and optimizing Teradata SQL queries, TPT scripts, etc.
Manage production/development database performance.
Review Teradata system reports and provide a "performance assessment" report with recommendations to optimize the system.
Investigate and quantify opportunities from "performance assessment" reports and apply best practices in each of the areas.
Monitor Teradata system performance using different Viewpoint portlets.
Review poor-performing queries generated from BI/ETL tools and provide best-practice recommendations on how to simplify and restructure views and apply PPI or other index changes.
Closely monitor the performance of various workgroups on the system and make sure data is available to the business per SLA requirements.
Optimal index analysis: review index usage on tables and recommend adding or dropping indexes for optimal data access.
Review uncompressed tables, analyse their usage, and implement compression to save space and reduce I/O activity, using algorithms like MVC, BLC, and ALC.
Optimize locking statements in views, macros, and queries to eliminate blocking contention.
Review spool limits for users and recommend optimal limits for ad-hoc users to prevent runaway queries from over-consuming system resources.
Check for mismatched data types in the system and make them consistent to avoid costly translations during query processing.
Review SET tables and check for options to convert them to MULTISET to avoid the costly duplicate-row-checking operation.
Review large-scan tables on the system and analyze them for PPI, MLPPI, compression, secondary indexes, and join indexes.
Analyze various applications, understand their space requirements, and segregate disk space under the categories of perm, spool, and temp space.
Set up the database hierarchy, including database creation and management of objects such as users, roles, profiles, tables, and views.
Maintain profiles, roles, access rights, and permissions for Teradata user groups and objects.
Generate periodic performance reports using PDCR and identify bottlenecks in system performance. Establish PDCR canary performance baselines and utilize standard canary queries to identify variance from baseline.
Make effective use of TASM and priority distribution to penalize resource-intensive queries, give high priority to business-critical workloads, and throttle different workloads for optimal throughput; provide performance reports to check workload-management health.

Qualifications we seek in you!
Minimum qualifications
Teradata performance DBA experience.
Experience reviewing poor-performing queries and providing best-practice recommendations on how to simplify and restructure views and apply PPI or other index changes.
Statistics management and optimization.
Exposure to DWH environments (knowledge of ETL/DI/BI reporting).
Exposure to troubleshooting TPT, FastLoad, MultiLoad, FastExport, BTEQ, and TPump errors; should be good at error handling.
Experience fine-tuning application parameters and session counts to ensure optimal functioning of applications.
Well versed in ticketing systems, production change requests, and Teradata incident management.
Should be good at automating various processes.
Ability to write efficient SQL and exposure to query tuning.
Preferably understands normalization and de-normalization concepts.
Preferably has exposure to visualization tools like Tableau and Power BI.
Preferably has good working knowledge of UNIX shell and Python scripting.
Good to have: exposure to FSLDM.
Good to have: exposure to the GCFR framework.

Why join Genpact?
Lead AI-first transformation: build and scale AI solutions that redefine industries.
Make an impact: drive change for global enterprises and solve business challenges that matter.
Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Principal Consultant
Primary Location: India-Bangalore
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 27, 2025, 11:07:39 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
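As a hedged illustration of the performance-assessment work described above, the sketch below pulls yesterday's most CPU-intensive queries from the DBQL log using the teradatasql driver; it assumes DBQL query logging is enabled, and the host and credentials are placeholders.

import teradatasql  # official Teradata driver: pip install teradatasql

with teradatasql.connect(host="tdprod", user="dba", password="secret") as conn:
    with conn.cursor() as cur:
        # Surface the heaviest queries by AMP CPU time as tuning candidates;
        # QueryText in DBQLogTbl holds only a truncated statement prefix.
        cur.execute("""
            SELECT TOP 20 UserName, AMPCPUTime, TotalIOCount, QueryText
            FROM DBC.DBQLogTbl
            WHERE CAST(StartTime AS DATE) = CURRENT_DATE - 1
            ORDER BY AMPCPUTime DESC
        """)
        for user, cpu, io, sql_text in cur.fetchall():
            print(f"{user}\tCPU={cpu}\tIO={io}\t{(sql_text or '')[:80]}")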
Posted 1 month ago
3.0 years
0 Lacs
Ahmedabad
On-site
Hey, What's the Role About?
We're building high-scale backend systems that power smart email sequences, mailbox warmup, real-time analytics, and more, all used daily by thousands of sales teams globally. As an SDE-2 Backend Engineer, you won't just write endpoints: you'll architect microservices, stream millions of events via Kafka, and optimize multi-terabyte datasets in MySQL and MongoDB. You'll work at the intersection of scale, reliability, and product impact, and own your services end-to-end. This role is for engineers who think in systems, care about clean, reliable code, and are driven by scale, performance, and ownership.

Why Join Us?
Purpose: You'll own key parts of the Saleshandy backend, powering mission-critical product features and directly shaping user experience and reliability.
Growth: You'll work on high-scale systems, Kafka eventing, job queues, real-time insights, and API infra. This is the best place to grow into SDE-3 or Tech Lead roles.
Motivation: If you enjoy solving architecture problems, optimizing bottlenecks, and working with high-agency engineers — this is the team you want to be on.

Your Main Goals
1. Design and Ship Scalable Microservices (within 60 days): Build and deploy new backend services using Node.js and clean architectural boundaries. Prioritize scalability, separation of concerns, and reusability. Goal: ship 1–2 production-grade services used in live flows with clear ownership and on-call readiness.
2. Optimize MySQL and MongoDB Access Patterns (within 45 days): Fix slow queries, optimize schema design, and reduce DB load using smart indexes and query planning. Target: improve average query latency for key endpoints by 30%+ and reduce DB CPU usage on heavy tables.
3. Handle Real-Time Streams Using Kafka (within 90 days): Process, partition, and consume Kafka streams reliably. Implement idempotent processing, back-pressure handling, and scale-out consumers. Outcome: achieve a >99.99% success rate on the core Kafka stream with clean retry behavior and minimal lag.
4. Strengthen Backend Reliability and Observability (within 75 days): Improve service uptime, reduce flaky retries, and integrate robust logging/metrics using Grafana, Tempo, and Loki. Result: cut production escalations for your services by 50%, and ensure clear dashboards and alerts are in place.

Important Tasks
1. First 30 Days – Service Audit & Ownership Setup: Review existing backend services and take full ownership of 1–2 of them. Set up on-call readiness, alerts, and health dashboards.
2. Design a New Microservice: From product requirement to deployment — design, implement, test, and release a new backend service with real business value.
3. Kafka Consumption at Scale: Implement a Kafka consumer that handles 100k+ events/day with offset tracking, retries, a dead-letter queue, and observability (see the sketch after this posting).
4. Solve a High-Impact DB Performance Issue: Pick a slow API or query and improve it end-to-end — from EXPLAIN to caching to a schema change or code refactor.
5. Contribute to Infra Best Practices: Collaborate with SREs and the infra team to improve Docker setup, deploys, or rate-limiting logic. Bonus if you eliminate flaky CI/CD issues.
6. AI Tooling Adoption: Use Copilot or Cursor to write, test, or debug code faster, such as writing unit tests for a queue processor or scaffolding boilerplate routes.
7. Participate in System Design Reviews: Present at least one design document in engineering review showing trade-offs, performance considerations, and reasoning.
Experience Level: 3–5 years
Tech Stack: Node.js, Kafka, MySQL, MongoDB, Redis, Docker

Culture Fit – Are You One of Us?
We don't do "just ship it." We do "ship fast, own the outcome, and make it scalable." You'll work with engineers who value clean design, measured trade-offs, and long-term thinking. We expect you to speak up, take charge, and grow fast. If you love solving system design puzzles, own your numbers, and get a thrill when things just work, you'll feel right at home here.
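Referencing the "Kafka Consumption at Scale" task above: a hedged sketch of manual offset commits plus a dead-letter topic, written in Python with kafka-python (the production stack here is Node.js); the broker address, topic names, and handle() logic are assumptions.

import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

BROKERS = ["localhost:9092"]  # placeholder broker list

def handle(event: dict) -> None:
    """Placeholder for idempotent business logic."""
    print("processed", event.get("id"))

consumer = KafkaConsumer(
    "email-events",                   # assumed topic name
    bootstrap_servers=BROKERS,
    group_id="sequence-worker",
    enable_auto_commit=False,         # advance offsets only after handling
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
dlq = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for msg in consumer:
    try:
        handle(msg.value)
    except Exception:
        # Park the poisoned event instead of blocking the partition.
        dlq.send("email-events.dlq", msg.value)
        dlq.flush()
    consumer.commit()  # per-message commit keeps the sketch simple, not fast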
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Hey, What's the Role About?
We're building high-scale backend systems that power smart email sequences, mailbox warmup, real-time analytics, and more, all used daily by thousands of sales teams globally. As an SDE-2 Backend Engineer, you won't just write endpoints: you'll architect microservices, stream millions of events via Kafka, and optimize multi-terabyte datasets in MySQL and MongoDB. You'll work at the intersection of scale, reliability, and product impact, and own your services end-to-end. This role is for engineers who think in systems, care about clean, reliable code, and are driven by scale, performance, and ownership.

Why Join Us?
Purpose: You'll own key parts of the Saleshandy backend, powering mission-critical product features and directly shaping user experience and reliability.
Growth: You'll work on high-scale systems, Kafka eventing, job queues, real-time insights, and API infra. This is the best place to grow into SDE-3 or Tech Lead roles.
Motivation: If you enjoy solving architecture problems, optimizing bottlenecks, and working with high-agency engineers — this is the team you want to be on.

Your Main Goals
Design and Ship Scalable Microservices (within 60 days): Build and deploy new backend services using Node.js and clean architectural boundaries. Prioritize scalability, separation of concerns, and reusability. Goal: ship 1–2 production-grade services used in live flows with clear ownership and on-call readiness.
Optimize MySQL and MongoDB Access Patterns (within 45 days): Fix slow queries, optimize schema design, and reduce DB load using smart indexes and query planning. Target: improve average query latency for key endpoints by 30%+ and reduce DB CPU usage on heavy tables.
Handle Real-Time Streams Using Kafka (within 90 days): Process, partition, and consume Kafka streams reliably. Implement idempotent processing, back-pressure handling, and scale-out consumers. Outcome: achieve a >99.99% success rate on the core Kafka stream with clean retry behavior and minimal lag.
Strengthen Backend Reliability and Observability (within 75 days): Improve service uptime, reduce flaky retries, and integrate robust logging/metrics using Grafana, Tempo, and Loki. Result: cut production escalations for your services by 50%, and ensure clear dashboards and alerts are in place.

Important Tasks
First 30 Days – Service Audit & Ownership Setup: Review existing backend services and take full ownership of 1–2 of them. Set up on-call readiness, alerts, and health dashboards.
Design a New Microservice: From product requirement to deployment — design, implement, test, and release a new backend service with real business value.
Kafka Consumption at Scale: Implement a Kafka consumer that handles 100k+ events/day with offset tracking, retries, a dead-letter queue, and observability.
Solve a High-Impact DB Performance Issue: Pick a slow API or query and improve it end-to-end — from EXPLAIN to caching to a schema change or code refactor.
Contribute to Infra Best Practices: Collaborate with SREs and the infra team to improve Docker setup, deploys, or rate-limiting logic. Bonus if you eliminate flaky CI/CD issues.
AI Tooling Adoption: Use Copilot or Cursor to write, test, or debug code faster, such as writing unit tests for a queue processor or scaffolding boilerplate routes.
Participate in System Design Reviews: Present at least one design document in engineering review showing trade-offs, performance considerations, and reasoning.
Experience Level: 3–5 years
Tech Stack: Node.js, Kafka, MySQL, MongoDB, Redis, Docker

Culture Fit – Are You One of Us?
We don't do "just ship it." We do "ship fast, own the outcome, and make it scalable." You'll work with engineers who value clean design, measured trade-offs, and long-term thinking. We expect you to speak up, take charge, and grow fast. If you love solving system design puzzles, own your numbers, and get a thrill when things just work, you'll feel right at home here.
Posted 1 month ago
5.0 years
20 - 27 Lacs
Chennai, Tamil Nadu, India
On-site
Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities
Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
Configure and monitor logical and streaming replication, high availability, and failover architectures.
Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications
Must-Have
5+ years of PostgreSQL administration in production.
Expertise in query tuning, indexing, and vacuum strategies.
Proficiency with Linux shell scripting and automation tools.
Hands-on experience with replication, HA, and disaster recovery.
Preferred
Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights
Engineer-led culture that values technical depth, peer learning, and continuous improvement.
Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.

Skills: postgresql, shell scripting, vacuum strategies, dba, linux shell scripting, python, disaster recovery, automation tools, cloud-hosted postgresql, indexing, query tuning, replication, ansible, high availability, kubernetes, postgresql administration
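A hedged sketch of the vacuum-strategy and replication-monitoring duties above, run against the primary with psycopg2; connection details are placeholders, and the replay_lag column requires PostgreSQL 10+.

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("host=pgprod dbname=appdb user=dba password=secret")
cur = conn.cursor()

# Tables accumulating dead tuples are vacuum/bloat candidates.
cur.execute("""
    SELECT relname, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10
""")
for rel, dead, last_av in cur.fetchall():
    print(f"{rel}: dead_tuples={dead} last_autovacuum={last_av}")

# Streaming-replication lag per connected standby.
cur.execute("SELECT client_addr, state, replay_lag FROM pg_stat_replication")
for addr, state, lag in cur.fetchall():
    print(f"standby {addr}: state={state} replay_lag={lag}")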
Posted 1 month ago
5.0 years
20 - 27 Lacs
Greater Kolkata Area
On-site
Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities
Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
Configure and monitor logical and streaming replication, high availability, and failover architectures.
Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications
Must-Have
5+ years of PostgreSQL administration in production.
Expertise in query tuning, indexing, and vacuum strategies.
Proficiency with Linux shell scripting and automation tools.
Hands-on experience with replication, HA, and disaster recovery.
Preferred
Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights
Engineer-led culture that values technical depth, peer learning, and continuous improvement.
Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.

Skills: postgresql, shell scripting, vacuum strategies, dba, linux shell scripting, python, disaster recovery, automation tools, cloud-hosted postgresql, indexing, query tuning, replication, ansible, high availability, kubernetes, postgresql administration
Posted 1 month ago
5.0 years
20 - 27 Lacs
Hyderabad, Telangana, India
On-site
Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities
Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
Configure and monitor logical and streaming replication, high availability, and failover architectures.
Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications
Must-Have
5+ years of PostgreSQL administration in production.
Expertise in query tuning, indexing, and vacuum strategies.
Proficiency with Linux shell scripting and automation tools.
Hands-on experience with replication, HA, and disaster recovery.
Preferred
Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights
Engineer-led culture that values technical depth, peer learning, and continuous improvement.
Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.

Skills: postgresql, shell scripting, vacuum strategies, dba, linux shell scripting, python, disaster recovery, automation tools, cloud-hosted postgresql, indexing, query tuning, replication, ansible, high availability, kubernetes, postgresql administration
Posted 1 month ago
5.0 years
20 - 27 Lacs
Pune, Maharashtra, India
On-site
Industry: Information Technology | Database & Infrastructure Services

We are a fast-scaling managed services provider helping enterprises in finance, retail, and digital-native sectors keep mission-critical data available, secure, and high-performing. Our on-site engineering team in India safeguards petabytes of transactional data and drives continuous optimisation across hybrid environments built on open-source technologies.

Role & Responsibilities
Administer and optimise PostgreSQL clusters across development, staging, and production workloads.
Design, implement, and automate backup, recovery, and disaster-recovery strategies with point-in-time restore.
Tune queries, indexes, and configuration parameters to achieve sub-second response times and minimise resource consumption.
Configure and monitor logical and streaming replication, high availability, and failover architectures.
Harden databases with role-based security, encryption, and regular patching aligned to compliance standards.
Collaborate with DevOps to integrate CI/CD, observability, and capacity planning into release pipelines.

Skills & Qualifications
Must-Have
5+ years of PostgreSQL administration in production.
Expertise in query tuning, indexing, and vacuum strategies.
Proficiency with Linux shell scripting and automation tools.
Hands-on experience with replication, HA, and disaster recovery.
Preferred
Exposure to cloud-hosted PostgreSQL (AWS RDS, GCP Cloud SQL).
Knowledge of Ansible, Python, or Kubernetes for infrastructure automation.

Benefits & Culture Highlights
Engineer-led culture that values technical depth, peer learning, and continuous improvement.
Access to enterprise-grade lab environments and funded certifications on PostgreSQL and cloud platforms.
Competitive salary, health insurance, and clear growth paths into architecture and SRE roles.

Work Location: On-site, India.

Skills: postgresql, shell scripting, vacuum strategies, dba, linux shell scripting, python, disaster recovery, automation tools, cloud-hosted postgresql, indexing, query tuning, replication, ansible, high availability, kubernetes, postgresql administration
Posted 1 month ago
3.0 years
7 - 9 Lacs
Hyderābād
On-site
Country: India
Working Schedule: Full-Time
Work Arrangement: Hybrid
Relocation Assistance Available: Yes
Posted Date: 26-Jun-2025
Job ID: 10120

Description and Requirements

Job Responsibilities
Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex IBM DB2/LUW databases from version 10.1 to 11.5/12.1 on AIX and RedHat Linux.
Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, and conversion strategies, and performance tuning; manages job scheduling, application releases, database changes, and compliance.
Identifies and resolves problems using structured tools and techniques.
Provides technical assistance and mentoring to the team in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
Basic knowledge of writing scripts to automate routine DBA tasks.

Education, Technical Skills & Other Critical Requirements

Education: Bachelor's degree in Computer Science, Information Systems, or another related field with 3+ years of IT and infrastructure engineering work experience.

Experience (in years): 3+ years total IT experience and 2+ years relevant experience in UDB database administration.

Technical Skills
2+ years of related work experience with database design, installation, configuration, and implementation.
Limited knowledge of key IBM DB2/LUW utilities such as HADR, reorg, runstats, and load on Linux/Unix/Windows.
At least 1+ years of experience working on Unix and Linux operating systems.
Basic experience with database upgrades and patching.
Basic experience with cloud computing (Azure, AWS RDS, IBM Cloud Pak).
Experience administering IBM Informix databases is a big plus.
Working knowledge of backup and recovery utilities like Rubrik and NetWorker.
Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Working knowledge of IBM DB2 LUW replication (DB2 SQL replication and Q Replication, a queue-based replication), as well as third-party replication tools.
Handling data security (user access, groups, and roles).
Ability to work closely with IBM support (PMR) to resolve any ongoing production issues.
Knowledge of the ITSM process, with change, incident, problem, and service management using ServiceNow tools.
Good database analytical skills to improve application and database performance.

Other Critical Requirements
Automation tooling and programming such as Ansible, shell scripting, and MS PowerShell is preferable.
Basic database monitoring with observability tools (Elastic) is preferable.
Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
Basic project management experience in creating documents and presentations.
Demonstrated ability to work independently and in a team environment.
Ability to work 24x7 rotational shifts to support production, development, and test databases.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
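As a hedged sketch of routine DBA scripting for this role, the snippet below checks HADR status through the MON_GET_HADR table function using the ibm_db driver; the connection string is a placeholder and DB2 10.5+ is assumed.

import ibm_db  # pip install ibm_db

conn = ibm_db.connect(
    "DATABASE=APPDB;HOSTNAME=db2prod;PORT=50000;PROTOCOL=TCPIP;UID=dba;PWD=secret",
    "", "",
)

# MON_GET_HADR reports the HADR role, state, and connection status per standby.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT HADR_ROLE, HADR_STATE, HADR_CONNECT_STATUS FROM TABLE(MON_GET_HADR(NULL))",
)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(f"role={row['HADR_ROLE']} state={row['HADR_STATE']} "
          f"connect={row['HADR_CONNECT_STATUS']}")
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)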
Posted 1 month ago
5.0 - 7.0 years
2 - 6 Lacs
Mohali
On-site
Location: Mohali
Experience: 5-7 years
Work From Office

Role And Responsibilities
✅ Managing, optimizing, and securing our cloud-based SQL databases, ensuring high availability and performance.
✅ Design and implement scalable and secure SQL database structures in AWS and GCP environments.
✅ Plan and execute data migration from on-premises or legacy systems to AWS and GCP cloud platforms.
✅ Monitor database performance, identify bottlenecks, and fine-tune queries and indexes for optimal efficiency.
✅ Implement and manage database security protocols, including encryption, access controls, and compliance with regulations.
✅ Develop and maintain robust backup and recovery strategies to ensure data integrity and availability.
✅ Perform regular maintenance tasks such as patching, updates, and troubleshooting database issues.
✅ Work closely with developers, DevOps, and data engineers to support application development and deployment.
✅ Ensure data quality, consistency, and governance across distributed systems.
✅ Keep up with emerging technologies, cloud services, and best practices in database management.

Required Skills:
✅ Proven experience as a SQL Database Administrator with expertise in AWS and GCP cloud platforms.
✅ Strong knowledge of SQL database design, implementation, and optimization.
✅ Experience with data migration to cloud environments.
✅ Proficiency in performance monitoring and query optimization.
✅ Knowledge of database security protocols and compliance regulations.
✅ Familiarity with backup and disaster recovery strategies.
✅ Excellent troubleshooting and problem-solving skills.
✅ Strong collaboration and communication skills.
✅ Knowledge of DevOps integration.
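As a hedged illustration of the backup-and-recovery bullet above (AWS side only), this sketch uses boto3 to flag RDS instances whose newest automated snapshot is older than 24 hours; the region and freshness threshold are assumptions.

from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds", region_name="ap-south-1")  # placeholder region
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)

for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    snaps = rds.describe_db_snapshots(
        DBInstanceIdentifier=name, SnapshotType="automated"
    )["DBSnapshots"]
    # Completed snapshots carry a SnapshotCreateTime; in-flight ones may not.
    latest = max((s["SnapshotCreateTime"] for s in snaps
                  if "SnapshotCreateTime" in s), default=None)
    status = "OK" if latest and latest > cutoff else "STALE OR MISSING"
    print(f"{name}: last automated snapshot = {latest} [{status}]")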
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Senior PostgreSQL DBC | India | IST | Remote | Work from Home

Available Shifts: PST (10 PM - 6 AM IST)

Why Pythian?
At Pythian, we are experts in strategic database and analytics services, driving digital transformation and operational excellence. Pythian, a multinational company, was founded in 1997 and started by ensuring the reliability and performance of mission-critical databases. We quickly earned a reputation for solving tough data challenges. We were there when the industry moved from on-premises to cloud environments, and as enterprises sought more from their data, we expanded our competencies to include advanced analytics. Today, we empower organizations to embrace transformation and leverage advanced technologies, including AI, to stay competitive. We deliver innovative solutions that meet each client's data goals and have built strong partnerships with Google Cloud, AWS, Microsoft, Oracle, SAP, and Snowflake. The powerful combination of our extensive expertise in data and cloud and our ability to keep on top of the latest bleeding-edge technologies makes us the perfect partner to help mid- and large-sized businesses transform to stay ahead in today's rapidly changing digital economy.

Why you?
Are you a senior PostgreSQL professional who lives in India (any location)? Are you community minded? Do you blog or contribute to the open-source community? Are you inspired by ever-shifting challenges, constant growth, and collaboration with a team of peers who push you constantly to up your game? At Pythian, we are actively shaping what it means to be an open-source database engineer and administrator, and we want you to be a part of the world's top team of MongoDB, Cassandra, and MySQL professionals. If this is you, and you wonder what it would be like to work at Pythian, reach out to us and find out!

Intrigued to see what life is like at Pythian? Check out #pythianlife on LinkedIn and follow @loveyourdata on Instagram! Not the right job for you? Check out what other great jobs Pythian has open around the world! Pythian Careers

What will you be doing?
As a Senior PostgreSQL Consultant (DBC) you will work as part of Pythian's open source team and supply complete support for all aspects of database and application infrastructure to a variety of our customers. Our collaborative environment means everyone works together to solve complex puzzles and develop innovative solutions for our customers. You'll work closely with customer teams to understand their needs, in both a project-based and a long-term support capacity. You'll create and document database standards and create optimized queries, indexes, and data structures. You'll monitor and support database environments and serve as an escalation point for complex troubleshooting and interactive production support. You'll use database-vendor-provided tools and Pythian-developed accelerators to performance-tune various database systems, specific queries, and application scenarios; diagnose and address database performance issues using performance monitors and various tuning techniques; and identify areas of opportunity and recommend appropriate improvement suggestions. Cross-functional training in NoSQL, Site Reliability Engineering, and DevOps methodologies is encouraged.
When you're not fixing things, you'll be authoring new blog posts on interesting topics for our open-source community to digest, creating new articles in our customer-facing knowledge base for frequently seen issues, and hosting webinars, among other things like participating in conferences and meetups to promote Pythian to the open-source community.

What do we need from you?
While we understand you might not have everything on the list, the successful candidate for this PostgreSQL & MySQL job is likely to have skills such as:
Knowledge and experience in installing, configuring, and upgrading PostgreSQL and MySQL databases and the tools relevant to PostgreSQL administration.
Experience administering PostgreSQL and MySQL in virtualized and cloud environments, especially AWS, GCP, or Azure.
Experience with scripting (Bash/Python) and software development (C++, Java, Go).
Experience with automation technologies such as Ansible, Terraform, Puppet, Chef, or SALT.
Previous remote working experience is a plus.
Debugging skills and the ability to troubleshoot methodically, identifying and applying fixes for known errors, and, when necessary, the capacity to think outside the box to resolve complex issues.
Very good documentation skills.

Nice-to-haves include:
Understanding of current IT service standards such as ITIL.
Being a contributor to open-source projects relevant to PostgreSQL, MySQL, or other database or infrastructure software.

What do you get in return?
Love your career: competitive total rewards and salary package. Blog during work hours; take a day off and volunteer for your favorite charity.
Love your work/life balance: work remotely from your home with flexibility; there's no daily travel requirement to an office! All you need is a stable internet connection.
Love your coworkers: collaborate with some of the best and brightest in the industry!
Love your development: hone your skills or learn new ones with our substantial training allowance; participate in professional development days, attend training, become certified, whatever you like!
Love your workspace: we give you all the equipment you need to work from home, including a laptop with your choice of OS, and an annual budget to personalize your work environment!
Love yourself: Pythian cares about the health and well-being of our team. You will have an annual wellness budget to make yourself a priority (use it on gym memberships, massages, fitness, and more). Additionally, you will receive a generous amount of paid vacation and sick days, as well as a day off to volunteer for your favorite charity.

Disclaimer
The successful applicant will need to fulfill the requirements necessary to obtain a background check. Accommodations are available upon request for candidates taking part in any aspect of the selection process.
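In the spirit of the query-tuning work described above, a hedged sketch: list the heaviest statements from pg_stat_statements via psycopg2. It assumes the extension is installed and preloaded, uses the PostgreSQL 13+ column names, and the connection details are placeholders.

import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect("host=clientdb dbname=app user=consultant password=secret")
cur = conn.cursor()

# Requires CREATE EXTENSION pg_stat_statements and the matching
# shared_preload_libraries entry; total_exec_time/mean_exec_time are PG13+ names.
cur.execute("""
    SELECT calls,
           round(total_exec_time::numeric, 1) AS total_ms,
           round(mean_exec_time::numeric, 2)  AS mean_ms,
           left(query, 80)                    AS query
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10
""")
for calls, total_ms, mean_ms, query in cur.fetchall():
    print(f"{calls:>8} calls  {total_ms:>10} ms total  {mean_ms:>8} ms avg  {query}")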
Posted 1 month ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements Description and Requirements
Position Summary
The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the MSSQL database platform (SQL Server 2019/2022). The Database Administrator is part of the database end-to-end delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secured, high-performing, and optimized database solutions. The Database Administrator specializes in the SQL database platform.
Job Responsibilities
Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases.
Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change, and compliance.
Identifies and resolves problems utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards.
Implements industry best practices while performing database administration tasks.
Works in an Agile model with an understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Able to debug production issues by analyzing the logs directly and using tools like Splunk.
Begins tackling organizational impediments.
Learns new technologies based on demand and helps team members by coaching and assisting.
Education, Technical Skills & Other Critical Requirements
Education: Bachelor's degree in computer science, Information Systems, or another related field with 14+ years of IT and Infrastructure engineering work experience.
Experience (In Years): 14+ years total IT experience & 10+ years relevant experience in SQL Server + Sybase databases.
Technical Skills
Database Management: Extensive experience in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
Data Infrastructure & Security: Expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
Backup & Recovery: Expert in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
Cloud Computing & Scripting: Experienced in cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations.
Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
Strong database analytical skills to improve application performance.
Expert working knowledge of database performance tuning, backup & recovery, Infrastructure as Code, and observability tools (Elastic).
Automation tools and programming such as Ansible, Python, and PowerShell (a minimal sketch follows at the end of this posting).
Strong knowledge of ITSM processes and tools (ServiceNow).
Strong knowledge of Agile/SAFe methodologies.
Ability to work 24x7 rotational shifts to support the database and Splunk platforms.
Other Critical Requirements
Excellent analytical and problem-solving skills.
Experience managing geographically distributed and culturally diverse workgroups, with strong team management, leadership, and coaching skills.
Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues with conclusions and recommendations to stakeholders.
Prior experience in handling stateside and offshore stakeholders.
Experience in creating and delivering business presentations.
Demonstrated ability to work independently and in a team environment.
About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies; providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
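The automation sketch referenced above: a minimal Python example, assuming the pyodbc driver, ODBC Driver 18, and a placeholder server name; it flags databases with no full backup recorded in SQL Server's standard msdb.dbo.backupset history within a given window.

```python
# A minimal sketch, assuming pyodbc; the server name is a placeholder.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};"
            "SERVER=sqlprod01;DATABASE=msdb;"
            "Trusted_Connection=yes;TrustServerCertificate=yes;")

def databases_without_recent_full_backup(hours: int = 24) -> list[str]:
    """List databases lacking a full ('D') backup in the last N hours."""
    sql = """
        SELECT d.name
        FROM sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS b
          ON b.database_name = d.name
         AND b.type = 'D'
         AND b.backup_finish_date > DATEADD(HOUR, -?, GETDATE())
        WHERE d.name <> 'tempdb' AND b.backup_set_id IS NULL;
    """
    with pyodbc.connect(CONN_STR) as conn:
        return [row.name for row in conn.execute(sql, hours)]

if __name__ == "__main__":
    for db in databases_without_recent_full_backup():
        print("missing recent full backup:", db)
```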
Posted 1 month ago
6.0 - 11.0 years
0 - 2 Lacs
Bengaluru
Remote
Location: PAN India
Work Timings: 24/7 rotational shifts
Experience: 6–10 years
Mandatory Skills: the key skills required are listed below.
JD:
Good hands-on experience in database administration and a deep understanding of DBA concepts and best practices.
Experience with installation, configuration, and upgrading of MongoDB software and products.
Creating and sizing database storage structures and database objects.
Design/implement backup and recovery setups based on application requirements.
Experience managing Docker MongoDB images on Linux servers.
Excellent knowledge of MongoDB internals: storage engines, concurrency, memory, journaling, and checkpoints.
Troubleshooting performance issues: profiler, indexes, server status, and host metrics.
MongoDB security: auditing, Kerberos, authorization, and the internal/external authentication methods available in the Enterprise MongoDB version.
Strong experience implementing multi-data-center replica sets and sharded clusters in Linux and Windows environments.
Ops Manager: implementation, monitoring, backup, and automation.
Experience with MongoDB clustering, sharding, and scaling across data centers (see the sketch below).
Expert coding skills with at least one of shell script, Python, or Java.
Expert in MongoDB CRUD operations and the aggregation framework.
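The sketch referenced above: a minimal Python example of the replica-set and aggregation-framework skills listed, assuming pymongo and placeholder hosts, database, and collection names.

```python
# A minimal sketch, assuming pymongo; hosts, database, and collection are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://mongo1:27017,mongo2:27017/?replicaSet=rs0")

# Replica set health check: expect one PRIMARY plus healthy SECONDARY members.
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"], "health:", member["health"])

# Aggregation framework: total spend per customer, highest first.
pipeline = [
    {"$match": {"status": "complete"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 5},
]
for doc in client.shop.orders.aggregate(pipeline):
    print(doc)
```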
Posted 1 month ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.
This role is for the Noida location, work from office 5 days a week. The shift is 12 noon to 9 pm.
Qualification
8+ years of DBA experience, generally with database technologies.
At least 5 years' experience in PostgreSQL and a good understanding of change control management in cloud database services.
Experience with automation platform tools like Ansible and GitHub Actions, and experience in any of the public cloud solutions.
Proficiency in writing automation procedures, functions, and packages for administration and application support.
Strong Linux platform skills and an understanding of network, storage, tiered application environments, and security. Cloud deployment experience would be a big plus.
Experience with multi-tenant database design would be a plus (a minimal sketch follows at the end of this posting).
Job Description
Database Planning & Monitoring: Monitor, plan, and coordinate secure data solutions in alignment with data requirements; design and build data models and schemas to support application requirements; create and maintain databases, tables, views, stored procedures, and other database objects; create database metric alerts; plan backups, test recovery procedures, and clone databases.
Database Performance & Optimization: Performance-tune databases, manage storage space, and administer database instances; create and manage keys and indexes; work closely with the application development teams to resolve any performance-related issues and provide application support.
Documentation & Process: Produce and maintain documentation on database structure, procedures, and data standards.
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains:
What information we collect during our application and recruitment process and why we collect it;
How we handle that information; and
How to access and update that information.
Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
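The sketch referenced in the qualifications above: a minimal schema-per-tenant provisioning example in Python, assuming psycopg2; the DSN, tenant name, and table layout are illustrative placeholders, not Orion's actual design.

```python
# A minimal sketch of schema-per-tenant provisioning; all names are placeholders.
import psycopg2
from psycopg2 import sql

def provision_tenant(dsn: str, tenant: str) -> None:
    """Create an isolated schema and a starter table for a new tenant."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        schema = sql.Identifier(tenant)
        # sql.Identifier quotes the name safely, avoiding injection via tenant.
        cur.execute(sql.SQL("CREATE SCHEMA IF NOT EXISTS {}").format(schema))
        cur.execute(sql.SQL("""
            CREATE TABLE IF NOT EXISTS {}.accounts (
                id         bigserial PRIMARY KEY,
                name       text NOT NULL,
                created_at timestamptz NOT NULL DEFAULT now()
            )
        """).format(schema))

if __name__ == "__main__":
    provision_tenant("dbname=app user=dba", "tenant_acme")
```

Schema-per-tenant is only one multi-tenancy pattern; row-level security over shared tables is a common alternative when tenant counts grow large.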
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, European Union's leading bank with an international reach. With delivery centers located in Bengaluru, Chennai and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines: Corporate and Institutional Banking, Investment Solutions and Retail Banking for BNP Paribas across the Group. Driving innovation and growth, we are harnessing the potential of over 10000 employees to provide support and develop best-in-class solutions.
About BNP Paribas Group
BNP Paribas is the European Union's leading bank and key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group's commercial & personal banking and several specialized businesses including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong diversified and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporate and institutional clients) to realize their projects through solutions spanning financing, investment, savings and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial & personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas as well as a solid and fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future, while ensuring the Group's performance and stability.
Commitment to Diversity and Inclusion
At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to, their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status etc. As a global bank, we truly believe that inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.
About Business Line/Function
Securities Services department of CIB
Job Title: PLSQL Developer
Date: 05-Jun-2025
Department: Securities Services
Location: Chennai
Business Line / Function: Banking Services
Reports To (Direct): Project Manager
Grade (if applicable):
Number Of Direct Reports:
Directorship / Registration: NA
Position Purpose
This position is for Tax processing application development. The candidate should possess the relevant technical skills to develop code for various flagship and technical migration projects.
The candidate should develop a good understanding of the existing application (functional and technical).
Responsibilities
Direct Responsibilities
The Oracle developer will be performing the following, with consistent work experience of 8 years in Oracle SQL and PL/SQL development expected:
Develop schemas, tables, indexes, sequences, constraints, functions, procedures, packages, collections, users, and roles (a sketch of invoking such PL/SQL from application code follows this section).
Understand business requirements and accordingly develop database models.
Provide optimal design solutions to improve system quality and efficiency.
Follow best practices for database design.
Perform capacity analysis and oversee database tuning.
Maintain technical documentation for reference purposes.
Write complex code and queries and participate in code reviews.
Perform design reviews, modify code, and test upgrades.
Work closely with other developers to improve applications and establish best practices.
An ability to understand front-end user requirements (Java) and a problem-solving attitude.
In-depth understanding of data management (e.g. permissions, recovery, security, and monitoring).
Provide training and knowledge sharing with the development team.
Maintain all databases required for development, testing, pre-production, and production usage.
Take care of database design and implementation.
Implement and maintain database security (create and maintain users and roles, assign privileges).
Performance tuning and monitoring; proactively propose solutions.
Perform data anonymization.
Technical & Behavioral Competencies
Knowledge and/or experience of the financial services industry.
Good understanding of the software development life cycle and Agile/iterative methodology.
Technical competency in the following:
Experience in SQL and PL/SQL development - Oracle Database.
Good understanding of ER diagrams and data flows.
Good to have experience in DB design & modelling.
Hands-on experience with performance tuning tools and debugging.
Ability to perform technical analysis and design, and identify impacts (functional/technical).
Prior experience in high-volume / mission-critical systems is a plus.
Contributing Responsibilities
Work in duet with our offshore and on-site technical team to coordinate the database initiatives.
Perform detailed technical analysis with impacts (technical/functional) and prepare the technical specification document.
Mentor and carry out database peer code reviews of the development team.
Bug fixing & performance optimization.
Keep the development team up-to-date about best practices and on-site feedback.
Challenge the time-response performance and the maintainability of the treatments/queries.
Maintain data standards and security measures; anonymize production data and import it into development and testing environments.
Performance tuning and monitoring of all databases; proactively propose solutions in case of issues.
Develop And Unit Test The Code
Develop the code to satisfy the business requirements.
Unit test the code and fix all the defects arising out of the unit testing.
Properly check in the code to avoid issues arising out of configuration management.
Deploy And Integration Test The Application Developed
Deploy the developed code into the IST environments and perform the integration testing by working with the cross teams.
Fix all the defects arising out of IST and UAT testing.
Keep the development team up-to-date about the best practices and feedback.
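The sketch referenced above: a minimal Python example of exercising packaged PL/SQL from application code, assuming the python-oracledb driver; the DSN, credentials, and the tax_pkg.process_batch procedure are hypothetical placeholders, purely for illustration.

```python
# A minimal sketch, assuming python-oracledb; all names below are placeholders.
import oracledb

conn = oracledb.connect(user="tax_app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Call a (hypothetical) packaged procedure with an IN bind and an OUT bind.
status = cur.var(str)                          # holder for the OUT parameter
cur.callproc("tax_pkg.process_batch", [20250605, status])
print("batch status:", status.getvalue())

# Anonymous PL/SQL blocks bind the same way.
today = cur.var(str)
cur.execute("BEGIN :d := TO_CHAR(SYSDATE, 'YYYY-MM-DD'); END;", d=today)
print("db date:", today.getvalue())

conn.commit()
conn.close()
```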
Skills Referential
Behavioural Skills:
Adaptability
Ability to deliver / Results driven
Creativity & Innovation / Problem solving
Ability to share / pass on knowledge
Transversal Skills:
Ability to manage / facilitate a meeting, seminar, committee, training
Ability to understand, explain and support change
Analytical ability
Ability to develop others & improve their skills
Ability to develop and adapt a process
Education Level: Bachelor Degree or equivalent
Experience Level: At least 5 years
Posted 1 month ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
About the Role
We are looking for a detail-oriented and proactive Data Operations Lead to oversee the execution of brand metadata, taxonomy, and index management within our Digital Insights practice. This role combines team supervision, process governance, and hands-on operational delivery for proprietary SaaS platforms. The ideal candidate will also manage in-app technical support (e.g., Zendesk), ensuring users have a seamless experience with platform configurations and issue resolution.
Key Responsibilities
● Lead day-to-day operations of the data associate team to deliver high-quality outputs
● Oversee regular updates to proprietary industry indexes, brand metadata, and brand set tagging
● Manage technical support requests (account setup, platform issues) via Zendesk and coordinate with product teams
● Enforce strict SOP adherence and implement checks for tagging consistency and version control
● Act as the primary liaison with product teams and internal stakeholders for tool configuration and workflow alignment
● Identify and drive opportunities to reduce manual workload through process innovation or automation
● Track team performance and support skill development and QA efforts across junior roles
● Handle internal/external escalations and ensure service continuity
What We're Looking For
● 4–6 years of experience in data operations, metadata management, or platform support (preferably in digital/social media domains)
● Strong command of content tagging, taxonomy standards, and operational workflows
● Hands-on experience with support platforms (e.g., Zendesk) and content management tools
● Proven ability to lead small-to-mid-sized teams and manage delivery timelines
● Excellent problem-solving, documentation, and communication skills
● Experience collaborating with cross-functional stakeholders like insights, product, and engineering teams
● Passion for improving operational processes and driving scalable solutions
Posted 1 month ago
14.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backdrop
AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, & optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more.
Overview Of The Role
As a Senior System Analyst within a product development team in AVIZVA, you will be one of the front-liners of the team, spearheading your product's solution design activities alongside the product owners, system architect, and lead developers while collaborating with all business & technology stakeholders.
Job Responsibilities
Gather & analyze business, functional, and data requirements with the PO & relevant stakeholders, and derive system requirements from the same.
Work with the system architect to develop an understanding of the product's architecture, components, interactions, and flows, and build clarity around the technological nuances & constructs involved.
Develop an understanding of the various datasets relevant to the industry, their business significance, and logical structuring from a data modeling perspective.
Conduct in-depth industry research around datasets pertinent to the underlying problem statements.
Identify, (data) model & document the various entities, relationships & attributes along with appropriate cardinality and normalization.
Apply ETL principles to formulate & document data dictionaries, business rules, transformation & enrichment logic for various datasets in question, pertaining to various source & target systems in context.
Define data flows, validations & business rules driving the interchange of data between components of a system or multiple systems.
Define requirements around system integrations and exchange of data, such as systems involved, services (APIs) involved, nature of integration, handshake details (data involved, authentication, etc.).
Identify use-cases for exposure of data within an entity/dataset via APIs, define detailed API signatures, and create API documentation.
Provide clarifications to the development team around requirements, system design, integrations, data flows, and scenarios.
Support other product teams dependent on the APIs and integrations defined by your product team in understanding the endpoints, logic, business, entity structure, etc.
Provide backlog grooming support to the Product Owner through activities such as functional analysis and data analysis.
Skills & Qualifications
Bachelor's or Master's degree in Computer Science or any other analytically inclined field of study.
At least 5 years of relevant experience in roles such as Business Analyst, Systems Analyst or Business System Analyst.
Experience in analysing & defining systems involving varying levels of complexity in terms of underlying components, data, integrations, flows, etc.
Experience working with data (structured, semi-structured), data modeling, writing database queries with hands-on SQL, and working knowledge of Elasticsearch indexes (a minimal sketch follows at the end of this posting). Experience with unstructured data will be a huge plus.
Experience of identifying & defining entities & APIs, writing API specifications, & API consumer specifications.
Ability to map data from various sources to various consumer endpoints such as a system, a service, UI, process, sub-process, workflow, etc.
Experience with data management products based on ETL principles, involving multitudes of datasets, disparate data sources, and target systems.
A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods, to ideate the most aligned and appropriate technology solutions.
Exposure to diagrammatic analysis & elicitation of business processes, data & system flows using BPMN & UML diagrams, such as activity flows, use-cases, sequence diagrams, DFDs, etc.
Exposure to writing requirements documentation such as BRD, FRD, SRS, Use-Cases, User-Stories, etc.
An appreciation for systems' technological and architectural concepts, with an ability to speak about the components of an IT system, inter-component interactions, databases, external and internal data sources, data flows & system flows.
Experience (at least familiarity) of working with the Atlassian suite (Jira & Confluence).
Experience in product implementations & customisations through system configurations will be an added plus.
Experience of driving UX design activities in collaboration with graphic & UI design teams, by means of enabler tools such as wireframes, sketches, flow diagrams, information architecture, etc. will be an added plus.
Exposure to UX design & collaboration tools such as Figma, Zeplin, etc. will be an added plus.
Awareness or prior exposure to Healthcare & Insurance business & data will be a huge advantage.
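The sketch referenced in the skills list above: a minimal Python example of defining and querying an Elasticsearch index, assuming the elasticsearch-py 8.x client; the index name and member-data mapping are illustrative, not AVIZVA's actual data model.

```python
# A minimal sketch, assuming elasticsearch-py 8.x and a local node; all names
# below are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Explicit mapping: analyzed text for search, keyword/date for exact filters.
es.indices.create(
    index="members",
    mappings={
        "properties": {
            "member_id":   {"type": "keyword"},
            "full_name":   {"type": "text"},
            "plan_type":   {"type": "keyword"},
            "enrolled_on": {"type": "date"},
        }
    },
)

# Query: full-text match on the name, exact filter on the plan type.
hits = es.search(index="members", query={
    "bool": {
        "must":   [{"match": {"full_name": "jane"}}],
        "filter": [{"term": {"plan_type": "dental"}}],
    }
})
print(hits["hits"]["total"])
```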
Posted 1 month ago
6.0 years
7 - 14 Lacs
Mumbai Metropolitan Region
On-site
Role: SME - SOLR Engineer
Experience: 6+ years, especially with Solr databases
Education: B.E./B.Tech/MCA in Computer Science
Mode of Working: 5 days WFO, rotational shift
6+ years of experience working with Apache Lucene/Solr.
Experience in Apache Solr installation, configuration, administration, patching, upgradation, and migration.
Experience in implementing Solr builds of indexes, shards, and refined searches across semi-structured data sets (see the sketch below).
Understanding of basic Linux troubleshooting.
Familiarity with basic distributed systems concepts.
Good analytical ability.
Ability to execute and prioritize tasks and resolve issues without aid from a direct manager.
Ability to multi-task and context-switch effectively between different activities and teams.
Provide 24x7 support for critical production systems.
Experience working with Unix/Linux servers.
Excellent written and verbal communication.
Ability to organize and plan work independently.
Ability to work in a rapidly changing environment.
Skills: unix, patching, task prioritization, solr, computer science, migration, administration, apache lucene, database administration, communication, apache solr, linux, analytical ability, apache, solr database, distributed systems, upgradation, configuration
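The sketch referenced above: a minimal Python example, assuming a SolrCloud node at a placeholder address and the _default config set; it creates a sharded collection via the Collections API, indexes a document, and runs a refined search with a filter query.

```python
# A minimal sketch, assuming SolrCloud on localhost; collection name, shard
# count, and document fields are placeholders.
import requests

SOLR = "http://localhost:8983/solr"

# Create a sharded, replicated collection via the Collections API.
requests.get(f"{SOLR}/admin/collections", params={
    "action": "CREATE", "name": "products",
    "numShards": 2, "replicationFactor": 2,
    "collection.configName": "_default",
}).raise_for_status()

# Index a document and commit it so it becomes searchable.
requests.post(
    f"{SOLR}/products/update?commit=true",
    json=[{"id": "1", "name": "usb-c cable", "category": "electronics"}],
).raise_for_status()

# Refined search: free-text query narrowed by a filter query (fq).
resp = requests.get(f"{SOLR}/products/select", params={
    "q": "name:cable", "fq": "category:electronics", "rows": 10,
})
print(resp.json()["response"]["numFound"])
```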
Posted 1 month ago
4.0 years
7 - 9 Lacs
Mumbai Metropolitan Region
On-site
Role: L2 - SOLR Engineer
Experience: 4+ years, especially with Solr databases
Education: B.E./B.Tech/MCA in Computer Science
Mode of Working: 5 days WFO, rotational shift
4+ years of experience working with Apache Lucene/Solr.
Experience in Apache Solr installation, configuration, administration, patching, upgradation, and migration.
Experience in implementing Solr builds of indexes, shards, and refined searches across semi-structured data sets.
Understanding of basic Linux troubleshooting.
Familiarity with basic distributed systems concepts.
Good analytical ability.
Ability to execute and prioritize tasks and resolve issues without aid from a direct manager.
Ability to multi-task and context-switch effectively between different activities and teams.
Provide 24x7 support for critical production systems.
Experience working with Unix/Linux servers.
Excellent written and verbal communication.
Ability to organize and plan work independently.
Ability to work in a rapidly changing environment.
Skills: apache, apache lucene, unix, analytical ability, computer science, solr, database administration, linux, task prioritization, communication, apache solr
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
This job is with Morningstar, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Title: Data Consultant
Shift: UK
The Group
Morningstar's Data group provides data and analytics on hundreds of thousands of investment offerings, including stocks, mutual funds, and similar vehicles, along with real-time global market data on millions of equities, indexes, futures, options, commodities, and precious metals, in addition to foreign exchange and treasury markets. Morningstar is one of the largest independent sources of fund, equity, and credit data and research in the world, and our advocacy for investors' interests is the foundation of our company.
The Role
The Managed Investment Data Team requires a Data Consultant to drive Morningstar funds data coverage in European offshore markets. The employee will collaborate with all global and local teams. The Data Consultant is responsible for the relationships with asset managers and other actors of the industry, demonstrates our capabilities and quality, and returns the voice of the local market to the global and central teams. The consultant is the main point of contact between our centralized data team and the asset managers.
Responsibilities
Representing and explaining the Data workflows, processes, and methodologies to asset managers and clients.
Take ownership of acquiring and onboarding new and complex data sets as we keep expanding the quality of analytics delivered to clients.
Collaborate with members of the Data & Development Centres and global teams to align priorities based upon local business requirements.
Manage projects focused on enhancing our database to meet changes in our industry and clients' needs. This includes business analysis of market trends & regulatory changes to design data collection plans & bring back the voice of the market.
Monitor competitor behaviour, trends, and services so that Morningstar is well placed to act on any opportunities that may arise.
Requirements
Solid understanding of the ever-evolving investment management industry and passionate about investment data.
Excellent written and verbal communication, problem solving, organizational, and analytical skills.
Ability to demonstrate a client-centric approach.
Data expertise on investment data points, processes, methodologies, calculations, and different fund structures is a plus.
Previous experience in a project management and relationship management role is highly preferred.
Previous experience within data methodology/quality and processes is preferred.
Morningstar is an equal opportunity employer.
Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We've found that we're at our best when we're purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues.
Legal Entity: Morningstar India Private Ltd. (Delhi)
Posted 1 month ago