
192 Sharding Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Greater Bengaluru Area

On-site

Redefine the future of customer experiences, one conversation at a time. We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed, and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, our products and services, and most importantly, each other. Since 2008, 100,000+ companies and 1M+ users have relied on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine, and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing

We are seeking a passionate and experienced Staff Engineer to spearhead our growing development team. In this leadership role, you'll play a pivotal part in driving innovation and excellence across our software engineering efforts.

Who You Are
You are a seasoned software engineer with a proven track record of building and leading high-performing teams. You combine technical expertise, strong leadership skills, and a passion for building elegant and efficient software solutions.

What You'll Do
- Technical Expertise: Bring best practices for writing high-quality (bug-free, with acceptable performance), reliable, maintainable software.
- Architect and Design: Lead the design and architecture of complex software systems, ensuring scalability, maintainability, and security.
- High-Impact Delivery: Champion a culture of continuous improvement, driving efficient development processes and high-quality code delivery. Raise the team's output by mentoring and coaching junior members and unblocking people so they can achieve their objectives.
- Hands-on Problem Solving: Tackle intricate technical challenges and provide effective solutions.
- Communication & Collaboration: Lead by example with clear and concise communication. Collaborate effectively with stakeholders across various teams.

Technical Skills
- Proven experience as a Software Engineer with a minimum of 10 years of experience.
- Strong understanding of system design principles (CAP theorem, PACELC, USL, consistency, consistent hashing, sharding, partitioning, etc.), especially how to tackle functional and non-functional requirements such as scaling, security, and reliability.
- In-depth knowledge of modern software development methodologies (Agile, DevOps).
- Delivering high-quality software with best practices such as SOLID, the BASE paradigm, design patterns, and different architectural styles.
- Expertise in building RESTful web applications using Java 11+ and the Spring Framework.
- Advanced understanding of tools like Maven, Gradle, Git, Docker, Kubernetes, and cloud platforms (GCP) is highly desired.
- Deep experience in at least one of MySQL/Postgres/MongoDB and caching solutions (Redis) is desired.

Nextiva DNA (Core Competencies)
Nextiva's most successful team members share common traits and behaviors:
- Drives Results: Action-oriented with a passion for solving problems. They bring clarity and simplicity to ambiguous situations, challenge the status quo, and ask what can be done differently. They lead and drive change, celebrating success to build more success.
- Critical Thinker: Understands the "why" and identifies key drivers, learning from the past. They are fact-based and data-driven, forward-thinking, and see problems a few steps ahead. They provide options, recommendations, and actions, understanding risks and dependencies.
- Right Attitude: Team-oriented, collaborative, competitive, and they hate losing. They are resilient, able to bounce back from setbacks, zoom in and out, and get in the trenches to help solve important problems. They cultivate a culture of service, learning, support, and respect, caring for customers and teams.

Total Rewards
Our Total Rewards offerings are designed to let our employees take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation, and we aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses.
- Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family, providing peace of mind and security in times of medical necessity.
- Group Term & Group Personal Accident Insurance 💼 - Insurance coverage against the risk of death or injury during the policy period sustained due to an accident caused by violent, visible, and external means. Coverage type: employee only. Sum insured: 3 times annual CTC with a minimum cap of INR 10,00,000. Free cover limit: 1.5 Crore.
- Work-Life Balance ⚖️ - 15 days of privilege leave, 6 days of paid sick leave, and 6 days of casual leave per calendar year; 26 weeks of paid maternity leave; 1 week of paternity leave; a day off on your birthday; and paid holidays.
- Financial Security 💰 - Provident Fund & Gratuity.
- Wellness 🤸 - Employee Assistance Program and comprehensive wellness initiatives.
- Growth 🌱 - Access to ongoing learning and development opportunities and career advancement.
At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To check out what's going on at Nextiva, find us on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
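The system-design topics this posting names (consistent hashing, sharding, partitioning) are easy to make concrete. Below is a minimal, illustrative Python sketch of consistent hashing with virtual nodes, the usual way to spread keys across shards so that adding or removing a node remaps only a small fraction of keys. The node names and virtual-node count are hypothetical, not details from the posting.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative sketch)."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes          # virtual nodes per physical node
        self._ring = []               # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.vnodes):
            bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    def remove_node(self, node: str) -> None:
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key: str) -> str:
        # First ring point clockwise of the key's hash; wrap around at the end.
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["db-0", "db-1", "db-2"])
print(ring.get_node("customer:42"))   # stable shard choice for this key
```

Virtual nodes smooth out the distribution; with only one point per server, removing a node would dump its whole range onto a single neighbor.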

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

Gurgaon

On-site

Requisition 202505104 - Gurugram, Haryana, India

Job Responsibilities:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies.
- Manage self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB deployments (both self-hosted and cloud-managed) for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related quantitative discipline.

Experience:
- 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB.
- Demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.
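Several of these MongoDB postings call out aggregation pipelines. As a concrete illustration, here is a small, hypothetical PyMongo sketch that builds a daily-revenue report with an aggregation pipeline; the connection string, database, and field names are placeholders, not details from the posting.

```python
from pymongo import MongoClient

# Hypothetical connection details - replace with your own deployment.
client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

# Aggregation pipeline: filter paid orders, group revenue per day, sort by day.
pipeline = [
    {"$match": {"status": "paid"}},
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$createdAt"}},
        "revenue": {"$sum": "$amount"},
        "orders": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]

for day in orders.aggregate(pipeline):
    print(day["_id"], day["revenue"], day["orders"])
```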

Posted 6 days ago

Apply

4.0 years

0 Lacs

Roorkee, Uttarakhand, India

Remote

Company Description
Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization, and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents, operates in over 25 countries around the world, retains nearly 1000 full-time professionals, and has an annual growth rate exceeding 25%.

Job Description
We are seeking passionate and forward-thinking engineers to join us in revolutionizing customer experiences alongside our client, a global leader in cloud contact center software. Together, we bring the power of cloud innovation to enterprises worldwide, enabling seamless, personalized, and joyful customer interactions. You will be part of our Product Engineering Team, which leads the development of AI-driven solutions tailored for modern contact centers. Our flagship product, Studio, is built using PHP/Laravel, Python, and Vue.js, and is deployed across both private and public cloud infrastructures. In this role, you'll work with the latest cloud AI technologies, including Azure OpenAI, Google AI APIs, IBM Watson, and Amazon Lex, helping shape the future of intelligent customer engagement.

Responsibilities:
- Design, develop, and maintain scalable backend and front-end solutions for the Studio platform.
- Enhance the drag-and-drop flow builder to integrate voice, SMS, and chatbot channels.
- Collaborate with product managers, designers, and engineers to deliver new features.
- Ensure performance, security, and reliability through code reviews and best practices.
- Write tests and documentation to support high-quality releases.
- Explore and integrate cutting-edge AI technologies (Azure OpenAI, Google AI, IBM Watson, Amazon Lex).
- Participate in agile processes and continuous improvement (CI/CD, automation).

Qualifications
- 4+ years of professional experience in software development with a strong full-stack background.
- Proficiency in a variety of programming languages, including but not limited to PHP, JavaScript, and Python, or others as required.
- Expertise in server-side technologies, databases (SQL and NoSQL), and back-end frameworks such as PHP/Laravel.
- Strong experience with web development technologies such as HTML, CSS, JavaScript, and modern front-end frameworks like Vue.js or React.
- Awareness of web security best practices and the ability to implement security measures to protect applications and data.
- A portfolio of past projects showcasing design and full-stack development skills.
- Ability to work independently and as part of a collaborative team.
- Strong commitment to delivering high-quality code and solutions on time and within scope.
- Bachelor's degree (or equivalent) in a relevant discipline.

Nice to have:
- Experience with Java/Spring Boot.
- Experience with multi-tenanted systems.
- Expertise with sharding in MySQL, Redis, and MongoDB is highly advantageous (see the routing sketch after this listing).
- Experience with contact centers, IVR, virtual agents, VoIP, and telecommunications service providers is advantageous.
- Experience with Google Cloud Platform, Kubernetes, and CI/CD.

We offer:
- Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth.
- Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program.
- Work From Anywhere Culture: make the most of the flexibility that comes with remote work.
- Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility, and internship opportunities.
- Global Impact: collaborate on impactful projects for top global clients and shape the future of industries.
- Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events.
- Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality.

Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.
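"Sharding in MySQL" in postings like this usually means application-level routing, since MySQL has no built-in sharding. A minimal, hypothetical Python sketch of that pattern, routing each tenant to one of several MySQL DSNs by hashing the tenant id (the DSNs and names are invented for illustration):

```python
import hashlib

# Hypothetical shard map: tenant data is spread across three MySQL instances.
SHARD_DSNS = [
    "mysql://app@mysql-shard-0/studio",
    "mysql://app@mysql-shard-1/studio",
    "mysql://app@mysql-shard-2/studio",
]

def shard_for_tenant(tenant_id: str) -> str:
    """Pick a shard deterministically from the tenant id."""
    digest = hashlib.sha1(tenant_id.encode()).hexdigest()
    return SHARD_DSNS[int(digest, 16) % len(SHARD_DSNS)]

print(shard_for_tenant("acme-corp"))  # same tenant always routes to one shard
```

The modulo scheme is simple but resharding-hostile; growing the shard count usually calls for consistent hashing or a lookup table instead.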

Posted 1 week ago

Apply

7.0 years

0 Lacs

Roorkee, Uttarakhand, India

Remote

Company Description
Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization, and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents, operates in over 25 countries around the world, retains nearly 1000 full-time professionals, and has an annual growth rate exceeding 25%.

Job Description
We are seeking a Senior Full Stack Engineer to join us in revolutionizing customer experiences with our client, a global leader in cloud contact center software. We bring the power of cloud innovation to enterprises worldwide, empowering businesses to deliver seamless, personalized, and joyful customer interactions. Our vision for practical AI involves equipping contact center agents, supervisors, and managers with user interfaces that guide and summarize their work, identify points of coaching and support, fully automate routine interactions, and allow creation, deployment, and ongoing management of the required AI agents. You will join the team that drives the development of practical AI solutions. Our product is built on a robust technology stack including PHP/Laravel, Python, and Vue.js, and is deployed across both private and public cloud infrastructures. We leverage the latest cloud AI services such as Azure OpenAI, Google AI APIs, IBM Watson, and Amazon Lex.

Responsibilities:
- Design, develop, and maintain scalable backend and front-end solutions for the Studio platform.
- Enhance the drag-and-drop flow builder to integrate voice, SMS, and chatbot channels.
- Collaborate with product managers, designers, and engineers to deliver new features.
- Ensure performance, security, and reliability through code reviews and best practices.
- Write tests and documentation to support high-quality releases.
- Explore and integrate cutting-edge AI technologies (Azure OpenAI, Google AI, IBM Watson, Amazon Lex).
- Participate in agile processes and continuous improvement (CI/CD, automation).

Qualifications
- 7+ years of professional experience in software development with a strong full-stack background.
- Proficiency in a variety of programming languages, including but not limited to PHP, JavaScript, and Python, or others as required.
- Expertise in server-side technologies, databases (SQL and NoSQL), and back-end frameworks such as PHP/Laravel.
- Strong experience with web development technologies such as HTML, CSS, JavaScript, and modern front-end frameworks like Vue.js or React.
- Awareness of web security best practices and the ability to implement security measures to protect applications and data.
- A portfolio of past projects showcasing design and full-stack development skills.
- Ability to work independently and as part of a collaborative team.
- Strong commitment to delivering high-quality code and solutions on time and within scope.
- Bachelor's degree (or equivalent) in a relevant discipline.

Nice to have:
- Experience with Java/Spring Boot.
- Experience with multi-tenanted systems.
- Expertise with sharding in MySQL, Redis, and MongoDB is highly advantageous.
- Experience with contact centers, IVR, virtual agents, VoIP, and telecommunications service providers is advantageous.
- Experience with Google Cloud Platform, Kubernetes, and CI/CD.

We offer:
- Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth.
- Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program.
- Work From Anywhere Culture: make the most of the flexibility that comes with remote work.
- Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility, and internship opportunities.
- Global Impact: collaborate on impactful projects for top global clients and shape the future of industries.
- Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events.
- Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality.

Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Responsibilities:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies.
- Manage self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB deployments (both self-hosted and cloud-managed) for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related quantitative discipline.

Experience:
- 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB.
- Demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.
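Since this role covers sharded, self-hosted MongoDB deployments, here is a brief, hypothetical PyMongo sketch of the admin commands used to shard a collection: enabling sharding on a database and choosing a hashed shard key. It assumes a running sharded cluster reachable through a mongos router at a placeholder address.

```python
from pymongo import MongoClient

# Connect through a mongos query router (placeholder address).
client = MongoClient("mongodb://mongos.example.internal:27017")

# Shard the orders collection on a hashed customer id so writes spread
# evenly across shards instead of hot-spotting a single range.
client.admin.command("enableSharding", "shop")
client.admin.command(
    "shardCollection", "shop.orders",
    key={"customerId": "hashed"},
)

# Equivalent of a quick sh.status() check: list the cluster's shards.
for shard in client.admin.command("listShards")["shards"]:
    print(shard["_id"], shard["host"])
```

A hashed key trades range-query locality for uniform write distribution; a ranged key would be the choice if most queries scan contiguous customer ranges.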

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for an experienced Search Developer skilled in Java and Apache Solr to design, develop, and maintain high-performance, scalable search solutions for enterprise and consumer-facing applications. The ideal candidate will work closely with cross-functional teams to optimize search relevance, speed, and reliability while handling large, complex datasets.

Key Responsibilities
- Design, implement, and optimize search applications and services using Java and Apache Solr.
- Develop and maintain Solr schemas, configurations, indexing pipelines, and query optimization for datasets often exceeding 100 million documents.
- Build and enhance scalable RESTful APIs and microservices around search functionality.
- Work with business analysts and stakeholders to gather search requirements and improve user experience through advanced search features such as faceting, filtering, and relevance tuning.
- Perform Solr cluster management, including sharding, replication, scaling, and backup/recovery operations.
- Monitor application performance, troubleshoot issues, and implement fixes to ensure system stability and responsiveness.
- Integrate Solr with relational and NoSQL databases, streaming platforms, and ETL processes.
- Participate in code reviews, adopt CI/CD processes, and contribute to architectural decisions.
- Stay up to date with the latest developments in Solr, Java frameworks, and search technologies.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.
- 7+ years of hands-on experience in Java development, including frameworks like Spring and Hibernate.
- 3+ years of solid experience working with Apache Solr, including SolrCloud, schema design, indexing, query parsing, and search tuning.
- Strong knowledge of search technologies (Lucene, Solr) and experience managing large-scale search infrastructure.
- Experience in RESTful API design and microservices architecture.
- Familiarity with SQL and NoSQL databases.
- Ability to write efficient, multi-threaded, distributed-system code.
- Strong problem-solving skills and debugging expertise.
- Experience with version control (Git), build tools (Maven/Gradle), and CI/CD pipelines (Jenkins, GitHub Actions).
- Understanding of Agile/Scrum software development methodologies.
- Excellent communication skills and ability to collaborate with cross-functional teams.

Preferred Skills
- Experience with other search platforms like Elasticsearch is a plus.
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- Familiarity with streaming platforms such as Kafka.
- Exposure to analytics and machine learning for search relevance enhancement.
- Prior experience in large-scale consumer web or e-commerce search applications.

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package - medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office

About Us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
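To make the faceting and filtering requirements above concrete, here is a small, hypothetical sketch using the pysolr client (keeping to Python for consistency, rather than the Java stack the role actually uses). The core URL, collection name, and fields are placeholders.

```python
import pysolr  # pip install pysolr

# Placeholder Solr collection URL.
solr = pysolr.Solr("http://localhost:8983/solr/products", timeout=10)

# Index a couple of documents; commit=True makes them searchable immediately.
solr.add([
    {"id": "sku-1", "name": "trail running shoe", "category": "footwear", "price": 89.0},
    {"id": "sku-2", "name": "running jacket", "category": "apparel", "price": 120.0},
], commit=True)

# Faceted search: a full-text query plus a category facet and a price filter.
results = solr.search("running", **{
    "fq": "price:[0 TO 100]",   # filter query, cached independently of q
    "facet": "true",
    "facet.field": "category",
})

for doc in results:
    print(doc["id"], doc["name"])
print(results.facets["facet_fields"]["category"])  # facet counts per category
```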

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

As a MongoDB Data Engineer, you will be a key contributor in architecting, modelling, and developing data solutions using MongoDB to support our document and metadata workflows. You will collaborate closely with cross-functional teams to deliver scalable, performant, and secure data platforms, with exposure to Azure cloud infrastructure. You will play a central role in modelling document and transactional data, building aggregation and reporting pipelines, and ensuring best practices in database performance and reliability, including deploying, configuring, and tuning self-hosted MongoDB environments. You will work in a start-up-like environment, but with the scale and mission of a global business behind you.

The Role:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies.
- Manage self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB deployments (both self-hosted and cloud-managed) for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related quantitative discipline.

Experience:
- 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB.
- Demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.
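Index management and performance tuning come up repeatedly in this posting. The following hypothetical PyMongo sketch shows the two everyday moves: creating a compound index to match a query shape, and checking the plan with explain to confirm the index is actually used. Collection and field names are placeholders.

```python
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
docs = client["dms"]["documents"]

# Compound index matching the query below: equality field (tenantId) first,
# then the sort field. Field order determines which queries the index serves.
docs.create_index([("tenantId", ASCENDING), ("updatedAt", DESCENDING)],
                  name="tenant_updated")

query = {"tenantId": "t-001"}

# Explain the winning plan: IXSCAN means the index is used;
# COLLSCAN means the query is scanning the whole collection.
plan = docs.find(query).sort("updatedAt", DESCENDING).limit(20).explain()
print(plan["queryPlanner"]["winningPlan"])
```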

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

What we do
We are building the next generation of cargo management applications using modern and widely used technologies, and we are looking for motivated people to join our team of interdisciplinary developers.

What we currently use:
- We build backend services with Java, Spring Boot, Web Services, and MongoDB
- We integrate with existing core Java cargo applications via REST APIs
- We build frontends with Angular and the Ionic framework for mobile apps
- We deploy to Linux servers, private datacenters, and AWS, using Ansible & Maven
- We do continuous integration with GitLab/Bamboo
- We use Scrum to organize ourselves

What we expect from you:
- Bachelor's degree in Information Technology, Computer Science, Computer Engineering, or equivalent
- Proven experience as a MongoDB DBA or in a similar role (4+ years recommended)
- Strong understanding of MongoDB architecture, including sharding, replication, and indexing
- Experience working with MongoDB Atlas or self-managed clusters
- Proficiency with Linux systems and shell scripting
- Familiarity with monitoring tools and performance-tuning techniques
- Experience with backup and disaster recovery processes
- Review the current Debezium deployment architecture, including Oracle connector configuration, Kafka integration, and downstream consumers
- Analyze the Oracle database setup for CDC compatibility (e.g., redo log configuration, supplemental logging, privileges)
- Evaluate connector performance, lag, and error-handling mechanisms
- Identify bottlenecks, misconfigurations, or anti-patterns in the current implementation
- Provide a detailed report with findings, best practices, and actionable recommendations
- Optionally, support implementation of recommended changes and performance tuning

What we require from you:
- 4+ years of experience as a MongoDB DBA in production environments
- Deep expertise in MongoDB architecture, including replication, sharding, backup, and recovery
- Strong hands-on experience with Debezium, especially the Oracle connector (LogMiner)
- Deep understanding of Oracle internals relevant to CDC: redo logs, SCNs, archive log mode, supplemental logging
- Proficiency with Apache Kafka and Kafka ecosystem tools
- Experience with monitoring and debugging Debezium connectors in production environments
- Ability to analyze logs, metrics, and connector configurations to identify root causes of issues
- Strong documentation and communication skills for delivering technical assessments
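For readers unfamiliar with the Debezium side of this role, the sketch below registers a hypothetical Debezium Oracle connector with a Kafka Connect cluster over its REST API. All hostnames, credentials, table names, and topics are invented placeholders; the property names follow current Debezium conventions but should be verified against the version actually deployed.

```python
import json
import urllib.request

# Hypothetical Kafka Connect REST endpoint.
CONNECT_URL = "http://connect.example.internal:8083/connectors"

connector = {
    "name": "cargo-oracle-cdc",
    "config": {
        "connector.class": "io.debezium.connector.oracle.OracleConnector",
        "database.hostname": "oracle.example.internal",
        "database.port": "1521",
        "database.user": "c##dbzuser",
        "database.password": "********",
        "database.dbname": "ORCLCDB",
        "topic.prefix": "cargo",
        # Debezium reads change events from the redo logs via LogMiner, so
        # supplemental logging must be enabled on the captured tables.
        "table.include.list": "CARGO.SHIPMENTS,CARGO.BOOKINGS",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-history.cargo",
    },
}

req = urllib.request.Request(
    CONNECT_URL,
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

Connector lag and task health can then be watched via the same REST API (`GET /connectors/cargo-oracle-cdc/status`), which is what the "monitoring and debugging Debezium connectors" requirement is about.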

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Gurugram, Haryana

On-site

Requisition 202505104 - Gurugram, Haryana, India

Job Responsibilities:
- Design, develop, and optimize MongoDB data models for various business and analytics use cases.
- Implement and maintain efficient MongoDB CRUD operations, indexes, and schema evolution strategies.
- Manage self-hosted MongoDB deployments, including installation, configuration, scaling, backup/restore, and monitoring.
- Build and maintain reporting and analytics pipelines using the MongoDB Reporting suite.
- Develop, monitor, and tune MongoDB deployments (both self-hosted and cloud-managed) for scalability, reliability, and security.
- Collaborate with engineering and product teams to translate requirements into MongoDB-backed solutions.
- Support integration with Azure cloud services (e.g., Azure Cosmos DB for MongoDB, Azure Functions, Blob Storage).
- Maintain documentation and contribute to database standards and best practices.
- (Nice to have) Support data ingestion and automation tasks using Python.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related quantitative discipline.

Experience:
- 5 to 8 years of hands-on experience in data engineering or backend development with MongoDB.
- Demonstrated experience with self-hosted MongoDB, including cluster setup, maintenance, and troubleshooting.

Technical Competencies:
- Deep hands-on experience with MongoDB data modelling, schema design, and normalization/denormalization strategies.
- Strong proficiency in MongoDB development: aggregation pipelines, CRUD, performance tuning, and index management.
- Experience building reporting and analytics using the MongoDB Reporting suite.
- Experience with self-hosted MongoDB deployments (e.g., sharding, replication, monitoring, security configuration).
- Working knowledge of Azure cloud services (Azure Cosmos DB, VMs, App Service, networking for secure deployments).
- (Nice to have) Experience in Python for backend integration, data processing, or scripting.

Posted 1 week ago

Apply

4.0 - 12.0 years

0 Lacs

Maharashtra

On-site

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, supported and inspired by a collaborative community of colleagues around the world, and able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
- High Availability: Ensure high availability of MongoDB clusters using replication and sharding. Perform regular health checks and capacity planning for scaling MongoDB instances.
- Performance Tuning & Indexing: Analyze query performance and optimize queries using indexes (single, compound, TTL, text, hashed, wildcard). Perform index maintenance and monitoring to ensure efficient query execution.
- Backup & Recovery: Implement automated backup strategies using mongodump/mongorestore, MongoDB Atlas backup, or Ops Manager. Test disaster recovery plans to ensure minimal downtime in case of failure.

Your Profile
- 4-12 years of experience as a MongoDB Administrator.
- Security & Compliance: Implement MongoDB authentication and authorization (role-based access control, RBAC); a short sketch follows this listing. Manage TLS/SSL encryption, auditing, and logging for security compliance.
- Scaling & High Availability: Manage sharded clusters for horizontal scaling. Set up and maintain MongoDB replica sets for fault tolerance. Handle failover and replication-lag issues.
- Monitoring & Automation: Use monitoring tools (CloudWatch, MongoDB Ops Manager, etc.) to track performance. Automate administrative tasks using scripts (Python, Bash).

What you'll love about working at Capgemini
- Work on MongoDB database architecture and administration.
- Expand your expertise with cloud platforms (e.g., AWS, Google Cloud); MongoDB Atlas experience is often preferred.
- Clear career progression paths from L2 support to architecture and consulting roles.
- Be part of mission-critical projects that secure and optimize networks for Fortune 500 clients.
- Thrive in a diverse, inclusive, and respectful environment that values your voice and ideas, and work in agile, cross-functional teams with opportunities to lead and mentor.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud, and data, combined with its deep industry expertise and partner ecosystem.
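Role-based access control, called out in the profile above, is administered with ordinary database commands. A minimal, hypothetical PyMongo sketch of creating a custom read-only role and a user bound to it (database, names, and passwords are placeholders):

```python
from pymongo import MongoClient

# Assumes an admin connection to a deployment with authentication enabled.
client = MongoClient("mongodb://admin:********@localhost:27017/?authSource=admin")
reporting_db = client["reporting"]

# A custom role limited to reading two collections in the reporting database.
reporting_db.command("createRole", "reportReader",
    privileges=[
        {"resource": {"db": "reporting", "collection": "sales"},
         "actions": ["find"]},
        {"resource": {"db": "reporting", "collection": "inventory"},
         "actions": ["find"]},
    ],
    roles=[],
)

# A user granted only that role: least privilege for report consumers.
reporting_db.command("createUser", "report_svc",
    pwd="********",
    roles=[{"role": "reportReader", "db": "reporting"}],
)
```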

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Engineer
Experience: 3+ years
Location: Hyderabad, India

Key Responsibilities
- Administer, monitor, and maintain MongoDB databases, including MongoDB Atlas clusters
- Handle database performance tuning, backups, upgrades, and failover mechanisms
- Design and implement database strategies, security, and high-availability solutions
- Develop and maintain Python scripts in AWS Lambda
- Deploy, manage, and scale backend applications on AWS infrastructure
- Implement database monitoring, alerts, and data security best practices
- Analyze slow queries and implement optimizations
- Participate in disaster recovery planning and execution
- Write documentation for database standards, procedures, and architectures

Required Skills
- 3-5 years of hands-on experience in MongoDB administration
- Experience with MongoDB Atlas (cluster management, security, performance tuning)
- Proficiency in Python
- Experience with AWS services such as EC2, S3, Lambda, CloudWatch, and IAM
- Understanding of backup, restore, and disaster recovery strategies for MongoDB
- Knowledge of indexing, sharding, and replication strategies
- Good understanding of Linux systems and shell scripting
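This posting pairs MongoDB administration with Python scripts in AWS Lambda. As a hedged illustration of that combination, here is a hypothetical Lambda handler that samples MongoDB connection counts via serverStatus and publishes them as a custom CloudWatch metric; the metric namespace and connection-string handling are placeholders, not details from the posting.

```python
import os

import boto3
from pymongo import MongoClient

cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    # Connection string injected via environment variable (placeholder);
    # a real deployment would likely pull it from Secrets Manager.
    client = MongoClient(os.environ["MONGODB_URI"], serverSelectionTimeoutMS=5000)

    # serverStatus reports current vs. available connections, among much else.
    status = client.admin.command("serverStatus")
    current = status["connections"]["current"]

    # Publish as a custom metric so a CloudWatch alarm can watch the trend.
    cloudwatch.put_metric_data(
        Namespace="Custom/MongoDB",
        MetricData=[{
            "MetricName": "CurrentConnections",
            "Value": float(current),
            "Unit": "Count",
        }],
    )
    return {"connections": current}
```

Scheduled through an EventBridge rule, a handler like this gives alerting on connection saturation without installing an agent on the database hosts.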

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a skilled Senior Data Modeller to design, implement, and maintain conceptual, logical, and physical data models that support enterprise information management and business intelligence efforts. The ideal candidate will collaborate with business analysts, data architects, and developers to ensure high-quality data models that meet both business and technical requirements.

• Skills: GCP, data modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery
• Hands-on data modelling for OLTP and OLAP systems.
• In-depth knowledge of conceptual, logical, and physical data modelling.
• Strong understanding of indexing, partitioning, and data sharding, with practical experience of having applied them.
• Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
• Working experience with at least one data modelling tool, preferably DBSchema.
• Functional knowledge of the mutual fund industry is a plus.
• Good understanding of GCP databases such as AlloyDB, CloudSQL, and BigQuery.
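Partitioning decisions of the kind this role describes are made concrete at table-creation time. Below is a small, hypothetical sketch with the google-cloud-bigquery client that creates a date-partitioned, clustered fact table; the project, dataset, and schema are invented for illustration only.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses default GCP credentials

table_id = "my-project.analytics.fund_transactions"  # hypothetical table
schema = [
    bigquery.SchemaField("txn_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("fund_code", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    bigquery.SchemaField("txn_date", "DATE", mode="REQUIRED"),
]

table = bigquery.Table(table_id, schema=schema)
# Partition by day on txn_date: date-filtered queries scan only
# the matching partitions instead of the whole table.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="txn_date",
)
# Cluster within each partition by fund_code for cheaper selective scans.
table.clustering_fields = ["fund_code"]

table = client.create_table(table)
print(f"Created {table.full_table_id}")
```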

Posted 1 week ago

Apply

4.0 years

4 - 9 Lacs

Gurgaon

On-site

About the Team:
Join a highly skilled and collaborative team dedicated to ensuring data reliability, performance, and security across our organization's critical systems. We work closely with developers, architects, and DevOps professionals to deliver seamless and scalable database solutions in a cloud-first environment, leveraging the latest in AWS and open-source technologies. Our team values continuous learning, innovation, and the proactive resolution of database challenges.

About the Role:
As a Database Administrator specializing in MySQL and Postgres within AWS environments, you will play a key role in architecting, deploying, and supporting the backbone of our data infrastructure. You'll leverage your expertise to optimize database instances, manage large-scale deployments, and ensure our databases are secure, highly available, and resilient. This is an opportunity to collaborate across teams, stay ahead with emerging technologies, and contribute directly to our business success.

Responsibilities:
- Design, implement, and maintain MySQL and Postgres database instances on AWS, including managing clustering and replication (MongoDB, Postgres solutions).
- Write, review, and optimize stored procedures, triggers, functions, and scripts for automated database management.
- Continuously tune, index, and scale database systems to maximize performance and handle rapid growth.
- Monitor database operations to ensure high availability, robust security, and optimal performance.
- Develop, execute, and test backup and disaster recovery strategies in line with company policies.
- Collaborate with development teams to design efficient and effective database schemas aligned with application needs.
- Troubleshoot and resolve database issues, implementing corrective actions to restore service and prevent recurrence.
- Enforce and evolve database security best practices, including access controls and compliance measures.
- Stay updated on new database technologies, AWS advancements, and industry best practices.
- Plan and perform database migrations across AWS regions or instances.
- Manage clustering, replication, installation, and sharding for MongoDB, Postgres, and related technologies.

Requirements:
- 4-7 years of experience with database management systems as a Database Engineer.
- Proven experience as a MySQL/Postgres Database Administrator in high-availability, production environments.
- Expertise in AWS cloud services, especially EC2, RDS, Aurora, DynamoDB, S3, and Redshift.
- In-depth knowledge of disaster recovery (DR) setups, including active-active and active-passive master configurations.
- Hands-on experience with MySQL partitioning and AWS Redshift (see the partitioning sketch after this listing).
- Strong understanding of database architectures, replication, clustering, and backup strategies (including Postgres replication & backup).
- Advanced proficiency in optimizing and troubleshooting SQL queries; adept with performance tuning and monitoring tools.
- Familiarity with scripting languages such as Bash or Python for automation/maintenance.
- Experience with MongoDB, Postgres clustering, Cassandra, and related NoSQL or distributed database solutions.
- Ability to provide 24/7 support and participate in on-call rotation schedules.
- Excellent problem-solving, communication, and collaboration skills.

What we offer:
- A positive, get-things-done workplace.
- A dynamic, constantly evolving space (change is par for the course – important you are comfortable with this).
- An inclusive environment that ensures we listen to a diverse range of voices when making decisions.
- Ability to learn cutting-edge concepts and innovation in an agile start-up environment with global scale.
- Access to 5000+ training courses accessible anytime/anywhere to support your growth and development (corporate partnerships with top learning providers like Harvard, Coursera, Udacity).

About us:
At PayU, we are a global fintech investor and our vision is to build a world without financial borders where everyone can prosper. We give people in high-growth markets the financial services and products they need to thrive. Our expertise in 18+ high-growth markets enables us to extend the reach of financial services. This drives everything we do, from investing in technology entrepreneurs to offering credit to underserved individuals, to helping merchants buy, sell, and operate online. Being part of Prosus, one of the largest technology investors in the world, gives us the presence and expertise to make a real impact. Find out more at www.payu.com

Our Commitment to Building a Diverse and Inclusive Workforce
As a global and multi-cultural organization with varied ethnicities thriving across locations, we realize that our responsibility towards fulfilling the D&I commitment is huge. Therefore, we continuously strive to create a diverse, inclusive, and safe environment for all our people, communities, and customers. Our leaders are committed to creating an inclusive work culture which enables transparency, flexibility, and unbiased attention to every PayUneer so they can succeed, irrespective of gender, color, or personal faith. An environment where every person feels they belong, that they are listened to, and where they are empowered to speak up. At PayU we have zero tolerance towards any form of prejudice, whether against a specific race, ethnicity, or persons with disabilities, or the LGBTQ communities.
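MySQL range partitioning, which the requirements above call out, is declared in DDL. A hypothetical sketch using the mysql-connector-python driver; the table, columns, and credentials are invented for illustration:

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder credentials for a local test instance.
conn = mysql.connector.connect(host="localhost", user="app",
                               password="********", database="payments")
cur = conn.cursor()

# Range-partition the ledger by month: old partitions can be dropped
# cheaply, and date-filtered queries prune to the matching partitions.
# Note: the partition key must be part of every unique key, hence the
# composite primary key.
cur.execute("""
    CREATE TABLE IF NOT EXISTS ledger (
        id BIGINT NOT NULL AUTO_INCREMENT,
        created_at DATE NOT NULL,
        amount DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (id, created_at)
    )
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2025_06 VALUES LESS THAN (TO_DAYS('2025-07-01')),
        PARTITION p2025_07 VALUES LESS THAN (TO_DAYS('2025-08-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    )
""")
conn.commit()
cur.close()
conn.close()
```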

Posted 1 week ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

gStore is GreyOrange's flagship SaaS platform that transforms physical retail operations through real-time, AI-driven inventory visibility and intelligent in-store task execution. It integrates advanced technologies like RFID, computer vision, and machine learning to deliver 98%+ inventory accuracy with precise spatial mapping. gStore empowers store associates with guided workflows for omnichannel fulfillment (BOPIS, ship-from-store, returns), intelligent task allocation, and real-time replenishment, significantly improving efficiency, reducing shrinkage, and driving in-store conversions. The platform is cloud-native, hardware-agnostic, and built to scale across thousands of stores globally with robust integrations and actionable analytics.

Roles & Responsibilities
- Define and drive the overall architecture for scalable, secure, and high-performance distributed systems.
- Write and review code for critical modules and performance-sensitive components to set quality and architectural standards.
- Collaborate with engineering leads and product managers to align technology strategy with business goals.
- Evaluate and recommend tools, technologies, and processes to ensure the highest-quality product platform.
- Own and evolve the system design, ensuring modularity, multi-tenancy, and future extensibility.
- Establish and govern best practices around service design, API development, security, observability, and performance.
- Review code, designs, and technical documentation, ensuring adherence to architecture and design principles.
- Lead design discussions and mentor senior and mid-level engineers to improve design thinking and engineering quality.
- Partner with DevOps to optimise CI/CD, containerization, and infrastructure-as-code.
- Stay abreast of industry trends and emerging technologies, assessing their relevance and value.

Skills
- 12+ years of experience in backend development.
- Strong understanding of data structures and algorithms.
- Good knowledge of low-level and high-level system design and best practices.
- Strong expertise in Java & Spring Boot, with a deep understanding of microservice architectures and design patterns.
- Good knowledge of databases (both SQL and NoSQL), including schema design, sharding, and performance tuning.
- Expertise in Kubernetes, Helm, and container orchestration for deploying and managing scalable applications.
- Advanced knowledge of Kafka for stream processing, event-driven architecture, and data integration.
- Proficiency in Redis for caching, session management, and pub/sub use cases (see the cache-aside sketch after this listing).
- Solid understanding of API design (REST/gRPC), authentication (OAuth2/JWT), and security best practices.
- Strong grasp of system design fundamentals: scalability, reliability, consistency, and observability.
- Experience with monitoring and logging frameworks (e.g., Datadog, Prometheus, Grafana, ELK, or equivalent).
- Excellent problem-solving, communication, and cross-functional leadership skills.
- Prior experience leading architecture for SaaS or high-scale multi-tenant platforms is highly desirable.
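Redis caching, named in the skills list, most often means the cache-aside pattern. A brief, hypothetical sketch with the redis-py client; the key scheme, TTL, and loader function are placeholders, not gStore internals:

```python
import json

import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_store_profile_from_db(store_id: str) -> dict:
    # Placeholder for the real database read.
    return {"store_id": store_id, "region": "south", "rfid_enabled": True}

def get_store_profile(store_id: str) -> dict:
    """Cache-aside: try Redis first, fall back to the DB, then populate."""
    key = f"store:{store_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit

    profile = load_store_profile_from_db(store_id)   # cache miss
    r.setex(key, 300, json.dumps(profile))      # expire after 5 minutes
    return profile

print(get_store_profile("bengaluru-001"))
```

The TTL bounds staleness; for write-heavy data, an explicit delete on update (invalidation) complements the expiry.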

Posted 1 week ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Senior DB Developer – Sports/Healthcare
Location: Ahmedabad, Gujarat
Job Type: Full-Time

Job Description:
We are seeking an exceptional Senior Database Developer with 8+ years of expertise who will play a critical role in the design and development of a scalable, configurable, and customizable platform. Our new Senior Database Developer will help with the design, collaborate with cross-functional teams, and provide data solutions for delivering high-performance applications. If you are passionate about bringing innovative technology to life, owning and solving problems in an independent, fail-fast, and highly supportive environment, and working with a creative and dynamic team, we want to hear from you. This role requires a strong understanding of enterprise applications and large-scale data processing platforms.

Key Responsibilities:
● Design and architect scalable, efficient, highly available, and secure database solutions to meet business requirements.
● Design the schema and ER diagram for a horizontally scalable architecture.
● Strong knowledge of NoSQL / MongoDB.
● Knowledge of ETL tools for data migration from source to destination.
● Establish database standards, procedures, and best practices for data modelling, storage, security, and performance.
● Implement data partitioning, sharding, and replication for high-throughput systems (see the sketch after this listing).
● Optimize data lake, data warehouse, and NoSQL solutions for fast retrieval.
● Collaborate with developers and data engineers to define data requirements and optimize database performance.
● Implement database security policies ensuring compliance with regulatory standards (e.g., GDPR, HIPAA).
● Optimize and tune databases for performance, scalability, and availability.
● Design disaster recovery and backup solutions to ensure data protection and business continuity.
● Evaluate and implement new database technologies and frameworks as needed.
● Provide expertise in database migration, transformation, and modernization projects.
● Conduct performance analysis and troubleshooting of database-related issues.
● Document database architecture and standards for future reference.

Required Skills and Qualifications:
● 8+ years of experience in database architecture, design, and management.
● Experience with AWS (Amazon Web Services) and similar platforms like Azure and GCP (Google Cloud Platform).
● Experience deploying and managing applications, utilizing various cloud services (compute, storage, databases, etc.).
● Experience with specific services like EC2, S3, and Lambda (for AWS).
● Proficiency with SQL and NoSQL databases (e.g., PostgreSQL, MySQL, Oracle, MongoDB, Cassandra).
● MongoDB and NoSQL experience is a big added advantage.
● Expertise in data modelling, schema design, indexing, and partitioning.
● Experience with ETL processes, data warehousing, and big data technologies (e.g., Apache NiFi, Airflow, Redshift, Snowflake, Hadoop).
● Proficiency in database performance tuning, optimization, and monitoring tools.
● Strong knowledge of data security, encryption, and compliance frameworks.
● Excellent analytical, problem-solving, and communication skills.
● Proven experience in database migration and modernization projects.

Preferred Qualifications:
● Certifications in cloud platforms (AWS, GCP, Azure) or database technologies.
● Experience with machine learning and AI-driven data solutions.
● Knowledge of graph databases and time-series databases.
● Familiarity with Kubernetes, containerized databases, and microservices architecture.

Education:
● Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field.

Why Join Us?
● Be part of an exciting and dynamic project in the sports/health data domain.
● Work with cutting-edge technologies and large-scale data processing systems.
● Collaborative, fast-paced team environment with opportunities for professional growth.
● Competitive salary, bonus, and benefits package.
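Replication for high-throughput systems, listed among the responsibilities above, usually surfaces in application code as read-preference and write-concern choices. A short, hypothetical PyMongo sketch; the replica-set name, hosts, and collections are placeholders:

```python
from pymongo import MongoClient, ReadPreference, WriteConcern

# Placeholder replica-set connection string.
client = MongoClient(
    "mongodb://node1.example,node2.example,node3.example/?replicaSet=rs0"
)
db = client["sports"]

# Dashboard reads can go to secondaries to offload the primary;
# slightly stale data is acceptable for reporting.
stats = db.get_collection(
    "athlete_stats", read_preference=ReadPreference.SECONDARY_PREFERRED
)
print(stats.estimated_document_count())

# Writes that must survive a failover wait for majority acknowledgment.
events = db.get_collection(
    "events", write_concern=WriteConcern(w="majority", j=True)
)
events.insert_one({"type": "match_start", "matchId": "m-100"})
```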

Posted 1 week ago

Apply

5.0 years

4 - 10 Lacs

India

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences, empowering retailers, partners, and employees to accelerate commerce in a sustainable manner.

Role Summary:
We are seeking an experienced and highly motivated Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the design, implementation, performance tuning, and maintenance of relational (MSSQL, PostgreSQL) and NoSQL (MongoDB) databases, both on-premises and in cloud environments (AWS, Azure, GCP). You will ensure data integrity, security, availability, and optimal performance across all platforms.

Key Responsibilities:
Database Management & Optimization
· Install, configure, and upgrade database servers (MSSQL, PostgreSQL, MongoDB).
· Monitor performance, optimize queries, and tune databases for efficiency.
· Implement and manage database clustering, replication, sharding, and high availability.
Cloud Database Administration
· Manage cloud-based database services (e.g., Amazon RDS, Azure SQL Database, GCP Cloud SQL, MongoDB Atlas).
· Automate backup, failover, patching, and scaling in the cloud environment.
· Ensure secure access, encryption, and compliance in the cloud.
· ETL and DevOps experience is desirable.
Backup, Recovery & Security
· Design and implement robust backup and disaster recovery plans (see the backup-automation sketch after this listing).
· Regularly test recovery processes to ensure minimal downtime.
· Apply database security best practices (roles, permissions, auditing, encryption).
Scripting & Automation
· Develop scripts for automation (using PowerShell, Bash, Python, etc.).
· Automate repetitive DBA tasks using DevOps/CI-CD tools (Terraform, Ansible, etc.).
Collaboration & Support
· Work closely with developers, DevOps, and system admins to support application development.
· Assist with database design, indexing strategy, schema changes, and query optimization.
· Provide 24/7 support for critical production issues (on-call rotation may apply).

Key Skills & Qualifications:
· Bachelor's degree in computer science, information technology, or a related field.
· 5+ years of experience as a DBA, with production experience in: MSSQL Server (SQL Server 2016 and above), PostgreSQL (including PostGIS and logical/physical replication), and MongoDB (including MongoDB Atlas, replica sets, and sharding).
· Experience with cloud database services (AWS RDS, Azure SQL, GCP Cloud SQL).
· Strong understanding of performance tuning, indexing, and query optimization.
· Solid grasp of backup and restore strategies, disaster recovery, and HA setups.
· Familiarity with monitoring tools (e.g., Prometheus, Datadog, New Relic, Zabbix).
· Knowledge of scripting languages (PowerShell, Bash, or Python).
· Understanding of DevOps principles, version control (Git), and CI/CD pipelines.

Preferred Qualifications:
· Certification in any cloud platform (AWS/Azure/GCP).
· Microsoft Certified: Azure Database Administrator Associate.
· Experience with Kubernetes Operators for databases (e.g., Crunchy Postgres Operator).
· Experience with Infrastructure as Code (Terraform, CloudFormation).

Benefits:
· Competitive salary and performance bonus.
· Health insurance and paid leave.
· Opportunity to work with cutting-edge cloud and database technologies.

Job Types: Full-time, Permanent
Pay: ₹400,000.00 - ₹1,000,000.00 per year
Benefits: Health insurance, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Monday to Friday; morning, evening, night, rotational, and US shifts; weekend availability
Supplemental Pay: Performance bonus, quarterly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
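As a hedged illustration of the backup-and-recovery automation this role describes, here is a hypothetical Python script that dumps a PostgreSQL database with pg_dump and ships the archive to S3 with boto3. The database name, bucket, and paths are placeholders; a production job would add retention policies, alerting, and periodic restore testing.

```python
import datetime
import subprocess

import boto3

DB_NAME = "storedata"              # hypothetical database
BUCKET = "mostedge-db-backups"     # hypothetical bucket
stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
dump_path = f"/tmp/{DB_NAME}-{stamp}.dump"

# Custom-format dump: compressed, and restorable table-by-table via pg_restore.
subprocess.run(
    ["pg_dump", "--format=custom", "--file", dump_path, DB_NAME],
    check=True,
)

# Ship the archive to S3; bucket lifecycle rules can handle retention.
boto3.client("s3").upload_file(
    dump_path, BUCKET, f"postgres/{DB_NAME}/{stamp}.dump"
)
print(f"uploaded s3://{BUCKET}/postgres/{DB_NAME}/{stamp}.dump")
```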

Posted 1 week ago

Apply

6.0 years

0 Lacs

Delhi

Remote

Overview WELCOME TO SITA We're the team that keeps airports moving, airlines flying smoothly, and borders open. Our tech and communication innovations are the secret behind the success of the world’s air travel industry. You'll find us at 95% of international hubs. We partner closely with over 2,500 transportation and government clients, each with their own unique needs and challenges. Our goal is to find fresh solutions and cutting-edge tech to make their operations run like clockwork. Want to be a part of something big? Are you ready to love your job? The adventure begins right here, with you, at SITA. ABOUT THE ROLE & TEAM The Senior Software Developer (Database Administrator) will play a pivotal role in the design, development, and maintenance of high-performance and scalable database environments. This individual will ensure seamless integration of various database components, leveraging advanced technologies to support applications and data systems. The candidate should possess expertise in SQL Server, MongoDB and other NoSQL solutions would be a plus. WHAT YOU’LL DO Manage, monitor, and maintain SQL Server databases both On-Prem and Cloud across production and non-production environments. Design and implement scalable and reliable database architectures. Develop robust and secure database systems, ensuring high availability and performance. Create and maintain shell scripts for database automation, monitoring, and administrative tasks. Troubleshoot and resolve database issues to ensure system stability and optimal performance. Implement backup, recovery, Migration and disaster recovery strategies. Collaborate with cross-functional teams to understand requirements and deliver database solutions that align with business objectives. Qualifications ABOUT YOUR SKILLS Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Over 6 years of experience in database administration, specializing in MongoDB and SQL Server. Proficient in shell scripting (e.g., Bash, PowerShell) for database automation. Expertise in query optimization, database performance tuning, and high-availability setups such as replica sets, sharding, and failover clusters. Familiarity with cloud-based database solutions and DevOps pipelines. Skilled in database security, including role-based access and encryption. Experienced with monitoring tools like mongotop, mongostat, and SQL Profiler. Knowledge of messaging queues (RabbitMQ, IBM MQ, or Solace) is a plus. Strong understanding of database administration best practices, design patterns, and standards. Demonstrates excellent problem-solving skills, attention to detail, and effective communication and teamwork abilities. NICE-TO-HAVE Professional certification is a plus. WHAT WE OFFER We’re all about diversity. We operate in 200 countries and speak 60 different languages and cultures. We’re really proud of our inclusive environment. Our offices are comfortable and fun places to work, and we make sure you get to work from home too. Find out what it's like to join our team and take a step closer to your best life ever. Flex Week: Work from home up to 2 days/week (depending on your team’s needs) Flex Day: Make your workday suit your life and plans. Flex Location: Take up to 30 days a year to work from any location in the world. Employee Wellbeing: We’ve got you covered with our Employee Assistance Program (EAP), for you and your dependents 24/7, 365 days/year. 
We also offer Champion Health – a personalized platform that supports a range of wellbeing needs.
Professional Development: Level up your skills with our training platforms, including LinkedIn Learning!
Competitive Benefits: Benefits that make sense for both your local market and employment status.

SITA is an Equal Opportunity Employer. We value a diverse workforce. In support of our Employment Equity Program, we encourage women, aboriginal people, members of visible minorities, and/or persons with disabilities to apply and self-identify in the application process.
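To make the replica-set administration and monitoring skills this listing asks for more concrete, here is a minimal health-check sketch using Python's pymongo driver; the connection URI and host names are illustrative assumptions, not details from the listing:

    # Minimal replica-set health check (hypothetical URI and hosts).
    from pymongo import MongoClient

    client = MongoClient("mongodb://db1.example.com:27017/?replicaSet=rs0")
    status = client.admin.command("replSetGetStatus")  # needs clusterMonitor role

    for m in status["members"]:
        # stateStr is PRIMARY, SECONDARY, RECOVERING, etc.
        print(m["name"], m["stateStr"], m.get("health"))

    # Replication lag: compare each secondary's optime to the primary's.
    primary = next(m for m in status["members"] if m["stateStr"] == "PRIMARY")
    for m in status["members"]:
        if m["stateStr"] == "SECONDARY":
            lag = (primary["optimeDate"] - m["optimeDate"]).total_seconds()
            print(f"{m['name']} lagging by {lag:.0f}s")

Run from cron or a monitoring agent, a script like this covers the "scripts for database automation and monitoring" responsibility in spirit; a production setup would alert rather than print.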

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality.

POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer & Storage Expert with 5+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability.

KEY RESPONSIBILITIES:
• Query Optimization: Design, write, debug, and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra.
• Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently.
• Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations.
• Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets.
• Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks.
• Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics.
• Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks.
• Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability.
• Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads.
• Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines.
• Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements.

REQUIRED SKILLS AND QUALIFICATIONS:
• Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra.
• High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion.
• Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets.
• Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms.
• Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases.
• Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines.
• Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics.
• Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs.
• Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance.
• Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively.

PREFERRED QUALIFICATIONS:
• Certification in any of the mentioned database technologies.
• Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP.
• Knowledge of distributed systems and large-scale data processing.
• Familiarity with cloud-based database solutions and infrastructure.
• Familiarity with large-scale data ingestion tools like Kafka, Spark, or Flink.

EDUCATIONAL REQUIREMENTS:
• Bachelor’s degree in Computer Science, Information Technology, or a related field. Equivalent work experience will also be considered.

Equal Employment Opportunity has been, and will continue to be, a fundamental principle at Health Catalyst, where employment is based upon personal capabilities and qualifications without discrimination or harassment on the basis of race, color, national origin, religion, sex, sexual orientation, gender identity, age, disability, citizenship status, marital status, creed, genetic predisposition or carrier status, or any other characteristic protected by law. Health Catalyst is committed to a work environment where all individuals are treated with respect and dignity.
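As a concrete illustration of the query-plan debugging this role centers on, here is a minimal sketch against PostgreSQL using psycopg2; the DSN, table, and columns are invented for the example:

    # Sketch: surface a slow query's execution plan (names are hypothetical).
    import psycopg2

    conn = psycopg2.connect("dbname=ingest host=localhost")  # assumed DSN
    with conn, conn.cursor() as cur:
        # EXPLAIN (ANALYZE, BUFFERS) executes the query and reports real
        # timings and I/O, revealing missing indexes or bad join orders.
        cur.execute(
            "EXPLAIN (ANALYZE, BUFFERS) "
            "SELECT * FROM events WHERE tenant_id = %s "
            "AND created_at > now() - interval '1 day'",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)

A sequential scan in this output on a large table is the usual cue to add or adjust an index.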

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Chennai Area

On-site

Responsibilities
Participate in requirements definition, analysis, and the design of logical and physical data models for Dimensional Data Model, NoSQL, or Graph Data Model.
Lead data discovery discussions with Business in JAD sessions and map the business requirements to logical and physical data modeling solutions.
Conduct data model reviews with project team members.
Capture technical metadata through data modeling tools.
Ensure database designs efficiently support BI and end-user requirements.
Drive continual improvement and enhancement of existing systems.
Collaborate with ETL/Data Engineering teams to create data process pipelines for data ingestion and transformation.
Collaborate with Data Architects for data model management, documentation, and version control.
Maintain expertise and proficiency in the various application areas.
Maintain current knowledge of industry trends and standards.

Required Skills
Strong data analysis and data profiling skills.
Strong conceptual, logical, and physical data modeling for VLDB Data Warehouse and Graph DB.
Hands-on experience with modeling tools such as ERWIN or another industry-standard tool.
Fluent in both normalized and dimensional model disciplines and techniques.
Minimum of 3 years' experience in Oracle Database.
Hands-on experience with Oracle SQL, PL/SQL, or Cypher.
Exposure to Databricks Spark, Delta Technologies, Informatica ETL, or other industry-leading tools.
Good knowledge or experience with AWS Redshift and Graph DB design and management.
Working knowledge of AWS Cloud technologies, mainly the VPC, EC2, S3, DMS, and Glue services.
Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or equivalent experience).
Excellent verbal and written communication skills, including the ability to describe complex technical concepts in relatable terms.
Ability to manage and prioritize multiple workstreams with confidence in making decisions about prioritization.
Data-driven mentality. Self-motivated, responsible, conscientious, and detail-oriented.
Effective oral and written communication skills.
Ability to learn and maintain knowledge of multiple application areas.
Understanding of industry best practices pertaining to Quality Assurance concepts.

Level: Bachelor's degree in Computer Science, Engineering, or relevant fields with 3+ years of experience as a Data and Solution Architect supporting Enterprise Data and Integration Applications or a similar role for large-scale enterprise solutions. 3+ years of experience in Big Data Infrastructure and tuning experience in Lakehouse Data Ecosystem, including Data Lake, Data Warehouses, and Graph DB. AWS Solutions Architect Professional Level certification. Extensive experience in data analysis on critical enterprise systems like SAP, E1, Mainframe ERP, SFDC, Adobe Platform, and eCommerce systems.

Skill Set Required
GCP, Data Modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery.
Data Modeller - Hands-on data modelling for OLTP and OLAP systems.
In-depth knowledge of conceptual, logical, and physical data modelling.
Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same.
Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
Should have working experience on at least one data modelling tool, preferably DBSchema.
People with functional knowledge of the mutual fund industry will be a plus.
Good understanding of GCP databases like AlloyDB, CloudSQL and BigQuery (ref:hirist.tech)
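Since the listing stresses practical experience with data sharding, a toy sketch of hash-based shard routing may help make the idea concrete; the shard names and key are invented, and note that simple modulo hashing reshuffles keys when the shard count changes (real systems use consistent hashing or directory lookups instead):

    # Toy shard router: stable mapping of a key to one of N shards.
    import hashlib

    SHARDS = ["shard_a", "shard_b", "shard_c", "shard_d"]  # hypothetical names

    def shard_for(key: str) -> str:
        # md5 gives a stable, evenly distributed hash across processes and runs
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for("folio-10293"))  # always routes to the same shard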

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

HEROIC Cybersecurity ( HEROIC.com ) is seeking a Senior Data Infrastructure Engineer with deep expertise in DataStax Enterprise (DSE) and Apache Cassandra to help architect, scale, and maintain the data infrastructure that powers our cybersecurity intelligence platforms.

You will be responsible for designing and managing fully automated, big data pipelines that ingest, process, and serve hundreds of billions of breached and leaked records sourced from the surface, deep, and dark web. You'll work with DSE Cassandra, Solr, and Spark, helping us move toward a 99% automated pipeline for data ingestion, enrichment, deduplication, and indexing — all built for scale, speed, and reliability. This position is critical in ensuring our systems are fast, reliable, and resilient as we ingest thousands of unique datasets daily from global threat intelligence sources.

What you will do:
Design, deploy, and maintain high-performance Cassandra clusters using DataStax Enterprise (DSE)
Architect and optimize automated data pipelines to ingest, clean, enrich, and store billions of records daily
Configure and manage DSE Solr and Spark to support search and distributed processing at scale
Automate dataset ingestion workflows from unstructured surface, deep, and dark web sources
Handle cluster management, replication strategy, capacity planning, and performance tuning
Ensure data integrity, availability, and security across all distributed systems
Write and manage ETL processes, scripts, and APIs to support data flow automation
Monitor systems for bottlenecks, optimize queries and indexes, and resolve production issues
Research and integrate third-party data tools or AI-based enhancements (e.g., smart data parsing, deduplication, ML-based classification)
Collaborate with engineering, data science, and product teams to support HEROIC’s AI-powered cybersecurity platform

Requirements
Minimum 5 years' experience with Cassandra / DataStax Enterprise in production environments
Hands-on experience with DSE Cassandra, Solr, Apache Spark, CQL, and data modeling at scale
Strong understanding of NoSQL architecture, sharding, replication, and high availability
Advanced knowledge of Linux/Unix, shell scripting, and automation tools (e.g., Ansible, Terraform)
Proficiency in at least one programming language: Python, Java, or Scala
Experience building large-scale automated data ingestion systems or ETL workflows
Solid grasp of AI-enhanced data processing, including smart cleaning, deduplication, and classification
Excellent written and spoken English communication skills
Prior experience with cybersecurity or dark web data (preferred but not required)

Benefits
Position Type: Full-time
Location: Pune, India (Remote – Work from anywhere)
Compensation: Competitive salary based on experience
Benefits: Paid Time Off + Public Holidays
Professional Growth: Amazing upward mobility in a rapidly expanding company
Innovative Culture: Fast-paced, innovative, and mission-driven. Be part of a team that leverages AI and cutting-edge technologies.

About Us: HEROIC Cybersecurity ( HEROIC.com ) is building the future of cybersecurity. Unlike traditional cybersecurity solutions, HEROIC takes a predictive and proactive approach to intelligently secure our users before an attack or threat occurs. Our work environment is fast-paced, challenging and exciting. At HEROIC, you’ll work with a team of passionate, engaged individuals dedicated to intelligently securing the technology of people all over the world.
Position Keywords: DataStax Enterprise (DSE), Apache Cassandra, Apache Spark, Apache Solr, AWS, Jira, NoSQL, CQL (Cassandra Query Language), Data Modeling, Data Replication, ETL Pipelines, Data Deduplication, Data Lake, Linux/Unix Administration, Bash, Docker, Kubernetes, CI/CD, Python, Java, Distributed Systems, Cluster Management, Performance Tuning, High Availability, Disaster Recovery, AI-based Automation, Artificial Intelligence, Big Data, Dark Web Data
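To ground the DSE/Cassandra data-modeling skills this role calls for, here is a minimal sketch with the Python cassandra-driver; the contact points, keyspace, datacenter name, and schema are all illustrative assumptions:

    # Sketch: datacenter-aware replication and a time-bucketed table.
    from cassandra.cluster import Cluster

    cluster = Cluster(["10.0.0.1", "10.0.0.2"])  # hypothetical nodes
    session = cluster.connect()

    # Replication factor 3 in a datacenter named 'dc1' (assumed name).
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS breaches
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
    """)

    # Partitioning by (source, day) spreads ingest across nodes while
    # keeping one source's records for a day in a single partition.
    session.execute("""
        CREATE TABLE IF NOT EXISTS breaches.records (
            source text, day date, record_id timeuuid, payload text,
            PRIMARY KEY ((source, day), record_id)
        )
    """)

Choosing the partition key around both write distribution and read patterns is the core of Cassandra modeling at this scale.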

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Role: MongoDB Senior Database Administrator
Location: Offshore/India

Who are we looking for?
We are looking for a candidate with 7+ years of administrator experience across MongoDB, Cassandra, and Snowflake databases. This role is focused on production support, ensuring database performance, availability, and reliability across multiple clusters. The ideal candidate will be responsible for ensuring the availability, performance, and security of our NoSQL database environment. You will provide 24/7 production support, troubleshoot issues, monitor system health, optimize performance, and collaborate with cross-functional teams to maintain a reliable and efficient Snowflake platform.

Technical Skills
• Proven experience as a MongoDB/Cassandra/Snowflake database administrator or similar role in production support environments.
• 7+ years of hands-on experience as a MongoDB DBA supporting production environments.
• Strong understanding of MongoDB architecture, including replica sets, sharding, and the aggregation framework.
• Proficiency in writing and optimizing complex MongoDB queries and indexes.
• Experience with backup and recovery solutions (e.g., mongodump, mongorestore, Ops Manager).
• Solid knowledge of Linux/Unix systems and scripting (Shell, Python, or similar).
• Experience with monitoring tools like Prometheus, Grafana, DataStax OpsCenter, or similar.
• Understanding of distributed systems and high-availability concepts.
• Proficiency in troubleshooting cluster issues, performance tuning, and capacity planning.
• In-depth understanding of data management (e.g., permissions, recovery, security, and monitoring).
• Understanding of ETL/ELT tools and data integration patterns.
• Strong troubleshooting and problem-solving skills.
• Excellent communication and collaboration abilities.
• Ability to work in a 24/7 support rotation and handle urgent production issues.
• Strong understanding of relational database concepts.
• Experience with database design, modeling, and optimization is good to have.
• Familiarity with data security best practices and backup procedures.

Responsibilities
• Production Support & Incident Management: Provide 24/7 support for MongoDB environments, including on-call rotation. Monitor system health and respond to alerts, incidents, and performance degradation issues. Troubleshoot and resolve production database issues in a timely manner.
• Database Administration: Install, configure, and upgrade MongoDB clusters in on-prem or cloud environments. Perform routine maintenance including backups, restores, indexing, and data migration. Monitor and manage replica sets, sharding, and cluster balancing.
• Performance Tuning & Optimization: Analyze query and indexing strategies to improve performance. Tune MongoDB server parameters and JVM settings where applicable. Monitor and optimize disk I/O, memory usage, and CPU utilization.
• Security & Compliance: Implement and manage access control, roles, and authentication mechanisms (LDAP, x.509, SCRAM). Ensure encryption, auditing, and compliance with data governance and security policies.
• Automation & Monitoring: Create and maintain scripts for automation of routine tasks (e.g., backups, health checks). Set up and maintain monitoring tools (e.g., MongoDB Ops Manager, Prometheus/Grafana, MMS).
• Documentation & Collaboration: Maintain documentation on architecture, configurations, procedures, and incident reports. Work closely with application and infrastructure teams to support new releases and deployments.
Qualification
• Experience with MongoDB Atlas and other cloud-managed MongoDB services.
• MongoDB certification (MongoDB Certified DBA Associate/Professional).
• Experience with automation tools like Ansible, Terraform, or Puppet.
• Understanding of DevOps practices and CI/CD integration.
• Familiarity with other NoSQL and RDBMS technologies is a plus.
• Education qualification: Any degree from a reputed college.
• 7+ years overall IT experience.
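For a flavor of the index and query-plan work this role describes, here is a small pymongo sketch; the database, collection, and fields are invented for illustration:

    # Sketch: add a supporting index, then confirm the planner uses it.
    from pymongo import MongoClient, ASCENDING

    coll = MongoClient("mongodb://localhost:27017")["app"]["orders"]

    # Compound index: equality field first, then the range field.
    coll.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

    # explain() reveals IXSCAN (index scan) vs COLLSCAN (full collection
    # scan), the first thing a DBA checks when a query is slow.
    plan = coll.find(
        {"customer_id": 9, "created_at": {"$gte": 0}}
    ).explain()
    print(plan["queryPlanner"]["winningPlan"])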

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Full-time | Entry-Level | Freshers Welcome (B.Tech Required)
Location: Ahmedabad, Gujarat, India

⸻

About the Role
We are seeking a detail-oriented and passionate Junior Database Engineer to join our growing infrastructure team at our Hyderabad office. This is an excellent opportunity for fresh graduates who are eager to dive deep into relational database systems, query optimization, and data infrastructure engineering. You will be responsible for maintaining, optimizing, and scaling MySQL-based database systems that power our marketplace platform—supporting real-time, high-availability operations across global trade networks.

⸻

Core Responsibilities
• Support the administration and performance tuning of MySQL databases in production and development environments.
• Implement database design best practices including normalization, indexing strategies, and query optimization.
• Assist with managing master-slave replication, backup & recovery processes, and disaster recovery planning.
• Learn and support sharding strategies, data partitioning, and horizontal scaling for large datasets.
• Write and optimize complex SQL queries, stored procedures, and triggers.
• Monitor database health using monitoring tools and address bottlenecks, slow queries, or deadlocks.
• Collaborate with backend engineers and DevOps to ensure database reliability, scalability, and high availability.

⸻

Technical Skills & Requirements
• Fresh graduates (B.Tech in Computer Science, IT, or related fields) with academic or project experience in SQL and RDBMS.
• Strong understanding of relational database design, ACID principles, and transaction management.
• Hands-on experience with MySQL or compatible systems (MariaDB, Percona).
• Familiarity with ER modeling, data migration, and schema versioning.
• Exposure to concepts like:
  • Replication (master-slave/master-master)
  • Sharding & partitioning
  • Write/read splitting
  • Backup strategies (mysqldump, Percona XtraBackup) — see the sketch after this listing
  • Connection pooling and resource utilization
• Comfortable working in Linux environments and using CLI tools.
• Strong analytical skills and a curiosity to explore and solve data-layer challenges.

Interview Process
1. Shortlisting – Based on resume and relevant experience
2. Technical Assessment – Practical web development test
3. Final Interview – With the client’s hiring team

⸻

Why Join Us?
• Be part of a cutting-edge AI project with global exposure
• Work in a professional environment with real growth opportunities
• Gain valuable experience in client-facing, production-level development
• Strong potential for contract extension or full-time conversion

⸻

Interested in working on impactful web products for the future of AI?
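As a sketch of the backup strategies mentioned above (mysqldump in particular), the following wraps a consistent logical dump in Python; the paths, database name, and credential handling are assumptions for the example:

    # Sketch: date-stamped logical backup via mysqldump (hypothetical paths;
    # credentials are assumed to come from an option file such as ~/.my.cnf).
    import datetime
    import subprocess

    outfile = f"/backups/appdb-{datetime.date.today().isoformat()}.sql.gz"

    # --single-transaction takes a consistent InnoDB snapshot without
    # blocking writers; --routines includes stored procedures.
    dump = subprocess.Popen(
        ["mysqldump", "--single-transaction", "--routines", "appdb"],
        stdout=subprocess.PIPE,
    )
    with open(outfile, "wb") as f:
        subprocess.run(["gzip"], stdin=dump.stdout, stdout=f, check=True)
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mysqldump failed")
    print("wrote", outfile)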

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dive in and do the best work of your career at DigitalOcean. Journey alongside a strong community of top talent who are relentless in their drive to build the simplest scalable cloud. If you have a growth mindset, naturally like to think big and bold, and are energized by the fast-paced environment of a true industry disruptor, you’ll find your place here. We value winning together—while learning, having fun, and making a profound difference for the dreamers and builders in the world.

At DigitalOcean, we're not just simplifying cloud computing - we're revolutionizing it. We serve the developer community and the businesses they build with a relentless pursuit of simplicity. With our customers at the heart of what we do - and powered by a diverse culture that values boldness, speed, simplicity, ownership, and a growth mindset - we are committed to building truly useful products. Come swim with us!

Position Overview
We are looking for a Software Engineer who is passionate about writing clean, maintainable code and eager to contribute to the success of our platform. As a Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing. We’re looking for an experienced Software Engineer II to join our growing engineering team. You’ll work on building and maintaining features that directly impact our users, from creating scalable backend systems to improving performance for thousands of customers.

What You’ll Do
Design, develop, and maintain backend systems and services that power our platform.
Collaborate with cross-functional teams to design and implement new features, ensuring the best possible developer experience for our users.
Troubleshoot complex technical problems and find efficient solutions in a timely manner.
Write high-quality, testable code, and contribute to code reviews to maintain high standards of development practices.
Participate in architecture discussions and contribute to the direction of the product’s technical vision.
Continuously improve the reliability, scalability, and performance of the platform.
Participate in rotating on-call support, providing assistance with production systems when necessary.
Mentor and guide junior engineers, helping them grow technically and professionally.

What You’ll Add To DigitalOcean
A degree in Computer Science, Engineering, or a related field, or equivalent experience.
Proficiency in at least one modern programming language (e.g., Go, Python, Ruby, Java), with a strong understanding of data structures, algorithms, and software design principles.
Hands-on experience with cloud computing platforms and infrastructure-as-code practices.
Strong knowledge of RESTful API design and web services architecture.
Demonstrated ability to build scalable and reliable systems that operate in production at scale.
Excellent written and verbal communication skills to effectively collaborate with teams.
A deep understanding of testing principles and the ability to write automated tests that ensure the quality of code.
A passion for mentoring junior engineers and helping build a culture of learning and improvement.
Familiarity with agile methodologies, including sprint planning, continuous integration, and delivery.
Knowledge of advanced database concepts such as sharding, indexing, and performance tuning.
Exposure to monitoring and observability tools such as Prometheus, Grafana, or ELK Stack.
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Familiarity with Kubernetes, Docker, and other containerization/orchestration tools.

Why You’ll Like Working for DigitalOcean
We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.

We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.

We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to our flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.

We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.

We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.

This job is located in Hyderabad, India.

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description
Celcius Logistics Solutions Pvt Ltd is India's first and only asset-light cold chain marketplace, offering a web and app-based SaaS platform that brings the entire cold chain network online. Our platform enables seamless connections between transporters and manufacturers of perishable products, serving key sectors like dairy, pharmaceuticals, fresh agro produce, and frozen products. We provide comprehensive network monitoring and booking capabilities for reefer vehicle loads and cold storage space across India. With over 3,500 registered reefer trucks and 100+ cold storage facilities, we are revolutionizing the cold chain industry in India.

Role Description
We are looking for a Senior Database Administrator (DBA) to lead the design, implementation, and management of high-performance, highly available database systems. This role is critical to support real-time data ingestion, processing, and storage for our vehicle telemetry platforms. You will be responsible for ensuring 24/7 database availability, optimizing performance for millions of transactions per day, and enabling scalability for future growth.

Key Responsibilities:
Design and implement fault-tolerant, highly available database architectures.
Manage clustering, replication, and automated failover systems.
Ensure zero downtime during updates, scaling, and maintenance.
Monitor and optimize database performance and query efficiency.
Tune database configurations for peak performance under load.
Implement caching and indexing strategies.
Design data models for real-time telemetry ingestion.
Implement partitioning, sharding, and retention policies.
Ensure data consistency, archival, and lifecycle management.
Set up and enforce database access controls and encryption.
Perform regular security audits and comply with data regulations.
Implement backup, disaster recovery, and restore procedures.

Qualifications
10+ years as a hands-on DBA managing production databases
Experience handling high-volume, real-time data (ideally telemetry or IoT)
Familiarity with microservices-based architectures
Proven track record in implementing high-availability and disaster recovery solutions
Advanced knowledge of enterprise database systems (Oracle, PostgreSQL, MongoDB, etc.)
Experience with time-series and geospatial data
Hands-on experience with clustering, sharding, and replication
Expertise in performance tuning and query optimization
Proficiency in database automation and monitoring tools
Strong scripting skills (Python, Shell, etc.)
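The partitioning and retention responsibilities above lend themselves to a short sketch; this assumes PostgreSQL with psycopg2 and an invented telemetry schema, neither of which the listing specifies:

    # Sketch: day-partitioned telemetry table plus a retention drop.
    import psycopg2

    conn = psycopg2.connect("dbname=telemetry")  # assumed DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS readings (
                vehicle_id bigint, recorded_at timestamptz, payload jsonb
            ) PARTITION BY RANGE (recorded_at)
        """)
        # One partition per day keeps indexes small and ingest fast.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS readings_2025_08_01
            PARTITION OF readings
            FOR VALUES FROM ('2025-08-01') TO ('2025-08-02')
        """)
        # Retention: dropping an expired partition is instantaneous,
        # unlike a DELETE that rewrites and bloats the table.
        cur.execute("DROP TABLE IF EXISTS readings_2025_07_01")

In practice, partition creation and expiry would be automated on a schedule rather than hard-coded.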

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Gurugram, Delhi / NCR

Hybrid

about the role
Design, develop, and maintain MongoDB databases for high-performance applications.
Optimize queries and indexing strategies to improve database performance.
Ensure database security, backup, recovery, and disaster recovery planning.
Monitor database performance and troubleshoot issues proactively.
Implement and manage replication, sharding, and scaling strategies.
Collaborate with development teams to optimize data models and queries.
Perform regular upgrades, patches, and maintenance of MongoDB clusters.
Establish and enforce best practices for database administration and development.
Support and automate database operations using scripts and tools.

about you
Strong expertise in MongoDB development and administration.
Experience with database performance tuning and optimization.
Hands-on experience with replication, sharding, and indexing.
Proficiency in the MongoDB query language (aggregation framework, CRUD operations).
Knowledge of database security, authentication, and authorization mechanisms.
Experience with backup and recovery strategies.
Good to have: Experience with automation tools like Ansible, shell scripting, or Python.
Good to have: Familiarity with cloud-based MongoDB deployments (MongoDB Atlas, AWS, Azure, GCP).
Good to have: Knowledge of any RDBMS, especially Oracle or PostgreSQL.
Good to have: Exposure to other NoSQL databases like Cassandra, Redis, or DynamoDB.
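To illustrate the aggregation-framework proficiency this role asks for, a brief pymongo sketch follows; the collection and field names are invented for the example:

    # Sketch: top customers by paid-order revenue via the aggregation pipeline.
    from pymongo import MongoClient

    orders = MongoClient()["shop"]["orders"]

    pipeline = [
        {"$match": {"status": "paid"}},   # filter first so an index can apply
        {"$group": {"_id": "$customer_id",
                    "total": {"$sum": "$amount"},
                    "orders": {"$sum": 1}}},
        {"$sort": {"total": -1}},
        {"$limit": 10},
    ]
    for doc in orders.aggregate(pipeline):
        print(doc)

Placing $match before $group is the standard design choice here: it lets the planner use an index and shrinks the data the rest of the pipeline must process.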

Posted 2 weeks ago

Apply