7.0 years
0 Lacs
india
On-site
About Latinum: Latinum is hiring for multiple backend engineering roles. You must demonstrate strong capabilities in either Core Java backend engineering or Microservices and Cloud architecture, with working knowledge in the other. Candidates with strengths in both areas will be considered for senior roles. You will be part of a high-performance engineering team solving complex business problems through robust, scalable, and high-throughput systems.

Experience: A minimum of 7 years of hands-on experience is mandatory.

Java & Backend Engineering
- Java 8+ (Streams, Lambdas, Functional Interfaces, Optionals)
- Spring Core, Spring Boot, object-oriented principles, exception handling, immutability
- Multithreading (Executor framework, locks, concurrency utilities)
- Collections, data structures, algorithms, time/space complexity
- Kafka (producer/consumer, schema, error handling, observability)
- JPA, RDBMS/NoSQL, joins, indexing, data modelling, sharding, CDC
- JVM tuning, GC configuration, profiling, dump analysis
- Design patterns (GoF: creational, structural, behavioral)

Microservices, Cloud & Distributed Systems
- REST APIs, OpenAPI/Swagger, request/response handling, API design best practices
- Spring Boot, Spring Cloud, Spring Reactive
- Kafka Streams, CQRS, materialized views, event-driven patterns
- GraphQL (Apollo/Spring Boot), schema federation, resolvers, caching
- Cloud-native apps on AWS (Lambda, IAM, S3, containers)
- API security (OAuth 2.0, JWT, Keycloak, API Gateway configuration)
- CI/CD pipelines, Docker, Kubernetes, Terraform
- Observability with ELK, Prometheus, Grafana, Jaeger, Kiali

Additional Skills (Nice to Have)
- Node.js, React, Angular, Golang, Python, GenAI
- Web platforms: AEM, Sitecore
- Production support, rollbacks, canary deployments
- TDD, mocking, Postman, security/performance test automation
- Architecture artifacts: logical/sequence views, layering, solution detailing

Key Responsibilities:
- Design and develop scalable backend systems using Java and Spring Boot
- Build event-driven microservices and cloud-native APIs
- Implement secure, observable, and high-performance solutions
- Collaborate with teams to define architecture, patterns, and standards
- Contribute to solution design, code reviews, and production readiness
- Troubleshoot, optimize, and monitor distributed systems in production
- Mentor junior engineers (for senior roles)
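The multithreading skills the posting lists (Executor framework, concurrency utilities) have a close Python analogue in `concurrent.futures`. This is an illustrative sketch, not part of the posting; `fetch_order_total` is a hypothetical stand-in for real I/O-bound work such as a database or API call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_order_total(order_id):
    # Hypothetical I/O-bound task; a real service would hit a database or API.
    return order_id * 10

def totals_for(order_ids, max_workers=4):
    # Submit each lookup to a bounded worker pool and collect results as
    # they complete, keyed back to the order id that produced them.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_order_total, oid): oid for oid in order_ids}
        return {futures[f]: f.result() for f in as_completed(futures)}
```

Java's `ExecutorService.submit` / `Future.get` follows the same submit-and-collect shape; in both languages the bounded pool is what caps concurrent work.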
Posted 1 day ago
14.0 years
0 Lacs
hyderabad, telangana, india
On-site
At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our 35,000 employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

Come bring to life technologies to lead in Pharma-tech! The Enterprise Data Platforms team has developed an integrated and intuitive database, integration, and analytics platform. This platform enables Lilly business functions to quickly utilize database services, integration services, and the analytics platform.

What You Will Be Doing: Reporting to the Director, the Associate Director - Database Reliability Engineering will lead the operations, reliability, and performance engineering of on-premises and cloud database platforms supporting the Enterprise Data Platform. This leadership role focuses on scalability, automation, reliability, and operational excellence, enabling seamless support for mission-critical data services across analytical, transactional, and integration workloads. The role requires a blend of deep database expertise, SRE principles, and operational leadership to drive stability, resiliency, and continuous improvement for hybrid enterprise-scale environments.

How You Will Succeed:
- Define and execute the database operations and reliability strategy, balancing performance, availability, and cost efficiency across on-prem and cloud ecosystems (AWS, Azure, GCP).
- Establish and mature the DBRE model for database platforms, embedding reliability, observability, and automation in daily operations.
- Partner with architects, engineering teams, and stakeholders to align operations with enterprise data and digital transformation goals.

Team Leadership & Collaboration:
- Lead, mentor, and grow a team of Database Administrators, RE engineers, and platform specialists supporting global environments.
- Foster a DevOps and SRE culture, driving collaboration between data engineering, analytics, application, and infrastructure teams.
- Create operational playbooks and runbooks, and implement agentic AI for DB operations and observability, to standardize and streamline incident response and recovery processes.

Operations & Platform Management:
- Oversee day-to-day operations of enterprise databases including Oracle, SQL Server, PostgreSQL, and cloud-native services like RDS, Aurora, DynamoDB, and Azure SQL.
- Drive automation-first operations leveraging Infrastructure-as-Code (IaC), CI/CD pipelines, Ansible, and self-healing mechanisms.
- Implement robust disaster recovery, failover, and capacity management processes to ensure high platform availability.
- Lead performance engineering efforts, including capacity planning, tuning, and cost optimization.

SRE & Reliability Engineering:
- Build and maintain observability frameworks with metrics, logging, and tracing for proactive monitoring and alerting.
- Define and manage Service Level Objectives (SLOs), Service Level Indicators (SLIs), and Service Level Agreements (SLAs) for database services.
- Establish incident management, problem management, and root cause analysis (RCA) processes to improve Mean Time to Detect (MTTD) and Mean Time to Recovery (MTTR).
- Champion a blameless post-mortem culture and drive continuous reliability improvements.

Education & Experience:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 14+ years of experience in enterprise database operations, with at least 5 years in leadership roles managing hybrid or multi-cloud environments.
- Proven experience building and scaling SRE practices for data platforms in regulated, enterprise-scale environments.

Technical Skills:
- Expertise in relational and NoSQL database platforms (Oracle, SQL Server, PostgreSQL, graph databases, MongoDB, vector databases).
- Hands-on experience with cloud-native database services (AWS RDS, Aurora, DynamoDB; Azure SQL; GCP BigQuery).
- Proficiency in automation tools (Terraform, Ansible, CloudFormation) and CI/CD pipelines.
- Strong understanding of observability tools (Datadog, Prometheus, Grafana, Splunk, New Relic, or equivalent).
- Knowledge of high-availability architectures, replication, sharding, and database performance tuning.
- Familiarity with security frameworks and compliance requirements in regulated industries.

Key Performance Indicators (KPIs):
- Uptime and reliability of database services.
- Reduction in Mean Time to Detect (MTTD) and Mean Time to Recovery (MTTR).
- Automation coverage and reduction in manual operations.
- Cost efficiency and capacity utilization metrics.
- Compliance audit success rates and zero security incidents.

Lilly is dedicated to helping individuals with disabilities actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability, or any other legally protected status. #WeAreLilly
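As a rough illustration of the SLO/SLI and error-budget vocabulary this role uses, the underlying arithmetic for a latency percentile and an availability budget fits in a few lines of Python. Function names and the 99.9% target are illustrative assumptions, not Lilly's actual targets:

```python
import math

def percentile(samples, pct):
    # Nearest-rank percentile over a list of latency samples (e.g. ms).
    ranked = sorted(samples)
    k = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[k]

def error_budget_remaining(slo_target, total_requests, failed_requests):
    # For an availability SLO, the budget is the failures the target permits;
    # return the fraction of that budget still unspent.
    allowed = (1 - slo_target) * total_requests
    if allowed == 0:
        return 0.0
    return max(0.0, 1 - failed_requests / allowed)
```

For example, a 99.9% SLO over 100,000 requests permits about 100 failures, so 50 failures leaves roughly half the budget; tracking this per service is what makes the SLO enforceable.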
Posted 1 day ago
0 years
0 Lacs
hyderābād
On-site
Ready to build the future with AI? At Genpact, we don’t just keep up with technology—we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant, MySQL Database Administrator. We are looking for a highly skilled Lead Consultant – MySQL Database Administrator to manage, optimize, and support enterprise-grade MySQL database environments. The candidate will be responsible for installation, configuration, performance tuning, backup/recovery, and ensuring high availability of MySQL databases. This role requires hands-on technical expertise along with the ability to collaborate with cross-functional teams and support business-critical applications.

Responsibilities:
- Install, configure, and maintain MySQL database servers (on-premise and cloud).
- Monitor database performance and implement performance tuning, indexing, and query optimization.
- Manage backup, restore, replication, and failover procedures to ensure high availability.
- Administer user access, roles, and security policies to maintain compliance.
- Implement partitioning, clustering, and sharding strategies for large datasets.
- Perform capacity planning, patching, and version upgrades.
- Troubleshoot incidents and resolve database-related issues as per defined SLAs.
- Collaborate with application and infrastructure teams to support deployments and DB changes.
- Automate routine DBA tasks using scripting (Shell, Python, or Ansible).
- Document database standards, procedures, and best practices.
- Support a 24x7 on-call rotation for critical production environments.

Qualifications we seek in you!

Minimum Qualifications:
- Bachelor's degree in Information Technology, Computer Science, or a related field.
- Strong expertise in MySQL 5.7/8.0 administration, replication, clustering (Galera), and backups.
- Solid knowledge of Linux administration (RHEL, CentOS, Ubuntu).
- Proficiency in performance tuning, query optimization, and indexing.
- Experience with backup & recovery tools (mysqldump, Percona XtraBackup, etc.).
- Knowledge of cloud-hosted MySQL (AWS RDS, Aurora, GCP Cloud SQL, Azure Database for MySQL).
- Hands-on experience with HA/DR setups (replication, clustering, failover).
- Familiarity with monitoring tools (Nagios, Prometheus, Percona Monitoring, CloudWatch).
- Strong scripting skills in Shell/Python for automation.
- Understanding of security best practices and compliance (GDPR, HIPAA, PCI-DSS).

Preferred Qualifications/Skills:
- Certifications: MySQL DBA Certification / AWS Database Specialty / Google Cloud SQL certification.
- Exposure to NoSQL databases (MongoDB, Cassandra) is a plus.
- Experience in DevOps/CI-CD integration with DBs is desirable.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Sep 12, 2025, 9:29:31 AM | Unposting Date: Ongoing | Master Skills List: Consulting | Job Category: Full Time
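The indexing and query-optimization work described in this role can be demonstrated end-to-end with Python's stdlib `sqlite3` standing in for MySQL: the `EXPLAIN` syntax differs, but the effect of an index turning a full table scan into an index seek carries over. Table and index names here are invented for illustration:

```python
import sqlite3

# In-memory database with a small orders table; sqlite3 is a stand-in
# for MySQL so the example stays self-contained.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan_for(query):
    # EXPLAIN QUERY PLAN rows end with a human-readable detail column.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + query))

before = plan_for("SELECT * FROM orders WHERE customer_id = 42")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan_for("SELECT * FROM orders WHERE customer_id = 42")
# 'before' reports a scan of the table; 'after' names idx_orders_customer.
```

In MySQL the equivalent check is `EXPLAIN SELECT ...`, where the `type` column moving from `ALL` (full scan) to `ref` signals the index is being used.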
Posted 2 days ago
7.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Description

Role: MongoDB Senior Database Administrator
Location: Offshore/India

Who are we looking for? We are looking for 7+ years of administrator experience in MongoDB/Cassandra/Snowflake databases. This role is focused on production support, ensuring database performance, availability, and reliability across multiple clusters. The ideal candidate will be responsible for ensuring the availability, performance, and security of our NoSQL database environment. You will provide 24/7 production support, troubleshoot issues, monitor system health, optimize performance, and collaborate with cross-functional teams to maintain a reliable and efficient platform.

Technical Skills:
- Proven experience as a MongoDB/Cassandra/Snowflake database administrator or in a similar role in production support environments.
- 7+ years of hands-on experience as a MongoDB DBA supporting production environments.
- Strong understanding of MongoDB architecture, including replica sets, sharding, and the aggregation framework.
- Proficiency in writing and optimizing complex MongoDB queries and indexes.
- Experience with backup and recovery solutions (e.g., mongodump, mongorestore, Ops Manager).
- Solid knowledge of Linux/Unix systems and scripting (Shell, Python, or similar).
- Experience with monitoring tools like Prometheus, Grafana, DataStax OpsCenter, or similar.
- Understanding of distributed systems and high-availability concepts.
- Proficiency in troubleshooting cluster issues, performance tuning, and capacity planning.
- In-depth understanding of data management (e.g., permissions, recovery, security, and monitoring).
- Understanding of ETL/ELT tools and data integration patterns.
- Strong troubleshooting and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work in a 24/7 support rotation and handle urgent production issues.
- Strong understanding of relational database concepts.
- Experience with database design, modeling, and optimization is good to have.
- Familiarity with data security best practices and backup procedures.

Responsibilities:

Production Support & Incident Management:
- Provide 24/7 support for MongoDB environments, including on-call rotation.
- Monitor system health and respond to alerts, incidents, and performance degradation issues.
- Troubleshoot and resolve production database issues in a timely manner.

Database Administration:
- Install, configure, and upgrade MongoDB clusters in on-prem or cloud environments.
- Perform routine maintenance including backups, restores, indexing, and data migration.
- Monitor and manage replica sets, sharding, and cluster balancing.

Performance Tuning & Optimization:
- Analyze query and indexing strategies to improve performance.
- Tune MongoDB server parameters and JVM settings where applicable.
- Monitor and optimize disk I/O, memory usage, and CPU utilization.

Security & Compliance:
- Implement and manage access control, roles, and authentication mechanisms (LDAP, x.509, SCRAM).
- Ensure encryption, auditing, and compliance with data governance and security policies.

Automation & Monitoring:
- Create and maintain scripts for automation of routine tasks (e.g., backups, health checks).
- Set up and maintain monitoring tools (e.g., MongoDB Ops Manager, Prometheus/Grafana, MMS).

Documentation & Collaboration:
- Maintain documentation on architecture, configurations, procedures, and incident reports.
- Work closely with application and infrastructure teams to support new releases and deployments.

Qualification:
- Experience with MongoDB Atlas and other cloud-managed MongoDB services.
- MongoDB certification (MongoDB Certified DBA Associate/Professional).
- Experience with automation tools like Ansible, Terraform, or Puppet.
- Understanding of DevOps practices and CI/CD integration.
- Familiarity with other NoSQL and RDBMS technologies is a plus.

Education qualification: Any degree from a reputed college; 7+ years overall IT experience.
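One concept from the sharding responsibilities above, hashed shard-key routing, can be sketched in a few lines. This mimics what MongoDB's hashed shard keys achieve conceptually (spreading monotonically increasing keys evenly across shards); it is an illustration, not MongoDB's actual hash function:

```python
import hashlib

def shard_for(shard_key, num_shards):
    # Hash the key so writes spread evenly across shards even when keys
    # are monotonically increasing (timestamps, ObjectIds, sequence ids).
    digest = hashlib.md5(str(shard_key).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Distribute 1000 sequential keys over 4 shards and count per-shard load.
counts = [0] * 4
for i in range(1000):
    counts[shard_for(f"order-{i}", 4)] += 1
```

With a range-based shard key, those sequential ids would all land on one "hot" shard; the hash is what buys the even spread, at the cost of efficient range queries.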
Posted 2 days ago
10.0 years
29 - 35 Lacs
india
On-site
(India & North Macedonia pods · United States & India internship programs · Java, Spring Boot, Microservices, Angular, Flutter, Amazon Web Services, Google Cloud Platform, OVHcloud, MongoDB, MySQL, PostgreSQL, Jira with sprint-based delivery · sub-millisecond / microsecond-class latency)

About the role: Lead two senior engineering pods in India and North Macedonia, plus coordinated internship programs in the United States and India. This player-coach role owns strategy and hands-on excellence for a platform that targets sub-millisecond and microsecond-class latency on critical paths. You will shape architecture, raise the code quality bar, and run a clear, Jira-driven sprint model that delivers predictable outcomes at extreme performance levels.

What you’ll do:
- Institutionalize latency as a first-class goal: Define service-level objectives in microseconds/milliseconds (p50, p95, p99, p99.9), set per-service latency budgets, and enforce them in pull requests, load tests, and release gates.
- Architect for ultra-low latency: Evolve an application programming interface–first Spring Boot microservices platform on Kubernetes (Amazon Elastic Kubernetes Service, Google Kubernetes Engine, or OVHcloud Managed Kubernetes) with lightweight binary protocols and efficient serialization (for example, Protocol Buffers where appropriate); connection pooling and keep-alive tuning; zero-copy and off-heap patterns where beneficial; lock-free or low-contention designs (for example, ring buffers / disruptor patterns); asynchronous and reactive pipelines for back-pressure control; and network and operating system tuning (receive side scaling, interrupt moderation, jumbo frames where safe, non-uniform memory access awareness, thread pinning).
- Engineer the Java Virtual Machine for speed: Standardize low-pause garbage collectors (Z Garbage Collector or Shenandoah), heap sizing, just-in-time compiler warm-up, class data sharing, and profiling (Java Flight Recorder, async-profiler) with performance baselines checked into the repository.
- Data paths built for microseconds: Drive designs in MySQL, PostgreSQL, and MongoDB with partitioning/sharding, change-data-capture, prepared statements, read/write separation, hot caches (Redis), page-cache warming, and point-in-time recovery and disaster-recovery plans that do not compromise latency on the happy path.
- Quality, reliability, and safety at speed: Implement contract tests, end-to-end smoke tests, progressive delivery (canary and blue-green releases), and observability with high-resolution histograms for latency. Use OpenTelemetry traces, metrics, and logs to visualize tail latency and eliminate jitter.
- Security that respects performance: Apply Transport Layer Security termination with sensible cipher choices and hardware acceleration where available; run a secure software development life cycle with static, dynamic, and software-composition security testing and software bill of materials / artifact signing.
- Operate the sprint system: Make Jira the source of truth: well-formed epics, stories with acceptance criteria, two-week sprints, and ceremonies (refinement, planning, daily stand-ups, reviews, retrospectives). Publish live dashboards for velocity, burndown/burnup, cycle/lead time, throughput, and work-in-progress.
- Build and mentor player-coaches: Hire and grow Developers, Senior Developers, and a hands-on Engineering Manager at each site. Lead by example with design spikes, reference implementations, and deep code reviews.
- Run internship programs (United States and India): Create 10–12 week curricula, sandboxed backlogs, pair-programming, weekly demos, and conversion paths to full-time roles.

What success looks like (6–12 months):
- Latency targets met: Example targets: critical in-cluster request p99 ≤ 1 millisecond; in-process hot path p99 ≤ 150–300 microseconds; end-to-end user journey p95 ≤ 50 milliseconds (numbers will be finalized per service).
- Predictable delivery: ≥ 85% sprint predictability (planned versus completed) with reduced cycle time and mean time to recovery trending down quarter-over-quarter.
- Production confidence: Progressive delivery in place, service-level objectives consistently met, and zero critical vulnerabilities outstanding.
- Cost-aware performance: Measurable reduction in cost per customer or cost per transaction while maintaining latency goals.
- Talent engine: Two self-sufficient pods with strong engagement; internship programs meeting satisfaction and conversion targets.

Qualifications:
- Experience: 10+ years in software engineering; 5+ years leading multi-team organizations; proven leadership of distributed pods and early-career programs.
- Low-latency depth (Java focus): Spring Boot 3.x, asynchronous/reactive design, Netty-class networking, disruptor or ring-buffer patterns, off-heap strategies, garbage-collector tuning (Z Garbage Collector or Shenandoah), and Linux performance tuning (thread pinning, non-uniform memory access awareness, kernel parameters).
- Platform: Kubernetes, Helm, Argo Continuous Delivery, GitHub Actions or GitLab Continuous Integration, and infrastructure as code with Terraform across Amazon Web Services, Google Cloud Platform, and OVHcloud.
- Data: MySQL, PostgreSQL, MongoDB, and Redis; schema design, indexing, partitioning, performance tuning, and change-data-capture.
- Observability and resilience: OpenTelemetry traces/metrics/logs; Prometheus and Grafana; Elasticsearch/Logstash/Kibana or OpenSearch; incident management with blameless postmortems.
- Security: OAuth 2.0, OpenID Connect, and JSON Web Tokens; secrets management; static/dynamic/software-composition testing; supply-chain hardening.
- Leadership: A true player-coach who can set crisp strategy, mentor managers and senior engineers, and translate microsecond-level engineering choices into business outcomes.

Our stack (you will influence and improve):
- Backend: Java 17+, Spring Boot 3.x, Spring Cloud, RESTful and GraphQL APIs
- Web/Mobile: Angular, Flutter
- Infrastructure and Cloud: Kubernetes, Helm, Argo Continuous Delivery, Terraform, GitHub Actions or GitLab Continuous Integration; Amazon Web Services; Google Cloud Platform; OVHcloud
- Data: MySQL, PostgreSQL, MongoDB; Redis for hot-path caching
- Observability and Security: OpenTelemetry; Prometheus and Grafana; Elasticsearch/Logstash/Kibana or OpenSearch; OAuth 2.0, OpenID Connect, JSON Web Tokens; Vault/Secrets Manager
- Process: Jira with Scrum/Kanban; Confluence for specifications and runbooks

Job Types: Full-time, Permanent
Pay: ₹2,913,711.81 - ₹3,581,863.33 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
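The ring-buffer / disruptor pattern named in the low-latency bullets can be sketched minimally. A production implementation such as the LMAX Disruptor adds cache-line padding and memory barriers; this Python sketch only shows the index arithmetic that makes the structure allocation-free on the hot path:

```python
class RingBuffer:
    """Fixed-capacity single-producer/single-consumer ring.

    Power-of-two sized so wrapping an index is a cheap bitmask (& mask)
    instead of a modulo, and head/tail only ever increase, so fullness
    is simply head - tail == size.
    """

    def __init__(self, capacity_bits=4):
        self.size = 1 << capacity_bits
        self.mask = self.size - 1
        self.slots = [None] * self.size
        self.head = 0  # next write position (monotonically increasing)
        self.tail = 0  # next read position (monotonically increasing)

    def offer(self, item):
        if self.head - self.tail == self.size:
            return False  # full: caller applies back-pressure
        self.slots[self.head & self.mask] = item
        self.head += 1
        return True

    def poll(self):
        if self.tail == self.head:
            return None  # empty
        item = self.slots[self.tail & self.mask]
        self.tail += 1
        return item
```

Rejecting writes when full (rather than blocking or growing) is the back-pressure behavior the posting's "asynchronous and reactive pipelines" bullet refers to.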
Posted 3 days ago
6.0 years
0 Lacs
india
On-site
Company Description: LeanQubit is a Certified Integrator of Ignition, specializing in the manufacturing, energy, and utility industries. Our certification ensures expertise in understanding business fundamentals and applying Ignition-based solutions that fit specific needs. We provide flexible, cost-effective hardware solutions integrated with our proprietary FactoTools to enable automation and drive business results. Located in Mumbai, we maintain our certification for the latest Ignition software to offer cutting-edge solutions.

Job Description: Python Technical Lead (6+ Years Experience)

Job Summary: We are seeking a highly skilled and experienced Python Technical Lead to drive the development of scalable, high-performance software systems. The ideal candidate will have deep expertise in Python programming, data structures, memory optimization, database engineering, and machine learning/AI integration. You will lead a team of engineers, architect robust solutions, and ensure the delivery of high-quality, production-grade software.

Key Responsibilities:
- Lead the design, development, and deployment of complex Python-based software systems.
- Architect scalable, low-latency systems for high-volume, high-frequency data processing.
- Optimize Python applications for performance, memory usage, and concurrency.
- Design and manage efficient, scalable database architectures.
- Integrate machine learning models into production systems and pipelines.
- Collaborate with cross-functional teams including data scientists, DevOps, and product managers.
- Conduct code reviews, mentor team members, and enforce engineering best practices.
- Stay current with emerging technologies and recommend adoption where appropriate.

Technical Skills Required:

Programming & System Design:
- Expert in Python, including advanced features such as generators, decorators, context managers, and metaclasses.
- Strong grasp of data structures and algorithms for performance-critical applications.
- Experience with asynchronous programming using asyncio and concurrent.futures.
- Proficient in memory management, profiling, and optimization using tools like memory_profiler, objgraph, and tracemalloc.
- Designing for low-latency, high-throughput, and real-time processing.
- Experience with distributed systems, microservices, and event-driven architectures.

Database Engineering:
- Advanced experience with relational databases: PostgreSQL, MySQL, SQLite.
- NoSQL databases: MongoDB, Cassandra, Redis.
- Query optimization, indexing, partitioning, sharding, and replication.
- Data modeling for both OLTP and OLAP systems.
- Experience with streaming data and time-series databases (e.g., InfluxDB, Apache Druid).

Machine Learning & AI:
- Experience working with data scientists to integrate ML models into production systems.
- Familiarity with ML frameworks: TensorFlow, PyTorch, Scikit-learn.
- Understanding of model serving (e.g., TensorFlow Serving, ONNX, TorchServe).
- Experience with feature engineering, data pipelines, and model versioning.
- Exposure to computer vision, NLP, or predictive analytics is a plus.

Cloud & DevOps:
- Hands-on with AWS, Azure, or GCP.
- CI/CD pipelines: Jenkins, GitHub Actions, GitLab CI.
- Containerization: Docker, Kubernetes.
- Infrastructure as Code: Terraform, Ansible.

Testing & Quality:
- Unit and integration testing: unittest, pytest, mock.
- Performance testing: Locust, JMeter.
- Code quality tools: flake8, pylint, black.

Collaboration & Agile:
- Git-based version control (GitHub, GitLab, Bitbucket).
- Agile/Scrum methodologies.
- Tools: JIRA, Confluence, Slack, Miro.

Preferred Qualifications:
- Experience in real-time analytics, financial systems, or IoT platforms.
- Certifications in Python, cloud platforms, or ML/AI.
- Strong communication, leadership, and mentoring skills.
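Two of the Python skills this role names, decorators and asyncio, combine naturally. This self-contained sketch (function names invented for illustration) runs three simulated I/O calls concurrently and records each call's duration through a decorator:

```python
import asyncio
import functools
import time

def timed(fn):
    # Decorator that records the last call's wall-clock duration (in ms)
    # as an attribute on the wrapped coroutine function.
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = await fn(*args, **kwargs)
        wrapper.last_ms = (time.perf_counter() - start) * 1000
        return result
    return wrapper

@timed
async def load(part):
    await asyncio.sleep(0.01)  # stands in for a database or HTTP call
    return part * 2

async def main():
    # gather() runs the three loads concurrently, so total wall time is
    # roughly one sleep, not three, and results keep submission order.
    return await asyncio.gather(load(1), load(2), load(3))

results = asyncio.run(main())
```

The same shape scales to fan-out over hundreds of I/O-bound calls, which is where asyncio pays off over a thread pool.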
Posted 3 days ago
5.0 years
0 Lacs
india
On-site
Responsibilities:
- Lead & Mentor: Guide a team of developers, ensuring best practices in software development, clean architecture, and performance optimisation.
- Architect & Scale: Design and build highly scalable and reliable backend services using Node.js, MongoDB, and ElasticSearch, ensuring optimal indexing, sharding, and query performance.
- Frontend Development: Develop and optimise user interfaces using Vue.js (or React/Angular) for an exceptional customer experience.
- Event-Driven Systems: Design and implement real-time data processing pipelines using Kafka, RabbitMQ, or ActiveMQ.
- Optimise Performance: Implement autoscaling, database sharding, and indexing strategies to efficiently handle millions of transactions.
- Cross-Functional Collaboration: Work closely with Product Managers, Data Engineers, and DevOps teams to align on vision, execution, and business goals.
- Quality & Security: Implement secure, maintainable, and scalable codebases while adhering to industry best practices.
- Code Reviews & Standards: Drive high engineering standards, perform code reviews, and enforce best practices across the development team.
- Ownership & Delivery: Manage timelines, oversee deployments, and ensure smooth product releases with minimal downtime.

Requirements:
- 5+ years of hands-on software development experience with at least 2 years in a leadership role.
- Strong proficiency in Node.js, Vue.js (or React/Angular), MongoDB, and Elasticsearch.
- Experience in real-time data processing, message queues (Kafka, RabbitMQ, or ActiveMQ), and event-driven architectures.
- Scalability expertise: Proven track record of scaling services to 200k+ MAUs and handling high-throughput systems.
- Strong understanding of database sharding, indexing, and performance optimisation.
- Experience with distributed systems, microservices, and cloud infrastructures (AWS, GCP, or Azure).
- Proficiency in CI/CD pipelines, Git version control, and automated testing.
- Strong problem-solving, analytical, and debugging skills.
- Excellent communication and leadership abilities, able to guide engineers while collaborating with stakeholders.
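The event-driven pipelines this role describes reduce to a producer/consumer pattern. This stdlib sketch uses an in-process `queue.Queue` as a stand-in for Kafka/RabbitMQ/ActiveMQ (what a real broker adds is persistence, acknowledgements, and topic fan-out); the uppercase transform is a placeholder for real event processing:

```python
import queue
import threading

def run_pipeline(events):
    # Bounded queue: if the consumer falls behind, q.put() blocks the
    # producer, which is the simplest form of back-pressure.
    q = queue.Queue(maxsize=100)
    processed = []

    def producer():
        for event in events:
            q.put(event)
        q.put(None)  # sentinel: no more events

    def consumer():
        while True:
            event = q.get()
            if event is None:
                break
            processed.append(event.upper())  # placeholder transform

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

With a single consumer reading a FIFO queue, ordering is preserved; adding consumers trades that ordering for throughput, the same trade-off Kafka exposes through partitions.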
Posted 3 days ago
8.0 years
0 Lacs
gurugram, haryana, india
On-site
About Delhivery: Delhivery is India's leading fulfillment platform for digital commerce. With a vast logistics network spanning 18,000+ pin codes and over 2,500 cities, Delhivery provides a comprehensive suite of services including express parcel transportation, freight solutions, reverse logistics, cross-border commerce, warehousing, and cutting-edge technology services. Since 2011, we've fulfilled over 550 million transactions and empowered 10,000+ businesses, from startups to large enterprises.
Vision: To become the operating system for commerce in India by combining world-class infrastructure, robust logistics operations, and technology excellence.
About the Role: We're looking for an experienced Backend Technical Lead (5-8 years of experience) to lead the design, development, and scaling of backend systems powering large-scale logistics. In this role, you'll architect high-throughput systems, guide a team of engineers, and integrate AI tools and agentic frameworks (Model Context Protocol (MCP), Multi-Agent Systems (MAS), etc.) into your development and decision-making workflows. You'll drive engineering excellence, contribute to architectural decisions, and nurture a culture of ownership, innovation, and AI-native development. This is a hands-on leadership position where you'll build systems, lead design reviews, and mentor a high-performing team while pushing the boundaries of AI-assisted engineering.
What You'll Do:
Technical Leadership & Ownership:
Lead the architecture, design, and delivery of scalable RESTful and gRPC APIs.
Guide system design and backend workflows for microservices handling millions of transactions daily.
Drive engineering best practices in code quality, testing, observability, and deployment.
Team Leadership:
Mentor and coach a team of backend and frontend developers, helping them grow technically and professionally.
Conduct regular design/code reviews and set high standards for system performance, reliability, and scalability.
Drive sprint planning, task breakdowns, and technical execution with a bias for action and quality.
AI-Native Development:
Leverage modern AI tools (e.g., Cursor AI, Copilot, Codex, Gemini, Windsurf) for prompt-based code generation, refactoring, and documentation; intelligent debugging and observability enhancements; and test case generation and validation.
Experiment with and implement agentic frameworks (MCP, MAS, etc.) for building tools, automating workflows, optimizing tasks, and enabling intelligent system behavior.
Contribute to and maintain internal AI prompt libraries, and lead adoption of AI-assisted development practices.
Systems & Infrastructure:
Own the end-to-end lifecycle of backend services, from design to rollout and production monitoring.
Work with data models across PostgreSQL, DynamoDB, or MongoDB.
Optimize services using metrics, logs, traces, and performance profiling tools.
Cross-functional Collaboration:
Partner closely with Product, Design, QA, DevOps, and other Engineering teams to deliver cohesive product experiences.
Influence product roadmaps and business decisions through a technical lens.
What We're Looking For:
5-8 years of backend development experience, with at least 1 year as a team/tech lead.
Deep expertise in Python, Go, or Java, and comfort across multiple backend frameworks.
Solid understanding of Data Structures, Algorithms, System Design, and SQL.
Proven experience in building, scaling, and maintaining microservices.
Strong hands-on experience with REST/gRPC APIs, containerized deployments, and CI/CD pipelines.
Strong database fundamentals: indexing, query optimization, replication, consistency models, and schema design.
Exposure to system scalability concepts: horizontal/vertical scaling, sharding, partitioning, rate limiting, load balancing, and caching strategies.
Effective use of AI developer tools (Cursor AI, Codex, Copilot, Gemini, Windsurf) in real-world engineering workflows.
Exposure to agentic frameworks and AI concepts such as MCP, Multi-Agent Systems (MAS), Belief-Desire-Intention (BDI), or NLP pipelines.
Cloud proficiency (any cloud, AWS preferred), including deployment, cost optimization, and infrastructure scaling.
Excellent communication and stakeholder management skills.
A strong sense of ownership, problem-solving ability, and collaborative mindset.
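Rate limiting, one of the scalability concepts this posting asks about, is commonly implemented as a token bucket. As an illustrative sketch (not tied to any particular employer's stack; the injected `clock` parameter is just a testing convenience), it might look like:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: sustains `rate` requests per second
    while allowing bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a burst capacity of 2 and no time passing, the third call is rejected.
fake_time = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: fake_time[0])
results = [bucket.allow(), bucket.allow(), bucket.allow()]  # [True, True, False]
```

In a real service this state would live in a shared store such as Redis so that all instances behind a load balancer enforce the same limit.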
Posted 3 days ago
4.0 years
0 Lacs
gurugram, haryana, india
On-site
Responsibilities
Work on the development of RESTful APIs in NodeJS/ExpressJS/NestJS.
Convert requirements into functional modules using the latest versions of the frameworks (Angular/Express).
Develop new features using object-oriented standards and the MVC development architecture.
Work on independent modules using microservice architecture and design principles.
Collaborate with the UI team and integrate new design features on the frontend in Angular.
Collaborate with cross-functional teams to create new dashboards and reporting screens for high-volume data.
Update applications developed in older versions of the frameworks to the latest versions of Angular/NodeJS.
Demonstrate strong knowledge of database schema creation and optimization for both MongoDB and PostgreSQL.
Work on unit testing, troubleshooting, and debugging MEAN applications.
Manage code integrity and security through a version control system like SVN.
Migrate applications from one server to another while ensuring high availability.
Lead, mentor, manage, and train junior developers, and deploy them effectively across application modules.
Document and log deliverables to monitor errors.
Skills
4+ years of experience in backend development in NodeJS, specifically in RESTful API development.
4+ years of experience in frontend development in Angular, following Angular framework standards such as dependency injection, pipes, services, and directives.
5+ years of experience with both relational databases like PostgreSQL and document databases like MongoDB.
4+ years of experience with database schemas and ORMs such as Mongoose, Sequelize, or similar.
Experience in consuming, logging, and managing huge volumes of data for optimized speed and accuracy.
Experience with advanced optimization concepts such as database sharding, database cloning, replication, load balancing, and multithreading.
Experience working with complex dashboards and reporting is a big plus.
Posted 5 days ago
2.0 years
0 Lacs
coimbatore, tamil nadu, india
On-site
Backend Engineer - Integration Specialist
Company: Cobay Technology Private Limited
Location: Coimbatore, Tamil Nadu (On-site)
Experience: 2 - 4 years
Team: Platform Engineering Team
🚀 About the Role
We're seeking a talented Senior Backend Engineer with strong integration expertise to architect and build the core infrastructure of our multi-tenant SaaS platform. You'll be responsible for designing scalable backend systems and creating robust integrations with various third-party services and APIs. As our Backend Engineer, you'll work on high-throughput systems processing millions of transactions daily, building RESTful APIs, implementing event-driven architectures, and ensuring our platform maintains 99.9% uptime while scaling to meet growing demands.
💼 Key Responsibilities
Backend Development
Design and implement RESTful and GraphQL APIs for internal and external consumption
Build robust webhook handlers for real-time event processing
Develop authentication and authorization systems (OAuth 2.0, JWT, API keys)
System Architecture & Design
Architect scalable microservices supporting a multi-tenant SaaS architecture
Design event-driven systems using message queues for asynchronous processing
Design systems capable of handling 10,000+ concurrent requests
Build comprehensive monitoring, logging, and alerting systems
Collaboration & Leadership
Collaborate with frontend teams to design optimal API contracts
Mentor junior developers on backend best practices
🛠 Technical Requirements
Backend Development: 2+ years building production APIs with Node.js and Express.js. Expert-level understanding of RESTful design, microservices architecture, and event-driven systems. Strong proficiency in JavaScript/TypeScript and asynchronous programming patterns.
Database & Data Management: Expertise in both NoSQL (MongoDB - aggregation pipelines, indexing, sharding) and SQL databases. Experience with caching strategies (Redis), query optimization, and handling high-volume transactions (10,000+ concurrent requests).
Integration & APIs: Proven experience building robust third-party integrations, webhook handlers, and API gateways. Strong knowledge of authentication patterns (OAuth 2.0, JWT), rate limiting, and API versioning.
Cloud & DevOps: Hands-on experience with AWS services (EC2, Lambda, S3, SQS, EventBridge), Docker containerization, and CI/CD pipelines.
Testing & Code Quality: Strong testing practices with 80%+ code coverage; experience with TDD/BDD methodologies. Proficient in Git workflows, code review processes, and maintaining high code quality standards using tools like ESLint and SonarQube.
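A "robust webhook handler" typically begins by verifying the event's HMAC signature before parsing the body. This is a minimal, framework-agnostic sketch of that pattern; the `whsec_demo` secret and the event shape are hypothetical, and real providers (GitHub, Stripe, etc.) wrap the same idea in their own header formats:

```python
import hashlib
import hmac
import json

def verify_webhook(secret: bytes, payload: bytes, received_sig: str) -> bool:
    """Check an HMAC-SHA256 signature over the raw request body."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, received_sig)

def handle_webhook(secret: bytes, payload: bytes, received_sig: str) -> dict:
    """Reject unsigned or forged events before touching the payload."""
    if not verify_webhook(secret, payload, received_sig):
        return {"status": 401, "error": "invalid signature"}
    event = json.loads(payload)
    return {"status": 200, "event_type": event.get("type")}

secret = b"whsec_demo"  # hypothetical shared secret from the provider dashboard
body = json.dumps({"type": "order.created", "id": 42}).encode()
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
ok = handle_webhook(secret, body, good_sig)     # accepted
bad = handle_webhook(secret, body, "deadbeef")  # rejected
```

The key design point is signing the raw bytes as received: re-serializing the JSON first can change whitespace or key order and break verification.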
Posted 5 days ago
0.0 years
0 - 0 Lacs
kovaipudur, coimbatore, tamil nadu
Remote
About the Role
We're seeking a talented Senior Backend Engineer with strong integration expertise to architect and build the core infrastructure of our multi-tenant SaaS platform. You'll be responsible for designing scalable backend systems and creating robust integrations with various third-party services and APIs. As our Backend Engineer, you'll work on high-throughput systems processing millions of transactions daily, building RESTful APIs, implementing event-driven architectures, and ensuring our platform maintains 99.9% uptime while scaling to meet growing demands.
Backend Development
Design and implement RESTful and GraphQL APIs for internal and external consumption
Build robust webhook handlers for real-time event processing
Develop authentication and authorization systems (OAuth 2.0, JWT, API keys)
System Architecture & Design
Architect scalable microservices supporting a multi-tenant SaaS architecture
Design event-driven systems using message queues for asynchronous processing
Design systems capable of handling 10,000+ concurrent requests
Build comprehensive monitoring, logging, and alerting systems
Collaboration & Leadership
Collaborate with frontend teams to design optimal API contracts
Mentor junior developers on backend best practices
Technical Requirements
Backend Development: 2+ years building production APIs with Node.js and Express.js. Expert-level understanding of RESTful design, microservices architecture, and event-driven systems. Strong proficiency in JavaScript/TypeScript and asynchronous programming patterns.
Database & Data Management: Expertise in both NoSQL (MongoDB - aggregation pipelines, indexing, sharding) and SQL databases. Experience with caching strategies (Redis), query optimization, and handling high-volume transactions (10,000+ concurrent requests).
Integration & APIs: Proven experience building robust third-party integrations, webhook handlers, and API gateways. Strong knowledge of authentication patterns (OAuth 2.0, JWT), rate limiting, and API versioning.
Cloud & DevOps: Hands-on experience with AWS services (EC2, Lambda, S3, SQS, EventBridge), Docker containerization, and CI/CD pipelines.
Testing & Code Quality: Strong testing practices with 80%+ code coverage; experience with TDD/BDD methodologies. Proficient in Git workflows, code review processes, and maintaining high code quality standards using tools like ESLint and SonarQube.
Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹60,000.00 per month
Benefits: Health insurance, Provident Fund, Work from home
Ability to commute/relocate: Kovaipudur, Coimbatore, Tamil Nadu: reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person
Speak with the employer: +91 6385291572
Posted 5 days ago
5.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Database Reliability Engineering
Position Overview
We are seeking a highly skilled Database Administrator (DBA) with extensive hands-on experience in managing AWS OpenSearch or Elasticsearch clusters, as well as expertise in in-memory databases (e.g., Redis, Memcached) or graph databases (e.g., Neo4j). The ideal candidate will have a deep understanding of data and index shard management, schema planning, and data modelling for distributed search, in-memory, and graph-based systems. Additionally, the candidate must be proficient in DevOps tools and have experience managing critical production systems that directly impact revenue. The role requires a proactive learner capable of adapting quickly to new technologies in a fast-paced environment.
Key Responsibilities
Manage and optimize AWS OpenSearch/Elasticsearch, in-memory (e.g., Redis, Memcached), and graph database (e.g., Neo4j) clusters for high availability and performance.
Optimize data distribution, sharding, caching, and graph traversals for efficient query performance.
Design schemas and data models for search, caching, and graph-based use cases.
Tune query performance and indexing strategies to meet SLAs for critical production systems.
Automate deployment, scaling, and monitoring using DevOps tools (e.g., Terraform, Ansible, Docker, Kubernetes).
Troubleshoot and resolve issues in real time for revenue-impacting systems.
Implement monitoring and alerting (e.g., CloudWatch, Prometheus) to detect performance issues.
Ensure security and compliance with encryption and access controls (IAM, VPC).
Collaborate with development and DevOps teams on application integration.
Stay updated on advancements in relevant database technologies.
Requirements and Skills
Education: B.Tech / MCA
Experience: 5 - 10 years overall
5+ years managing AWS OpenSearch or Elasticsearch in production.
2+ years managing in-memory databases (e.g., Redis, Memcached) or graph databases (e.g., Neo4j).
Proven track record with critical production systems impacting revenue.
Expertise in data and index shard management, caching strategies, and node-relationship modelling.
Strong experience in schema design and data modelling for search, caching, and graph use cases.
Technical Skills
Proficient in AWS services (e.g., OpenSearch Service, EC2, ElastiCache, CloudWatch).
Advanced knowledge of Elasticsearch/OpenSearch APIs, query DSL, and optimization.
Experience with in-memory databases (e.g., Redis, Memcached) or graph databases (e.g., Neo4j Cypher).
Skilled in DevOps tools:
IaC: Terraform, CloudFormation
Configuration: Ansible, Chef, Puppet
CI/CD: Jenkins, GitLab CI, GitHub Actions
Containerization: Docker, Kubernetes
Monitoring: Prometheus, Grafana, ELK Stack
Familiarity with scripting (e.g., Python, Bash) for automation.
Soft Skills
Strong problem-solving and analytical skills for distributed systems.
Quick learner, adaptable to new tools and processes.
Excellent communication and collaboration skills.
Certifications (Preferred)
AWS Certified Solutions Architect, AWS Certified Database - Specialty, or Elastic Certified Engineer.
Neo4j Certified Professional or Redis-related certifications.
DevOps certifications (e.g., Docker, Kubernetes).
Good to Have
Experience with other AWS databases (e.g., RDS, DynamoDB, Neptune).
Knowledge of big data tools (e.g., Hadoop, Spark, Kafka).
Familiarity with log analytics platforms (e.g., ELK Stack, Splunk).
Responsibilities of the Job Include (But Are Not Limited To)
Serve as the primary POC for advanced-level troubleshooting and perform hands-on DB/system administration
Automate regular administrative tasks and ensure correctness
Document data standards, procedures, and dictionary definitions
Collaborate with SREs and developers on DB design, performance tuning, and production support
Help evolve application/database architecture and support environment creation
Respond to on-call incidents and support technical queries from staff, management, and vendors
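The caching strategies this role centers on usually follow the cache-aside pattern: check the cache, fall back to the primary store on a miss, then populate the cache with a TTL. A minimal sketch, with a plain dict standing in for Redis (with redis-py the equivalent calls would be `get`/`setex`) and an injected clock for determinism:

```python
import time

class CacheAside:
    """Cache-aside with TTL: hit the cache first, load from the source
    of truth on a miss, then cache the result for `ttl_seconds`."""

    def __init__(self, loader, ttl_seconds=60, clock=time.monotonic):
        self.loader = loader   # function that queries the primary store
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}        # key -> (value, expires_at); stand-in for Redis
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > self.clock():
            return entry[0]                      # cache hit, still fresh
        self.misses += 1
        value = self.loader(key)                 # miss: load from the database
        self.store[key] = (value, self.clock() + self.ttl)
        return value

fake_now = [0.0]
cache = CacheAside(loader=lambda k: f"row-{k}", ttl_seconds=30,
                   clock=lambda: fake_now[0])
a = cache.get("user:1")   # miss: loader is called
b = cache.get("user:1")   # hit: served from cache
fake_now[0] = 31.0        # TTL has expired
c = cache.get("user:1")   # miss again: reloaded and re-cached
```

TTL choice is the operational lever here: too short and the primary store absorbs the load, too long and readers see stale data after writes.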
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You should have a Bachelor's degree in Computer Science, Information Technology, or a related field, with a minimum of 3 years of experience in database management and administration. Proficiency in one or more database management systems such as MySQL, PostgreSQL, or MongoDB is required. Your responsibilities will include writing and optimizing SQL queries, performing routine maintenance tasks such as software updates and backups, and implementing and maintaining database security measures like access control, authentication, and encryption. You are expected to have a good understanding of query optimization and query tuning, and to ensure compliance with data privacy regulations such as GDPR and HIPAA.
As part of your role, you will plan and implement strategies for scaling databases as data volume and traffic increase, and evaluate and implement sharding, partitioning, or other scalability techniques when necessary. Deploying and configuring monitoring tools to track database performance and security, as well as setting up alerting systems for proactive issue resolution, will also be part of your responsibilities.
Knowledge of data security best practices and regulatory compliance, along with problem-solving and troubleshooting abilities, is essential. Strong communication and collaboration skills are required for this role, along with experience in scripting and automation for database tasks. Familiarity with monitoring and management tools for database management systems (DBMS) will be beneficial.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
The MongoDB DBA position requires a professional with over 5 years of experience, based in Ahmedabad. As the MongoDB DBA, you will play a crucial role in maintaining smooth operations and catering to specific business requirements effectively. Your primary responsibilities will include database administration for MongoDB clusters, performance tuning, backup/restore procedures, and sharding. Additionally, you will be expected to have hands-on experience with replication and failover strategies to ensure the reliability and efficiency of the systems. If you possess the necessary qualifications and experience, are able to start immediately, and are excited about this opportunity, we encourage you to submit your resume to cv.hr@evokehr.com.
Posted 6 days ago
5.0 years
0 Lacs
india
Remote
About the Job:
HEROIC Cybersecurity (HEROIC.com) is seeking a Senior Data Infrastructure Engineer with deep expertise in DataStax Enterprise (DSE) and Apache Cassandra to help architect, scale, and maintain the data infrastructure that powers our cybersecurity intelligence platforms. You will be responsible for designing and managing fully automated big data pipelines that ingest, process, and serve hundreds of billions of breached and leaked records sourced from the surface, deep, and dark web. You'll work with DSE Cassandra, Solr, and Spark, helping us move toward a 99% automated pipeline for data ingestion, enrichment, deduplication, and indexing, all built for scale, speed, and reliability. This position is critical in ensuring our systems are fast, reliable, and resilient as we ingest thousands of unique datasets daily from global threat intelligence sources.
What you will do:
Design, deploy, and maintain high-performance Cassandra clusters using DataStax Enterprise (DSE)
Architect and optimize automated data pipelines to ingest, clean, enrich, and store billions of records daily
Configure and manage DSE Solr and Spark to support search and distributed processing at scale
Automate dataset ingestion workflows from unstructured surface, deep, and dark web sources
Handle cluster management, replication strategy, capacity planning, and performance tuning
Ensure data integrity, availability, and security across all distributed systems
Write and manage ETL processes, scripts, and APIs to support data flow automation
Monitor systems for bottlenecks, optimize queries and indexes, and resolve production issues
Research and integrate third-party data tools or AI-based enhancements (e.g., smart data parsing, deduplication, ML-based classification)
Collaborate with engineering, data science, and product teams to support HEROIC's AI-powered cybersecurity platform
What we are looking for:
Minimum 5 years of experience with Cassandra / DataStax Enterprise in production environments
Hands-on experience with DSE Cassandra, Solr, Apache Spark, CQL, and data modeling at scale
Strong understanding of NoSQL architecture, sharding, replication, and high availability
Advanced knowledge of Linux/Unix, shell scripting, and automation tools (e.g., Ansible, Terraform)
Proficiency in at least one programming language: Python, Java, or Scala
Experience building large-scale automated data ingestion systems or ETL workflows
Solid grasp of AI-enhanced data processing, including smart cleaning, deduplication, and classification
Excellent written and spoken English communication skills
Prior experience with cybersecurity or dark web data (preferred but not required)
What we offer:
Position Type: Full-time
Location: Pune, India (Remote - work from anywhere)
Compensation: $1,800-2,700 monthly (depending on experience)
Benefits: Paid time off + public holidays
Professional Growth: Amazing upward mobility in a rapidly expanding company
Innovative Culture: Fast-paced, innovative, and mission-driven; be part of a team that leverages AI and cutting-edge technologies
About Us:
HEROIC Cybersecurity (HEROIC.com) is building the future of cybersecurity. Unlike traditional cybersecurity solutions, HEROIC takes a predictive and proactive approach to intelligently secure our users before an attack or threat occurs. Our work environment is fast-paced, challenging, and exciting. At HEROIC, you'll work with a team of passionate, engaged individuals dedicated to intelligently securing the technology of people all over the world.
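The deduplication step in a pipeline like this often reduces to fingerprinting each record over its identity fields and keeping the first occurrence. A small sketch of that idea; the `key_fields` choice and the record shape are illustrative, not HEROIC's actual schema:

```python
import hashlib

def record_fingerprint(record: dict, key_fields=("email", "source")) -> str:
    """Deterministic fingerprint over the fields that define uniqueness,
    normalized (trimmed, lowercased) so trivial variants collide."""
    canonical = "|".join(str(record.get(f, "")).strip().lower() for f in key_fields)
    return hashlib.sha256(canonical.encode()).hexdigest()

def dedupe(records):
    """Stream records, keeping the first occurrence of each fingerprint.
    At billions of records the 'seen' set would live in the store itself
    (e.g. Cassandra INSERT ... IF NOT EXISTS) or a Bloom filter, not RAM."""
    seen, unique = set(), []
    for rec in records:
        fp = record_fingerprint(rec)
        if fp not in seen:
            seen.add(fp)
            unique.append(rec)
    return unique

batch = [
    {"email": "A@example.com", "source": "dump1", "pw": "x"},
    {"email": "a@example.com", "source": "dump1", "pw": "x"},  # dup after normalization
    {"email": "a@example.com", "source": "dump2", "pw": "x"},  # same email, new source
]
deduped = dedupe(batch)  # keeps 2 of the 3 records
```

Normalizing before hashing is what makes the second record collapse into the first; without it, case differences alone would defeat deduplication.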
Posted 1 week ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
About the Role
We are seeking a Senior Backend Engineer with expertise in Java, Spring Boot, Python (FastAPI), API design, security, and enterprise platform development. This role focuses on building robust, scalable, and secure backend services that power the Z2 Enterprise Platform, enabling rich functionality across supply chain applications, user workflows, and AI-driven features.
In addition to traditional backend responsibilities, this role includes integrating with Generative AI systems and LLMs, building API endpoints for intelligent assistants, enabling search and indexing via systems like Elasticsearch, and supporting scalable PostgreSQL-based persistence layers. You'll work closely with frontend, DevOps, product, and AI/ML teams to deliver enterprise-grade capabilities with modern architectural patterns.
You will be responsible for designing and implementing secure, high-performance, and extensible backend components that support multi-tenant, microservices-based enterprise applications, while also embedding AI-powered workflows and intelligent features using LLMs and vector stores. The ideal candidate combines deep backend engineering skills with a strong product mindset and is excited about delivering AI-native enterprise features at scale.
Key Responsibilities
Backend Services & Architecture
Design, develop, and maintain backend services and APIs using Java (Spring Boot) and Python (FastAPI).
Architect systems for scalability, security, and performance in microservices/cloud-native environments.
Implement modular service layers to support domain logic, integrations, and cross-cutting concerns.
Database Performance & Scaling
Optimize PostgreSQL schemas, queries, indexes, and connections for high-performance data access.
Architect scalable data models and sharding/partitioning strategies for large datasets.
Proactively monitor and troubleshoot database performance bottlenecks.
API Design & Data Access
Build secure, versioned RESTful APIs (GraphQL experience is a plus) for web, mobile, and AI integrations.
Implement structured API contracts with proper governance, validation, and observability.
Ensure data access follows RBAC/ABAC principles and supports multi-tenant contexts.
LLM & AI Integration
Integrate with LLM platforms (OpenAI, Anthropic, Hugging Face, etc.) to expose AI-powered capabilities via backend services.
Build RAG (Retrieval-Augmented Generation) flows and streaming pipelines to support GenAI workflows.
Collaborate with AI/ML engineers to expose model endpoints, prompt templates, and feedback loops via APIs.
Manage vector databases (e.g., pgvector, Pinecone) and embed AI inference into core platform services.
Security & Best Practices
Implement strong authentication/authorization using Spring Security, JWT, OAuth2, and SSO.
Enforce backend security best practices, API rate limiting, and protection against common vulnerabilities (OWASP).
Drive code quality through unit testing, static analysis, and CI/CD workflows.
Collaboration & Mentorship
Collaborate with product managers, frontend engineers, DevOps, and AI/ML teams to define architecture and roadmap.
Mentor junior developers and conduct code reviews and design walkthroughs.
Contribute to technical documentation, RFCs, and system design artifacts.
Required Skills and Qualifications
Core Backend Skills
5+ years of backend development experience with Java (Spring Boot) and Python (FastAPI or Flask).
Strong understanding of object-oriented design, microservices, and distributed systems.
Data Engineering / Infrastructure Expertise
Experience building and managing ETL pipelines for large-scale data ingestion.
Strong experience with Elasticsearch / OpenSearch, schema design, indexing, and tuning.
Deep knowledge of PostgreSQL performance optimization, scaling, partitioning, and replication.
API Design & Cloud-Native Architecture
Proficiency in designing REST APIs; familiarity with GraphQL is a plus.
Experience with Docker, Kubernetes, Helm, and CI/CD pipelines using GitHub Actions, Jenkins, or ArgoCD.
Security & Performance
Hands-on experience with Spring Security, OAuth2, JWT, and secure API practices.
Familiarity with profiling, monitoring, and performance tuning tools (Prometheus, Grafana, etc.).
AI/LLM Integration (Preferred)
Exposure to LLM integration (e.g., OpenAI, Hugging Face Transformers, LangChain).
Experience with vector databases, embeddings, and RAG architectures.
Understanding of AI observability, prompt logging, and feedback-driven refinement is a plus.
Preferred Qualifications
Experience with event-driven architectures (Kafka, RabbitMQ).
Familiarity with serverless deployments (AWS Lambda, API Gateway).
Exposure to data observability and anomaly detection tools.
Prior work on LLMOps or GenAI product workflows is highly desirable.
Education
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
What We Offer
Opportunity to work on mission-critical backend systems and data infrastructure.
Competitive salary and comprehensive benefits package.
Collaborative and innovative work environment with modern tools and processes.
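The retrieval step of the RAG flows mentioned above boils down to ranking stored embeddings by similarity to the query embedding. A dependency-free sketch using cosine similarity; the 3-dimensional "embeddings" and document texts are toy values (real embeddings have hundreds of dimensions, and in production a vector store such as pgvector or Pinecone performs this ranking server-side):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, top_k=2):
    """Return the texts of the top_k chunks most similar to the query."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]

# Toy corpus: two shipping-related chunks and one unrelated one.
docs = [
    {"text": "shipment delayed at hub",  "vec": [0.9, 0.1, 0.0]},
    {"text": "invoice payment terms",    "vec": [0.0, 0.2, 0.9]},
    {"text": "package stuck in transit", "vec": [0.8, 0.3, 0.1]},
]
# A query vector close to the shipping chunks retrieves them, not the invoice one.
context = retrieve([1.0, 0.2, 0.0], docs, top_k=2)
```

The retrieved chunks are then concatenated into the LLM prompt as context, which is the "augmented" part of Retrieval-Augmented Generation.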
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
pune, maharashtra, india
On-site
Job Location: Pune | Experience: 2+ Years | Qualification: Any Graduate | Posted On: 08 May, 2025
Job Description
We are seeking a MongoDB (or NoSQL) Database Expert with a strong understanding of NoSQL databases and experience working with large datasets in a fast-paced, Agile development environment. The ideal candidate will be responsible for optimizing database performance and ensuring seamless integration of database functionality with back-end systems.
Responsibilities
Design, implement, and maintain MongoDB databases for optimal performance and scalability.
Develop and optimize queries for large datasets to ensure efficient data retrieval.
Collaborate with application developers to integrate database functionality with back-end systems.
Monitor database performance, applying best practices for optimization and security.
Design and implement data models that meet business requirements.
Work with developers to troubleshoot and resolve database-related issues.
Skills
2+ years of experience in MongoDB or other NoSQL databases (e.g., Cassandra, CouchDB).
Strong experience with database design, data modeling, and query optimization.
Hands-on experience with large datasets and optimizing data retrieval in fast-paced environments.
Familiarity with Agile development processes.
Knowledge of database performance tuning, scalability, and security best practices.
Strong understanding of indexing, sharding, and replication in MongoDB.
Experience with backend integration and collaborating with application developers.
Familiarity with tools like MongoDB Atlas, Compass, or similar database management tools.
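The sharding this posting asks about rests on one idea: route each document to a shard by hashing its shard key, which spreads monotonically increasing keys (timestamps, ObjectIds) evenly. A self-contained sketch of hash-based routing; MongoDB's hashed shard keys follow the same principle, though it uses its own hash function and range-splits chunks rather than a plain modulo:

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a shard-key value to a shard index by hashing it."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

# Route 1000 sequential order IDs across 4 shards.
orders = [f"order-{i}" for i in range(1000)]
counts = [0, 0, 0, 0]
for o in orders:
    counts[shard_for(o, 4)] += 1
# Hashing spreads the sequential keys roughly evenly (about 250 per shard),
# whereas range-sharding on the raw ID would send all new writes to one shard.
```

The trade-off: hashed keys balance writes but turn range queries on the key into scatter-gather queries across all shards.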
Posted 1 week ago
2.0 years
0 Lacs
coimbatore, tamil nadu, india
On-site
Backend Engineer - Integration Specialist
Company: Cobay Technology Private Limited
Location: Coimbatore, Tamil Nadu (On-site)
Experience: 2 - 4 years
Team: Platform Engineering Team
🚀 About the Role
We're seeking a talented Senior Backend Engineer with strong integration expertise to architect and build the core infrastructure of our multi-tenant SaaS platform. You'll be responsible for designing scalable backend systems and creating robust integrations with various third-party services and APIs. As our Backend Engineer, you'll work on high-throughput systems processing millions of transactions daily, building RESTful APIs, implementing event-driven architectures, and ensuring our platform maintains 99.9% uptime while scaling to meet growing demands.
This role is perfect for someone who:
Gets excited about building scalable backend architectures
Thrives on making disparate systems communicate seamlessly
Takes pride in writing clean, maintainable, and performant code
Enjoys solving complex distributed systems challenges
💼 Key Responsibilities
Backend Development
Design and implement RESTful and GraphQL APIs for internal and external consumption
Build robust webhook handlers for real-time event processing
Develop authentication and authorization systems (OAuth 2.0, JWT, API keys)
Create data transformation and validation layers for diverse data formats
Implement resilient systems with retry mechanisms, circuit breakers, and graceful degradation
System Architecture & Design
Architect scalable microservices supporting a multi-tenant SaaS architecture
Design event-driven systems using message queues for asynchronous processing
Build reusable libraries and frameworks for rapid feature development
Optimize database queries and implement efficient caching strategies
Design systems capable of handling 10,000+ concurrent requests
Build comprehensive monitoring, logging, and alerting systems
Collaboration & Leadership
Collaborate with frontend teams to design optimal API contracts
Mentor junior developers on backend best practices
Participate in architecture reviews and technical decision-making
🛠 Technical Requirements
Backend Development: 2+ years building production APIs with Node.js and Express.js. Expert-level understanding of RESTful design, microservices architecture, and event-driven systems. Strong proficiency in JavaScript/TypeScript and asynchronous programming patterns.
Database & Data Management: Expertise in both NoSQL (MongoDB - aggregation pipelines, indexing, sharding) and SQL databases. Experience with caching strategies (Redis), query optimization, and handling high-volume transactions (10,000+ concurrent requests).
Integration & APIs: Proven experience building robust third-party integrations, webhook handlers, and API gateways. Strong knowledge of authentication patterns (OAuth 2.0, JWT), rate limiting, and API versioning.
Cloud & DevOps: Hands-on experience with AWS services (EC2, Lambda, S3, SQS, EventBridge), Docker containerization, and CI/CD pipelines.
Performance & Reliability: Expertise in building resilient systems with circuit breakers, retry mechanisms, and graceful degradation. Experience with monitoring tools (DataDog, New Relic), distributed tracing, and maintaining 99.9% uptime for production systems.
Testing & Code Quality: Strong testing practices with 80%+ code coverage; experience with TDD/BDD methodologies. Proficient in Git workflows, code review processes, and maintaining high code quality standards using tools like ESLint and SonarQube.
Bonus Skills: GraphQL, Kubernetes orchestration, serverless architectures (AWS Lambda), event sourcing and CQRS patterns, multi-tenant SaaS architecture, and experience with high-throughput systems.
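The circuit-breaker pattern this posting calls for stops a failing downstream dependency from being hammered: after a run of consecutive failures the circuit "opens" and callers get a fallback immediately. A minimal sketch (the half-open recovery state, where the breaker probes the dependency again after a cooldown, is elided for brevity):

```python
class CircuitBreaker:
    """After `threshold` consecutive failures the circuit opens and
    subsequent calls fail fast with the fallback, never touching fn."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.state = "closed"

    def call(self, fn, fallback=None):
        if self.state == "open":
            return fallback            # graceful degradation: fail fast
        try:
            result = fn()
            self.failures = 0          # any success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.state = "open"
            return fallback

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    raise ConnectionError("downstream unavailable")

cb = CircuitBreaker(threshold=2)
r1 = cb.call(flaky, fallback="cached")  # failure 1, circuit still closed
r2 = cb.call(flaky, fallback="cached")  # failure 2: circuit opens
r3 = cb.call(flaky, fallback="cached")  # short-circuited; flaky() not invoked
```

Pairing this with bounded retries and a cached fallback response is what the posting means by resilient integrations: the caller degrades gracefully instead of cascading the outage.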
Posted 1 week ago
0 years
0 Lacs
mumbai, maharashtra, india
On-site
Are you a Python expert who has built scalable software solutions? Do you know how to handle heavy volumes of data and process the responses as APIs? Have you worked in scaling AI models or across MLE process? AryaXAI stands at the forefront of AI innovation, revolutionizing AI for mission-critical businesses by building explainable, safe, and aligned systems that scale responsibly. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety. Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At AryaXAI, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance. We are looking for Sr Software Engineers who - Expert in Python and have built scalable systems. Has experience with Puppet/Chef/Ansible, Amazon Web Services (AWS), Git, Graphite and related tools for large-scale systems management is a must. Should be able to write python, bash, c/c++ and perl scripts. Good understanding of architecture and tradeoffs in distributed computing environments. Experience working with linux system monitoring and analysis. Should be hands on Cloud like aws ec, elb, s3 bucket, auto scale etc. Better understanding of nginx/Apache web server as reverse proxy , load balancer, caching. H/w, S/w load balancer. 
Experience with web infrastructure, distributed systems, or component-oriented software engineering.
What You'll Be Doing
Design and implementation of the network that supports easy deployment of servers.
Building exciting new features that enable developers to build highly available, robust AI services.
Maintaining and ensuring a high standard for reliability and availability across multiple datacenters.
Evolving our existing architecture and codebases to support building flexible networking capabilities, both internally and for the product.
Desired Skills
Very good knowledge of NoSQL: MongoDB sharding, clustering, replication, and security.
Knowledge of the different kinds of databases: relational, document, key/value, graph.
Demonstrated open-source contribution.
Hands-on experience with one or more major MLE processes such as data prep, model serving, and compute optimizations.
Knowledge of TCP/IP and network programming, or of developing/designing large software systems.
What You'll Get
Highly competitive and meaningful compensation package.
One of the best health care plans, covering not only you but also your family.
A great team.
Micro-entrepreneurial tasks and responsibilities.
Career development and leadership opportunities.
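As a toy illustration of the caching theme above (nginx/Redis-style response caching), here is a minimal in-process TTL cache in Python; the key and value are made up for the demo:

```python
import time

class TTLCache:
    """Minimal in-process cache with a per-entry time-to-live."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry deadline)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if time.monotonic() >= deadline:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:1", {"name": "asha"})
hit = cache.get("user:1")    # fresh entry: served from cache
time.sleep(0.06)
miss = cache.get("user:1")   # past TTL: treated as absent
print(hit, miss)
```

Production systems delegate this to nginx's proxy cache or Redis with `EXPIRE`, but the hit/expire/miss lifecycle is the same.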
Posted 1 week ago
0 years
12 - 15 Lacs
noida
On-site
Solution Architect – AI, Data Engineering & Scalable Messaging Solutions
Employment Type: Full-Time
About LMS
LMS is a leader in delivering cutting-edge A2P messaging, SMS, and communication solutions. We are now expanding our capabilities to integrate the latest AI/LLM advancements, scalable data engineering pipelines, and modern AI agent technologies into our architecture to deliver market-leading products and services. We are seeking a Solution Architect with strong expertise in data engineering, database scalability, and AI/LLM-based solutions to design, integrate, and deliver next-gen communication and AI-driven platforms.
Key Responsibilities
Architect and design scalable, secure, and high-performance solutions for our A2P messaging, SMS platforms, and AI-driven services.
Integrate the latest AI/LLM technologies (OpenAI, Anthropic, LangChain, LlamaIndex, vector databases, AI agents) into existing messaging systems.
Design and implement data engineering pipelines for handling large-scale structured and unstructured datasets.
Build database architectures optimized for scalability, reliability, and high availability (SQL Server, NoSQL, distributed databases).
Collaborate with product teams to research, evaluate, and implement emerging AI tools, frameworks, and market trends.
Lead proof-of-concepts (PoCs) and prototype development for AI agent integrations and automation workflows.
Ensure all solutions follow industry standards, security practices, and compliance requirements.
Work closely with cross-functional teams including developers, DevOps, and business analysts to deliver production-ready solutions.
Required Skills & Experience
Technical Expertise
Programming & Backend: .NET Core / ASP.NET, C#, Java/Kotlin (Android), Python (AI/ML frameworks)
Data Engineering: ETL pipelines, big data processing (Apache Spark, Kafka, Databricks), batch & streaming data architectures
Databases: MS SQL Server, NoSQL (MongoDB, Redis), distributed database systems
AI/LLM & Agents: OpenAI API, Anthropic Claude, LangChain, LlamaIndex, RAG (Retrieval-Augmented Generation), vector databases (Pinecone, Milvus, Weaviate)
Messaging Systems: SMPP protocol, A2P SMS architecture, Google Messages automation, messaging queue systems (RabbitMQ, Kafka)
Cloud & On-Premises: Hybrid architecture design, deployment on third-party cloud providers and on-premises Windows Server environments
Scalability & Performance: Caching strategies, sharding, load balancing, high availability solutions
Integration Skills: REST APIs, webhooks, microservices architecture
Soft Skills
Strong analytical thinking and problem-solving abilities
Ability to research and adopt the latest AI/market trends
Excellent communication and stakeholder management skills
Ability to work cross-functionally and lead solution design discussions
Preferred Experience
Building and deploying AI-powered conversational agents
Designing AI-assisted data analytics pipelines
Experience with containerization (Docker/Kubernetes) and CI/CD
Experience in telecom messaging compliance and international routing for SMS
Why Join LMS?
Be part of a future-focused team working on AI-driven messaging and scalable data solutions
Work on innovative projects integrating AI agents with large-scale communication systems
Competitive salary and growth opportunities
Opportunity to shape the next-generation architecture for the messaging and AI industry
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹1,500,000.00 per year
Benefits: Health insurance, Provident Fund
Work Location: In person
Speak with the employer: +91 8750818999
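To make the RAG retrieval step above concrete: a vector database returns the stored chunks whose embeddings are closest to the query embedding, and those chunks are stuffed into the LLM prompt. A toy cosine-similarity ranking over hypothetical three-dimensional "embeddings" (real systems use Pinecone/Milvus/Weaviate with embedding models of hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": chunk id -> embedding (values invented for the demo).
docs = {
    "sms_routing": [0.9, 0.1, 0.0],
    "billing_faq": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]  # embedding of e.g. "how are A2P messages routed?"

# Retrieval: pick the best-matching chunk to include in the prompt.
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # "sms_routing"
```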
Posted 1 week ago
2.0 years
0 Lacs
pune, maharashtra, india
On-site
Job Location: Pune | Experience: 2+ Years | Qualification: Any Graduate | Job Posted On: 08 May, 2025
Job Description
We are seeking a MongoDB (or NoSQL) Database Expert with a strong understanding of NoSQL databases and experience working with large datasets in a fast-paced, Agile development environment. The ideal candidate will be responsible for optimizing database performance and ensuring seamless integration of database functionality with back-end systems.
Responsibilities
Design, implement, and maintain MongoDB databases for optimal performance and scalability.
Develop and optimize queries for large datasets to ensure efficient data retrieval.
Collaborate with application developers to integrate database functionality with back-end systems.
Monitor database performance, applying best practices for optimization and security.
Design and implement data models that meet business requirements.
Work with developers to troubleshoot and resolve database-related issues.
Skills
2+ years of experience in MongoDB or other NoSQL databases (e.g., Cassandra, CouchDB).
Strong experience with database design, data modeling, and query optimization.
Hands-on experience with large datasets and optimizing data retrieval in fast-paced environments.
Familiarity with Agile development processes.
Knowledge of database performance tuning, scalability, and security best practices.
Strong understanding of indexing, sharding, and replication in MongoDB.
Experience with backend integration and collaborating with application developers.
Familiarity with tools like MongoDB Atlas, Compass, or similar database management tools.
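For illustration, the query-optimization work described above often means shaping a MongoDB aggregation pipeline so that filtering happens before grouping and indexes can be used. The collection and field names below are hypothetical, and executing the pipeline requires a live server via pymongo (here we only build and inspect the pipeline document):

```python
# Pipeline for "top 10 accounts by completed-order spend".
# With pymongo (assumed) this would run as: db.orders.aggregate(pipeline)
pipeline = [
    {"$match": {"status": "completed"}},           # filter first so an index on status can apply
    {"$group": {"_id": "$account_id",
                "total": {"$sum": "$amount"}}},    # aggregate spend per account
    {"$sort": {"total": -1}},                      # largest spenders first
    {"$limit": 10},                                # cap the result set early
]

stage_order = [next(iter(stage)) for stage in pipeline]
print(stage_order)  # ['$match', '$group', '$sort', '$limit']
```

Putting `$match` first is the standard optimization: later stages then operate on a pre-filtered stream.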
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
jaipur, rajasthan
On-site
As a MongoDB Developer with over 5 years of experience, you will be responsible for designing and developing MongoDB databases, collections, and schemas to efficiently store and manage party data for banking clients. Your role will involve implementing data ingestion, processing, and retrieval workflows using MongoDB features while adhering to banking industry standards. You will play a crucial role in optimizing MongoDB database performance by implementing indexing, sharding, and replication strategies to meet the high availability and low latency requirements of the banking sector. Ensuring data security and compliance will be a key aspect of your responsibilities, including implementing access controls, encryption, and backup/recovery processes in alignment with banking regulations. Collaboration with cross-functional teams, such as compliance and risk management, will be essential to understand data requirements and translate them into MongoDB solutions. You will actively contribute to the continuous improvement of party data infrastructure by proposing and implementing enhancements to address the evolving needs of the banking industry. Your day-to-day tasks will involve providing technical support, troubleshooting, and maintenance for MongoDB-powered party data systems to ensure minimal downtime and disruption to banking operations. It is imperative to stay updated with the latest MongoDB features, best practices, and industry trends, especially within the banking and financial services sectors. Working effectively within a large, complex banking organization is crucial, where you will need to adhere to established controls, policies, and procedures. Compliance with regulatory requirements and industry standards related to data management and security in the banking industry will also be a focal point of your responsibilities. 
Key skills required for this role include expertise in MongoDB; database design and development; data ingestion, processing, and retrieval; indexing, sharding, and replication strategies; data security, access controls, encryption, and backup/recovery processes; and technical support, troubleshooting, and maintenance.
Posted 1 week ago
8.0 years
0 Lacs
ahmedabad, gujarat, india
Remote
Position: PostgreSQL DBA
Location: Remote
Shift Time: 2pm to 11pm
Experience: 8+ Years
Job Overview: We are looking for a highly experienced Senior PostgreSQL DBA to join our growing team. The ideal candidate will play a critical role in managing, optimizing, and scaling PostgreSQL databases in a Kubernetes-based environment that utilizes sharding and pods for high availability and distributed workloads. The role requires deep technical expertise, an ownership mindset, and hands-on experience working with mission-critical, high-throughput database systems.
Key Responsibilities:
· Design, implement, and maintain highly available PostgreSQL clusters
· Manage sharded database architecture and performance across multiple pods in Kubernetes
· Optimize database performance, indexing, query tuning, and vacuuming strategies
· Monitor, troubleshoot, and resolve database issues including replication, failover, and storage
· Automate backups, recovery plans, and database provisioning
· Ensure database security, access control, and compliance with client policies
· Collaborate with DevOps and application teams to ensure smooth database operations and integrations
· Participate in incident management and on-call support for database-related production issues
· Perform database capacity planning, data partitioning, and archiving strategies
Technical Skills & Experience Required:
· 8+ years of hands-on experience with PostgreSQL administration
· Strong knowledge of PostgreSQL performance tuning, replication, and backup/restore mechanisms
· Experience with database sharding, partitioning, and horizontal scaling
· Solid understanding of AWS services such as EC2, EKS, RDS, Aurora, S3, and CloudWatch
· Proficient in writing SQL, PL/pgSQL, and automation scripts (Bash, Python, etc.)
· Hands-on experience with monitoring and logging tools such as Datadog, Splunk, Prometheus, or pgBadger
· Experience integrating with CI/CD pipelines and DevOps practices
· Strong understanding of database security, auditing, and access control
Desirable Skills:
· Exposure to cloud-native PostgreSQL services (e.g., Amazon RDS/Aurora, GCP Cloud SQL, Azure PostgreSQL)
· Familiarity with PostgreSQL extensions like PostGIS, TimescaleDB, etc.
· Knowledge of high availability tools (Patroni, Pgpool-II, Stolon, etc.)
· Exposure to multi-tenant or microservices-based architectures
· Knowledge of cloud cost optimization for database infrastructure
· Experience working in an Agile development environment
Required Skills:
· Strong problem-solving skills and proactive mindset
· Ability to communicate technical concepts clearly with cross-functional teams
· Comfortable in client-facing roles, especially during incidents and design discussions
· Willingness to mentor junior DBAs and support team knowledge sharing
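A small sketch of the backup-automation responsibility above: assembling a `pg_dump` invocation in Python. The database name, host, and output directory are hypothetical, and actually running the command requires the PostgreSQL client tools on PATH; here we only build and inspect it.

```python
from datetime import date

def backup_command(dbname, host, out_dir="/var/backups/pg"):
    """Build a pg_dump command for a dated, custom-format backup."""
    stamp = date.today().isoformat()
    outfile = f"{out_dir}/{dbname}-{stamp}.dump"
    return ["pg_dump",
            "--host", host,
            "--format=custom",   # compressed format usable with pg_restore
            "--file", outfile,
            dbname]

cmd = backup_command("billing", "db.internal")
print(cmd[0], cmd[-1])  # pg_dump billing
```

In practice this would run under `subprocess.run(cmd, check=True)` from cron or a Kubernetes CronJob, with the resulting dump shipped off-host.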
Posted 1 week ago
5.0 years
0 Lacs
india
On-site
Greetings from LTIMindtree! We are currently hiring for an MSBI SSIS role and came across your profile, which aligns well with our requirements. The position is ideal for professionals with 3–5 years of experience in SSIS and related technologies.
Note: We are looking for candidates who can join within 30 days.
Key Skills We're Looking For (Must Have):
1. Strong experience with SSIS, SQL Server, SSRS, and ETL (Extract, Transform, Load) processes
2. Strong experience with T-SQL programming, triggers, and writing complex SQL queries and stored procedures
3. Hands-on experience with analytical functions, joins, subqueries, bulk binding, aggregate functions, CTEs, rank functions, and exception handling
4. Strong understanding of advanced database management concepts such as sharding, indexing, and table joins for database design
5. Strong knowledge of software engineering principles, the SDLC process, and QA metrics
6. Excellent analytical and communication skills
If you're open to exploring this opportunity, please share your updated resume and the details below to Sravanthi.Dhanyasi@ltimindtree.com:
1) Full Name
2) Total Experience
3) Relevant Experience in MSBI SSIS
4) Current Organization
5) Current CTC
6) Expected CTC
7) Notice Period (if LWD/negotiable)
8) Current Location
9) Preferred Location
10) PAN Card Number (mandatory to upload your resume to our database)
11) Contact Number
12) Alternate Email Address
13) Alternate Mobile Number
14) Date of Birth
15) Updated CV
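The CTE and rank-function skills in item 3 can be demonstrated in miniature. This sketch uses Python's built-in sqlite3 (window functions need SQLite ≥ 3.25) rather than SQL Server, and the sales data is invented; the CTE/RANK SQL itself is portable T-SQL-style syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount INT);
    INSERT INTO sales VALUES
        ('north', 'asha', 500), ('north', 'ravi', 300),
        ('south', 'meena', 700), ('south', 'karan', 900);
""")

# CTE plus a RANK() window function: top rep by amount within each region.
rows = conn.execute("""
    WITH ranked AS (
        SELECT region, rep, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
        FROM sales
    )
    SELECT region, rep FROM ranked WHERE rnk = 1 ORDER BY region
""").fetchall()
print(rows)  # [('north', 'asha'), ('south', 'karan')]
```

The CTE materializes the ranking once; the outer query then filters on it, which is not possible directly in a WHERE clause because window functions are evaluated after filtering.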
Posted 1 week ago
5.0 - 31.0 years
9 - 15 Lacs
yelahanka, bengaluru/bangalore region
On-site
Position Summary
We are hiring an experienced IT Manager – SQL to lead database administration, performance, and security for our global healthcare data environment. This role requires advanced SQL expertise to ensure system stability, compliance with healthcare regulations, and efficient data management. The manager will lead a team of DB professionals and act as a liaison between developers and business stakeholders to enforce SQL best practices and drive data-driven healthcare innovations.
Key Responsibilities
· Lead SQL database administration, monitoring, performance tuning, and capacity planning for large-scale healthcare datasets.
· Manage indexing, partitioning, archival, and query optimization to support critical healthcare operations.
· Ensure SQL environment security and compliance with HIPAA, GDPR, and other healthcare regulations through encryption, auditing, and access controls.
· Oversee backup, recovery, and disaster recovery processes to safeguard healthcare data integrity and availability.
· Collaborate with developers to enforce database normalization, ACID compliance, and efficient query design (joins, functions, CTEs).
· Drive automation through DevOps & CI/CD pipelines using Azure SQL, Git for version control, and cloud-native monitoring tools.
· Implement scaling strategies such as read replicas, sharding, and hybrid SQL/NoSQL solutions suitable for healthcare workloads.
· Mentor SQL team members, promote best practices, and lead stakeholder communications with strong interpersonal and leadership skills.
· Support healthcare data transformation initiatives integrating ETL, Big Data, and AI/ML technologies to enable predictive insights and advanced analytics.
Must-Have Expertise
· Advanced proficiency in SQL (DDL, DML, joins, triggers, stored procedures, functions).
· Deep knowledge of indexing strategies (clustered, non-clustered, covering indexes) and query monitoring tools (Profiler, Extended Events, DMVs).
· Proven experience with automated backups, recovery, and maintenance.
· Expertise in partitioning, archival strategies, and query optimization techniques for large datasets.
· Strong grasp of normalization, ACID properties, isolation levels, and concurrency control.
· Hands-on with Azure SQL, Git for source code management, and cloud-native tools for monitoring, scaling, and security.
· Experience with DevOps/CI/CD practices for database deployment.
· Leadership skills with experience managing teams, effective communication, conflict resolution, and stakeholder engagement.
Preferred Skills
· Advanced SQL security knowledge including encryption, hashing, auditing, and role-based access tailored for healthcare compliance.
· Experience with database scaling techniques: sharding, replication, and hybrid SQL/NoSQL architectures.
· Familiarity with ETL processes, data warehousing, and Big Data platforms common in healthcare systems.
· Exposure to AI/ML-driven SQL automation for query optimization, anomaly detection, and predictive analytics in healthcare data contexts.
· Experience with BI reporting and dashboarding tools (.NET Core, Power BI, etc.).
Behavioral Skills
· Strong leadership and team-building capabilities with a focus on mentoring and knowledge sharing.
· Excellent communication skills to interface effectively with technical teams and business stakeholders.
· Problem-solving mindset with the ability to manage high-pressure situations in critical healthcare environments.
· Collaborative attitude promoting cross-functional teamwork and continuous improvement.
· High ethical standards and dedication to maintaining data security and compliance.
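To illustrate the indexing theme above: a secondary index lets the engine search for matching rows instead of scanning the whole table, which is visible in the query plan. The sketch uses Python's built-in sqlite3 instead of SQL Server (so the plan text differs from Profiler/DMV output), with a hypothetical patients table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, mrn TEXT, name TEXT)")
# Secondary index on the medical record number, the common lookup key.
conn.execute("CREATE INDEX idx_patients_mrn ON patients (mrn)")

# Ask the planner how it would execute a lookup by MRN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM patients WHERE mrn = ?", ("MRN-001",)
).fetchall()
detail = plan[0][-1]  # last column holds the human-readable plan step
print(detail)  # e.g. "SEARCH patients USING INDEX idx_patients_mrn (mrn=?)"
```

Without the index the same plan reads as a full-table SCAN; the SEARCH-vs-SCAN distinction is the quickest sanity check that an index is actually being used.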
Posted 1 week ago