Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
30.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Job: About Client
Our client is a market-leading company with over 30 years of experience in the industry. One of the world’s leading professional services firms, with $19.7B in revenue and 333,640 associates worldwide, it helps clients modernize technology, reimagine processes, and transform experiences, enabling them to remain competitive in our fast-paced world. Its specialties include Intelligent Process Automation, Digital Engineering, Industry & Platform Solutions, Internet of Things, Artificial Intelligence, Cloud, Data, Healthcare, Banking, Finance, Fintech, Manufacturing, Retail, Technology, and Salesforce.

Job Title: PostgreSQL DBA
Key Skills: PostgreSQL, MongoDB, Amazon Aurora, AWS
Location: Chennai
Experience: 6-8 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract to Hire
Notice Period: Immediate - 10 Days

Job Description:
7+ years of hands-on experience as a Database Engineer or Senior DBA with PostgreSQL, MongoDB, and Amazon Aurora. Deep expertise in relational/NoSQL data modeling, indexing, sharding, and replication. Proven track record of architecting large-scale, highly available database systems supporting microservices. Strong scripting skills (Python or Bash) for automation, monitoring, and data migration tasks. Extensive experience with AWS managed services. Proficiency in Infrastructure as Code tools (Terraform) and database migration/versioning frameworks. Solid understanding of DevSecOps practices, CI/CD tooling (GitLab CI), and containerized deployments (Docker, Kubernetes). Excellent communicator capable of influencing cross-functional teams and presenting to senior leadership. Willingness to work overlapping EST hours for real-time collaboration.
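As an illustration of the sharding expertise this posting asks for, here is a minimal sketch of hash-based shard routing in Python. All names (`shard_for`, the user IDs) are invented for illustration; production systems such as MongoDB or a sharded Postgres layer implement this routing internally.

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a key to a shard deterministically via a stable hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Distribute some hypothetical user IDs across 4 shards.
placement = {uid: shard_for(uid, 4) for uid in ["u1", "u2", "u3", "u4"]}

# Every key lands on a valid shard, and the same key always
# routes to the same shard -- the property replication and
# query routing both depend on.
assert all(0 <= s < 4 for s in placement.values())
assert shard_for("u1", 4) == shard_for("u1", 4)
```

The stable hash is what makes reads and writes for a key converge on one shard; changing `num_shards` remaps keys, which is why real systems use consistent hashing or chunk migration when resharding.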
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VO IS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VO IS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VO IS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VO IS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Roles & Responsibilities
Work Mode: Hybrid
Experience: Minimum 6+ years
Location: Pune
Mandatory skills: ELK (Elasticsearch) administration; databases (SQL/NoSQL); Linux infrastructure tools; scripting knowledge (Python/Bash/Shell); ticketing experience (good to have); cloud experience (good to have).

5+ years of hands-on experience with supporting Elasticsearch in production, handling medium to large clusters. Experience with Kibana visualization strategies, controls, and techniques. Experience with Elasticsearch index configuration options, sharding, aliases, etc.
Strong experience in filters, X-Pack, metrics, cluster management, and pipelines. Working knowledge of Remedy or a similar ticketing tool.

India
VO IS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do.

By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
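The index-alias technique mentioned in the requirements above can be sketched in a few lines of plain Python: clients query through a stable alias while the underlying index is swapped atomically after a reindex. This is a toy model with invented index names; real clusters do the flip through Elasticsearch's aliases API.

```python
# Two versions of a hypothetical logs index; v2 is the reindexed copy.
indices = {"logs-v1": ["doc1"], "logs-v2": ["doc1", "doc2"]}
# Clients never name an index directly -- they go through the alias.
aliases = {"logs": "logs-v1"}

def search(alias: str) -> list:
    """Resolve the alias, then query the concrete index behind it."""
    return indices[aliases[alias]]

assert search("logs") == ["doc1"]
aliases["logs"] = "logs-v2"   # atomic alias flip once reindexing finishes
assert search("logs") == ["doc1", "doc2"]
```

Because callers only ever see the alias, the reindex is invisible to them: no client config changes, no downtime, and rollback is just flipping the alias back.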
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
CodeChavo is a global digital transformation solutions provider that collaborates with top technology companies to drive impactful transformation. Powered by technology, inspired by people, and driven by purpose, CodeChavo partners with clients from design to operation. With deep domain expertise and a forward-thinking approach, CodeChavo integrates innovation and agility into their clients’ organizations. We help companies outsource their digital projects and build quality tech teams.

Role Description
This is a full-time, on-site role for a Full Stack Developer (.NET + Angular + Elasticsearch) located in Gurugram. The Full Stack Developer will be responsible for developing and maintaining both front-end and back-end components of web applications. This includes building reusable code, developing user-facing features using Angular, and managing server-side logic with .NET. The role also involves working with Elasticsearch to search and analyze large volumes of data effectively, ensuring the performance, quality, and responsiveness of the applications. Collaboration with cross-functional teams to design, implement, and optimize scalable solutions is a key aspect of the role.

Key Responsibilities
Design, develop, and test robust, scalable features in .NET and Angular-based applications. Collaborate with cross-functional teams in an Agile/Scrum environment to deliver high-quality software. Develop RESTful APIs and microservices using ASP.NET Core. Implement advanced search capabilities using Elasticsearch or OpenSearch. Optimise backend performance through caching (in-memory and shared) and query tuning. Secure applications using IdentityServer4, OAuth2, and OpenID Connect protocols. Troubleshoot and fix application bugs; write clean, maintainable code. Write unit and integration tests to ensure code quality. Participate in code reviews, sprint planning, and daily stand-ups.
Requirements
3–5 years of professional software development experience. Proficient in C#.NET and ASP.NET Core. Hands-on experience with Angular 10+ and TypeScript. Strong SQL and relational database experience (e.g., SQL Server, PostgreSQL). Solid understanding of Elasticsearch or OpenSearch (must-have). Familiar with IdentityServer4 and modern authentication methods. Experience with caching techniques (MemoryCache, Redis). Knowledge of database scaling strategies like sharding and replication. Familiarity with Git and version control workflows. Ability to write and maintain unit tests using frameworks like xUnit, NUnit, Jasmine, or Karma.

Good to have: Experience with CI/CD and deployment pipelines. Exposure to packaging and publishing NPM libraries. Basic Docker/containerisation understanding.
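The in-memory caching the requirements mention (MemoryCache-style, as opposed to a shared cache like Redis) boils down to a map with per-entry expiry. A minimal Python sketch, with invented names, of the idea:

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-entry expiry (a sketch of the
    MemoryCache-style caching the posting mentions, not a real library)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # Record the value together with its absolute expiry time.
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]   # lazily evict expired entries on read
            return default
        return value

cache = TTLCache(ttl_seconds=60)
cache.set("user:1", {"name": "Ada"})
assert cache.get("user:1") == {"name": "Ada"}
assert cache.get("user:2", default="miss") == "miss"
```

The trade-off versus a shared cache is the usual one: in-process lookups are fastest but each server holds its own copy, so a shared tier (Redis) is still needed when multiple instances must see the same cached state.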
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Data Modeler

Role Purpose
The Data Modeler plays a vital role in the success of our data analytics initiatives at Wipro Technologies. This role involves designing, testing, and maintaining sophisticated software programs tailored for specific operating systems and applications deployed at client sites, while ensuring that the final products meet rigorous quality assurance parameters and adhere to industry standards. You will work closely with cross-functional teams to create robust data architectures and drive innovation within our data-driven projects. Your expertise will contribute directly to improved business outcomes and user experiences, establishing you as a key player in our organization.

Location: Chennai (mandatory), at the customer location.

JD:
As a Data Modeler, you are expected to possess hands-on experience in data modeling for both OLTP and OLAP systems, which will enable you to design comprehensive data solutions that cater to diverse business needs. Your in-depth knowledge of Conceptual, Logical, and Physical data modeling will be crucial in ensuring precise data representation and storage. A strong understanding of indexing, partitioning, and data sharding, complemented by practical experience, is essential to optimize database performance and ensure efficient data retrieval.
Additionally, your ability to identify and address factors impacting database performance will support near-real-time reporting and enhanced application interaction. Familiarity with at least one data modeling tool, particularly DBSchema, is highly valued. Experience or functional knowledge of the mutual fund industry will provide a competitive edge, as our projects often intersect with financial services. Moreover, familiarity with GCP databases like AlloyDB, CloudSQL, and BigQuery enriches your toolkit, enabling you to apply cutting-edge cloud technology in data modeling. Please note, your willingness to work from our Chennai office is mandatory for this position, as we prioritize collaboration and on-site teamwork to drive our initiatives forward.

Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention of themselves, their careers, and their skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
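The partitioning knowledge the JD above calls for can be illustrated with a small sketch of range partitioning, the scheme typically used for time-keyed reporting tables. The partition names and bounds here are invented; databases such as BigQuery or PostgreSQL manage this declaratively.

```python
import bisect

# Upper bounds of three hypothetical quarterly partitions, sorted ascending.
bounds = ["2024-04-01", "2024-07-01", "2024-10-01"]
names = ["p_q1", "p_q2", "p_q3", "p_overflow"]

def partition_for(date_key: str) -> str:
    """Route a row to the first partition whose upper bound exceeds its key.
    A key equal to a bound rolls into the next partition (exclusive upper)."""
    return names[bisect.bisect_right(bounds, date_key)]

assert partition_for("2024-02-15") == "p_q1"
assert partition_for("2024-08-01") == "p_q3"
assert partition_for("2025-01-01") == "p_overflow"
```

The payoff for reporting workloads is partition pruning: a query filtered to one quarter only scans that partition's data, which is what makes near-real-time reporting over large tables feasible.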
Posted 1 week ago
7.0 years
0 Lacs
Kochi, Kerala, India
On-site
About the Role
We seek an experienced Senior Node.js Developer to lead enterprise-grade backend systems with complex business workflows (non-eCommerce). You will architect scalable solutions, manage cross-functional teams (React.js/iOS), and own end-to-end delivery, from development to deployment and client communication.

Key Responsibilities
Technical Leadership: Architect and develop enterprise Node.js applications (BFSI, ERP, healthcare, or logistics domains). Design and optimize complex business workflows (multi-step approvals, real-time data processing, async operations). Manage deployment pipelines (CI/CD, containerization, cloud infrastructure).
Team Management: Lead developers (React.js frontend + iOS native), ensuring code quality and timely delivery. Mentor the team on best practices for scalability, security, and performance.
Client & Project Delivery: Interface directly with clients for requirements, updates, and troubleshooting (English fluency essential). Drive root-cause analysis for critical production issues.
Infrastructure & Ops: Oversee server management (AWS/Azure/GCP), monitoring (Prometheus/Grafana), and disaster recovery. Optimize MongoDB clusters (sharding, replication, indexing).

Required Skills
Core Expertise: 5–7 years with Node.js (Express/NestJS) + MongoDB (complex aggregations, transactions). Proven experience in enterprise applications.
Deployment Mastery: CI/CD (Jenkins/GitLab), Docker/Kubernetes, cloud services (AWS EC2/Lambda/S3).
Leadership & Communication: Managed teams of 3+ developers (frontend/mobile). Fluent English for client communication, documentation, and presentations.
System Design: Built systems with complex workflows (state machines, event-driven architectures, BPMN). Proficient in microservices, REST/GraphQL, and message brokers (Kafka/RabbitMQ).

Preferred Skills
Basic knowledge of React.js and iOS native (Swift/Objective-C). Infrastructure-as-Code (Terraform/CloudFormation).
Experience with TypeScript, Redis, or Elasticsearch.

Non-Negotiables
Enterprise application background. Deployment ownership: CI/CD, cloud, and server management. English fluency for daily client communication. Valid passport and readiness to travel to Dubai. On-site work in Kochi and immediate joining (0–15 days).

What We Offer
Competitive salary + performance bonuses. Global enterprise projects with scalable impact. Upskilling support in cloud infrastructure and leadership.
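The multi-step approval workflows and state machines this posting highlights reduce to a transition table plus a guard against illegal events. A minimal, language-neutral sketch in Python (the states and events are invented; real workflow engines add persistence, timeouts, and audit trails):

```python
# Legal transitions for a hypothetical two-stage approval workflow:
# (current_state, event) -> next_state.
TRANSITIONS = {
    ("draft", "submit"): "pending_review",
    ("pending_review", "approve"): "pending_finance",
    ("pending_review", "reject"): "draft",
    ("pending_finance", "approve"): "approved",
    ("pending_finance", "reject"): "draft",
}

def apply(state: str, event: str) -> str:
    """Advance the workflow, refusing any event not legal in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

state = "draft"
for event in ["submit", "approve", "approve"]:
    state = apply(state, event)
assert state == "approved"
```

Keeping the transitions in one table is the key design point: illegal paths (e.g. finance approving a draft) are rejected by construction rather than scattered `if` checks, which is what makes such workflows auditable.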
Posted 1 week ago
5.0 years
0 Lacs
Delhi, India
Remote
About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers users across industries to streamline operations, drive growth, and crush their goals. HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names.

Our People
With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact
Every month, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve. Behind those numbers are real people growing their companies, connecting with customers, and making their mark, and we get to help make that happen. Learn more about us on our YouTube Channel or Blog Posts.

About the Role:
We are seeking a highly skilled Senior Database Engineer with expertise in ClickHouse and other columnar databases. The ideal candidate will have a deep understanding of database performance optimization, query tuning, data modeling, and large-scale data processing. You will be responsible for designing, implementing, and maintaining high-performance analytical databases that support real-time and batch processing.
Responsibilities:
Design, optimize, and maintain ClickHouse databases to support high-throughput analytical workloads. Develop and implement efficient data models for fast query performance and storage optimization. Monitor and troubleshoot database performance issues, ensuring minimal downtime and optimal query execution. Work closely with data engineers, software developers, and DevOps teams to integrate ClickHouse with data pipelines. Optimise data ingestion processes, ensuring efficient storage and retrieval of structured and semi-structured data. Implement partitioning, sharding, and indexing strategies for large-scale data processing. Evaluate and benchmark ClickHouse against other columnar databases such as Apache Druid, Apache Pinot, or Snowflake. Establish best practices for backup, replication, high availability, and disaster recovery. Automate database deployment, schema migrations, and performance monitoring using infrastructure-as-code approaches.

Requirements:
5+ years of experience working with high-performance databases, with a focus on ClickHouse or similar columnar databases. Strong knowledge of SQL, query optimisation techniques, and database internals. Experience handling large-scale data (TBs to PBs) and optimizing data storage and retrieval. Hands-on experience with ETL/ELT pipelines, streaming data ingestion, and batch processing. Proficiency in at least two scripting/programming languages like NodeJS, Python, Go, or Java for database automation. Familiarity with Kafka, Apache Spark, or Flink for real-time data processing. Experience in Kubernetes, Docker, Terraform, or Ansible for database deployment and orchestration is a plus. Strong understanding of columnar storage formats (Parquet, ORC, Avro) and their impact on performance. Knowledge of cloud-based ClickHouse deployments (AWS, GCP, or Azure) is a plus. Excellent problem-solving skills, ability to work in a fast-paced environment, and a passion for performance tuning.

Preferred Skills:
Experience with alternative columnar databases like Apache Druid, Apache Pinot, or Snowflake. Background in big data analytics, time-series databases, or high-performance data warehousing. Prior experience working with distributed systems and high-availability architectures.

EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.
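The reason columnar engines like ClickHouse dominate analytical workloads can be shown with a toy comparison of the two layouts (the data here is invented; real engines add compression and vectorized execution on top of this idea):

```python
# Row layout: each record is stored together, so an aggregate over one
# field still walks every whole record.
rows = [
    {"user": "a", "ms": 120},
    {"user": "b", "ms": 80},
    {"user": "a", "ms": 100},
]
total_row = sum(r["ms"] for r in rows)

# Columnar layout: one contiguous array per column, so the same aggregate
# touches only the bytes it actually needs.
columns = {"user": ["a", "b", "a"], "ms": [120, 80, 100]}
total_col = sum(columns["ms"])

# Both layouts hold identical data and agree on the answer...
assert total_row == total_col == 300
# ...but the columnar scan read 1 of 2 columns, and at a thousand columns
# the difference in bytes scanned is what makes analytics fast.
```

The same property explains why columnar formats compress so well: values within one column are homogeneous, so run-length and dictionary encoding bite far harder than they would on interleaved rows.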
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities:
Install, configure, and maintain MongoDB and other databases. Ensure database performance, security, and scalability. Implement replication, sharding, and backup strategies. Optimize queries, indexing, and storage for efficiency. Monitor database health using tools like Ops Manager, Prometheus, or Grafana. Troubleshoot database issues and ensure high availability. Automate database management tasks using Shell/Python scripting. Collaborate with development teams to optimize schema design and queries.

Requirements:
4-6 years of experience in database administration. Strong expertise in MongoDB (preferred), MySQL, or PostgreSQL. Hands-on experience with replication, sharding, and high availability. Knowledge of backup, restore, and disaster recovery strategies. Experience in Linux environments and scripting (Shell, Python). Familiarity with MongoDB Atlas, AWS RDS, or cloud-based databases.

Preferred Qualifications:
MongoDB Certification is a plus. Experience with DevOps tools like Docker, Kubernetes, or Ansible. Exposure to both SQL and NoSQL databases.
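The high-availability requirement above rests on one simple rule in replica-set systems: a primary can only be elected, and writes safely acknowledged, by a strict majority of members. A tiny sketch of that rule (function name invented for illustration):

```python
def has_majority(votes_for: int, replica_set_size: int) -> bool:
    """A strict majority: more than half the members must agree."""
    return votes_for > replica_set_size // 2

assert has_majority(2, 3)        # 2 of 3 nodes: majority, election succeeds
assert not has_majority(1, 3)    # 1 of 3: a minority partition cannot elect
assert not has_majority(2, 4)    # 2 of 4: a tie is not a majority
```

This is why replica sets are deployed with odd member counts: with 3 nodes one failure still leaves a majority, while a 4-node set tolerates no more failures than a 3-node set but can split into a useless 2-2 tie.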
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Company Description
NodeStar is a pioneering AI technology company that specializes in developing cutting-edge conversational AI applications. Our diverse team comprises visionary tech founders, seasoned executives, AI PhDs, and product pioneers who have forged a new path in AI innovation. We create integrated solutions that propel our partners to new heights by infusing conversational interfaces with game mechanics for interactive experiences across multiple platforms.

Role Description
This is a full-time role for a Senior/Staff Python Backend Developer at NodeStar. As a senior technical leader, you will architect and build scalable backend systems that power our AI-driven applications. You will lead complex technical initiatives, mentor engineering teams, and drive architectural decisions that shape our platform's future. This role requires deep expertise in Python, distributed systems, and cloud infrastructure, combined with the ability to translate business requirements into robust technical solutions that scale to millions of users.
Core Responsibilities
Architect and design large-scale distributed systems and microservices architecture. Lead technical initiatives across multiple teams and drive engineering excellence. Define technical roadmaps and architectural standards for backend systems. Mentor and guide junior and mid-level developers, fostering their professional growth. Own end-to-end delivery of complex features from design to production deployment. Drive technical decision-making and evaluate new technologies for adoption. Collaborate with product, AI/ML teams, and stakeholders to align technical solutions with business goals. Establish best practices for code quality, testing, deployment, and monitoring. Lead performance optimization initiatives and ensure system reliability at scale. Participate in on-call rotations and incident response for critical systems.

Qualifications
Bachelor's degree in Computer Science or related field (Master's preferred). 5+ years of professional backend development experience, with 2+ years in senior/lead roles. Expert-level proficiency in Python and deep understanding of its internals. Extensive experience with FastAPI, Django, and async Python frameworks. Proven track record of designing and implementing distributed systems at scale. Strong expertise in database design, optimization, and management (PostgreSQL, Redis). Deep knowledge of AWS services (EKS, RDS, Lambda, SQS, etc.) and cloud architecture patterns. Experience with microservices, event-driven architecture, and message queuing systems. Expertise in API design, GraphQL, and RESTful services. Strong understanding of software security best practices and compliance requirements. Excellent communication skills and ability to influence technical decisions.

Preferred Qualifications
Experience building AI/ML-powered applications and working with LLMs. Expertise with container orchestration (Kubernetes) and infrastructure as code (Terraform). Experience with streaming data platforms and real-time processing. Knowledge of LangChain, LangGraph, and modern AI application frameworks. Experience with vector and graph databases in production environments. Track record of leading successful migrations or major architectural changes. Published articles, conference talks, or open-source contributions. Experience in high-growth startups or AI-focused companies.

Technical Stack
Languages: Python 3.x (expert level), with knowledge of Go or Rust a plus.
Frameworks: FastAPI, LangGraph, Django REST framework, Celery.
AI/ML: LangChain, Pydantic, experience with LLM integration.
Databases: PostgreSQL, Redis, Chroma, Neo4j; experience with sharding and replication.
Infrastructure: AWS (extensive), Docker, Kubernetes, Terraform.
Monitoring: DataDog, Prometheus, ELK stack or similar.
Architecture: Microservices, event-driven systems, CQRS, domain-driven design.

What We Offer
Competitive salary. Professional development opportunities. Flexible work arrangements. Collaborative and innovative work environment. Paid time off and holidays. Potential for equity.

We value skill and experience over tenure. If you have less than 5 years of experience but are passionate about backend development and have a proven track record of success, we encourage you to apply and be part of our innovative and dynamic team at NodeStar!
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Job Profile Summary
The Cloud NoSQL Database Engineer performs database engineering and administration activities, including design, planning, configuration, monitoring, automation, self-serviceability, alerting, and space management. The role involves database backup and recovery, performance tuning, security management, and migration strategies. The ideal candidate will lead and advise on Neo4j and MongoDB database solutions, including migration, modernization, and optimization, while also supporting secondary RDBMS platforms (SQL Server, PostgreSQL, MySQL, Oracle). The candidate should be proficient in workload migrations to cloud (AWS/Azure/GCP).

Key Responsibilities:
MongoDB Administration: Install, configure, and maintain Neo4j (GraphDB) and MongoDB (NoSQL) databases in cloud and on-prem environments.
NoSQL Data Modeling: Design and implement graph-based models in Neo4j and document-based models in MongoDB to optimize data retrieval and relationships.
Performance Tuning & Optimization: Monitor and tune databases for query performance, indexing strategies, and replication performance.
Backup, Restore, & Disaster Recovery: Design and implement backup and recovery strategies for Neo4j, MongoDB, and secondary database platforms.
Migration & Modernization: Lead database migration strategies, including homogeneous and heterogeneous migrations between NoSQL, Graph, and RDBMS platforms.
Capacity Planning: Forecast database growth and plan for scalability, optimal performance, and infrastructure requirements.
Patch Management & Upgrades: Plan and execute database software upgrades, patches, and service packs.
Monitoring & Alerting: Set up proactive monitoring and alerting for database health, performance, and potential failures using Datadog, AWS CloudWatch, Azure Monitor, or Prometheus.
Automation & Scripting: Develop automation scripts using Python, AWS CLI, PowerShell, and shell scripting to streamline database operations.
Security & Compliance: Implement database security best practices, including access controls, encryption, key management, and compliance with cloud security standards.
Incident & Problem Management: Work within ITIL frameworks to resolve incidents and service requests, and perform root cause analysis for problem management.
High Availability & Scalability: Design and manage Neo4j clustering, MongoDB replication/sharding, and HADR configurations across cloud and hybrid environments.
Vendor & Third-Party Tool Management: Evaluate, implement, and manage third-party tools for Neo4j, MongoDB, and cloud database solutions.
Cross-Platform Database Support: Provide secondary support for SQL Server (Always On, Replication, Log Shipping), PostgreSQL (Streaming Replication, Partitioning), MySQL (InnoDB Cluster, Master-Slave Replication), and Oracle (RAC, Data Guard, GoldenGate).
Cloud Platform Expertise: Hands-on with cloud-native database services such as AWS DocumentDB, DynamoDB, Azure CosmosDB, Google Firestore, and Google BigTable.
Cost Optimization: Analyze database workload, optimize cloud costs, and recommend licensing enhancements.

Knowledge & Skills:
Strong expertise in Neo4j (Cypher Query Language, APOC, Graph Algorithms, GDS Library) and MongoDB (Aggregation Framework, Sharding, Replication, Indexing). Experience with homogeneous and heterogeneous database migrations (NoSQL-to-NoSQL, Graph-to-RDBMS, RDBMS-to-NoSQL). Familiarity with database monitoring tools such as Datadog, Prometheus, CloudWatch, Azure Monitor. Proficiency in automation using Python, AWS CLI, PowerShell, Bash/shell scripting. Experience in cloud-based database deployment using AWS RDS, Aurora, DynamoDB, Azure SQL, Azure CosmosDB, GCP Cloud SQL, Firebase, BigTable. Understanding of microservices and event-driven architectures, integrating MongoDB and Neo4j with applications using Kafka, RabbitMQ, or AWS SNS/SQS. Experience with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, CloudFormation, Ansible). Strong analytical and problem-solving skills for database performance tuning and optimization.

Education & Certifications:
Bachelor’s degree in Computer Science, Information Systems, or a related field. Database specialty certifications in Neo4j and MongoDB (Neo4j Certified Professional, MongoDB Associate/Professional Certification). Cloud certifications (AWS Certified Database - Specialty, Azure Database Administrator Associate, Google Cloud Professional Data Engineer).

Preferred Experience:
5+ years of experience in database administration with at least 3 years dedicated to Neo4j and MongoDB. Hands-on experience with GraphDB & NoSQL architecture and migrations. Experience working in DevOps environments and automated CI/CD pipelines for database deployments. Strong expertise in data replication, ETL, and database migration tools such as AWS DMS, Azure DMS, MongoDB Atlas Live Migrate, Neo4j ETL Tool.
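The graph workloads Neo4j serves, which Cypher expresses declaratively, come down to traversals over an adjacency structure. A tiny breadth-first reachability sketch over an invented four-node graph shows the underlying idea:

```python
from collections import deque

# A hypothetical directed graph as an adjacency list
# (in Neo4j these would be nodes connected by relationships).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def reachable(start: str) -> set:
    """Breadth-first traversal: everything reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

assert reachable("A") == {"A", "B", "C", "D"}
assert reachable("B") == {"B", "D"}
```

A graph database's advantage is that each hop is a pointer dereference rather than a join: the traversal cost scales with the neighbourhood visited, not with the total size of the dataset, which is why relationship-heavy queries favour Neo4j over an RDBMS.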
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
Remote
Overview
WELCOME TO SITA
We're the team that keeps airports moving, airlines flying smoothly, and borders open. Our tech and communication innovations are the secret behind the success of the world’s air travel industry. You'll find us at 95% of international hubs. We partner closely with over 2,500 transportation and government clients, each with their own unique needs and challenges. Our goal is to find fresh solutions and cutting-edge tech to make their operations run like clockwork. Want to be a part of something big? Are you ready to love your job? The adventure begins right here, with you, at SITA.

About The Role & Team
The Senior Software Developer (Database Administrator) will play a pivotal role in the design, development, and maintenance of high-performance and scalable database environments. This individual will ensure seamless integration of various database components, leveraging advanced technologies to support applications and data systems. The candidate should possess expertise in SQL Server and MongoDB; experience with other NoSQL solutions would be a plus.

What You’ll Do
Manage, monitor, and maintain SQL Server databases, both on-prem and cloud, across production and non-production environments. Design and implement scalable and reliable database architectures. Develop robust and secure database systems, ensuring high availability and performance. Create and maintain shell scripts for database automation, monitoring, and administrative tasks. Troubleshoot and resolve database issues to ensure system stability and optimal performance. Implement backup, recovery, migration, and disaster recovery strategies. Collaborate with cross-functional teams to understand requirements and deliver database solutions that align with business objectives.

Qualifications
ABOUT YOUR SKILLS
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Over 6 years of experience in database administration, specializing in MongoDB and SQL Server.
Proficient in shell scripting (e.g., Bash, PowerShell) for database automation. Expertise in query optimization, database performance tuning, and high-availability setups such as replica sets, sharding, and failover clusters. Familiarity with cloud-based database solutions and DevOps pipelines. Skilled in database security, including role-based access and encryption. Experienced with monitoring tools like mongotop, mongostat, and SQL Profiler. Knowledge of messaging queues (RabbitMQ, IBM MQ, or Solace) is a plus. Strong understanding of database administration best practices, design patterns, and standards. Demonstrates excellent problem-solving skills, attention to detail, and effective communication and teamwork abilities.

NICE-TO-HAVE
Professional certification is a plus.

What We Offer
We’re all about diversity. We operate in 200 countries, speaking 60 different languages and representing many cultures. We’re really proud of our inclusive environment. Our offices are comfortable and fun places to work, and we make sure you get to work from home too. Find out what it's like to join our team and take a step closer to your best life ever.
🏡 Flex Week: Work from home up to 2 days/week (depending on your team’s needs).
⏰ Flex Day: Make your workday suit your life and plans.
🌎 Flex Location: Take up to 30 days a year to work from any location in the world.
🌿 Employee Wellbeing: We’ve got you covered with our Employee Assistance Program (EAP), for you and your dependents, 24/7, 365 days/year. We also offer Champion Health, a personalized platform that supports a range of wellbeing needs.
🚀 Professional Development: Level up your skills with our training platforms, including LinkedIn Learning!
🙌🏽 Competitive Benefits: Competitive benefits that make sense with both your local market and employment status.
SITA is an Equal Opportunity Employer. We value a diverse workforce.
In support of our Employment Equity Program, we encourage women, aboriginal people, members of visible minorities, and/or persons with disabilities to apply and self-identify in the application process.
Posted 1 week ago
3.0 years
0 Lacs
Chandigarh, India
On-site
We are seeking an experienced MERN Stack Trainer to design, develop, and deliver instructor-led and hands-on training programs covering the full MERN (MongoDB, Express.js, React, Node.js) technology stack. The ideal candidate will possess strong software-architecture knowledge, be well-versed in backend management and design patterns, and be capable of guiding students through both core and advanced topics such as asynchronous programming, database design, scalability, reliability, and maintainability. This role requires designing curriculum, creating lab exercises, evaluating student progress, and continuously refining content to align with industry best practices. Key Responsibilities: Curriculum Design & Development: Define learning objectives, course outlines, and module breakdowns for MERN stack topics. Training Delivery & Facilitation Conduct live instructor-led sessions (onsite/virtual) adhering to learning principles. Facilitate hands-on labs where participants build real-world projects (e.g., e-commerce site, chat application, CRUD apps). Demonstrate step-by-step development, debugging, and deployment workflows. Assign and review practical exercises; provide detailed feedback and remediation for struggling learners. Mentor participants on best practices, troubleshooting, and performance optimization. Assessment & Evaluation Design quizzes, coding challenges, and project assessments that rigorously test conceptual understanding and practical skills. Track participant progress (attendance, lab completion, assessment scores) and prepare weekly status reports. Provide certification guidance and mock interview sessions for MERN-related roles. Continuously collect participant feedback to refine content and delivery style. Content Maintenance & Continuous Improvement Stay up-to-date with the latest MERN ecosystem developments: new Node.js features, React releases, database enhancements, DevOps tooling. 
Regularly revise training materials to incorporate emerging technologies (e.g., serverless functions, Next.js, GraphQL, TypeScript). Collaborate with instructional designers, subject-matter experts, and other trainers to ensure consistency and quality across programs. Required Qualifications Educational Background Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a closely related field. Professional Experience Minimum 3 years of hands-on training experience for designing and building full-stack applications using the MERN stack (Node.js, Express.js, React.js, MongoDB). Preferred 3 years of formal training or mentoring experience in a classroom (physical/virtual) environment, preferably to engineering students or early-career software engineers. Technical Proficiency (must demonstrate strong expertise in all of the following): Node.js & Express.js : building RESTful services, middleware patterns, debugging, error handling, performance tuning. MongoDB : schema design, indexing, aggregation pipelines, replication, and sharding. React.js : component architecture, hooks, state management (Redux or equivalent), React Router, testing frameworks (Jest, React Testing Library). Frontend Technologies : HTML5 semantics, CSS3 (Flexbox, Grid, responsive design, Sass/LESS), Bootstrap, JavaScript (ES6+), jQuery fundamentals. Database Administration : proficiency in at least one relational database (PostgreSQL or MariaDB) and one NoSQL/document database (MongoDB). Familiarity with Redis (caching/real-time sessions), Neo4j, and InfluxDB (optional). Software Architecture & Design Patterns : SOLID principles, MVC/MVVM, event-driven patterns, microservices vs. monolith trade-offs. DevOps & Tooling : Git/GitHub workflows, containerization basics (Docker), basic cloud deployment. Testing & Quality : unit testing (Mocha/Chai, Jest), integration testing (Supertest), basic performance testing (JMeter), code linting (ESLint), code coverage. 
Soft Skills Excellent verbal and written communication skills; ability to explain complex concepts in a simplified and structured manner. Proven classroom management and facilitation skills; adaptable to diverse learner backgrounds. Strong problem-solving aptitude and the ability to perform live troubleshooting during sessions. Demonstrated organizational skills: ability to manage multiple batches, track progress, and ensure timely delivery of content.
Posted 1 week ago
4.0 years
3 - 7 Lacs
Noida
On-site
Position: Backend Developer with Database (PostgreSQL) Expertise Location: Noida Experience Level: 4+ Years Employment Type: Full-time Project Overview: Join our team to develop a state-of-the-art website that visualizes live data from connected vehicle fleets. The platform will process and display fieldwork data using advanced sensors onboard the machines. Key features include a heatmap view, live data updates with minimal latency (average ~5 seconds), and a 3D object viewer for enhanced visualization. The backend system will handle billions of data rows efficiently using PostgreSQL, with a microservice architecture powered by RESTful APIs. The frontend will prioritize performance optimization to ensure smooth data rendering with minimal browser load. Key Responsibilities: Optimize frontend performance to handle live data streams with low latency and ensure efficient rendering. Collaborate with the backend team to integrate RESTful APIs and validate them using Postman. Work with large-scale datasets in PostgreSQL, ensuring smooth interaction between frontend and backend systems. Implement heatmaps and GIS-based visualizations for field data insights using GeoServer. Hands-on experience with AWS services (EC2, RDS, S3, etc.) and Ubuntu Linux. Requirements: Expertise in database design and optimization for large-scale applications. Hands-on experience with PostgreSQL and advanced PostgreSQL concepts such as partitioning, composite indexes, and sharding. Strong knowledge of SQL query optimization. Familiarity with API testing tools like Postman. Preferred Qualifications: Experience in Python programming, especially with frameworks like Flask and Django. Knowledge of Node.js for backend development. Previous experience in developing GIS-based projects involving 3D rendering or WebGL implementations. Why Join Us? Work on cutting-edge connected vehicle technologies. Collaborate with a dynamic, innovative team. 
Opportunity to contribute to impactful, real-time data visualization projects. About Us: Apogee GNSS Pvt. Ltd. offers an extensive range of equipment like GNSS Receivers, CORS, Unmanned systems, GIS Data Collector, Rotating Laser Scanners, Radio, and software like VRS, NTRIP. At Apogee Precision Lasers, our innovative products make your work hassle-free and more productive. Our GNSS solutions help to provide reliable, highly precise positioning in surveying & engineering, and agriculture. Rotating laser scanners are also helping farmers with water management, crop yields, etc., and modernizing agriculture. For over 10+ years, Apogee has 5+ offices, 200+ dealers, and 70,000+ satisfied customers. · Kindly send me the updated resume on the same email id. · Name: Kalpika Shrimali · Designation: HR Manager · Website – www.apogeegnss.com Job Types: Full-time, Permanent Benefits: Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Education: Bachelor's (Preferred) Experience: PostgreSQL: 4 years (Required) Python: 4 years (Required) Work Location: In person Expected Start Date: 16/06/2025
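The posting above lists composite indexes and query optimization among its PostgreSQL requirements. A minimal, hedged sketch of the idea, using Python's built-in SQLite as a stand-in for PostgreSQL (the table and index names are invented for illustration): a composite index on (vehicle_id, ts) lets a filter on both columns do an index search instead of a full table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical telemetry table; names are illustrative, not from the posting.
cur.execute("CREATE TABLE readings (vehicle_id INTEGER, ts INTEGER, speed REAL)")
cur.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(v, t, float(v * t % 90)) for v in range(1, 6) for t in range(1000)],
)

# Composite index matching the query's filters: equality column first, range second.
cur.execute("CREATE INDEX idx_readings_vehicle_ts ON readings (vehicle_id, ts)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT ts, speed FROM readings WHERE vehicle_id = 3 AND ts BETWEEN 100 AND 200"
).fetchall()
# The plan detail should mention the index rather than a full scan.
uses_index = any("idx_readings_vehicle_ts" in row[-1] for row in plan)

rows = cur.execute(
    "SELECT COUNT(*) FROM readings WHERE vehicle_id = 3 AND ts BETWEEN 100 AND 200"
).fetchone()[0]
print(uses_index, rows)  # ts 100..200 inclusive -> 101 rows
```

In PostgreSQL the same DDL applies, and `EXPLAIN ANALYZE` plays the role of `EXPLAIN QUERY PLAN` here; column order in the composite index matters for which predicates it can serve.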
Posted 1 week ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Backend Developer Backend Developer responsible for building and maintaining the server-side infrastructure. This includes managing data, building APIs, integrating with external services, and ensuring the backend can handle scaling, security, and performance optimally. Job Overview: As a Backend Developer, you will be responsible for developing, maintaining, and optimizing the server-side application, database systems, and integrations that power core functionalities. You will ensure the backend is efficient, secure, scalable, and maintainable, collaborating with the frontend team and other stakeholders. Key Responsibilities: 1. API Development & Integration: • Design, develop, and maintain RESTful APIs and GraphQL services to serve data to the frontend. • Build and manage microservices for specific ERP functionalities (e.g., Inventory, Orders, User Management, etc.). • Integrate third-party APIs and services (payment gateways, authentication systems, etc.). • Work with API Gateway (AWS) to manage, monitor, and throttle API requests. 2. Database Design & Management: • Design and maintain PostgreSQL databases, ensuring data integrity, normalization, and efficient query performance. • Implement ORM (Object-Relational Mapping) solutions like Prisma, Sequelize, or Django ORM for easier database management. • Manage database migrations, backups, and high-availability configurations. • Design and implement caching mechanisms to improve database query performance (e.g., Redis). 3. Authentication & Authorization: • Implement secure user authentication and authorization systems (OAuth 2.0, JWT, Amazon Cognito). • Handle user sessions and roles to ensure that only authorized users can access specific data or perform actions. 4. Performance Optimization: • Optimize server-side performance to ensure the ERP system can handle high traffic and large data sets. • Perform database indexing and query optimization to reduce load times. 
• Set up and monitor auto-scaling infrastructure (e.g., AWS EC2 Auto Scaling, AWS Lambda for serverless functions). 5. Security & Compliance: • Implement best practices for securing the backend, including data encryption, rate limiting, and API security. • Ensure that sensitive data is stored and transmitted securely, using services like AWS KMS (Key Management Service). • Comply with industry standards for data protection and privacy (e.g., GDPR). 6. Testing & Debugging: • Write unit, integration, and API tests using testing frameworks like Jest, Mocha, or PyTest (depending on language). • Debug backend issues and optimize performance for a seamless user experience. • Conduct thorough testing for edge cases, system loads, and failure scenarios. 7. Collaboration & Agile Development: • Work closely with the frontend team to ensure smooth integration of APIs with the user interface. • Participate in agile development cycles, attending daily standups, sprint planning, and code reviews. • Contribute to architecture decisions and system design for scaling and maintaining the ERP platform. 8. Infrastructure & DevOps: • Manage cloud infrastructure, using AWS EC2, S3, Lambda, and other services. • Implement CI/CD pipelines for seamless deployment and updates using GitHub Actions, Jenkins, or AWS CodePipeline. • Use tools like Terraform or CloudFormation for infrastructure-as-code (IaC). Required Skills & Qualifications: • Proficiency in Backend Programming Languages: • Node.js (JavaScript/TypeScript) or Python (Django/Flask). • Experience with Relational Databases (PostgreSQL, MySQL, or similar). • Experience with ORMs like Prisma, Sequelize, Django ORM. • Knowledge of GraphQL and RESTful APIs. • Experience with authentication systems (OAuth 2.0, JWT, Amazon Cognito). • Familiarity with AWS services (EC2, Lambda, RDS, S3, CloudWatch). • Strong understanding of version control with Git. • Experience with Docker and containerized applications. 
• Ability to design and implement scalable microservices architecture. • Familiarity with caching mechanisms (e.g., Redis, CloudFront). • Knowledge of CI/CD pipelines (GitHub Actions, Jenkins, CodePipeline). • Familiarity with API Gateway (AWS). • Understanding of security best practices in backend systems. Preferred Skills & Qualifications: • Familiarity with serverless architecture (AWS Lambda). • Experience with container orchestration tools like Kubernetes or Docker Swarm. • Experience with GraphQL and tools like Apollo Server for building GraphQL APIs. • Knowledge of monitoring and logging tools like AWS CloudWatch, Prometheus, or ELK Stack. • Familiarity with server-side rendering frameworks like Next.js (for full-stack development). • Advanced Database Management: Sharding, Replication, High Availability, and Failover mechanisms. Education & Experience: • Degree or Experience in Computer Science, Software Engineering, or related field. • At least 6+ years of experience as a backend developer. • Proven experience in developing APIs, integrating with third-party services, and handling large-scale databases. Soft Skills: • Strong problem-solving and analytical skills. • Excellent written and verbal communication skills. • Ability to work collaboratively in a team-oriented, agile environment. • Comfortable with remote work and self-management. • Adaptability to new technologies and learning on the go. Show more Show less
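The caching requirement above (Redis for query performance) boils down to a read-through cache with expiry. A small sketch in Python, assuming an in-process dictionary stands in for Redis; the key/loader names are illustrative only:

```python
import time

class TTLCache:
    """Tiny read-through cache sketch; Redis would play this role in production."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, expires_at)

    def get_or_load(self, key, loader):
        now = self.clock()
        hit = self._store.get(key)
        if hit and hit[1] > now:
            return hit[0], True          # fresh cache hit
        value = loader(key)              # miss or expired: load and cache
        self._store[key] = (value, now + self.ttl)
        return value, False

# Fake clock so expiry is deterministic in the example.
t = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: t[0])
calls = []
load = lambda k: calls.append(k) or f"user:{k}"

v1, hit1 = cache.get_or_load(42, load)   # miss -> loads from "database"
v2, hit2 = cache.get_or_load(42, load)   # hit within TTL, no second load
t[0] = 31.0
v3, hit3 = cache.get_or_load(42, load)   # expired -> reloads
print(hit1, hit2, hit3, len(calls))      # False True False 2
```

With Redis the same pattern uses `SETEX`/`GET`; the TTL bounds staleness while cutting repeated database reads.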
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Job Summary: We are looking for an experienced Database & Data Engineer who can own the full lifecycle of our cloud data systems—from database optimization to building scalable data pipelines. This hybrid role demands deep expertise in SQL performance tuning , cloud-native ETL/ELT , and modern Azure data engineering using tools like Azure Data Factory, Databricks, and PySpark . Ideal candidates will be comfortable working across Medallion architecture , transforming raw data into high-quality assets ready for analytics and machine learning. Key Responsibilities: 🔹 Database Engineering Implement and optimize indexing, partitioning, and sharding strategies to improve performance and scalability. Tune and refactor complex SQL queries, stored procedures, and triggers using execution plans and profiling tools. Perform database performance benchmarking, query profiling , and resource usage analysis. Address query bottlenecks, deadlocks, and concurrency issues using diagnostic tools and SQL optimization. Design and implement read/write splitting and horizontal/vertical sharding for distributed systems. Automate backup, restore, high availability, and disaster recovery using native Azure features. Maintain schema versioning and enable automated deployment via CI/CD pipelines and Git . 🔹 Data Engineering Build and orchestrate scalable data pipelines using Azure Data Factory (ADF), Databricks , and PySpark . Implement Medallion architecture with Bronze, Silver, and Gold layers in Azure Data Lake. Process and transform data using PySpark, Pandas, and NumPy . Create and manage data integrations from REST APIs , flat files, databases, and third-party systems. Develop and manage incremental loads , SCD Type 1 & 2 , and advanced data transformation workflows . Leverage Azure services like Synapse, Azure SQL DB, Azure Blob Storage , and Azure Data Lake Gen2 . Ensure data quality, consistency, and lineage across environments. 
🔹 Collaboration & Governance Work with cross-functional teams including data science, BI, and business analysts. Maintain standards around data governance, privacy, and security compliance . Contribute to internal documentation and team knowledge base using tools like JIRA, Confluence, and SharePoint . Participate in Agile workflows and help define sprint deliverables for data engineering tasks. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 5+ years of hands-on experience in data engineering and SQL performance optimization in cloud environments. Expertise in Azure Data Factory, Azure Data Lake, Azure SQL, Azure Synapse , and Databricks . Proficient in SQL, Python, PySpark, Pandas, and NumPy . Strong experience in query performance tuning, indexing, and partitioning . Familiar with PostgreSQL (PGSQL) and handling NoSQL databases like Cosmos DB or Elasticsearch . Experience with REST APIs , flat files, and real-time integrations. Working knowledge of version control (Git) and CI/CD practices in Azure DevOps or equivalent. Solid understanding of Medallion architecture , lakehouse concepts, and data reliability best practices. Preferred Qualifications: Microsoft Certified: Azure Data Engineer Associate or equivalent. Familiarity with Docker, Kubernetes , or other containerization tools. Exposure to streaming platforms such as Kafka, Azure Event Hubs, or Azure Stream Analytics. Industry experience in supply chain, logistics, or finance is a plus. Show more Show less
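The SCD Type 2 workflows mentioned in the posting above keep full history by closing the current row and appending a new version on change. A minimal sketch in plain Python (field names are invented for illustration; in the posting's stack this would be a PySpark/Delta merge):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical dimension record; field names are illustrative.
@dataclass
class DimRow:
    key: str
    value: str
    valid_from: int
    valid_to: Optional[int] = None   # None = current version

def scd2_apply(history, key, new_value, as_of):
    """Sketch of an SCD Type 2 merge: close the current row, append a new version."""
    current = next((r for r in history if r.key == key and r.valid_to is None), None)
    if current is None:
        history.append(DimRow(key, new_value, as_of))        # first version
    elif current.value != new_value:
        current.valid_to = as_of                             # close old version
        history.append(DimRow(key, new_value, as_of))        # open new version
    # unchanged value: no-op, history untouched
    return history

hist = []
scd2_apply(hist, "cust-1", "Chennai", as_of=1)
scd2_apply(hist, "cust-1", "Chennai", as_of=2)   # unchanged -> no new row
scd2_apply(hist, "cust-1", "Pune", as_of=3)      # changed -> version 2
print([(r.value, r.valid_from, r.valid_to) for r in hist])
# [('Chennai', 1, 3), ('Pune', 3, None)]
```

SCD Type 1, by contrast, would overwrite `value` in place and lose the Chennai row entirely.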
Posted 2 weeks ago
10.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Summary: We are seeking an experienced Database Administrator to design, implement, and manage highly available database systems supporting real-time data processing for vehicle telemetry applications. The ideal candidate will ensure 24/7 database availability while handling millions of requests and maintaining optimal performance during peak loads. Key Responsibilities: Design and implement highly available database architectures capable of handling millions of transactions Manage and optimize multiple database instances across microservices infrastructure Ensure zero-downtime during maintenance, updates, and scaling operations Design and implement real-time data ingestion strategies for vehicle telemetry data Develop and maintain database backup and recovery procedures Monitor database performance and implement optimization strategies Design and manage data partitioning and sharding strategies Implement and manage database replication and failover mechanisms Establish and maintain database security protocols Perform capacity planning and resource optimization Required Skills: Expert knowledge of enterprise database systems (Oracle, PostgreSQL, MongoDB, etc.) Strong experience with time-series databases Proficiency in database clustering and high availability solutions Experience with database performance tuning and query optimization Knowledge of data replication and synchronization techniques Expertise in backup and disaster recovery strategies Understanding of database security best practices Experience with monitoring and alerting tools Knowledge of automated database management tools Proficiency in scripting languages (Python, Shell, etc.) 
Required Experience: 10+ years of hands-on experience as a Database Administrator Proven experience managing high-volume, real-time database systems Experience with geospatial data management Background in IoT or telemetry data management Experience with microservices architectures Track record of implementing high-availability solutions
Posted 2 weeks ago
6.0 years
0 Lacs
Delhi, India
On-site
Job Summary: We are looking for a skilled Database Administrator (DBA) with 6+ years of experience in managing and optimizing SQL, MySQL, MongoDB, and PostgreSQL databases. The ideal candidate will ensure database availability, performance, security, and scalability while supporting business-critical applications. Key Responsibilities: Install, configure, and maintain SQL, MySQL, MongoDB, and PostgreSQL databases. Monitor database health, optimize queries, and ensure performance tuning. Implement backup, recovery, and disaster recovery strategies. Design and manage high availability (HA) and replication solutions. Troubleshoot database issues, optimize indexing, and resolve performance bottlenecks. Secure databases by implementing access controls and encryption. Work closely with developers and DevOps teams to support database needs. Automate database maintenance tasks using scripts (SQL, PowerShell, Python). Plan and execute database upgrades and migrations. Required Skills & Qualifications: SQL/MySQL: Strong experience with T-SQL, performance tuning, AlwaysOn Availability Groups, Replication, and Backup & Restore strategies. MongoDB: Experience with NoSQL database management, indexing, replication (Replica Sets), and scaling strategies (Sharding). PostgreSQL: Expertise in query optimization, partitioning, backup & restore, and performance tuning. Experience with cloud-based databases (Azure SQL, Google Cloud). Strong understanding of database security and user management. Hands-on experience with database monitoring tools (Grafana, Prometheus, SQL Profiler, etc.). Experience in scripting and automation (Bash, PowerShell, Python). Preferred Qualifications: Certifications such as Microsoft Certified: Azure Database Administrator, MongoDB Certified DBA, PostgreSQL Certification. Experience in CI/CD database integration and DevOps environments. Exposure to containerized databases (Docker, Kubernetes). Show more Show less
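The sharding skills the posting above asks for rest on one core mechanic: deterministic key-to-shard routing. A hedged sketch in Python (the key format and shard count are illustrative; production systems often prefer consistent hashing so that adding shards moves fewer keys):

```python
import hashlib

def shard_for(key: str, shard_count: int) -> int:
    """Deterministic shard routing: hash the key, take it modulo the shard count."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % shard_count

# The same key always lands on the same shard; distinct keys spread out.
a = shard_for("user:1001", 4)
b = shard_for("user:1001", 4)
spread = {shard_for(f"user:{i}", 4) for i in range(1000)}
print(a == b, spread == {0, 1, 2, 3})
```

MongoDB's hashed shard keys and PostgreSQL hash partitioning both apply this idea server-side; the choice of shard key determines whether related data stays co-located.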
Posted 2 weeks ago
4.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Senior Java Developer Experience: 4 - 9 years Location: Pune Technical Know-How Core & Advanced Java: Collections, Hibernate, JDBC, Caching, Servlets, REST APIs, Threading, File Operations, metadata-driven concepts, Exception Handling. Knowledge of any framework (preferably Spring). Hands-on with JavaScript. Formats: working with XML & JSON data formats using Java programs, XSLT. Oracle SQL: aggregation queries, Indexes, Joins, DDL, DML, Subqueries. Sharding/Partitioning Concepts. PL/SQL - Cursors (Nice to have). Job Description Ability to design and implement integration components between various enterprise systems. In-depth knowledge of REST API based integrations. Hands-on with JavaScript programming. Must have done at least 2 end-to-end implementations for large banks/insurance companies. Working experience & sound understanding of the JAVA/J2EE-based Oracle products technical stack (XML, JSON, Integration with Oracle products, JAVA). Experience designing and developing Batch and Realtime (REST/SOAP) interfaces for large datasets in the BFSI domain. Performance Tuning (Analyzing AWR reports, SQL tuning, Partitioning & Archival, writing most optimized scalable code, deployments). Ability to understand requirements from business stakeholders and convert them into technical designs. Able to analyze and convert data into usable format with complex SQL/PL-SQL. Experience in writing & understanding XSLT, developing REST APIs for Inbound and Outbound integrations. Working knowledge of scalable cloud-native architecture and design fundamentals. Exposure to Unix/Linux and basic Shell scripting. Able to work with cross-geography teams and manage & report workload on a daily basis. Adaptable, hard-working, independent, accountable, a strong sense of ownership, confident and strong communication. Adaptability to pick up new technology quickly (Kafka, Spark or new products as per company requirements). Must have exposure to the Banking or Financial domain in past projects. 
Personal Skills Good written & verbal communication Should have a Flexible, Professional Approach. Demonstrated ability to manage teams with a results-oriented track record. About RIA Advisory RIA Advisory LLC (RIA) is a business advisory and technology company that specializes in the field of Revenue Management and Billing for Banking, Payments, Capital Markets, Exchanges, Utilities, Healthcare and Insurance industry verticals. With a highly experienced team in the field of Pricing, Billing & Revenue Management, RIA prioritizes understanding client needs and industry best practices to approach any problem with insight and careful strategic planning. Each one of RIA Advisory's Managing Partners have over 20 years of industry expertise and experience, our leadership and consulting team demonstrate our continued efficiency to serve our clients as a strategic partner especially for transforming ORMB and CC&B space. Our operation are spread across US, UK, India, Philippines, Australia. Services Offered Business Process Advisory for Revenue management processes Technology Consulting & Implementation Help clients transition to latest technology suite and overcome business problems. Managed Services Quality Assurance Cloud Services Product Offered Data Migration and Integration Hub Data Analytics Platform Test Automation Document Fulfilment Customer Self Service Top Industries/Verticals Financial Services Healthcare Energy and Utilities Public Sector Revenue Management. We recognize the impact of technologies on the process & people that drive them, innovate scalable processes & accelerate the path to revenue realization. We value our people and are Great Place to Work Certified. (ref:hirist.tech) Show more Show less
Posted 2 weeks ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role : Application Tech Support Practitioner Project Role Description : Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge. Must have skills : MongoDB Administration Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : Graduate Summary: As an Application Tech Support Practitioner, you will act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge. You will be based in Coimbatore. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work-related problems. - Provide timely and effective technical support to clients. - Troubleshoot and resolve issues related to MongoDB Administration. - Collaborate with cross-functional teams to enhance system performance. - Document technical processes and solutions for future reference. - Stay updated with the latest trends and advancements in MongoDB technology. Professional & Technical Skills: - Must To Have Skills: Proficiency in MongoDB Administration. - Strong understanding of database management principles. - Experience in performance tuning and optimization of MongoDB databases. - Knowledge of data replication and sharding in MongoDB. - Hands-on experience in troubleshooting and resolving database issues. Additional Information: - The candidate should have a minimum of 3 years of experience in MongoDB Administration. - This position is based at our Coimbatore office. 
- A Graduate degree is required.
Posted 2 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Software Development Engineer 2 (Backend), you will own the design, architecture, and implementation of scalable server-side systems using Node.js. You'll drive performance optimizations, enforce security best practices, and mentor junior engineers. You will work closely with product, frontend, and DevOps teams to deliver reliable, high-throughput services that power our next-generation digital products. Strategic Development The core responsibilities for the job include the following: Define and evolve backend architecture roadmaps focused on Node.js microservices. Design complex, event-driven systems to support real-time features at scale. Develop security frameworks, API-gateway strategies, and technical debt reduction plans. Drive innovation in caching, queuing, and data-streaming architectures. Technical Leadership Mentor mid- and junior-level engineers; establish coding and design standards. Lead high-level architectural decision-making, design reviews, and post-mortems. Produce and maintain clear technical documentation, playbooks, and runbooks. Champion engineering excellence programs and facilitate regular knowledge-sharing. Cross-Functional Collaboration Align backend roadmaps with product, design, and infrastructure objectives. Collaborate with frontend teams on API contracts, data schemas, and performance budgets. Drive integration strategies across web, mobile, and cloud platforms. Requirements 2+ years of hands-on experience in backend development, primarily with Node.js. Proven track record designing and operating large-scale, distributed systems. Strong understanding of system design, performance profiling, and security best practices. Experience mentoring peers, leading code reviews, and driving architectural discussions. Excellent problem-solving skills and ability to work independently. Backend Development Expertise in Node.js (Express, Nest.js) for building production-grade services. 
Implement real-time systems using WebSockets, message queues (RabbitMQ, Kafka). Design and maintain internal NPM packages and custom middleware. Profile and optimize server-side performance, memory usage, and event loops. Frontend Development (Good To Have) Familiarity with React.js, Angular, or Vue.js to troubleshoot end-to-end flows. Understand SSR, hydration, and API consumption patterns. Database And Infrastructure Design distributed database schemas (PostgreSQL, MongoDB, Redis) with sharding and replication strategies. Implement caching (Redis/Memcached), search (Elasticsearch), and backup/restore. Lead database security, compliance, and governance initiatives. Cloud And DevOps (Good To Have) Build and maintain CI/CD pipelines (GitHub Actions, Jenkins, CircleCI). Work with Docker, Kubernetes, ECS/EKS, and serverless (Lambda, Cloud Functions). Implement monitoring, logging, and alerting (Prometheus, Grafana, ELK stack). This job was posted by Krishna Sharmathi from RootQuotient.
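The message-queue work the posting above describes (RabbitMQ, Kafka) centers on acknowledgement semantics: a message stays "in flight" until the consumer acks it, and a nack requeues it for redelivery. A toy at-least-once sketch in Python (the posting's stack is Node.js; Python is used here only for brevity, and all names are illustrative):

```python
from collections import deque

class AckQueue:
    """Toy at-least-once queue sketch; RabbitMQ/Kafka fill this role for real."""
    def __init__(self):
        self._ready = deque()     # messages waiting for a consumer
        self._in_flight = {}      # delivery tag -> message awaiting ack
        self._next_tag = 0

    def publish(self, msg):
        self._ready.append(msg)

    def consume(self):
        msg = self._ready.popleft()
        self._next_tag += 1
        self._in_flight[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        del self._in_flight[tag]                      # processed: drop it

    def nack(self, tag):
        self._ready.append(self._in_flight.pop(tag))  # failed: redeliver later

q = AckQueue()
q.publish("order-created")
tag, msg = q.consume()
q.nack(tag)                  # consumer failed -> message goes back on the queue
tag2, msg2 = q.consume()     # redelivered
q.ack(tag2)
print(msg, msg2, len(q._in_flight))  # order-created order-created 0
```

At-least-once delivery implies consumers must be idempotent, since a crash between processing and ack causes a duplicate delivery.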
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality. POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: • Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. • Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. • Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. • Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets. 
• Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. • Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. • Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks • Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. • Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. • Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. • Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: • Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. • High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestions. • Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. • Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. • Data Modeling: Ability to design schemas and data models tailored for high throughput use cases. • Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines. • Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. • Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. • Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance. 
• Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively.
PREFERRED QUALIFICATIONS:
• Certification in any of the mentioned database technologies.
• Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP.
• Knowledge of distributed systems and large-scale data processing.
• Familiarity with cloud-based database solutions and infrastructure.
• Familiarity with large-scale data ingestion tools like Kafka, Spark, or Flink.
EDUCATIONAL REQUIREMENTS:
• Bachelor’s degree in Computer Science, Information Technology, or a related field. Equivalent work experience will also be considered.
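The execution-plan analysis this role calls for can be illustrated with a minimal, self-contained sketch. SQLite (from the Python standard library) stands in here for the RDS engines named above; the workflow is analogous to running `EXPLAIN ANALYZE` on PostgreSQL or `EXPLAIN` on MySQL. All table and index names are made up for the example.

```python
import sqlite3

# Illustrative only: SQLite stands in for an RDS engine. The point is the
# workflow: inspect the plan, add an index, confirm the plan changed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INT, ts TEXT)")
conn.executemany(
    "INSERT INTO events (user_id, ts) VALUES (?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

before = plan("SELECT * FROM events WHERE user_id = 7")   # full table scan
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan("SELECT * FROM events WHERE user_id = 7")    # index lookup

print(before[0])
print(after[0])
```

On a real workload the same loop (measure, index, re-measure) is driven by slow-query logs and metrics rather than a toy table.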
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are looking for a SQL Expert with around 5 years of hands-on experience in designing, developing, and optimizing complex SQL queries, stored procedures, and database solutions. You will play a key role in supporting our data-driven applications, ensuring efficient data processing, performance tuning, and robust database design. This is a critical role working alongside product, engineering, and analytics teams to deliver high-quality, reliable, and scalable data solutions.
Responsibilities:
Design and develop complex SQL queries, stored procedures, views, and functions.
Optimize query performance, indexing strategies, and database tuning.
Develop and maintain ETL/ELT pipelines for data processing and transformation.
Collaborate with developers to design scalable and normalized database schemas.
Analyze and troubleshoot database performance issues and recommend improvements.
Ensure data integrity, consistency, and compliance across systems.
Create and maintain comprehensive documentation of data models and processes.
Support reporting and analytics teams by providing clean, optimized datasets.
Work with large datasets and understand partitioning, sharding, and parallel processing.
Skills & Experience:
5+ years of hands-on experience with SQL (SQL Server, MySQL, PostgreSQL, Oracle, or similar).
Strong knowledge of advanced SQL concepts: window functions, CTEs, indexing, query optimization.
Experience in writing and optimizing stored procedures, triggers, and functions.
Familiarity with data warehousing concepts, dimensional modeling, and ETL processes.
Ability to diagnose and resolve database performance issues.
Experience working with large, complex datasets and ensuring high performance.
Strong understanding of relational database design and normalization.
Solid experience with tools like SSIS, Talend, Apache Airflow, or similar ETL frameworks (nice to have).
Familiarity with cloud databases (AWS RDS, BigQuery, Snowflake, Azure SQL) is a plus.
Good communication and documentation skills.
Qualifications:
Experience with BI/reporting tools (Power BI, Tableau, Looker).
Knowledge of scripting languages (Python, Bash) for data manipulation.
Understanding of NoSQL databases or hybrid data architectures.
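The "advanced SQL concepts" listed above (CTEs and window functions) can be sketched in a few lines. This is a portable illustration run against SQLite via the Python standard library, not any specific employer's stack; the SQL itself is standard and works the same on SQL Server, PostgreSQL, and the other engines named. The `sales` table is invented for the example.

```python
import sqlite3

# A CTE wrapping a window function: top sale per region.
# Requires a SQLite build with window-function support (3.25+,
# bundled with any recent Python).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("N", 100), ("N", 300), ("S", 200), ("S", 50)])

rows = conn.execute("""
    WITH ranked AS (
        SELECT region, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
        FROM sales
    )
    SELECT region, amount FROM ranked WHERE rnk = 1 ORDER BY region
""").fetchall()

print(rows)  # one row per region: the largest sale
```

The same shape (CTE to stage, window function to rank within a partition) replaces self-joins and correlated subqueries in most "top-N per group" problems.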
Posted 2 weeks ago
4.0 - 6.0 years
4 - 14 Lacs
Noida
On-site
Key Responsibilities:
Install, configure, and maintain MongoDB and other databases.
Ensure database performance, security, and scalability.
Implement replication, sharding, and backup strategies.
Optimize queries, indexing, and storage for efficiency.
Monitor database health using tools like Ops Manager, Prometheus, or Grafana.
Troubleshoot database issues and ensure high availability.
Automate database management tasks using Shell/Python scripting.
Collaborate with development teams to optimize schema design and queries.
Requirements:
4-6 years of experience in database administration.
Strong expertise in MongoDB (preferred), MySQL, or PostgreSQL.
Hands-on experience with replication, sharding, and high availability.
Knowledge of backup, restore, and disaster recovery strategies.
Experience in Linux environments and scripting (Shell, Python).
Familiarity with MongoDB Atlas, AWS RDS, or cloud-based databases.
Preferred Qualifications:
MongoDB Certification is a plus.
Experience with DevOps tools like Docker, Kubernetes, or Ansible.
Exposure to both SQL and NoSQL databases.
Job Types: Full-time, Permanent
Pay: ₹406,812.10 - ₹1,498,222.49 per year
Benefits:
Health insurance
Provident Fund
Schedule: Day shift
Application Question(s):
Percentage in 10th:
Percentage in 12th:
Percentage in Graduation:
Notice Period:
Current CTC:
Expected CTC:
Work Location: In person
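The sharding responsibility above boils down to routing each document to a shard by a stable hash of its shard key. This is a conceptual sketch of hashed-shard-key routing (the idea behind MongoDB's hashed sharding), not the pymongo API; the key format and shard count are invented for the example.

```python
import hashlib

# Conceptual: a stable hash of the shard key decides which shard owns a
# document. A real cluster maps hash ranges to chunks and rebalances them.
NUM_SHARDS = 4

def shard_for(key: str) -> int:
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# With a high-cardinality key, documents spread roughly evenly.
counts = [0] * NUM_SHARDS
for i in range(1000):
    counts[shard_for(f"user-{i}")] += 1

print(counts)  # roughly even spread across the four shards
```

The practical takeaway is the same one DBAs apply when choosing shard keys: a low-cardinality or monotonically increasing key would concentrate writes on one shard, while a hashed high-cardinality key spreads them.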
Posted 2 weeks ago
6.0 - 8.0 years
12 - 15 Lacs
Noida
On-site
Key Responsibilities:
Administer, monitor, and maintain MySQL and MongoDB databases in production and non-production environments.
Perform database performance tuning, query optimization, and regular health checks.
Design and implement backup and disaster recovery strategies.
Ensure high availability and scalability using replication, clustering, and sharding techniques.
Monitor database systems using tools such as Percona Monitoring and Management (PMM), Nagios, Prometheus, or equivalent.
Perform regular database upgrades, patches, and migrations.
Create and maintain documentation related to database architecture, configurations, and procedures.
Collaborate with development teams on database design, indexing strategies, and best practices.
Automate database tasks using scripts or orchestration tools.
Ensure data security, compliance, and access control policies are enforced.
Participate in on-call rotation and support emergency troubleshooting and recovery.
Required Skills and Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
6 to 8 years of proven experience as a DBA handling MySQL and MongoDB databases.
Deep understanding of MySQL (InnoDB, performance schema, slow query analysis) and MongoDB (replica sets, sharding, aggregation pipeline).
Proficient in writing and optimizing SQL and NoSQL queries.
Strong experience with Linux/Unix environments.
Experience in implementing and managing high availability and disaster recovery.
Hands-on experience with database backup tools (e.g., Percona XtraBackup, mongodump/mongorestore).
Knowledge of security practices, including encryption, auditing, and user privilege management.
Familiarity with cloud platforms (AWS, GCP, or Azure) and managed DB services (e.g., RDS, DocumentDB, Atlas).
Scripting skills (Shell, Python, or similar) for automation tasks.
Strong problem-solving and communication skills.
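The "slow query analysis" skill above often starts with parsing the slow-query log and aggregating time by statement. A hedged sketch: the log excerpt below is a simplified stand-in (real MySQL slow logs carry more header fields, timestamps, and multi-line statements), but the aggregation idea is the same.

```python
import re
from collections import defaultdict

# Simplified slow-query-log excerpt (invented for the example).
LOG = """\
# Query_time: 2.50  Rows_examined: 100000
SELECT * FROM orders WHERE status = 'open';
# Query_time: 0.30  Rows_examined: 120
SELECT id FROM users WHERE email = 'a@b.c';
# Query_time: 3.10  Rows_examined: 250000
SELECT * FROM orders WHERE status = 'open';
"""

# Sum Query_time per statement text to surface the worst offenders.
totals = defaultdict(float)
current = None
for line in LOG.splitlines():
    m = re.match(r"# Query_time: ([\d.]+)", line)
    if m:
        current = float(m.group(1))
    elif current is not None:
        totals[line] += current
        current = None

worst = max(totals, key=totals.get)
print(worst, round(totals[worst], 2))
```

In practice tools like `pt-query-digest` do this with statement fingerprinting (normalizing literals), but the core loop is the same aggregation.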
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹1,500,000.00 per year
Work Location: In person
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At DigitalOcean, we're not just simplifying cloud computing - we're revolutionizing it. We serve the developer community and the businesses they build with a relentless pursuit of simplicity. With our customers at the heart of what we do - and powered by a diverse culture that values boldness, speed, simplicity, ownership, and a growth mindset - we are committed to building truly useful products. Come swim with us!
Position Overview
We are looking for a Software Engineer who is passionate about writing clean, maintainable code and eager to contribute to the success of our platform. As a Software Engineer at DigitalOcean, you will join a dynamic team dedicated to revolutionizing cloud computing. We’re looking for an experienced Software Engineer II to join our growing engineering team. You’ll work on building and maintaining features that directly impact our users, from creating scalable backend systems to improving performance for thousands of customers.
What You’ll Do
Design, develop, and maintain backend systems and services that power our platform.
Collaborate with cross-functional teams to design and implement new features, ensuring the best possible developer experience for our users.
Troubleshoot complex technical problems and find efficient solutions in a timely manner.
Write high-quality, testable code, and contribute to code reviews to maintain high standards of development practices.
Participate in architecture discussions and contribute to the direction of the product’s technical vision.
Continuously improve the reliability, scalability, and performance of the platform.
Participate in rotating on-call support, providing assistance with production systems when necessary.
Mentor and guide junior engineers, helping them grow technically and professionally.
What You’ll Add To DigitalOcean
A degree in Computer Science, Engineering, or a related field, or equivalent experience.
Proficiency in at least one modern programming language (e.g., Go, Python, Ruby, Java, etc.), with a strong understanding of data structures, algorithms, and software design principles.
Hands-on experience with cloud computing platforms and infrastructure-as-code practices.
Strong knowledge of RESTful API design and web services architecture.
Demonstrated ability to build scalable and reliable systems that operate in production at scale.
Excellent written and verbal communication skills to effectively collaborate with teams.
A deep understanding of testing principles and the ability to write automated tests that ensure the quality of code.
Familiarity with agile methodologies, including sprint planning, continuous integration, and delivery.
Knowledge of advanced database concepts such as sharding, indexing, and performance tuning.
Exposure to monitoring and observability tools such as Prometheus, Grafana, or ELK Stack.
Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Familiarity with Kubernetes, Docker, and other containerization/orchestration tools.
Why You’ll Like Working For DigitalOcean
We innovate with purpose. You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions.
We prioritize career development. At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing.
We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.
We care about your well-being. Regardless of your location, we will provide you with a competitive array of benefits to support you, from our Employee Assistance Program to Local Employee Meetups to a flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.
We reward our employees. The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
We value diversity and inclusion. We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.
This job is located in Hyderabad, India.
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
CodeChavo is a global digital transformation solutions provider committed to making a real impact through innovation. Collaborating with leading technology companies, CodeChavo offers end-to-end services from design to operation. With deep expertise and a future-focused approach, CodeChavo embeds agility and innovation into clients’ organizations. Our mission is to help companies outsource their digital projects and build high-quality tech teams.
Role Description
We are seeking a Full Stack Developer (.NET + Angular + Elasticsearch) for a full-time, on-site role located in Gurugram. The Full Stack Developer will be responsible for both back-end and front-end development, utilizing technologies such as .NET, Angular, and Elasticsearch. Daily tasks include developing and maintaining web applications, collaborating with cross-functional teams, ensuring application performance, and writing clean, scalable code.
Key Responsibilities
Design, develop, and test robust, scalable features in .NET and Angular-based applications.
Collaborate with cross-functional teams in an Agile/Scrum environment to deliver high-quality software.
Develop RESTful APIs and microservices using ASP.NET Core.
Implement advanced search capabilities using Elasticsearch or OpenSearch.
Optimize backend performance through caching (in-memory and shared) and query tuning.
Secure applications using IdentityServer4, OAuth2, and OpenID Connect protocols.
Troubleshoot and fix application bugs; write clean, maintainable code.
Write unit and integration tests to ensure code quality.
Participate in code reviews, sprint planning, and daily stand-ups.
Requirements
3–4 years of professional software development experience.
Proficient in C#.NET and ASP.NET Core.
Hands-on experience with Angular 10+ and TypeScript.
Strong SQL and relational database experience (e.g., SQL Server, PostgreSQL).
Solid understanding of Elasticsearch or OpenSearch (must-have).
Familiar with IdentityServer4 and modern authentication methods.
Experience with caching techniques (MemoryCache, Redis).
Knowledge of database scaling strategies like sharding and replication.
Familiarity with Git and version control workflows.
Ability to write and maintain unit tests using frameworks like xUnit, NUnit, Jasmine, or Karma.
Good to have:
Experience with CI/CD and deployment pipelines.
Exposure to packaging and publishing NPM libraries.
Basic Docker/containerisation understanding.
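The in-memory caching technique mentioned above (the MemoryCache/Redis-with-TTL pattern) can be sketched language-agnostically. This is a minimal single-threaded illustration of TTL-based caching with lazy eviction, not the .NET MemoryCache API; class and key names are invented for the example.

```python
import time

# Minimal TTL cache: each entry carries an expiry time and is evicted
# lazily on read. Assumes single-threaded use; a real cache adds locking
# and size bounds.
class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:1", {"name": "Ada"})
print(cache.get("user:1"))   # hit: {'name': 'Ada'}
time.sleep(0.06)
print(cache.get("user:1"))   # expired: None
```

A shared cache like Redis applies the same idea across processes (`SET key value EX seconds`), trading in-process speed for consistency between service instances.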
Posted 2 weeks ago