5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Software Engineer (Cloud Development) at our company, you will have the opportunity to be a key part of the Cloud development group in Bangalore. We are seeking passionate individuals who are enthusiastic problem solvers and experienced Cloud engineers to help us build and maintain the Synamedia GO product and the Infinite suite of solutions. Your role will involve designing, developing, and deploying solutions using your deep programming and systems experience for the next generation of products in the domain of video streaming. Your key responsibilities will include conducting technology evaluations, developing proofs of concept, designing distributed cloud microservices features, writing code, conducting code reviews, continuous integration, continuous deployment, and automated testing. You will work as part of a development team responsible for building and managing microservices for the platform. Additionally, you will play a critical role in the design and development of services, overseeing the work of junior team members, collaborating in a multi-site team environment, and ensuring the success of your team by delivering high-quality results in a timely manner. To be successful in this role, you should have a strong technical background with experience in cloud design, development, deployment, and high-scale systems. You should be proficient in loosely coupled design, microservices development, message queues, and containerized application deployment. Hands-on experience with technologies such as NodeJS, Java, GoLang, and cloud technologies like AWS, EKS, and OpenStack is required. You should also have experience in DevOps, CI/CD pipelines, monitoring tools, and database technologies. We are looking for highly motivated individuals who are self-starters, independent, have excellent analytical and logical skills, and possess strong communication abilities. You should have a Test-Driven Development (TDD) mindset, be open to supporting incidents on production deployments, and be willing to work outside of regular business hours when necessary. At our company, we value diversity, inclusivity, and equal opportunity. We offer flexible working arrangements, skill enhancement and growth opportunities, health and wellbeing programs, and the chance to work collaboratively with a global team. We are committed to fostering a people-friendly environment, where all our colleagues can thrive and succeed. If you are someone who is eager to learn, ask challenging questions, and contribute to the transformation of the future of video, we welcome you to join our team. We offer a culture of belonging, where innovation is encouraged, and we work together to achieve success. If you are interested in this role or have any questions, please reach out to our recruitment team for assistance.
Posted 1 day ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office. Who We Are Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description Job Family Definition: Designs, develops, troubleshoots and debugs software programs for software enhancements and new products. Develops software including operating systems, compilers, routers, networks, utilities, databases and Internet-related tools. Determines hardware compatibility and/or influences hardware design. Management Level Definition: Contributions include applying developed subject matter expertise to solve common and sometimes complex technical problems and recommending alternatives where necessary. Might act as project lead and provide assistance to lower-level professionals. Exercises independent judgment and consults with others to determine the best method for accomplishing work and achieving objectives. What You'll Do We are looking for a senior software engineer to work on our next-generation enterprise switching portfolio, incorporating active service performance monitoring capabilities from the outset in a cloud environment on our most successful platforms, including the MX, ACX and PTX product lines. In this position, you will have the opportunity to work alongside our multi-site, multi-service cloud engineering team members to share and learn best engineering practices, as well as actively contribute to building the product from the ground up. You will be an integral part of the Juniper Network Automation Software Engineering team, with responsibilities including: assisting in the design and implementation of a responsive, micro-service-based web UI for cloud network monitoring; contributing to the integration and testing of the developed application; documenting software designs and procedures; using Juniper router data to develop network monitoring applications; assisting with troubleshooting and root-cause analysis of problems found, both in-process and escalated from the field; and demonstrating exemplary behavior in following proper engineering processes to manage risks and systematically achieve high product quality.
Minimum Qualifications What you need to bring: BS or MS in Data Science, Machine Learning, Statistics, Mathematics, Computer Science, or a related field 5-10 years of proficiency in the core technologies of the web: JavaScript, HTML, CSS, ReactJS, NodeJS, and their core principles Experience with micro-frontend architectures Experience with implementing responsive web designs and writing unit and integration tests Experience working with one or more of the following infrastructure components: Postgres, Kafka, ElasticSearch, Redis Familiarity with Git Experience working in Linux-based operating systems Understanding of and/or cloud programming knowledge with Docker and containers is a plus Experience developing multi-threaded applications Programming knowledge of inter-process communication and distributed systems is a plus Experience working on highly scalable systems addressing CPU performance and bandwidth utilization Hands-on experience developing software, debugging, and deploying applications on Linux operating systems. Awareness of Agile/Scrum development environments Strong problem-solving and analytical skills. Good verbal and written communication skills and demonstrated ability to collaborate across teams and the organization Preferred Qualifications Familiarity with nginx and server analytics platforms such as Grafana/Kibana and open-source collectors Familiarity with newer specifications of RESTful APIs and GraphQL is a plus Familiarity with modern front-end build pipelines and tools Other Information Location: Bengaluru Relocation is/is not available for this position No travel requirements for the position Additional Skills Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX) What We Can Offer You Health & Wellbeing We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing. Personal & Professional Development We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #networking Job Engineering Job Level TCP_03 HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities.
HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 day ago
12.0 - 18.0 years
0 Lacs
Karnataka
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world. We are hiring AWS Migration Experts with 12-18 years of experience for Pan India locations. This role involves leading large-scale cloud migration projects, designing scalable architectures, and ensuring seamless transitions. Strong expertise in AWS services, cloud strategy, and stakeholder management is essential. Be part of our cloud transformation journey. Lead and execute end-to-end Oracle and PostgreSQL database assessments and migrations to AWS. Design and implement cloud-native architectures for enterprise-scale workloads. Utilize AWS Database Migration Service (DMS) and other AWS tools for seamless migration. Optimize database performance post-migration and ensure high availability. Collaborate with cross-functional teams to align migration strategies with business goals. Provide technical leadership and guidance throughout the migration lifecycle. Ensure compliance with AWS best practices, security, and governance standards. Maintain documentation and use tools like Jira and Confluence for tracking and reporting. Strong understanding of AWS migration tools, especially AWS DMS. Familiarity with MongoDB, DocumentDB, Elasticsearch, and OpenSearch and their migration strategies. AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database Specialty. Database administration or consulting, with a focus on Oracle and PostgreSQL. Excellent problem-solving, communication, and collaboration skills. Experience with performance tuning, infrastructure automation, and cloud governance. You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new parent support via flexible work. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
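For orientation only, a minimal boto3 sketch of the kind of AWS DMS task handling this role describes: starting a replication task and polling its status. The task ARN, region, and loop logic are hypothetical assumptions, not details from the posting.

```python
# Hedged sketch: start and poll an AWS DMS replication task with boto3.
# The ARN below is a hypothetical placeholder; credentials come from the
# standard boto3 configuration chain.
import time
import boto3

dms = boto3.client("dms", region_name="ap-south-1")
TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLE"  # hypothetical

dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="start-replication",
)

# Poll the task until it finishes or fails.
while True:
    task = dms.describe_replication_tasks(
        Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
    )["ReplicationTasks"][0]
    print(task["Status"], task.get("ReplicationTaskStats", {}))
    if task["Status"] in ("stopped", "failed"):
        break
    time.sleep(30)
```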
Posted 1 day ago
6.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You have a fantastic opportunity to join as a Technical Architect with a minimum of 15 years of IT experience, out of which at least 6 years have been dedicated to developing and architecting web applications with a strong emphasis on JavaScript. Your role will involve working on web applications based on Service-Oriented Architecture (SOA) principles, utilizing various UI frameworks and languages such as Angular, NodeJS, ExpressJS, ReactJS, jQuery, CSS, and HTML5. You must have expertise in responsive UI design and development, as well as a deep understanding of JavaScript/ES6 for building high-performing and heavy-traffic web applications using JS frameworks like Angular, React, Ember, etc. Additionally, experience with unit-test-driven development, build tools like Webpack, Gulp, Grunt, and Continuous Integration and Continuous Deployment with Jenkins will be crucial. As a Technical Architect, you will be expected to excel in Frontend and Middleware design, development, and implementation, focusing on technologies such as Angular, Node/ExpressJS, and related tools. Experience in AWS Cloud Infrastructure, AWS application services, AWS Database services, Containers, and Microservices will be highly beneficial. Your responsibilities will also include designing Microservices reference Architecture, BRMS, BPMN, and Integrations, along with expertise in Cloud native solutions, DevOps, Containers, CI/CD, Code Quality, Micro-Services, API architectures, Cloud, Mobile, and Analytics. Nice-to-have skills include experience in NoSQL, Elasticsearch, Python, R, Linux, Data Modeling, and Master Data Management. In your day-to-day role, you will be tasked with identifying business problems, developing proofs of concept, architecting, designing, developing, and implementing frameworks and application software components using Enterprise/Open Source technologies. You will play a key role in designing and implementing Application Architecture concepts, best practices, and state-of-the-art Integration Patterns while troubleshooting pre- and post-production functional and non-functional issues. Furthermore, you should have the capability to learn new technologies quickly and stay updated on the latest industry trends and techniques. If you are passionate about technology, enjoy problem-solving, and have a knack for staying ahead in the ever-evolving tech landscape, this role offers an exciting opportunity for you to showcase your skills and make a significant impact.
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Full Stack Developer - SDE 2 (Java/Python) at our company, you will play a pivotal role in leading the design and delivery of complex end-to-end features spanning the frontend, backend, and data layers. Your responsibilities will include making strategic architectural decisions, reviewing and approving pull requests, building shared UI component libraries, identifying performance bottlenecks, and ensuring comprehensive testing strategies. You will be responsible for instrumenting services with metrics and logging, defining and enforcing testing strategies, owning CI/CD pipelines, and ensuring compliance with OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Collaboration with Product, UX, and Ops teams to translate business objectives into technical roadmaps will be a key aspect of your role. To be successful in this role, you should have at least 5 years of experience in building production full-stack applications end-to-end with measurable impact. Your expertise in React (or Angular/Vue), Node.js (Express/NestJS), Python (Django/Flask/FastAPI), or Java (Spring Boot) is essential. Additionally, you should be skilled in designing RESTful and GraphQL APIs, handling scalable database schemas, and have knowledge of various databases, caching mechanisms, and AWS services. Strong leadership skills in Agile/Scrum environments, proficiency in unit/integration testing, frontend profiling, backend tracing, and secure coding practices are also required. Your ability to communicate technical trade-offs effectively and provide constructive feedback will be crucial in this role. We are looking for individuals with a passion for delivering high-quality software, strong collaboration abilities, determination, creative problem-solving skills, openness to feedback, eagerness to learn and grow, and excellent communication skills. If you possess these qualities and are ready to contribute to our team in Hyderabad, we would love to hear from you.
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. You will be part of a team of highly skilled professionals working with cutting-edge technologies. Our purpose is to bring real positive changes in an increasingly virtual world, transcending generational gaps and disruptions of the future. We are seeking AWS Glue Professionals with the following qualifications:
- 3 or more years of experience in AWS Glue, Redshift, and Python
- 3+ years of experience in engineering with expertise in ETL work with cloud databases
- Proficiency in data management and data structures, including writing code for data reading, transformation, and storage
- Experience in launching Spark jobs in client mode and cluster mode, with knowledge of Spark job property settings and their impact on performance
- Proficiency with source code control systems like Git
- Experience in developing ELT/ETL processes for loading data from enterprise-sized RDBMS systems such as Oracle, DB2, MySQL, etc.
- Coding proficiency in Python or expertise in high-level languages like Java, C, Scala
- Experience in using REST APIs
- Expertise in SQL for manipulating database data, familiarity with views, functions, stored procedures, and exception handling
- General knowledge of the AWS stack (EC2, S3, EBS), IT process compliance, SDLC experience, and formalized change controls
- Experience working in DevOps teams based on Agile principles (e.g., Scrum)
- ITIL knowledge, especially in incident, problem, and change management
- Proficiency in PySpark for distributed computation
- Familiarity with Postgres and ElasticSearch
At YASH, you will have the opportunity to build a career in an inclusive team environment. We offer career-oriented skilling models and leverage technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our workplace is grounded in four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- Support for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture.
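As a rough illustration of the ETL work listed above, here is a minimal PySpark sketch that reads a table over JDBC, applies a simple transformation, and writes partitioned Parquet to S3. Connection strings, column names, and paths are hypothetical placeholders, and a suitable JDBC driver is assumed to be on the classpath.

```python
# Hedged PySpark sketch: RDBMS -> transform -> partitioned Parquet on S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://source-db:3306/sales")  # placeholder
    .option("dbtable", "orders")
    .option("user", "etl_user")                          # placeholder
    .option("password", "***")
    .load()
)

# Basic cleansing and derivation before the load step.
cleaned = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("status").isNotNull())
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"                # placeholder
)
```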
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in. Job Description REQUIREMENTS: Total experience: 5+ years. Strong working experience in backend development with Java and Spring Boot. Hands-on experience with RESTful APIs, JMS, JPA, Spring MVC, Hibernate. Strong understanding of messaging systems (Kafka, SQS) and caching technologies (Redis). Experience with SQL (Aurora MySQL) and NoSQL databases (Cassandra, DynamoDB, Elasticsearch). Proficient with CI/CD pipelines, Java build tools, and modern DevOps practices. Exposure to AWS services like EC2, S3, RDS, DynamoDB, EMR. Familiarity with Kubernetes-based orchestration and event-driven architecture. Experience working in Agile environments with minimal supervision. Experience with observability tools and performance tuning. Understanding of orchestration patterns and microservice architecture. Strong communication skills and the ability to collaborate effectively with cross-functional teams. RESPONSIBILITIES: Writing and reviewing great quality code Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realize it Determining and implementing design methodologies and tool sets Enabling application development by coordinating requirements, schedules, and activities. Being able to lead/support UAT and production rollouts Creating, understanding and validating WBS and estimated effort for a given module/task, and being able to justify it Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement Giving constructive feedback to the team members and setting clear expectations. Helping the team in troubleshooting and resolving complex bugs Coming up with solutions to any issue that is raised during code/design review and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements. Qualifications Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Posted 1 day ago
2.0 - 4.0 years
0 Lacs
Delhi, India
On-site
Position Overview We are seeking a highly skilled and experienced Software Engineer with 2-4 years of professional experience in Python and Django, specifically in building REST APIs using frameworks like FastAPI and Django Rest Framework (DRF). The ideal candidate should have hands-on experience with Redis cache, Docker, PostgreSQL, Kafka, Elasticsearch, and ETL. RESPONSIBILITIES: Collaborate with cross-functional teams to design, develop, and maintain high-quality software solutions using Python, Django (including DRF), FastAPI, and other modern frameworks. Build robust and scalable REST APIs, ensuring efficient data transfer and seamless integration with frontend and third-party systems. Utilize Redis for caching, session management, and performance optimization. Design and implement scalable ETL pipelines to efficiently process and transform large datasets across systems. Integrate and maintain Kafka for building real-time data streaming and messaging services. Implement Elasticsearch for advanced search capabilities, data indexing, and analytics functionalities. Containerize applications using Docker for easy deployment and scalability. Design and manage PostgreSQL databases, ensuring data integrity and performance tuning. Write clean, efficient, and well-documented code following best practices and coding standards. Participate in system design discussions and contribute to architectural decisions, particularly around data flow and microservices communication. Troubleshoot and debug complex software issues, ensuring smooth operation of production systems. Profile and optimize Python code for improved performance and scalability. Implement and maintain CI/CD pipelines for automated testing and deployment. REQUIREMENTS: 2-4 years of experience in backend development using Python. Strong proficiency in Django, DRF, and RESTful API development. Experience with FastAPI, asyncio, and modern Python libraries. Solid understanding of PostgreSQL and relational database concepts. Proficiency with Redis for caching and performance optimization. Hands-on experience with Docker and container orchestration. Familiarity with Kafka for real-time messaging and event-driven systems. Experience implementing and maintaining ETL pipelines for structured/unstructured data. Working knowledge of Elasticsearch for search and data indexing. Exposure to AWS services (e.g., EC2, S3, RDS) and cloud-native development. Understanding of Test-Driven Development (TDD) and automation frameworks. Strong grasp of Git and collaborative development practices. Excellent communication skills and a team-oriented mindset. Experience with Agile development methodologies. We Offer: Opportunity to shape the future of unsecured lending in emerging markets Competitive compensation package Professional development and growth opportunities Collaborative, innovation-focused work environment Comprehensive health and wellness benefits. Work Model: Immediate joining possible. Work From Office only. Based in Gurugram, Sector 65. (ref:hirist.tech)
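To make the stack above concrete, here is a small hedged sketch of a FastAPI endpoint that caches a PostgreSQL lookup in Redis. The hostnames, table, DSN, and TTL are illustrative assumptions, not this employer's code.

```python
# Hedged sketch: FastAPI endpoint with a Redis read-through cache over PostgreSQL.
import json

import asyncpg
import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.get("/items/{item_id}")
async def get_item(item_id: int):
    key = f"item:{item_id}"
    if (hit := await cache.get(key)) is not None:
        return json.loads(hit)                                  # cache hit

    conn = await asyncpg.connect(dsn="postgresql://app@localhost/appdb")  # placeholder DSN
    try:
        row = await conn.fetchrow("SELECT id, name FROM items WHERE id = $1", item_id)
    finally:
        await conn.close()

    payload = dict(row) if row else {}
    await cache.setex(key, 300, json.dumps(payload))            # cache for 5 minutes
    return payload
```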
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
Imagine what you could do here. At Apple, phenomenal ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. The people here at Apple don't just create products - they create the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that inspires the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it! We are looking for a passionate NoSQL / Search Engineer to help manage large-scale data store environments. This team is responsible for providing new architectures and scalability solutions to ever-growing business and data processing needs. The ideal candidate can go to the depths to solve complex problems and has the curiosity to explore and learn new technologies for innovative solutions. Design, implement and maintain NoSQL database systems / search engines. Develop and optimize search algorithms to ensure high performance and accuracy. Analyze and understand data requirements to design appropriate data models. Monitor and troubleshoot database performance issues, ensuring system stability and efficiency. Implement data indexing and ensure efficient data retrieval processes. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Stay updated with the latest advancements in NoSQL and search technologies, and apply them to improve existing systems. Create and maintain documentation related to database configurations, schemas, and processes. You will work with global teams in the US. Deliver solutions that can keep up with a rapidly evolving product in a timely fashion. Minimum Qualifications 4+ years of experience as a NoSQL / Search Engineer or in a similar role. Strong understanding and hands-on experience with NoSQL databases such as Cassandra, Couchbase, or similar. Expertise in search technologies such as Elasticsearch, Solr, or similar. Proficiency in programming languages such as Java and Python. Familiarity with data modeling, indexing, and query optimization techniques. Experience with large-scale data processing and distributed systems. Strong problem-solving skills and attention to detail. Good in-depth understanding of Linux in terms of debugging tools and performance tuning. Preferred Qualifications Experience with cloud platforms such as AWS, Google Cloud, or Azure is a plus. Knowledge of machine learning techniques and their application in search is an added bonus. JVM tuning tools, OS performance and debugging. Open source contributions will be a huge plus.
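For illustration, a minimal elasticsearch-py sketch of the indexing and query work described above: an explicit mapping, one indexed document, and a filtered full-text query. The index name and fields are hypothetical and not tied to any particular employer's systems.

```python
# Hedged sketch: explicit mapping, indexing, and a filtered search with elasticsearch-py 8.x.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="products",
    mappings={
        "properties": {
            "name": {"type": "text"},
            "category": {"type": "keyword"},   # keyword fields suit exact filters/aggregations
            "price": {"type": "float"},
        }
    },
)

es.index(index="products", id="1",
         document={"name": "wireless mouse", "category": "accessories", "price": 19.9})
es.indices.refresh(index="products")

# Full-text match on name, with a non-scoring (cacheable) term filter on category.
resp = es.search(
    index="products",
    query={
        "bool": {
            "must": {"match": {"name": "mouse"}},
            "filter": {"term": {"category": "accessories"}},
        }
    },
)
print(resp["hits"]["total"], [h["_source"] for h in resp["hits"]["hits"]])
```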
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
This is a call for motivated professionals to join our OpenShift Container Platform (OCP) Operations Team as first-level support. This team is essential to maintaining the stability and performance of OpenShift clusters that power key business systems. As part of a tiered support structure (L1, L2, L3), the role is focused on day-to-day operational tasks, continuous monitoring, initial incident handling, and supporting ongoing maintenance activities. Your work will directly impact the reliability of containerized services that are critical to enterprise functions, ensuring a secure, scalable, and highly available platform. Responsibilities Diagnose and Resolve Platform Issues: Troubleshoot problems affecting workloads, Persistent Volume Claims (PVCs), ingress traffic, service endpoints, and image registries to ensure smooth operations. Apply Configuration Updates: Use tools like YAML, Helm, or Kustomize to implement changes across the platform in a consistent and reliable manner. Cluster Maintenance and Upgrades: Handle the upkeep of Operators, carry out OpenShift cluster upgrades, and perform post-update checks to confirm platform stability. Support CI/CD and DevOps Teams: Collaborate closely with development teams to identify and fix issues in build and deployment pipelines. Namespace and Access Management: Oversee and automate tasks like namespace creation, access control (RBAC), and applying network security rules (NetworkPolicies). Monitor System Health: Manage logging, monitoring, and alert systems such as Prometheus, EFK (Elasticsearch, Fluentd, Kibana), and Grafana to proactively identify issues. Plan and Participate in Maintenance Cycles: Contribute to change request (CR) planning and patching schedules, and coordinate downtime and recovery procedures when needed.
No. of Resources: 5
Role Focus: Advanced Troubleshooting, Change Management, Automation
Technology: IT
Job Type: Contractual (12 months, with auto-renewal for good performers)
Job Location: Gurgaon
Work Mode: Onsite
Experience: 3 to 6 Years
Work Shift: 24x7 rotational coverage, on-call support
Payroll Model: Third Party Payroll
Salary: Competitive
Relocation Expense: No
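As a hedged illustration of the health-monitoring duties above, here is a small Python sketch using the standard Kubernetes client (OpenShift clusters expose the Kubernetes API) to flag pods that are not healthy. The namespace and kubeconfig access are assumptions for the example.

```python
# Hedged sketch: list pods in a namespace and flag any that are not Running/Succeeded.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

NAMESPACE = "example-app"            # hypothetical namespace

for pod in v1.list_namespaced_pod(NAMESPACE).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        print(f"ATTENTION: {pod.metadata.name} phase={phase} restarts={restarts}")
```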
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You are a proactive and experienced QA Engineer with 4-8 years of expertise in automation testing, API testing using Postman, and hands-on experience validating AI/ML models and working with OpenSearch. You are passionate about delivering high-quality software and thrive in fast-paced, data-driven environments. Your key responsibilities include designing, developing, and maintaining automated test frameworks and test scripts for web, backend, and data-intensive systems. You will create and execute test plans for API testing using Postman, Swagger, or similar tools. You will validate AI/ML model outputs for accuracy, fairness, performance, and edge cases. Additionally, you will develop test strategies for systems using OpenSearch or Elasticsearch, perform functional, regression, integration, and performance testing across products, identify, log, and track bugs using tools like JIRA, and actively participate in sprint planning, story grooming, and retrospectives. Furthermore, you will contribute to the continuous improvement of QA processes, automation coverage, and test pipelines. Your required skills include proficiency in automation testing using Selenium, Playwright, Cypress, or similar frameworks; API testing with tools like Postman, REST Assured, and Swagger; scripting/programming in Python, JavaScript, or Java; a basic understanding of ML models for AI testing; experience with OpenSearch/Elasticsearch for search validation; familiarity with CI/CD integration tools like Jenkins and GitHub Actions; and version control systems like Git/GitLab/Bitbucket. Preferred qualifications include experience working with large-scale data-driven applications, familiarity with Docker, Kubernetes, or cloud platforms (AWS, Azure, GCP), knowledge of security testing or performance testing tools (e.g., JMeter, Locust), exposure to Agile/Scrum methodologies, and test case management tools (e.g., TestRail, Zephyr). Your soft skills include strong analytical and problem-solving skills, excellent written and verbal communication, ability to work independently and in a collaborative team environment, and a detail-oriented mindset with a passion for quality. LendFoundry is part of Sigma Infosolutions Limited, which was launched in 2004 with offices in Bangalore, Ahmedabad, Jodhpur, and Indore. LendFoundry, founded in Irvine, California in 2015, aims to build systems that allow marketplace lenders to eliminate tech buildout, minimize IT infrastructure, and accelerate their growth strategy. LendFoundry offers a turnkey solution for fintech startups and existing marketplace ventures, providing end-to-end loan origination and loan management processes to approve, disburse, and manage loans quickly and easily.
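A brief hedged example of the automated API checks this role describes, written with pytest and requests rather than Postman; the endpoint, base URL, and response fields are hypothetical.

```python
# Hedged sketch: contract-style API tests with pytest + requests.
import requests

BASE_URL = "https://api.example.com"   # placeholder service under test

def test_get_user_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract checks: required fields exist with the expected basic types.
    assert isinstance(body["id"], int)
    assert isinstance(body["email"], str)

def test_unknown_user_returns_404():
    resp = requests.get(f"{BASE_URL}/users/does-not-exist", timeout=10)
    assert resp.status_code == 404
```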
Posted 2 days ago
2.0 years
0 Lacs
India
On-site
At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us. As a Software Engineer on the Search Engineering team, you will support and develop the search infrastructure of the company. This involves working with TBs of data and the indexing, ranking, and retrieval of medical data to support search in the backend infrastructure. What You'll Do At H1 The Search Engineering team is responsible for developing and maintaining the company's core search infrastructure. Our objective is to enable fast, accurate, and scalable search across terabytes of medical data. This involves building systems for efficient data ingestion, indexing, ranking, and retrieval that power key product features and user experiences. As a Software Engineer on the Search Engineering team, your day typically includes: Working with our search infrastructure – writing and maintaining code that ingests large-scale data in Elasticsearch. Designing and implementing high-performance APIs that serve search use cases with low latency. Building and maintaining end-to-end features using Node.js and GraphQL, ensuring scalability and maintainability. Collaborating with cross-functional teams – including product managers and data engineers – to align on technical direction and deliver impactful features to our users. Taking ownership of the search codebase – proactively debugging, troubleshooting, and resolving issues quickly to ensure stability and performance. Consistently producing simple, elegant designs and writing high-quality, maintainable code that can be easily understood and reused by teammates. Demonstrating a strong focus on performance optimization, ensuring systems are fast, efficient, and scalable. Communicating effectively and collaborating across teams in a fast-paced, dynamic environment. Staying up to date with the latest advancements in AI and search technologies, identifying opportunities to integrate cutting-edge capabilities into our platforms. About You You bring strong hands-on technical skills and experience in building robust backend APIs. You thrive on solving complex challenges with innovative, scalable solutions and take pride in maintaining high code quality through thorough testing. You are able to align your work with broader organizational goals and actively contribute to strategic initiatives. You proactively identify risks and propose solutions early in the project lifecycle to avoid downstream issues. You are curious, eager to learn, and excited to grow in a collaborative, high-performing engineering team environment. Requirements 1–2 years of professional experience. Strong programming skills in TypeScript, Node.js, and Python (Mandatory) Practical experience with Docker and Kubernetes Good to have: Big Data technologies (e.g., Scala, Hadoop, PySpark), Golang, GraphQL, Elasticsearch, and LLMs Not meeting all the requirements but still feel like you’d be a great fit? Tell us how you can contribute to our team in a cover letter!
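For context, a minimal sketch of bulk ingestion into Elasticsearch, the kind of pipeline work mentioned above, using the official Python client's bulk helper. The index name and document shape are assumptions for illustration only.

```python
# Hedged sketch: bulk-index a batch of records into Elasticsearch.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def doc_stream(records):
    # Yield bulk actions in the shape the helpers.bulk API expects.
    for rec in records:
        yield {"_index": "doctors", "_id": rec["id"], "_source": rec}

records = [
    {"id": "d-1", "name": "Dr. A", "specialty": "cardiology"},
    {"id": "d-2", "name": "Dr. B", "specialty": "oncology"},
]

ok, errors = helpers.bulk(es, doc_stream(records), raise_on_error=False)
print(f"indexed={ok} errors={len(errors)}")
```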
H1 OFFERS Full suite of health insurance options, in addition to generous paid time off Pre-planned company-wide wellness holidays Retirement options Health & charitable donation stipends Impactful Business Resource Groups Flexible work hours & the opportunity to work from anywhere The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe
Posted 2 days ago
0 years
0 Lacs
Rajkot, Gujarat, India
On-site
RK Infotech, Rajkot is looking for a DevOps engineer with strong Amazon Web Services experience. We are looking for an experienced engineer to join our DevOps team with experience building and scaling services in a cloud environment. This is a great opportunity for a DevOps engineer in Rajkot, Gujarat, India. Freshers can apply for the DevOps engineer job. Required Skills: Strong background in Linux/Unix administration. Ability to use a wide variety of open-source technologies and cloud services. Experience with AWS and Azure is required. Strong experience with SQL and MySQL. A working understanding of code and scripting (PHP, Python, Perl and/or Ruby). Knowledge of best practices and IT operations in an always-up, always-available service. Implement monitoring for automated system health checks. Build our CI pipeline, and train and guide the team in DevOps practices. Knowledge of databases including MySQL, Mongo & Elasticsearch, and DB clusters. Critical thinker with problem-solving skills. Team player. Good time-management skills. Interpersonal and communication skills. Responsibilities: Build scalable, efficient cloud infrastructure. Identify and establish DevOps practices in the company. Build the whole stack, from ELBs to databases, and then move and launch our site at its new home. Work collaboratively with software engineering to deploy and operate our systems. Help automate and streamline our operations and processes. Build and maintain tools for deployment, monitoring and operations. Troubleshoot and resolve issues in our dev, test and production environments. Build independent web-based tools, microservices and solutions. Manage source control including SVN and Git.
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact. How You Will Contribute As a Senior Software Developer within the Blue Planet team, you will play a key role in designing, developing, testing, and supporting scalable software solutions tailored for carrier-class networks and cloud environments. This role requires a strong technical foundation, attention to detail, and a collaborative mindset to deliver high-quality, modular code that is built to scale and last. You Will Work closely with cross-functional teams to design and develop high-performing software modules and features. Write and maintain backend and frontend code with strong emphasis on quality, performance, and maintainability. Support system design, documentation, and end-to-end development including unit testing and debugging. Participate in global agile development teams to deliver against project priorities and milestones. Contribute to the development of telecom inventory management solutions integrated with cloud platforms and advanced network technologies. The Must Haves Bachelor's or Master’s degree in Computer Science, Engineering, or a related technical field. 4+ years of software development experience. Backend: Java 11+, Spring (Security, Data, MVC), SpringBoot, J2EE, Maven, JUnit. Frontend: TypeScript, JavaScript, Angular 2+, HTML, CSS, SVG, Protractor, Jasmine. Databases: Neo4j (Graph DB), PostgreSQL, TimescaleDB. Experience with SSO implementations (LDAP, SAML, OAuth2). Proficiency with Docker, Kubernetes, and cloud platforms (preferably AWS). Strong understanding of algorithms, data structures, and software design patterns. Assets Experience with ElasticSearch, Camunda/BPMN, Drools, Kafka integration. Knowledge of RESTful APIs using Spring MVC. Knowledge in Inventory Management Systems (e.g., Cramer, Granite, Metasolv). Familiarity with tools like Node.js, Gulp, and build/test automation. Exposure to telecom/networking technologies such as DWDM/OTN, SONET, MPLS, GPON, FTTH. Understanding of OSS domains and exposure to telecom network/service topology and device modeling. Prior experience working in a global, agile development environment. Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox. At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 2 days ago
12.0 years
0 Lacs
India
Remote
What You Can Expect Zoom is seeking a highly qualified and experienced full stack senior software engineer (Java) to develop and maintain IT cloud-native solutions in our CPQ, Order to Cash, and other business areas. This strategic position requires an engineer with exceptional initiative and precise attention to detail. The ideal candidate excels at complex challenges and shares our commitment to developing superior software. Furthermore, if you are dedicated to advancing Zoom's evolution into an agile, responsive, and customer-focused enterprise application organization, this role presents an optimal opportunity. About The Team This engineering position would play a pivotal role in architecting, designing, building and supporting the full-stack cloud-native solutions that address the channel business enablement targets. This includes the self-service experience supporting quoting and ordering for Zoom’s partner ecosystem. These range from software development and machine learning to quality assurance teams that work to create and maintain Zoom's user-friendly interfaces and robust infrastructure. If you are excited about the potential of leading Zoom’s continued evolution into a customer-obsessed enterprise application organization, then this role is for you! What We’re Looking For Have a BS/MS in Computer Science or equivalent. 12+ years of backend/full-stack development experience. Expert knowledge in Java and core technologies (JVM, multithreading, IO, network). Have mastery of Java Spring MVC, Spring Boot, RESTful APIs. Experience building low-latency microservices and API publishing. Have an understanding of authentication/authorization (OAuth, JWT). Have expertise in SQL/NoSQL databases (MySQL, MongoDB, DynamoDB). Experience with caching systems (Redis, Memcache). Knowledge of search technologies (ElasticSearch, Lucene, Solr). Cloud services experience (AWS, GCP, Azure). Containerization and CI/CD (Docker, Jenkins). Linux systems and application servers (nginx, Tomcat). Have knowledge of design patterns and coding best practices. System reliability and scalability in cloud infrastructure. Experience with failover and circuit-breaking patterns. Have application logging and performance monitoring experience. Proficiency with tools like Splunk, ELK, Datadog, Prometheus. System maintenance and troubleshooting. Have experience with version control (Git) and build tools (Maven/Gradle). Secure coding practices and OWASP guidelines. Localization/internationalization implementation. Have excellent verbal and written communication. Collaborative team player with consensus-building ability. Problem-solving skills for complex technical challenges. Ways of Working Our structured hybrid approach is centered around our offices and remote work environments. The work style of each role (Hybrid, Remote, or In-Person) is indicated in the job description/posting. Benefits As part of our award-winning workplace culture and commitment to delivering happiness, our benefits program offers a variety of perks, benefits, and options to help employees maintain their physical, mental, emotional, and financial health; support work-life balance; and contribute to their community in meaningful ways. About Us Zoomies help people stay connected so they can get more done together.
We set out to build the best collaboration platform for the enterprise, and today help people communicate better with products like Zoom Contact Center, Zoom Phone, Zoom Events, Zoom Apps, Zoom Rooms, and Zoom Webinars. We’re problem-solvers, working at a fast pace to design solutions with our customers and users in mind. Find room to grow with opportunities to stretch your skills and advance your career in a collaborative, growth-focused environment. Our Commitment At Zoom, we believe great work happens when people feel supported and empowered. We’re committed to fair hiring practices that ensure every candidate is evaluated based on skills, experience, and potential. If you require an accommodation during the hiring process, let us know—we’re here to support you at every step. If you need assistance navigating the interview process due to a medical disability, please submit an Accommodations Request Form and someone from our team will reach out soon. This form is solely for applicants who require an accommodation due to a qualifying medical disability. Non-accommodation-related requests, such as application follow-ups or technical issues, will not be addressed.
Posted 2 days ago
7.0 years
0 Lacs
India
On-site
About Company: Glowingbud is a rapidly growing eSIM services platform that simplifies connectivity with powerful APIs, robust B2B and B2C interfaces, and seamless integrations with Telna. Our platform enables global eSIM lifecycle management, user onboarding, secure payment systems, and scalable deployments. Recently acquired by Telna (https://www.telna.com), we are expanding our product offerings and team to meet increasing demand and innovation goals. Key Responsibilities: API Development: Design, develop, optimize, and maintain high-performance RESTful APIs using Node.js and MongoDB. Scalability & Performance: Optimize backend performance for handling large data volumes and high-traffic production environments. Multi-Tenant SaaS: Develop and maintain multi-tenant architectures ensuring data policies, security, scalability, and efficiency. Microservices Architecture: Design and implement microservices-based solutions, ensuring modularity and maintainability. Database Management: Proficiently manage and optimize MongoDB, including indexing, aggregation, and performance tuning. System Engineering: Work with DevOps to ensure scalability, reliability, and security of backend systems. Product Development: Collaborate with product teams to build long-term, scalable backend solutions. Code Quality & Security: Write clean, maintainable, and secure code following industry best practices. Enforce coding standards, conduct detailed code reviews. Monitoring & Debugging: Implement logging, monitoring, and debugging tools to ensure system reliability. Collaboration: Work closely with frontend teams and DevOps to ensure seamless API integrations and deployments. Qualifications: 7+ years of experience in backend development with Node.js and MongoDB. Strong understanding of microservices architecture and system design principles. Experience in building and maintaining multi-tenant SaaS applications. Proven experience handling large-scale data and high-traffic production systems. Proficiency in MongoDB, including schema design, indexing strategies, and performance optimization. Experience with event-driven architecture and messaging queues (e.g., AWS SQS, RabbitMQ, Kafka). Knowledge of authentication and authorization mechanisms (JWT, AWS Cognito, SSO). Strong experience with API development best practices, security, and rate limiting. Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines. Proficiency in cloud services (AWS, GCP, or Azure) for backend infrastructure. Preferred Skills: Experience with Redis, ElasticSearch, or other caching mechanisms. Knowledge of serverless architectures and cloud-native development. Exposure to REST / gRPC for API communication. Understanding of data streaming and real-time processing. Experience with NoSQL and relational database hybrid architectures. Familiarity with observability tools (Prometheus, Grafana, ELK Stack). Experience in automated testing for backend systems.
Posted 2 days ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Customers trust the Alation Data Intelligence Platform for self-service analytics, cloud transformation, data governance, and AI-ready data, fostering data-driven innovation at scale. With more than $340M in funding – valued at over $1.7 billion and nearly 600 customers, including 40% of the Fortune 100 — Alation helps organizations realize value from data and AI initiatives. Alation has been recognized in 2024 as one of Inc. Magazine's Best Workplaces for the fifth time, a testament to our commitment to creating an inclusive, innovative, and collaborative environment. Collaboration is at the forefront of everything we do. We strive to bring diverse perspectives together and empower each team member to contribute their unique strengths to live out our values each day. These are: Move the Ball, Build for the Long Term, Listen Like You’re Wrong, and Measure Through Customer Impact. Joining Alation means being part of a fast-paced, high-growth company where every voice matters, and where we’re shaping the future of data intelligence with AI-ready data. Join us on our journey to build a world where data culture thrives and curiosity is celebrated each day! Job Description Join us! We are looking for an experienced Staff Technical Support Engineer to join our advanced support team. You will provide advanced-level technical support, helping our customers integrate with the Alation platform. You will be responsible for troubleshooting and debugging complex issues as well as acting as an escalation point with customers and internal teams. What You'll Be Doing Provide advanced-level technical support to Alation customers, partners, prospects, and other support engineers. Specialize in at least one of the support specialization areas and serve as SME for internal and external customers. Contribute to the Alation Support Knowledge Base by regularly authoring, editing and updating technical documentation such as KB articles, runbooks, community FAQs, product documentation, etc. Facilitate internal and external technical enablement sessions. Build and utilize complex lab setups to replicate and resolve problems. You Should Have CS degree and at least 5 years of experience as a support engineer providing enterprise software application support. Experience troubleshooting Linux and running shell commands.
Experience with Relational Databases, such as Oracle and Postgres. SQL is a must. Ability to diagnose and debug applications written in Java and/or Python. Experience with Web servers, such as Apache and Nginx. Experience with REST APIs. A big plus if you have experience in the following areas: Postgres (DB internals); JDBC drivers; Elasticsearch, NoSQL, MongoDB; Hadoop ecosystem (Hive, HBase); cloud technologies and frameworks such as Kubernetes and Docker. Alation, Inc. is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to that individual’s race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, veteran status, genetic information, ethnicity, citizenship, or any other characteristic protected by law. The Company will strive to provide reasonable accommodations to permit qualified applicants who have a need for an accommodation to participate in the hiring process (e.g., accommodations for a job interview) if so requested. This company participates in E-Verify. Click on any of the links below to view or print the full poster. E-Verify and Right to Work.
Posted 2 days ago
0.0 - 1.0 years
1 - 2 Lacs
Cochin
On-site
We are looking for a skilled Junior DevOps Engineer to join our team and help us streamline our development and deployment processes. In this role, you will work closely with software developers, IT operations, and system administrators to build and maintain scalable infrastructure, automate deployment pipelines, and ensure the reliability and efficiency of our systems. You will play a key role in implementing best practices for continuous integration and continuous deployment (CI/CD), monitoring, and cloud services. Experience: 0-1 years as a DevOps Engineer Location: Kochi, Infopark Phase II Immediate Joiners Preferred Key Responsibility Areas Exposure to foundational version control systems such as Git, SVN (Subversion), and Mercurial. Experience with CI/CD tools like Jenkins, Travis CI, CircleCI, and GitLab CI/CD Proficiency in configuration management tools such as Ansible, Puppet, Chef, and SaltStack Knowledge of containerization platforms such as Docker and container orchestration tools like Kubernetes Exposure to Infrastructure as Code (IaC) tools like Terraform, AWS CloudFormation, Azure Resource Manager, and Google Cloud Deployment Manager Experience with monitoring and logging solutions such as Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, and Datadog. Knowledge of collaboration and communication platforms such as Slack and Atlassian Jira. Qualifications Bachelor’s degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Performance bonus Yearly bonus Application Question(s): Are you willing to relocate to Kochi? What's your notice period? Work Location: In person
Posted 2 days ago
3.0 years
5 - 10 Lacs
Kazhakuttam
On-site
About the Role You will architect, build and maintain end-to-end data pipelines that ingest 100 GB+ of NGINX/web-server logs from Elasticsearch, transform them into high-quality features, and surface actionable insights and visualisations for security analysts and ML models. Acting as both a Data Engineer and a Behavioural Data Analyst, you will collaborate with security, AI and frontend teams to ensure low-latency data delivery, rich feature sets and compelling dashboards that spot anomalies in real time. Key Responsibilities ETL & Pipeline Engineering: Design and orchestrate scalable batch / near-real-time ETL workflows to extract raw logs from Elasticsearch. Clean, normalize and partition logs for long-term storage and fast retrieval. Optimize Elasticsearch indices, queries and retention policies for performance and cost. Feature Engineering & Feature Store: Assist in the development of robust feature-engineering code in Python and/or PySpark. Define schemas and loaders for a feature store (Feast or similar). Manage historical back-fills and real-time feature look-ups ensuring versioning and reproducibility. Behaviour & Anomaly Analysis: Perform exploratory data analysis (EDA) to uncover traffic patterns, bursts, outliers and security events across IPs, headers, user agents and geo data. Translate findings into new or refined ML features and anomaly indicators. Visualisation & Dashboards: Create time-series, geo-distribution and behaviour-pattern visualisations for internal dashboards. Partner with frontend engineers to test UI requirements. Monitoring & Scaling: Implement health and latency monitoring for pipelines; automate alerts and failure recovery. Scale infrastructure to support rapidly growing log volumes. Collaboration & Documentation: Work closely with ML, security and product teams to align data strategy with platform goals. Document data lineage, dictionaries, transformation logic and behavioural assumptions. Minimum Qualifications: Education – Bachelor’s or Master’s in Computer Science, Data Engineering, Analytics, Cybersecurity or related field. Experience – 3+ years building data pipelines and/or performing data analysis on large log datasets. Core Skills: Python (pandas, numpy, elasticsearch-py, Matplotlib, plotly, seaborn; PySpark desirable); Elasticsearch & ELK stack query optimisation; SQL for ad-hoc analysis; workflow orchestration (Apache Airflow, Prefect or similar); data modelling, versioning and time-series handling; familiarity with visualisation tools (Kibana, Grafana); DevOps – Docker, Git, CI/CD best practices. Nice-to-Have: Kafka, Fluentd or Logstash experience for high-throughput log streaming. Web-server log expertise (NGINX / Apache, HTTP semantics). Cloud data platform deployment on AWS / GCP / Azure. Hands-on exposure to feature stores (Feast, Tecton) and MLOps. Prior work on anomaly-detection or cybersecurity analytics systems. Why Join Us? You’ll sit at the nexus of data engineering and behavioural analytics, turning raw traffic logs into the lifeblood of a cutting-edge AI security product. If you thrive on building resilient pipelines and diving into the data to uncover hidden patterns, we’d love to meet you. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Work Location: In person
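As an illustration of the ETL and feature-engineering responsibilities above, the following minimal sketch pulls a recent window of NGINX access logs from Elasticsearch and derives simple per-IP behavioural features. The cluster URL, index pattern, and field names are hypothetical, and it assumes elasticsearch-py 8.x and pandas; a production pipeline would add pagination, scheduling, and a feature-store write.

    # Minimal ETL/feature sketch. Endpoint, index pattern, and field names are
    # hypothetical; assumes the elasticsearch (8.x) and pandas packages.
    import pandas as pd
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

    # Fetch the last 15 minutes of access logs (capped at 10,000 hits for the sketch).
    resp = es.search(
        index="nginx-logs-*",
        query={"range": {"@timestamp": {"gte": "now-15m"}}},
        size=10_000,
    )
    df = pd.DataFrame([hit["_source"] for hit in resp["hits"]["hits"]])

    if not df.empty:
        # Simple behavioural features: request volume and error rate per source IP,
        # a cheap first-pass indicator for bursts and scanning behaviour.
        features = (
            df.assign(is_error=df["status"].astype(int) >= 400)
              .groupby("remote_addr")
              .agg(request_count=("status", "size"), error_rate=("is_error", "mean"))
              .reset_index()
        )
        print(features.sort_values("request_count", ascending=False).head())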
Posted 2 days ago
4.0 years
3 - 6 Lacs
Hyderābād
On-site
Join one of the nation’s leading and most impactful health care performance improvement companies. Over the years, Health Catalyst has achieved and documented clinical, operational, and financial improvements for many of the nation’s leading healthcare organizations. We are also increasingly serving international markets. Our mission is to be the catalyst for massive, measurable, data-informed healthcare improvement through: Data: integrate data in a flexible, open & scalable platform to power healthcare’s digital transformation. Analytics: deliver analytic applications & services that generate insight on how to measurably improve. Expertise: provide clinical, financial & operational experts who enable & accelerate improvement. Engagement: attract, develop and retain world-class team members by being a best place to work. POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer with 4+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability. KEY RESPONSIBILITIES: Query Optimization: Design, write, debug and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra. Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently. Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations. Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets. Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks. Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics. Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks. Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability. Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads. Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines. Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements. REQUIRED SKILLS AND QUALIFICATIONS: Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra. High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion. Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets. Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms. Data Modeling: Ability to design schemas and data models tailored for high-throughput use cases.
Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines. Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics. Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs. Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance. Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively. PREFERRED QUALIFICATIONS: Certification in any of the mentioned database technologies. Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP. Knowledge of distributed systems and large-scale data processing. Familiarity with cloud-based database solutions and infrastructure. Familiarity with large-scale data ingestion tools like Kafka, Spark or Flink. EDUCATIONAL REQUIREMENTS: Bachelor’s degree in Computer Science, Information Technology, or a related field. Equivalent work experience will also be considered. The above statements describe the general nature and level of work being performed in this job function. They are not intended to be an exhaustive list of all duties, and indeed additional responsibilities may be assigned by Health Catalyst. Studies show that candidates from underrepresented groups are less likely to apply for roles if they don’t have 100% of the qualifications shown in the job posting. While each of our roles has core requirements, please thoughtfully consider your skills and experience and decide if you are interested in the position. If you feel you may be a good fit for the role, even if you don’t meet all of the qualifications, we hope you will apply. If you feel you are lacking the core requirements for this position, we encourage you to continue exploring our careers page for other roles for which you may be a better fit. At Health Catalyst, we appreciate the opportunity to benefit from the diverse backgrounds and experiences of others. Because of our deep commitment to respect every individual, Health Catalyst is an equal opportunity employer.
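As a concrete example of the performance-debugging work described in this role, the sketch below captures a PostgreSQL execution plan for a slow aggregation so that indexing or partitioning decisions can be grounded in actual plan data. The connection details, table, and columns are hypothetical and it assumes the psycopg2 driver; the other engines listed have their own explain/profiling facilities that serve the same purpose.

    # Minimal query-plan sketch. Connection details and the events table are
    # hypothetical; assumes the psycopg2 package is installed.
    import json

    import psycopg2

    conn = psycopg2.connect(host="localhost", dbname="ingestion", user="analyst", password="secret")

    query = """
        EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
        SELECT device_id, count(*)
        FROM events
        WHERE ingested_at >= now() - interval '1 hour'
        GROUP BY device_id;
    """

    with conn, conn.cursor() as cur:
        cur.execute(query)
        plan = cur.fetchone()[0]           # EXPLAIN ... FORMAT JSON returns a single JSON document
        print(json.dumps(plan, indent=2))  # compare estimated vs. actual rows, buffer usage, node types

    conn.close()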
Posted 2 days ago
0 years
7 - 10 Lacs
Hyderābād
Remote
Hyderabad, India Job ID: R-1080050 Apply prior to the end date: August 23rd, 2025 When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you'll be doing... We are seeking a highly motivated and experienced Engineer III to serve as the primary Operations Engineer for our Software-Defined Networking (SDN) controllers, with integrated DevOps responsibilities. In this critical role, you will be instrumental in ensuring the reliability, scalability, and performance of our core network provisioning infrastructure. You will work closely with a team to manage, maintain, and troubleshoot the BNC OpenDaylight controller platform and its successor, the NEAP OpenDaylight controller platform (including Red Hat OpenShift, Kubernetes, Elasticsearch, Grafana, Kafka, Kibana, MongoDB, and Prometheus components), Nokia Nuage, and Nokia SRIC (Segment Routing Controller) platforms. You will also contribute to building automation tools to enhance operational efficiency. Triaging, researching, and appropriately routing AYS (At Your Service) tickets submitted against the operations team, and following up on tickets throughout their lifecycle. Scheduling and tracking maintenance windows in the Kirke Change Control system for hands-on operational procedures to be executed on production platforms. Understanding the needs of the Operations teams and driving solutions to problems through prioritizing projects, managing your time, taking accountability for delivering results, and maintaining close relationships with peer teams/organizations, including SDN Planning. Overseeing and taking accountability for compliance with all Verizon security standards through CPI-810, requirements for all Verizon-built applications, and adherence to privacy and data policies. Presenting and communicating progress on projects to various teams. Partnering internally and externally with peer organizations and vendors to provide operational leadership. Helping to usher in a DevOps culture at Verizon. Contributing to the design, build, test, and deployment of automation utilities that are integral to the operations toolkit. Helping support the team through actively providing feedback to management on status and any obstacles. What we're looking for... You are excited by the prospect of working on groundbreaking technology in a creative and entrepreneurial environment. You’re a great team player and can use your excellent communication skills to get your point across to technical and non-technical audiences alike. No stranger to a fast-paced environment, you manage competing priorities with ease and get a kick out of finding innovative solutions to complex problems. You'll need to have: Bachelor’s degree and four or more years of work experience. Four or more years of relevant work experience in networking and software. Knowledge of software automation, programming languages, virtualization, and the networking layers. Comfort working in the Linux command line environment.
Familiarity with Python network automation libraries including Netmiko, NAPALM, Ansible, REST, and NETCONF. Experience with Red Hat OpenStack and OpenShift deployment environments. Experience with monitoring tools such as Prometheus, Grafana, New Relic, and the ELK stack. Experience in operations/platform support for network applications, along with automation development to improve operational efficiency using Python, Ansible, and similar tools. Even better if you have one or more of the following: Experience with managing network devices. Good organizational skills and the ability to handle multiple work assignments simultaneously. Four or more years' experience in the telco industry focused on technology. Ability to lead technical discussions with a group of individuals in the industry with varying technical positions. Experience using REST-based web services. Knowledge of Network Configuration Protocol (NETCONF), RESTCONF, and YANG. Knowledge of NextGen Optical Networks, 5G, SDN, NFV, and other relevant technologies. Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40 Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
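To illustrate the kind of Python network automation referenced above, here is a minimal sketch that collects operational state from a router over SSH using Netmiko. The device type, address, credentials, and command are hypothetical placeholders; real tooling would parse the output into structured data and feed it to the controller platforms or monitoring stack.

    # Minimal network-automation sketch. Host, credentials, and device_type are
    # hypothetical; assumes the netmiko package is installed.
    from netmiko import ConnectHandler

    device = {
        "device_type": "juniper_junos",   # hypothetical platform key
        "host": "198.51.100.10",          # documentation-range address, not a real device
        "username": "netops",
        "password": "example-password",
    }

    with ConnectHandler(**device) as conn:
        # A real workflow would parse this output (e.g., with TextFSM) and push the
        # result to monitoring or expose it through an operations API.
        output = conn.send_command("show interfaces terse")
        print(output)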
Posted 2 days ago
3.0 years
6 - 9 Lacs
India
On-site
Job Title: Core PHP Developer (3–10 Years Experience) Location: Madhapur, Hyderabad. Job Type: Full-Time | Permanent About the Role: We are seeking a Core PHP Developer with 3 to 10 years of experience, who has a solid foundation in PHP development and can work independently or in small teams. Many of our projects are legacy applications (7–10 years old) built using Core PHP, so a deep understanding of non-framework PHP development, MySQL optimization, and backend integrations is essential. Key Responsibilities: Maintain, enhance, and troubleshoot legacy Core PHP projects. Work independently or collaboratively to manage backend, frontend, and database development. Optimize performance for large data sets (e.g., import 1 lakh+ records efficiently into MySQL). Troubleshoot and recover crashed MySQL databases and perform advanced DB operations. Implement or manage integrations with tools like Elasticsearch or MongoDB (preferred). Handle deployment and server-level configurations (DNS, SSL, TLS, FTP). Use Git for version control and collaborate effectively with the team. Required Skills & Experience: 3+ years of hands-on experience with Core PHP (non-framework projects). Experience working on long-term, monolithic PHP applications. Strong MySQL knowledge: optimization, large dataset handling, backup/recovery. Experience with frontend basics (HTML, CSS, JavaScript) as needed. Ability to manage full-stack tasks independently. Familiar with Git-based version control. Exposure to DNS, SSL/TLS, FTP, and hosting-related configurations. Preferred (Not Mandatory): Working knowledge of Elasticsearch or MongoDB. Familiarity with Linux server environments. Experience with importing bulk data and performance tuning. Ideal Candidate: Self-driven, a problem-solver, and capable of taking ownership of complete modules or projects. Comfortable handling both code and server-side aspects. Efficient in debugging legacy codebases and refactoring when needed. Job Type: Full-time Pay: ₹50,000.00 - ₹80,000.00 per month Ability to commute/relocate: Madhapur, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred) Location: Madhapur, Hyderabad, Telangana (Required) Work Location: In person
Posted 2 days ago
3.0 years
36 Lacs
Delhi
On-site
Job description Technical Requirements: Requirement: ELASTICSEARCH (3+ YEARS) Experience: Minimum of 3 years of experience working with Elasticsearch in a production environment. Experience with distributed systems, big data, and search technologies is highly desirable. Skills: Design, implement, and manage Elasticsearch clusters, ensuring optimal performance, scalability, and reliability. Configure and maintain Elasticsearch index mappings, settings, and lifecycle management. Create and maintain comprehensive documentation for Elasticsearch setups, configurations, and best practices. Monitor cluster health, performance, and capacity planning to ensure high availability. Stay updated with the latest developments in Elasticsearch and related technologies and share knowledge with the team. Manage the lifecycle of indexed data, including rollovers, snapshots, and retention policies. In-depth knowledge of Elasticsearch, including cluster management, indexing, search optimization, and security. Proficiency in data ingestion tools like Logstash, Beats, and other ETL pipelines. Develop and implement data ingestion pipelines using tools such as Logstash, Beats, or custom scripts to ingest structured and unstructured data. Strong understanding of JSON, REST APIs, and data modeling. Experience with Linux/Unix systems and scripting languages (e.g., Bash, Python). Familiarity with monitoring tools like Kibana, Grafana, or Prometheus. Job Types: Full-time, Contractual / Temporary Pay: From ₹300,000.00 per month Work Location: In person
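As an illustration of the index lifecycle and mapping work described above, the sketch below defines an ILM rollover policy and a matching index template through Elasticsearch's REST API. The cluster URL, index pattern, field names, and retention values are hypothetical and authentication is omitted; it assumes only the Python requests package and is a sketch of the pattern, not a recommended production configuration.

    # Minimal ILM + index template sketch via the Elasticsearch REST API.
    # Cluster URL, names, and retention values are hypothetical; auth omitted.
    import requests

    ES = "http://localhost:9200"  # hypothetical cluster endpoint

    # Roll indices over daily or at 50 GB in the hot phase; delete after 30 days.
    ilm_policy = {
        "policy": {
            "phases": {
                "hot": {"actions": {"rollover": {"max_age": "1d", "max_primary_shard_size": "50gb"}}},
                "delete": {"min_age": "30d", "actions": {"delete": {}}},
            }
        }
    }
    requests.put(f"{ES}/_ilm/policy/logs-rollover", json=ilm_policy).raise_for_status()

    # Composable index template applying the policy and explicit mappings to new indices.
    template = {
        "index_patterns": ["app-logs-*"],
        "template": {
            "settings": {
                "number_of_shards": 1,
                "index.lifecycle.name": "logs-rollover",
                "index.lifecycle.rollover_alias": "app-logs",
            },
            "mappings": {
                "properties": {
                    "@timestamp": {"type": "date"},
                    "client_ip": {"type": "ip"},
                    "message": {"type": "text"},
                }
            },
        },
    }
    requests.put(f"{ES}/_index_template/app-logs-template", json=template).raise_for_status()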
Posted 2 days ago
6.0 years
0 Lacs
India
Remote
We are seeking a highly skilled and passionate Senior Software Engineer to join our client's dynamic team. This is a full-time hybrid role, primarily located in Noida with flexibility for some work from home. The ideal candidate will be passionate about crafting solid code and working collaboratively with team members to predictably deliver value. You will be a curious, adventurous individual who constantly looks to advance the current state of products and the technologies used. 📍 Location: Noida 🕒 Work Mode: Hybrid (2–3 days/week in office) 💼 Compensation: Best in industry, based on experience Responsibilities: Be an important part of a team that has full ownership of technical solutions, design, and implementation. Write well-designed, testable, clean code that supports business needs. Get hands-on and debug complex issues. Operate in a fast-moving environment, make quick decisions and execute to deliver desired outcomes. Be an integral part of the overall development/delivery team. Job Qualifications: At least 6 years of software development experience. BS/MS in Computer Science or equivalent work experience. Strong Object-Oriented Programming concepts. Experience with Java, Spring Boot, Angular, and RESTful APIs. Experience with cloud services; Google Cloud preferred. Experience in the web development model, with hands-on experience developing products leveraging UI technology stacks like JavaScript/Angular or equivalent. Experience with Google Cloud, cloud deployment, and a DevOps mindset. Familiarity with CI/CD automation and tools. Familiarity with Infrastructure as Code tools such as Terraform. Knowledge of both relational (PostgreSQL) and NoSQL (DynamoDB and/or Elasticsearch) database technologies. Knowledge of build pipelines (Jenkins preferred). Experience with version control systems such as Git (Bitbucket or GitHub preferred). If interested and looking for a change, please share your resume at sheenam.luthra@talentumhr.co.in
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
About Us At SentinelOne, we’re redefining cybersecurity by pushing the limits of what’s possible—leveraging AI-powered, data-driven innovation to stay ahead of tomorrow’s threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We’re looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you’re excited about solving complex challenges in bold, innovative ways, we’d love to connect with you. What are we looking for? We are looking for a seasoned Senior Software Engineering Leader with a strong background in building cloud-based products using Java technology. In this role, you will collaborate closely with internal teams to implement new features, own the end-to-end development of server-side components, and ensure high-performance and scalable deployments. You will work closely with QA teams to deliver high-quality products and interface with customer-facing teams to implement feature requirements effectively. This role demands a high level of ownership, a strong learning quotient, and meticulous attention to detail. What will you do? Build next-generation cloud-based products using Java technology. Actively collaborate with internal teams to implement new and exciting product features. Own the end-to-end development of server-side components, from design to deployment. Identify performance and scalability requirements and optimize for large deployments. Work closely with the Quality Assurance team to ensure high-quality deliverables to customers. Interface with customer-facing teams to understand and implement feature requirements. What experience or knowledge should you bring? 5 to 10 years of experience in core Java development. 15+ years of total industry experience. Proficiency in Java programming with a deep understanding of the Java language. Experience deploying Java-based applications in Jetty or any popular web application server. Strong knowledge of algorithm design and data structures. Advantages: Experience in microservices architecture using Spring is desirable. Familiarity with database technologies like Elasticsearch is desirable. Experience in developing and deploying cloud-native applications in AWS is desirable. Strong networking fundamentals and sound TCP/IP knowledge are a plus. Why us? You will be joining a cutting-edge company, where you will tackle extraordinary challenges and work with the very best in the industry, along with competitive compensation. Flexible working hours and hybrid/remote work model. Flexible Time Off. Flexible Paid Sick Days. Global gender-neutral Parental Leave (16 weeks, beyond the leave provided by the local laws). Generous employee stock plan in the form of RSUs (restricted stock units). On top of RSUs, you can benefit from our attractive ESPP (employee stock purchase plan). Gym membership/sports gear by Cultfit. Wellness Coach app, with 3,000+ on-demand sessions, daily interactive classes, audiobooks, and unlimited private coaching. Private medical insurance plan for you and your family. Life Insurance covered by S1 (for employees). Telemedical app consultation (Practo). Global Employee Assistance Program (confidential counseling related to both personal and work-life matters). High-end MacBook or Windows laptop. Home-office-setup allowance (one time) and maintenance allowance. Internet allowance.
Provident Fund and Gratuity (as per govt clause). NPS contribution (employee contribution). Half-yearly bonus program depending on individual and company performance. Above-standard referral bonus as per policy. Udemy Business platform for hard/soft skills training, plus support for your further educational activities/trainings. Sodexo food coupons. SentinelOne is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. SentinelOne participates in the E-Verify Program for all U.S.-based roles.
Posted 2 days ago