
35553 Kubernetes Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position: DevOps Engineering Manager
Address: 22nd Floor, Tower C, DLF Epitome, Cyber Hub, Gurgaon, Haryana

About TBO (www.tbo.com): TBO is a global platform that aims to simplify all buying and selling travel needs of travel partners across the world. The proprietary technology platform simplifies the demands of the complex world of global travel by seamlessly connecting highly distributed travel buyers and travel suppliers at scale. The TBO journey began in 2006 with a simple goal: to address the evolving needs of travel buyers and suppliers. What started as a single-product air-ticketing company has today become the leading B2A (Business to Agents) travel portal across the Americas, UK and Europe, Africa, the Middle East, India, and Asia Pacific. Today, TBO's products range across air, hotels, rail, holiday packages, car rentals, transfers, sightseeing, cruises, and cargo. Apart from these products, our proprietary platform relies heavily on AI/ML to offer unique listings and products that meet specific customer requirements, increasing conversions. TBO's approach has always been technology-first, and we continue to invest in new innovations and offerings to make travel easy and simple. TBO's travel APIs serve large travel ecosystems across the world, while the platform's modular architecture enables new travel products and expansion into new geographies.

Why TBO:
- You will influence and contribute to building the world's largest technology-led travel distribution network for a $9 trillion global travel market.
- We are the emerging leaders in technology-led, end-to-end travel management in the B2B space.
- Physical presence in 47 countries, with business in 110 countries.
- Our Gross Transaction Volume (GTV) is in the billions and growing much faster than the industry rate, backed by a proven and well-established business model.
- We are reputed for long-lasting, trusted relationships. We stand by our ecosystem of suppliers and buyers to serve the end customer.
- An open and informal start-up environment that cares.

What TBO offers to the Life Traveler in you:
- Work with CXO leaders. Our leadership comes from top IITs and IIMs, or has led significant business journeys for top Indian and global brands.
- Enhance your leadership acumen. Join the journey to create global scale and world-class products.
- Challenge yourself to do something path-breaking. Be empowered: the only thing to stop you will be your imagination.
- The travel space is likely to see significant growth. Witness and shape this space; it will be one exciting journey.
- Own a wide portfolio of our Platform Business, India. Primary focus will be on top talent attraction, retention, development, and engagement. Talent Acquisition, Business HR, HR Operations, and Learning will report in, apart from relevant COE functions connected to these domains.

Do you have it in you to take the voyage? (Must-Haves):
- 8+ years of good hands-on experience implementing DevOps practices.
- Design, develop, and maintain DevOps processes spanning the plan, code, build, test, release, deploy, operate, and monitor stages.
- Platform automation using AWS technologies such as CDK, Terraform, and CloudFormation.
- Experience with scripting languages such as Python, PowerShell, and Bash; candidates with experience in Linux, NGINX, Envoy, and Fargate are preferred.
- Good knowledge of Kubernetes (preferably AWS EKS) and containers using Docker.
- Design and develop CI/CD pipelines, preferably using GitHub Actions.
- End-to-end implementation.
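The must-haves above span the full plan-through-monitor loop. As a rough illustration of the "operate and monitor" end of that loop, here is a minimal Python sketch of a post-deploy health gate; the function name, inputs, and 5% error budget are all invented for this example, not part of the posting.

```python
# Hypothetical post-deploy gate: decide whether to roll back a release
# based on health checks and an error-rate budget. All names and the
# 5% default threshold are assumptions for illustration only.

def should_rollback(health_checks: list, error_rate: float,
                    max_error_rate: float = 0.05) -> bool:
    """Roll back when any health check fails or errors exceed the budget."""
    if not all(health_checks):
        return True
    return error_rate > max_error_rate

# One failing health check is enough to trigger a rollback.
print(should_rollback([True, False], 0.01))  # True
```

In a real pipeline, a step like this would run after the deploy stage and feed the rollback procedure the posting mentions.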

Posted 1 day ago

Apply

3.0 - 6.0 years

13 - 15 Lacs

Mumbai Metropolitan Region

On-site

About The Opportunity: We're a rapid-growth enterprise AI platform provider in the cloud services and SaaS sector, empowering Fortune 500 customers to modernize data pipelines, automate knowledge work, and unlock new revenue streams with Generative AI. Backed by AI researchers and Google Cloud architects, we deliver production-grade solutions on GCP at scale. Join our hybrid team in Pune or Mumbai to shape the next generation of agentic AI-driven data products.

Role & Responsibilities:
- Architect, develop, and maintain end-to-end ETL pipelines on GCP leveraging BigQuery, Dataflow, Cloud Composer, Dataform, and Pub/Sub.
- Build and optimize secure, high-performance RESTful and gRPC APIs to expose analytics and ML features to internal and external consumers.
- Implement cost-effective, resilient data workflows through partitioning, autoscaling, and advanced monitoring with Cloud Monitoring/Stackdriver.
- Automate infrastructure provisioning and deployments using Terraform, Cloud Build, and Cloud Deploy, enforcing Infrastructure-as-Code best practices.
- Embed data quality and governance via schema enforcement, versioned contracts, and automated regression testing.
- Collaborate closely with product managers, data scientists, and SRE teams to meet SLAs and deliver measurable business impact.

Skills & Qualifications (Must-Have):
- 3-6 years designing and implementing large-scale data solutions on Google Cloud Platform (BigQuery, Composer, Dataflow, Dataform, Pub/Sub).
- Strong proficiency in Python and SQL for building robust data pipelines and analytics queries.
- Expertise in DevOps workflows: Git, CI/CD (Cloud Build, Cloud Deploy), containerization, and Infrastructure-as-Code (Terraform).
- Proven experience developing and tuning high-throughput REST/gRPC APIs for data services.
- Deep understanding of data partitioning, optimization, and monitoring using Cloud Monitoring/Stackdriver.
- Solid knowledge of data quality frameworks, schema management, and automated testing in pipeline workflows.

Preferred:
- Experience integrating or operating Elasticsearch/OpenSearch for log and metric search.
- Familiarity with streaming frameworks such as Kafka or Flink, and performance benchmarking on GCP.
- Exposure to Kubernetes or GKE for container orchestration and production scaling.

Benefits & Culture Highlights:
- Hybrid work model with flexible hours and collaborative office spaces in Pune and Mumbai.
- Continuous learning opportunities: certifications, hackathons, and AI/cloud training programs.
- Inclusive, innovation-driven culture emphasizing work-life balance and career growth.

Skills: Generative AI, CI/CD, Git, Data, Azure, DevOps, Python, LangChain, Kubernetes, Elasticsearch, Dataflow, RAG, LLMs, MLOps, API Development, API Platform, GCP, Cloud, Advanced, Pipelines
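The "data partitioning" requirement above refers to splitting large tables or streams by a key, most often a date, so queries scan only the slices they need (BigQuery's partitioned tables work this way). A minimal stdlib Python sketch of the idea, with an invented row shape and field names:

```python
# Illustrative date-partitioning sketch: route each row to a daily
# partition key, the same idea BigQuery applies with column-based
# partitioning. The "event_time" field name is an assumption.
from collections import defaultdict
from datetime import datetime

def partition_by_day(rows):
    parts = defaultdict(list)
    for row in rows:
        ts = datetime.fromisoformat(row["event_time"])
        parts[ts.strftime("%Y%m%d")].append(row)  # key like "20240101"
    return dict(parts)

rows = [{"event_time": "2024-01-01T10:00:00", "v": 1},
        {"event_time": "2024-01-02T09:30:00", "v": 2}]
print(sorted(partition_by_day(rows)))  # ['20240101', '20240102']
```

A query filtered to one day would then touch only that partition, which is where the cost savings the posting alludes to come from.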

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Overview: We are seeking a skilled Java Developer with 6 years of progressive experience in backend development using Java and Spring Boot. The ideal candidate will have hands-on experience with Kafka, Docker, and the Kubernetes CLI, along with a strong understanding of object-oriented programming (OOP), SOLID principles, clean coding practices, and design patterns.

Key Skills & Experience:
- Proven experience in Java, Spring Boot Web, Gradle, and object-oriented programming.
- Strong understanding of OOP concepts, design patterns, and clean code practices.
- Familiarity with Kafka, Docker, and the Kubernetes CLI.
- Exposure to functional programming is a plus.
- Experience with Test-Driven Development (TDD) and continuous integration.
- Hands-on experience with RDBMS, particularly PostgreSQL.
- Leadership experience in Agile, Lean, or Continuous Delivery environments.
- Passion for software engineering and a craftsmanship approach to development.

Job Responsibilities:
- Deliver high-quality, customer-centric software using continuous delivery practices.
- Collaborate in cross-functional, value-driven teams to create innovative customer solutions.
- Design and build scalable, distributed systems using a microservices architecture.
- Leverage DevOps tools and best practices to streamline development and deployment.
- Contribute to all stages of software delivery, from ideation to production deployment.
- Mentor junior developers, providing guidance through your technical expertise and leadership.

Perks & Benefits:
- Object Bootcamp: upskill through intensive learning programs.
- Sabbatical leave: recharge with extended leave options.
- Parental leave: support for growing families.
- Office perks: free meals and snacks at the workplace.
- Career-focused culture: dedicated growth and development opportunities.

Work Culture:
- Open and transparent culture with a flat hierarchy.
- Regular 360-degree feedback and a structured mentorship program.
- Supportive and collaborative work environment.
- A competitive yet friendly atmosphere that encourages innovation and learning.
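The posting asks for design patterns and clean-code practice in a Java/Spring context. As a language-neutral illustration, here is a small Strategy-pattern sketch in Python; the domain (discount policies) and every name in it are invented for this example.

```python
# Strategy pattern sketch: behavior is selected by injecting an object,
# so checkout() stays closed to modification but open to extension
# (the open/closed principle among SOLID). All names are invented.
from abc import ABC, abstractmethod

class DiscountPolicy(ABC):
    @abstractmethod
    def apply(self, amount: float) -> float: ...

class NoDiscount(DiscountPolicy):
    def apply(self, amount: float) -> float:
        return amount

class PercentageDiscount(DiscountPolicy):
    def __init__(self, pct: float):
        self.pct = pct
    def apply(self, amount: float) -> float:
        return amount * (1 - self.pct)

def checkout(amount: float, policy: DiscountPolicy) -> float:
    # New policies can be added without changing this function.
    return policy.apply(amount)

print(checkout(100.0, PercentageDiscount(0.10)))  # 90.0
```

The same shape in Java would be an interface with implementations wired in via constructor injection, which is the idiom Spring encourages.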

Posted 1 day ago

Apply

3.0 - 6.0 years

13 - 15 Lacs

Pune, Maharashtra, India

On-site

About The Opportunity We’re a rapid-growth enterprise AI platform provider in the cloud services & SaaS sector, empowering Fortune 500 customers to modernize data pipelines, automate knowledge work, and unlock new revenue streams with Generative AI. Backed by AI researchers and Google Cloud architects, we deliver production-grade solutions on GCP at scale. Join our hybrid team in Pune or Mumbai to shape the next generation of agentic AI-driven data products. Role & Responsibilities Architect, develop, and maintain end-to-end ETL pipelines on GCP leveraging BigQuery, Dataflow, Cloud Composer, Dataform, and Pub/Sub. Build and optimize secure, high-performance RESTful and gRPC APIs to expose analytics and ML features to internal and external consumers. Implement cost-effective, resilient data workflows through partitioning, autoscaling, and advanced monitoring with Cloud Monitoring/Stackdriver. Automate infrastructure provisioning and deployments using Terraform, Cloud Build, and Cloud Deploy, enforcing Infrastructure-as-Code best practices. Embed data quality and governance via schema enforcement, versioned contracts, and automated regression testing. Collaborate closely with product managers, data scientists, and SRE teams to meet SLAs and deliver measurable business impact. Skills & Qualifications Must-Have 3-6 years designing and implementing large-scale data solutions on Google Cloud Platform (BigQuery, Composer, Dataflow, Dataform, Pub/Sub). Strong proficiency in Python and SQL for building robust data pipelines and analytics queries. Expertise in DevOps workflows: Git, CI/CD (Cloud Build, Cloud Deploy), containerization, and Infrastructure-as-Code (Terraform). Proven experience developing and tuning high-throughput REST/gRPC APIs for data services. Deep understanding of data partitioning, optimization, and monitoring using Cloud Monitoring/Stackdriver. Solid knowledge of data quality frameworks, schema management, and automated testing in pipeline workflows. 
Preferred Experience integrating or operating Elasticsearch/OpenSearch for log and metric search. Familiarity with streaming frameworks such as Kafka or Flink and performance benchmarking on GCP. Exposure to Kubernetes or GKE for container orchestration and production scaling. Benefits & Culture Highlights Hybrid work model with flexible hours and collaborative office spaces in Pune and Mumbai. Continuous learning opportunities: certifications, hackathons, and AI/cloud training programs. Inclusive, innovation-driven culture emphasizing work-life balance and career growth. Skills: genrative AI,CI,Cd,Git,Data,Azure,DevOps,Python,langchain,Kubernetes,elastic search,Data Flow,RAG,LLMs,MLOps,API Development,API Platform,GCP,Cloud,Advanced,Pipelines

Posted 1 day ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At NiCE, we don't limit our challenges. We challenge our limits. Always. We're ambitious. We're game changers. And we play to win. We set the highest standards and execute beyond them. And if you're like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what's the role all about? We are looking for talented and motivated professionals interested in delivering the latest in Application Operations (SaaS) using AWS, in a culture that encourages autonomous, productive teams. You are someone who loves learning how to configure backend systems and infrastructure that will help us build our global SaaS platform. You have a comprehensive understanding of the Amazon AWS platform and know how to light and put out fires. You are an engineer confident in their skill set, who is not afraid to look under the hood and break stuff to make stuff. If yes, then come be a part of NiCE Customer Services, a team of software engineers re-inventing Application Operations.

How will you make an impact? You will team up with highly talented and motivated engineers and architects, using the latest in AWS and working on cutting-edge technology. As part of this team, you will work in a fast-paced environment deploying, monitoring, automating, and supporting highly scalable, real-time critical platforms impacting millions of individuals and billions of dollars:
- Implementing and configuring custom changes, and deploying new application release upgrades.
- Setting up new environments and deploying solutions.
- Building proactive monitoring and alerting services.
- Automation using Ansible, Python, and Perl scripting.
- Setting up and securing new application instances.
- Change management: building deployment and rollback plans and procedures.
- Creating and maintaining a knowledge base for various technical resolutions.
- Creating and setting up deployment scripts for different environments (e.g., test properties vs. prod properties).
- Configuring and optimizing instances and web servers for optimal performance (e.g., adjusting default connection limits and request-queuing thresholds).
- AWS troubleshooting support.
- Supporting, architecting, and implementing alongside Technical and Operations teams to meet our customers' individual needs for their infrastructure and application deployments.
- Working on critical, highly complex customer problems that span multiple AWS services (dealing daily with high-severity incidents).
- Helping build and improve customer operations through scripts that automate and deploy AWS resources seamlessly, with as little manual intervention as possible.
- Collaborating to build utilities and tools for internal use that enable you and your fellow engineers to operate safely at high speed and wide scale.
- Driving customer communication during critical events.
- Providing on-call, off-hours support; flexibility to work in a 24x7 shift environment.

Have you got what it takes?
- 2+ years of relevant experience.
- Excellent hands-on experience managing application support (3-tier/2-tier apps).
- Strong problem-solving, analytical, and communication skills.
- Exposure to handling complex application performance issues.
- Exposure to APM tools such as AppDynamics and Dynatrace.
- Excellent skills in managing containerized/cloud-based applications, with exposure to various cloud services (EC2, S3, IAM, ELB, VPC, VPN).
- Good experience in a DevOps environment, Operations team, or Infrastructure Operations team.
- Excellent troubleshooting skills.
- OS-level knowledge (Windows or Linux).
- Database skills (SQL: Oracle, Postgres, or Cassandra).
- Application server skills in any middleware technology (e.g., Tomcat, WebLogic, WebSphere).
- Ability to identify the underlying root cause of performance issues and mitigate bottlenecks.
- Good understanding of networking and load balancers.
- Good communication, both written and verbal.
- Exposure to scripting languages (Ansible, Perl, Python, Ruby, shell script, PowerShell, etc.).
- Experience working with tools like OpsGenie, Nagios, and Rundeck; good understanding of Kubernetes.
- Cloud/application-level security experience.
- Experience in the Banking & Financial domain.
- Has worked in an Agile/Sprint development model.

What's in it for you? Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7996
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE: NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime, and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions.
Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
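The "proactive monitoring & alerting" duty in this posting usually means alerting on sustained breaches rather than single spikes. A hypothetical stdlib Python sketch of that idea; the threshold, window, and all names are assumptions, not NiCE's actual values:

```python
# Illustrative alerting rule: fire only when a metric stays above its
# threshold for N consecutive samples, to avoid paging on one spike.
# Threshold and window values are invented for this example.

def breached(samples, threshold, consecutive=3):
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

cpu = [40, 92, 95, 91, 50]  # percent utilization samples
print(breached(cpu, 90))    # True: three samples above 90 in a row
```

Tools like Nagios or OpsGenie named in the requirements implement far richer versions of this same debouncing idea.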

Posted 1 day ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Job Description: Responsible for architecting and deploying solutions that combine machine learning models with full-stack applications using Java and Python. This role focuses on integrating data pipelines, model inference, and API-driven front-end interfaces to automate workflows and optimize performance across systems. Drives implementation strategies aligned to product requirements and engineering standards.

Location: New Delhi - GK1

Responsibilities:
- Understand IoT-specific requirements, including data ingestion from edge devices, analytics needs, and user-facing application features.
- Lead technical discussions with cross-functional teams (e.g., hardware, cloud, analytics) to evaluate feasibility, define specifications, and assess performance and scalability for IoT solutions.
- Define and design software architecture for integrating IoT data pipelines, ML models, and full-stack applications using Java and Python.
- Deliver robust features including sensor data processing, real-time analytics dashboards, and APIs for device management and control.
- Drive deployment of end-to-end IoT platforms, from data collection and ML model deployment to web/mobile access, with a focus on automation and resilience.
- Review and finalize infrastructure design, including edge-cloud integration, containerized services, and streaming data solutions (e.g., Kafka, MQTT).
- Create and manage user stories for device-side logic, cloud-based processing, and visualizations, ensuring seamless interaction across systems like OSS/BSS and enterprise applications.
- Establish standards for edge computing, MLOps in IoT, and cloud-native application development (SaaS/IoT PaaS), ensuring security, scalability, and maintainability.
- Facilitate prioritization of features related to device data processing, predictive maintenance, anomaly detection, and real-time user interfaces.

Desired Skill Sets:
- Strong experience architecting and delivering software applications combining real-time data, machine learning, and cloud-native full-stack platforms.
- Hands-on expertise with Java (Spring Boot) and Python for both backend services and ML model implementation.
- Experience with IoT protocols (MQTT, CoAP), data streaming (Kafka, AWS Kinesis), and edge-cloud data integration.
- Deep understanding of software/application lifecycle management for connected-device platforms.
- Experience working in Agile setups and DevOps pipelines with tools like Docker, Kubernetes, Jenkins, and Git.
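Sensor-data processing of the kind this posting describes often starts with decoding a JSON payload (for example, one delivered over MQTT) and range-checking the reading. A small illustrative Python sketch; the payload schema, device names, and temperature limits are all invented:

```python
# Hypothetical edge-side processing step: decode a sensor payload as it
# might arrive over MQTT and flag out-of-range readings for the
# anomaly-detection features mentioned above. Schema is an assumption.
import json

def process_payload(raw: str, low: float = -40.0, high: float = 85.0):
    msg = json.loads(raw)
    reading = {"device": msg["device_id"], "temp_c": msg["temp_c"]}
    reading["anomaly"] = not (low <= msg["temp_c"] <= high)
    return reading

payload = '{"device_id": "sensor-7", "temp_c": 121.5}'
print(process_payload(payload)["anomaly"])  # True: above the 85 C limit
```

In a real pipeline, flagged readings would feed the predictive-maintenance and alerting features rather than a simple boolean.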

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We are seeking a skilled Technical Project Manager who can effectively bridge the gap between the technical team and the client. The ideal candidate will have a strong background in leading projects. This role requires a deep understanding of both technical and project-management aspects to ensure successful project delivery.

Job Responsibilities:
- Serve as a liaison between the technical team and clients, ensuring clear communication and alignment of project goals and deliverables.
- Lead the architecture, design, development, and deployment of applications built on Node.js, React.js, Python, and related technologies.
- Collaborate with cross-functional teams to define project requirements and deliver cutting-edge solutions.
- Develop reusable components and applications using React.js and Next.js.
- Build secure, high-performance, and scalable applications using Node.js and TypeScript.
- Conduct technical estimation, analysis, documentation, and architecture during the project conceptualization stage.
- Test and deliver high-quality, error-free applications within pre-defined time schedules.
- Work collaboratively with cross-functional teams to exceed client expectations with innovative solutions.
- Learn quickly and adapt to modern technologies in a fast-paced environment.
- Stay updated with the latest trends in IoT and GenAI, leveraging them to enhance project outcomes.
- Implement Agile methodologies to ensure efficient project management and delivery.
- Manage project timelines, resources, and budgets to ensure projects are completed on time and within scope.
- Facilitate regular project meetings and provide status updates to stakeholders.
- Identify and mitigate project risks and issues proactively.
- Foster a collaborative team environment and encourage continuous improvement.

Job Requirements:
- Educational background: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
- A strong team player with a can-do attitude and excellent communication and writing skills.
- Deep understanding of web technologies, primarily Node.js and common web front-end frameworks such as React/Angular, along with databases like MongoDB/MySQL.
- Solution-oriented and performance-driven mindset, capable of solving complex technical challenges independently.
- Experience with Agile methodologies and tools like Jira or Trello.
- Proficiency in databases such as MongoDB, MySQL, or PostgreSQL.
- Familiarity with microservices architecture, ORMs, and payment-gateway integration.
- Knowledge of modern tools like Figma for UI/UX design and AI tools for code generation.
- Hands-on experience designing scalable and secure architectures for enterprise applications.
- Ability to balance performance, cost, and functionality trade-offs.
- Experience with AWS services (e.g., Lambda, SQS, SNS) or other cloud platforms is a plus.
- Familiarity with CI/CD pipelines and containerization (e.g., Docker, Kubernetes).

Soft Skills:
- Strong problem-solving skills and a solution-oriented mindset.
- Excellent communication and interpersonal skills.
- Proven ability to lead and motivate teams.
- Experience with IoT solutions or protocols and GenAI technologies is highly desirable.

Posted 1 day ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India.

Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of experience in enterprise application development, solution architecture, or systems integration.
- 4 years of experience in a client-facing cloud solution architecture advisory role.
- Experience designing solutions with multiple programming languages (e.g., Java, Python, Go).
- Experience architecting cloud-native applications using Google Kubernetes Engine (GKE), serverless (e.g., Cloud Run, Cloud Functions), and related GCP services.
- Experience with microservices, event-driven architectures, API-first design, Domain-Driven Design (DDD), and distributed/resilient application patterns on GCP.

Preferred qualifications:
- Certifications in Google Cloud (e.g., Professional Cloud Architect, Professional Cloud Developer).
- Experience in Kubernetes and the financial industry, and in leading architectural aspects of customer-facing GCP application migrations, from discovery to guiding implementation.
- Experience designing complex GCP application interactions: data flows, API strategies, and inter-service communication (e.g., REST, gRPC, event streams).
- Familiarity with multi-cloud/hybrid environments.
- Ability to review existing code for re-architecture and to understand application code to inform re-architecture and modernization strategies.
- Excellent communication, presentation, and problem-solving skills, with the ability to influence executive stakeholders on complex architectural decisions.

About The Job: The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure.
As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. As an Application Modernization Cloud Consultant, you will work directly with Google’s most strategic customers on critical application development and modernization projects to help them transform their businesses. You will provide expert solution architecture consulting and technical leadership to customers, working with client executives and key technical leaders to guide the architectural direction and adoption of modern applications and cloud-native solutions on Google Cloud Platform (GCP). In this role, you will lead customers through complex application redesign and re-architecture efforts, focusing on solution patterns for application scaling, reliability, and effective integration with GCP services. You will translate these strategies into practical, buildable architectural blueprints and provide strategic technical guidance, informed by your ability to prototype, demonstrate, and validate key architectural decisions. Additionally, you will work closely with Product Management and Product Engineering to build and facilitate excellence in our products and service offerings. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Serve as a trusted technical advisor on solution architecture to customer development leads, client executives, and partners, guiding them in designing and modernizing mission-critical application solutions on GCP. 
Lead and create sophisticated solution architectures and modernization roadmaps for cloud-native applications on GCP, aligning with complex customer requirements and promoting architectural best practices. Architect solutions that are practical and buildable, enabling effective hand-off for implementation. Collaborate with internal specialists, Product, and Engineering teams to contribute to and propagate Application Modernization thought leadership, reusable solution-architecture patterns, and best practices related to GCP application modernization. Interact with sales, partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks, issues, and timelines for successful client outcomes.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
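The event-driven architectures named in this posting's qualifications decouple producers from consumers behind a topic. A toy in-process Python sketch of that shape; a real GCP design would use Pub/Sub or a similar managed service rather than this invented EventBus class:

```python
# Minimal in-process publish/subscribe sketch: handlers register by
# topic and react to published events without the producer knowing
# about them. The EventBus class and topic name are invented.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        # Fan out the event to every handler registered on the topic.
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
audit = []
bus.subscribe("order.created", lambda e: audit.append(e["id"]))
bus.publish("order.created", {"id": 42})
print(audit)  # [42]
```

The decoupling is the point: new consumers can subscribe without changing the publisher, which is what makes the pattern attractive for the modernization work described above.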

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for a QA Engineer to work with web-based and API applications, responsible for implementing automation testing and designing automated tests to validate performance: writing test documentation, executing manual and automation scripts, conducting regular meetups, analyzing test results to predict user behavior, identifying bugs and suggesting solutions to minimize problems, and working closely with product and development to ensure timely delivery of the project.

What you'll be responsible for:
- Design, code, test, and manage various applications.
- Implement automation testing.
- Write test documentation.
- Execute manual and automation scripts.
- Conduct regular meetups.
- Analyze test results to predict user behavior.
- Identify bugs and suggest solutions to minimize problems.
- Work closely with product, development, and other stakeholders across the globe.

Qualifications and other skills:
- 5+ years of experience in QA (test case design, test case execution, and status reports).
- At least 3 years of experience in API/UI automation.
- At least 1 year of experience in performance testing.
- Bachelor's degree in Computer Science or equivalent.

What you'd have:
- Good knowledge of Python or Java programming.
- Exposure to the JMeter tool.
- Database testing experience (minimum 1 year).
- Kubernetes knowledge (not mandatory).
- Exposure to any one IDE is a plus.
- Strong analytical and problem-solving skills.
- Good oral and written communication skills.
- Good knowledge of Linux and Microsoft Windows.
- Jira exposure.
- Passion for learning and working.
- Code-management tools like GitHub, SVN, CVS, etc.
- CI/CD pipeline work exposure.
- Azure ADO exposure.
- Telecom experience (communication platforms: SMS, email, voice, etc.).

Why join us?
- Impactful work: play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous growth opportunities: be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative environment: work alongside a world-class team in a challenging and fun environment where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
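Automated API testing of the kind this role describes typically asserts on a response's status and body shape. A minimal Python sketch against a stubbed response; the response schema and field names are invented for illustration:

```python
# Hypothetical API-check sketch: validate a (stubbed) response dict
# against expectations, returning a list of failures so a test runner
# can report them all at once. Schema is an assumption.

def validate_response(resp: dict) -> list:
    errors = []
    if resp.get("status") != 200:
        errors.append(f"unexpected status {resp.get('status')}")
    body = resp.get("body", {})
    if not isinstance(body.get("id"), int):
        errors.append("body.id missing or not an int")
    return errors

stub = {"status": 200, "body": {"id": 7}}
print(validate_response(stub))  # []: the stubbed response passes
```

In practice the stub would be replaced by a real HTTP call, and the same assertions would run inside a framework such as pytest or in a JMeter assertion.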

Posted 1 day ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Description About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States. It started with one ridiculously good idea to create a different breed of Business Processing Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world. What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First. Head of Engineering, AI Safety Services Enable responsible AI innovation at the speed of progress. The impact you'll make Accelerate responsible innovation by providing world-class AI safety services that allow our customers to quickly deploy new models and applications without compromising on safety. 
Influence the industry by establishing best practices, scalable tools, and clear benchmarks that set the standard for AI safety. Drive customer success through tools and platforms that integrate seamlessly into customer workflows, enabling safe and rapid innovation. What You'll Do Lead and inspire your team (AI engineers, platform engineers, and a product manager): Set clear goals, mentor team members, and foster a culture of impact, collaboration, and rapid learning. Implement agile processes that prioritize speed without sacrificing quality or security. Collaborate closely with customers to understand and address their AI safety challenges: Engage directly with customer teams to understand their innovation objectives, pain points, and safety requirements. Translate customer needs into actionable roadmaps aligned with their innovation cycles. Deliver tools and services that ensure AI applications are aligned, robust, interpretable, and safe: Oversee the development of platforms for alignment assessments, adversarial testing, drift detection, and interpretability analysis. Make informed build vs. buy decisions to optimize speed and customer integration. Champion engineering excellence in a high-stakes environment: Establish and enforce secure coding practices, automation, monitoring, and infrastructure-as-code. Ensure system reliability through defined SLAs, SLOs, rigorous testing, and clear operational dashboards. Continuously innovate within AI safety methodologies: Keep current with cutting-edge research in AI safety, adversarial testing, and model evaluation. Pilot emerging techniques that enhance our offerings and accelerate customer deployments. Experiences You'll Bring 5+ years of experience in software engineering, with at least 2 years managing technical teams or complex projects. Proven track record delivering AI, ML, or data-intensive platforms or services (e.g., model evaluation platforms, MLOps systems, AI safety tools). 
Demonstrated success in team leadership, coaching, and talent development. Excellent cross-functional collaboration and communication skills, bridging technical concepts with customer value. Technical Skills You'll Need: Proficiency with common software languages, frameworks, and tools such as Python, TensorFlow, PyTorch, Docker, Kubernetes, JavaScript, HTML/CSS, REST APIs, and SQL/NoSQL databases. Expertise with cloud environments such as AWS, GCP, or Azure. Familiarity with LLM architectures, including evaluation techniques, prompt engineering, and adversarial testing methodologies. Preferred technical experience: exposure to platforms like LangChain, LangGraph; experience with distributed systems (Spark/Flink); familiarity with AI regulatory frameworks or secure deployment standards (e.g., SOC 2, ISO 27001). Why you'll love this role: Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. Ready to ensure AI moves fast and stays safe? Join us and shape the future of responsible AI innovation. How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs. DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. 
TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ . TaskUs is proud to be an equal opportunity workplace and is an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community. Req Id: R_2508_10545_3 Posted At: Tue Aug 05 2025 00:00:00 GMT+0000 (Coordinated Universal Time)

Posted 1 day ago

Apply

6.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

This role is for one of Weekday's clients. Min Experience: 6 years | Location: Indore | Job Type: Full-time. We are seeking a Tech Lead to drive the design, development, and delivery of scalable and high-performance software solutions. In this role, you will mentor engineers, influence architectural decisions, and lead the technical direction of key projects. You'll also have the opportunity to integrate AI/ML technologies across products in domains such as Fintech, eCommerce, Logistics, or Real Estate. Requirements Key Responsibilities: Lead the end-to-end development of complex, distributed, and scalable software systems. Collaborate with product managers, designers, and stakeholders to transform business needs into robust technical solutions. Design and architect reliable systems, integrating AI/ML capabilities for personalization, automation, and intelligent decision-making. Drive system performance improvements and ensure architectural scalability. Conduct code reviews and promote coding best practices across the team. Mentor junior and mid-level engineers, providing guidance on design, implementation, and growth. Oversee timely delivery of high-quality, testable, and maintainable code. Actively contribute to Agile practices, including sprint planning, retrospectives, and backlog grooming. Troubleshoot and resolve production issues with strong root cause analysis and resolution planning. Ensure comprehensive technical documentation and streamline CI/CD pipelines and release processes. ✅ Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of hands-on software development experience with 1-2+ years in a technical leadership capacity. Proficiency in backend languages such as Node.js, Python, Go, or Java. Experience with frontend frameworks like React, Angular, or Vue.js. Strong knowledge of system design, RESTful APIs, and microservices architecture. 
Hands-on experience with cloud platforms such as AWS, GCP, or Azure. Demonstrated experience integrating or deploying AI/ML models (e.g., NLP, recommendation systems, fraud detection). Experience in building or scaling platforms in Fintech, eCommerce, Logistics, or Real Estate. Excellent communication, stakeholder management, and problem-solving skills. Comfortable working in fast-paced environments with changing priorities. 🌟 Preferred Qualifications Prior experience working with LLMs, predictive analytics, or chatbot integration. Familiarity with Agile/Scrum methodologies. Working knowledge of CI/CD pipelines, Docker, Kubernetes, and observability tools. Strong business acumen and a customer-focused mindset. 💡 Key Skills AI/ML, NLP, LLMs Node.js, Python, Go, Java React, Angular, Vue.js System Design, RESTful APIs, Microservices AWS, GCP, Azure Docker, Kubernetes, CI/CD

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you. About NiCE: NiCE is a leading provider of cloud-based and on-premises enterprise software solutions. Our innovative technology helps organizations improve customer interactions, optimize business processes, and ensure compliance with industry standards. With a global presence and a commitment to excellence, NiCE is at the forefront of cloud contact center innovation. So, what’s the role all about? We are looking for an experienced QA professional with a strong background in testing distributed systems. The ideal candidate can work independently, has strong experience with test automation, and is self-driven to ensure high-quality software delivery. You will be working in a multidisciplinary team with other professionals delivering high-quality and secure software within an Agile delivery framework. The role will be based in Pune, India. Extensive collaboration and communication with US-based teams will be a key part of the job, so excellent communication skills are critical. How will you make an impact? Automated Testing: Design, implement, and maintain automated test cases using tools and frameworks like Selenium and Playwright. Identify test scenarios and implement automated testing strategies for both UI and backend services. Programming: Develop scripts for automated tests in languages such as TypeScript (required). CI/CD Integration: Integrate automated tests into CI/CD pipelines to ensure continuous quality. Web and API Testing: Conduct comprehensive testing of web-based SaaS applications and RESTful APIs. Perform API testing using tools such as Postman or similar. 
Test Frameworks: Utilize and manage testing frameworks like NUnit and Playwright for creating and managing test cases. Defect Tracking: Report and manage software issues using defect tracking tools. Collaboration: Collaborate with developers and product managers to understand features and requirements, ensuring comprehensive test coverage. Mentor junior team members and help drive continuous improvement in testing practices. Testing Strategies: Implement various testing strategies, including unit, integration, and end-to-end testing. Ensure compliance with QA best practices, processes, and methodologies. Performance and Security Testing (Optional): Conduct performance and security testing for SaaS applications. Problem-solving: Analyze and address software issues through automated testing processes. Have you got what it takes? BS or MS in Computer Science, Engineering, or related degree. 8+ years of experience in software testing & automated testing of microservices written using TypeScript, C#, or Java. Hands-on experience with Playwright or Selenium automation testing. Strong understanding of test automation design patterns and tools. ISTQB certification (or equivalent) preferred. Experience in creating clear test plans, scripts, and reporting results. Utilize Git or similar version control systems to manage test scripts, coordinate test coverage with development, and enable collaboration across teams. Practical experience in manual and automated testing across UI, business logic, data access, and APIs. Experience in API/Web Services Testing using tools like Postman. Experience in cross-browser and non-functional testing strategies. Good understanding of testing design patterns and experience in implementing automated tests for REST APIs and service-based architectures. Experience with cloud tooling such as Kafka, EKS (managed Kubernetes on AWS), and creating test scenarios for cloud-based microservices and messaging queues. 
Experience with Continuous Integration workflows and tooling, integrating automated tests into CI/CD pipelines to ensure early detection of defects. Stay updated with industry trends, emerging technologies, and best practices in test automation to drive innovation and efficiency within the QA team. You will have an advantage if you also have: Familiarity and/or experience with public cloud infrastructures and technologies such as Amazon Web Services (AWS). Performance testing of API- and UI-based applications. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Reporting into: Tech Manager, Engineering, CX. Role Type: Individual Contributor. About NiCE: NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. 
NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
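The QA listing above emphasizes test automation design patterns and data-driven testing strategies. As a hedged illustration only (the function and test cases are hypothetical, and the role itself calls for TypeScript rather than Python), a table-driven test with `unittest.subTest` looks like this:

```python
import unittest

def normalize_phone(raw: str) -> str:
    """Toy function under test: strip all formatting from a phone number."""
    return "".join(ch for ch in raw if ch.isdigit())

class TestNormalizePhone(unittest.TestCase):
    # Data-driven pattern: one table of (input, expected) pairs drives
    # many assertions, so adding coverage means adding a row, not a method.
    CASES = [
        ("(555) 123-4567", "5551234567"),
        ("555.123.4567", "5551234567"),
        ("+1 555 123 4567", "15551234567"),
    ]

    def test_cases(self):
        for raw, expected in self.CASES:
            with self.subTest(raw=raw):
                self.assertEqual(normalize_phone(raw), expected)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the run.
    unittest.main(exit=False)
```

The same shape carries over directly to Playwright or NUnit parameterized tests: the test logic is written once and the data table enumerates the scenarios.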

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Are you passionate about solving complex business challenges through cutting-edge AI and data science? Are you a proven leader with a vision for leveraging advanced analytics to create scalable solutions? We are looking for an accomplished and innovative Lead Data Scientist to join our growing team. This pivotal role offers the unique opportunity to shape data-driven strategies, lead groundbreaking AI initiatives, and mentor a high-performing team of data scientists. If you're ready to make a significant impact, we invite you to explore this exciting opportunity. As the Lead Data Scientist, you’ll work at the intersection of innovation and business impact, spearheading AI/ML solutions that address complex challenges. This role not only requires exceptional technical expertise but also the ability to inspire and lead a talented team, driving excellence in every project. Key Responsibilities 1. Leadership & Mentorship Lead, inspire, and mentor a team of data scientists, fostering a culture of collaboration, innovation, and continuous learning. Provide technical guidance to ensure the delivery of high-quality, scalable solutions within tight deadlines. Promote best practices, drive knowledge sharing, and encourage cross-functional collaboration to achieve organizational goals. 2. AI/ML Solution Development Architect and deploy scalable, enterprise-level AI solutions tailored to solve complex business problems. Engineer and optimize Generative AI models (GenAI), Large Language Models (LLMs), and Transformer-based architectures for top-notch performance. Utilize techniques like prompt engineering, transfer learning, and model optimization to deliver state-of-the-art AI solutions. 3. Natural Language Processing (NLP) Design advanced NLP solutions leveraging tools such as Word2Vec, BERT, SpaCy, NLTK, CoreNLP, TextBlob, and GloVe. Perform semantic analysis, sentiment analysis, text preprocessing, and tokenization to generate actionable business insights. 4. 
Cloud & Deployment Build and deploy AI/ML solutions using frameworks like FastAPI or gRPC for seamless delivery of services. Leverage cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP) to design high-performance, scalable systems. Deploy models using Docker containers on Kubernetes clusters for optimal scalability and reliability. 5. Database Management Manage and optimize large-scale data processing using SQL and NoSQL databases. Ensure seamless data flow and retrieval, enhancing overall system performance. 6. Big Data & Analytics Utilize big data technologies like Hadoop, Spark, and Hive for analyzing and processing massive datasets. Apply statistical and experimental design techniques to uncover meaningful insights and drive decision-making. 7. MLOps & CI/CD Pipelines Develop and maintain robust MLOps pipelines to streamline the integration, testing, and deployment of machine learning models. Ensure the scalability, reliability, and efficiency of AI/ML models in production environments. 8. Collaboration & Communication Partner with product managers, business analysts, and engineering teams to identify challenges and propose innovative solutions. Translate complex technical insights into actionable recommendations for technical and non-technical stakeholders alike. Key Qualifications Educational Background Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related technical field. Experience 5–8 years of industry experience, with a minimum of 2 years in a leadership role managing and mentoring data science teams. Proven track record in delivering end-to-end AI/ML solutions that solve real-world business challenges. Technical Skills Proficiency in Python and its data science libraries. Advanced expertise in NLP tools like Word2Vec, BERT, NLTK, SpaCy, TextBlob, CoreNLP, and GloVe. Strong knowledge of Transformer-based architectures and Generative AI/LLMs. 
Hands-on experience with cloud platforms (AWS, Azure, GCP) and deployment technologies (FastAPI, gRPC, Docker, Kubernetes). Proficiency in big data tools (Hadoop, Spark, Hive) and database systems (SQL/NoSQL). Strong grasp of statistical methods, machine learning algorithms, and experimental design principles. Domain Knowledge Prior experience in Online Reputation Management or product-based industries is highly desirable. Additional Skills Exceptional project management skills with the ability to manage multiple priorities simultaneously. Excellent communication and storytelling skills to convey complex technical concepts effectively to diverse audiences.
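The NLP responsibilities above mention text preprocessing, tokenization, and stop-word handling. A minimal pure-Python sketch of those steps follows; in practice libraries like NLTK or SpaCy would be used, and the stop-word list here is a tiny illustrative subset, not a real lexicon:

```python
import re
from collections import Counter

# Illustrative stop-word subset (real lists such as NLTK's are much larger).
STOPWORDS = {"the", "a", "an", "is", "and", "of", "to"}

def preprocess(text: str) -> list[str]:
    """Lowercase, tokenize on word characters, and drop stop words."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

doc = "The product is great and the delivery of the order was fast"
tokens = preprocess(doc)
print(tokens)  # ['product', 'great', 'delivery', 'order', 'was', 'fast']
print(Counter(tokens))  # term frequencies, the input to many downstream models
```

Sentiment analysis, semantic analysis, and embedding pipelines all start from a normalization step of roughly this shape before any model is applied.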

Posted 1 day ago

Apply

3.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Cloud Engineer (Certified). Experience Required: 3 to 4 Years. Location: Pune or Mumbai. Employment Type: Full-time. Job Summary: We are seeking a skilled and proactive Cloud Engineer with 3-4 years of hands-on experience in designing, deploying, and managing cloud infrastructure, primarily on Amazon Web Services (AWS) and Google Cloud Platform (GCP), holding a Professional-level Solutions Architect or DevOps Engineer certification. Applicants must have an active AWS or GCP Professional-level certificate. Key Responsibilities: Design and implement scalable, secure, and cost-optimized cloud solutions on AWS and GCP. Manage infrastructure as code using tools like Terraform, AWS CloudFormation, or CDK. Working knowledge of Python or Go scripting. Proficiency in Kubernetes operations. Monitor system performance, troubleshoot issues, and ensure high availability. Implement CI/CD pipelines and automate deployment processes. Work closely with development, security, and operations teams to ensure smooth cloud operations. Ensure best practices for cloud security, backup, and disaster recovery are followed. Required Skills & Qualifications: 3-4 years of proven experience as a Cloud Engineer working in AWS/GCP environments. Mandatory: Professional Certification (e.g., Solutions Architect – Professional or DevOps Engineer – Professional). Strong understanding of cloud security principles and networking (VPC, Subnets, VPN, etc.). Experience with Linux/Unix systems and scripting (Bash, Python, etc.). Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline. Experience with monitoring tools (CloudWatch, Prometheus, Grafana, etc.). Experience with containerization (Docker, Kubernetes, ECS, or EKS). Exposure to other cloud platforms like GCP or Azure. Knowledge of cost optimization and governance in multi-account AWS setups. Important Note: Being an extremely good communicator is a must. A Professional-level technical certification is mandatory. 
Ready to travel for client meetings to drive solution messaging and demos. Ready for global engagements.

Posted 1 day ago

Apply

3.0 - 6.0 years

13 - 15 Lacs

Pune, Maharashtra, India

On-site

About The Opportunity: We’re a fast-growing enterprise AI platform provider in the cloud services & software (SaaS) sector, helping Fortune 500 clients modernize data pipelines, automate knowledge work and unlock new revenue with Generative AI. Backed by a deep bench of AI researchers and cloud architects, we build scalable, production-grade solutions. Join our Pune-based hybrid team to shape the next generation of agentic AI products. Role & Responsibilities: Design and implement end-to-end data pipelines on GCP using BigQuery, Cloud Composer, Dataflow, Dataform and Pub/Sub. Develop scalable, secure REST/gRPC APIs that expose analytics and ML features to internal and external consumers. Optimize pipeline resilience & cost through partitioning, autoscaling, job orchestration, and advanced monitoring (Cloud Monitoring, Stackdriver). Automate DevOps workflows—set up Git-based CI/CD (Cloud Build, Cloud Deploy, Terraform) and enforce IaC best practices. Embed data quality & governance with schema enforcement, versioned data contracts and automated regression testing. Partner cross-functionally with product managers, data scientists and SREs to deliver SLAs and measurable business value. Skills & Qualifications Must-Have: 3-6 years building large-scale data solutions on Google Cloud Platform (BigQuery, Composer, Dataflow, Dataform, Pub/Sub). Strong Python development skills plus SQL chops for BigQuery. DevOps expertise: Git, containerization, CI/CD, infra-as-code (Terraform/Cloud Deploy). Proven track record of API development and throughput tuning for high-volume data services. Preferred: Experience integrating or operating Elasticsearch / OpenSearch for log & metric search. Knowledge of streaming frameworks (Kafka, Flink) and advanced performance benchmarking on GCP. Skills: Generative AI, Python, API Development, GCP, CI/CD, Git, Data, DevOps, LangChain, Kubernetes, Elasticsearch, Dataflow, RAG, LLMs, MLOps, API Platform, Cloud, Pipelines
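The responsibilities above call out partitioning as a lever for pipeline resilience and cost (BigQuery, for example, prunes daily partitions to cut scan costs). A small Python sketch of daily partition assignment; the record shape and field names are assumptions for illustration, not a GCP API:

```python
from collections import defaultdict
from datetime import datetime

def partition_key(record: dict) -> str:
    """Derive a BigQuery-style daily partition key (YYYYMMDD) from an event timestamp."""
    ts = datetime.fromisoformat(record["event_time"])  # field name is hypothetical
    return ts.strftime("%Y%m%d")

def partition(records: list[dict]) -> dict[str, list[dict]]:
    """Group records by their daily partition so each day can be loaded independently."""
    parts = defaultdict(list)
    for r in records:
        parts[partition_key(r)].append(r)
    return dict(parts)

events = [
    {"id": 1, "event_time": "2024-05-01T08:30:00"},
    {"id": 2, "event_time": "2024-05-01T17:45:00"},
    {"id": 3, "event_time": "2024-05-02T09:00:00"},
]
print({k: [r["id"] for r in v] for k, v in partition(events).items()})
# {'20240501': [1, 2], '20240502': [3]}
```

Grouping by partition key like this is also what makes backfills and reprocessing cheap: a bad day's data can be replaced without touching any other partition.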

Posted 1 day ago

Apply

0 years

13 - 15 Lacs

Pune, Maharashtra, India

On-site

About The Opportunity: We’re a fast-growing enterprise AI platform provider in the cloud services & software (SaaS) sector, helping Fortune 500 clients modernize data pipelines, automate knowledge work and unlock new revenue with Generative AI. Backed by a deep bench of AI researchers and cloud architects, we build scalable, production-grade solutions on Microsoft Azure. Join our Pune-based hybrid team to shape the next generation of agentic AI products. Role & Responsibilities: Design, prototype and deploy GenAI applications (LLMs, RAG, multimodal) on Azure OpenAI, Cognitive Search and Kubernetes-based micro-services. Build and orchestrate agentic frameworks (LangGraph / AutoGen) to enable multi-agent reasoning, tool-calling and workflow automation at scale. Engineer robust data & prompt pipelines using Azure Data Factory, Event Hub and Cosmos DB, ensuring low-latency, high-throughput inference. Optimize model performance & cost via fine-tuning, quantization and scalable caching on Azure ML and AKS. Harden solutions for production with end-to-end CI/CD, observability (App Insights, Prometheus), security & responsible-AI guardrails. Collaborate cross-functionally with product managers, designers and customer success to deliver measurable business impact. Skills & Qualifications Must-Have: 3-5 yrs hands-on in Generative AI / LLM engineering (GPT, Llama 2, Claude, etc.) with at least one product in production. Proven expertise in Microsoft Azure services: Azure OpenAI, Functions, Data Factory, Cosmos DB, AKS. Strong Python/TypeScript with agentic frameworks (LangChain, AutoGen, Semantic Kernel) and REST/GraphQL APIs. Solid grounding in cloud MLOps: Docker, Helm, Terraform/Bicep, GitHub Actions or Azure DevOps. Preferred: Experience benchmarking & scaling pipelines to >10 K QPS using Vector DBs (Qdrant, Pinecone) and distributed caching. Familiarity with prompt-engineering, fine-tuning & retrieval-augmented generation (RAG) best practices. 
Knowledge of Kubernetes operators, Dapr, Service Mesh for fault-tolerant micro-services. Skills: Generative AI, Azure, Python, LLMs, SQL Azure, agentic frameworks, LangGraph, AutoGen, CI/CD, Kubernetes, Microsoft Azure, Cloud
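The role above centers on RAG applications; at the heart of retrieval-augmented generation, documents are ranked by vector similarity to the query embedding before the top hits are passed to the LLM. A stripped-down sketch with toy vectors standing in for a real embedding model and a vector DB such as Qdrant or Pinecone:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 3-dimensional "embeddings"; a real system would use model-generated
# vectors with hundreds of dimensions, stored in a vector database.
corpus = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['refund policy']
```

The retrieved passages are then stuffed into the prompt as context, which is what grounds the LLM's answer in the indexed documents rather than its training data alone.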

Posted 1 day ago

Apply

6.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description for Cloud Security Engineer. Job Title: Cloud Security Engineer. Location: Hyderabad, Pune, Coimbatore. Experience: 6 - 8 years of experience. Working Mode: 5 Days Work From Office. Job Summary: We are looking for a Cloud Security Engineer with a minimum of 6 years of experience in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a deep understanding of cloud infrastructure and architecture, coupled with expertise in deploying, managing, and optimizing AWS services. As a Cloud Platform Engineer, you will play a crucial role in designing, implementing, and maintaining our cloud-based solutions to meet the evolving needs of the organization and clients. Responsibilities: The following are the day-to-day work activities: Using a broad range of AWS services (VPC, EC2, RDS, ELB, S3, AWS CLI, CloudWatch, CloudTrail, AWS Config, Kinesis, Route 53, DynamoDB, and SNS) to develop and maintain an Amazon AWS based cloud solution. Implementing identity and access management (IAM) controls to manage user and system access securely. Collaborating with cloud architects and developers to create security solutions for cloud environments (e.g., AWS, Azure, GCP) by designing security controls, ensuring they are integrated into cloud platforms, and ensuring that cloud infrastructure adheres to relevant compliance standards (e.g., GDPR, HIPAA, PCI-DSS). Monitoring cloud environments for suspicious activities and threats using tools like SIEM (Security Information and Event Management) systems. Implementing security governance policies and maintaining audit logs for regulatory requirements. Automating cloud security processes using tools such as CloudFormation, Terraform, or Ansible. Implementing infrastructure as code (IaC) to ensure secure deployment and configuration. 
Building custom Terraform modules to provision cloud infrastructure and maintaining them through enhancements and version upgrades. Collaborating with DevOps, network, and software development teams to promote secure cloud practices, and training and educating employees about cloud security best practices. Securing and encrypting data by providing secret management solutions with versioning enabled. Building backup solutions to cover application downtime, maintaining a parallel disaster recovery environment in the backend, and implementing Disaster Recovery strategies. Designing and delivering scalable, highly available solutions for applications migrating to the cloud, using launch configurations, Auto Scaling groups, scaling policies, CloudWatch alarms, load balancers, and Route 53. Enabling an extra layer of security for cloud root accounts. Working with data-based application migration teams to strategically and securely move data from on-premises data centers to cloud storage within an isolated environment. Working with source code management pipelines and debugging issues caused by failed IT development deployments. Remediating findings from the cybersecurity tools used for Cloud-Native Application Security and implementing resource/cloud service tagging strategies by enforcing compliance standards. Experience performing AWS operations within these areas: Threat Detection, Threat Prevention, Incident Management. Cloud-Specific Technologies: Control Tower and Service Control Policies; AWS security tools (AWS IAM, Detective, Inspector, Security Hub, etc.). General understanding: Identity and Least Privilege, Networking in AWS, IaaS, ITSM (ticketing systems/processes). Requirements: Candidates are required to have these mandatory skills for their profile to be assessed for eligibility. The must-have requirements are: Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent work experience). 
Minimum of 6 years of hands-on experience as a Cloud Platform Engineer, with a strong focus on AWS. In-depth knowledge of AWS services such as EC2, S3, VPC, IAM, RDS, Lambda, ECS, and others. Proficiency in scripting and programming languages such as Python, Bash, or PowerShell. Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or AWS CDK. Strong understanding of networking concepts, security best practices, and compliance standards in cloud environments. Hands-on experience with containerization technologies (Docker, Kubernetes) and serverless computing. Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems. Strong communication skills with the ability to collaborate effectively with cross-functional teams. AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer) are a plus. About the Company: ValueMomentum is amongst the fastest growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital and data transformation initiatives. Having grown consistently every year by 24%, we have now grown to over 4,000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees. Benefits: We at ValueMomentum offer you the opportunity to grow by working alongside the experts. Some of the benefits you can avail are: Competitive compensation package comparable to the best in the industry. Career Advancement: Individual Career Development, coaching and mentoring programs for professional and leadership skill development. Comprehensive training and certification programs. Performance Management: Goal setting, continuous feedback and year-end appraisal. 
Reward & recognition for the extraordinary performers. Benefits: Comprehensive health benefits, wellness and fitness programs. Paid time off and holidays. Culture: A highly transparent organization with an open-door policy and a vibrant culture If you are interested in the above role, kindly fill in the details below or share your updated resume to Suresh.Tadi@valuemomentum.com Full Name: Overall Experience: Relevant Experience: Notice Period: Current CTC (Cost to Company): Expected CTC: Are you open to working 5 days a week from the office? (Yes/No): Preferred Location (if applicable): Are you currently employed? (Yes/No): Reason for Looking for a Change :

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

DevOps Engineer Location: Pune (Aundh) - Work From Office Opportunity: DevOps Engineer What you will do: Kubernetes Cluster Management : Oversee multiple Kubernetes clusters across various environments, including tasks such as version upgrades, security patching, and managing critical cluster applications. Application Deployment on Kubernetes : Deploy and optimize JVM-based applications for containerized environments, leveraging Kubernetes' proactive scaling capabilities. Manage deployments using GitOps practices to ensure consistency and automation. AWS Network Design : Demonstrate a strong understanding of AWS network design principles, including the ability to manage and control network traffic flows effectively. CI/CD Pipeline Development: Design and maintain robust CI/CD pipelines to automate application builds and deployments. Containerization Expertise : Exhibit expertise in Docker and containerization practices, ensuring efficient and reliable containerized application environments. Cloud Cost Optimization : Identify and resolve cloud cost inefficiencies, implementing strategies to optimize resource utilization and expenditure. Infrastructure as Code (IaC) : Leverage Infrastructure as Code (IaC) tools to automate infrastructure management, ensuring consistency, scalability, and ease of maintenance. Infrastructure Improvement : Proactively evaluate infrastructure for potential enhancements and recommend actionable improvements to ensure performance, reliability, and scalability. Ownership and Documentation : Take end-to-end ownership of products and applications you deploy. Provide comprehensive documentation detailing the deployment process and infrastructure setup. Experience Range: 3-5 years in DevOps Skills & Qualifications: Kubernetes Expertise : Proven experience in managing and operating Kubernetes clusters, particularly on Amazon EKS (Elastic Kubernetes Service). 
Containerized Java Applications : Solid understanding of deploying and maintaining containerized Java applications, including optimization for performance and scalability. GitOps and CI/CD : Proficiency in GitOps practices, with hands-on experience writing and managing CI/CD pipelines using GitLab or similar tools. Scripting and Automation : Strong scripting skills in languages such as Go (Golang), Python, or JavaScript to automate processes and solve operational challenges. Infrastructure Debugging : Demonstrated ability to debug complex network and infrastructure-related issues, ensuring system reliability and uptime. AWS Proficiency : Extensive experience working with AWS services, with a solid understanding of its ecosystem and best practices for cloud infrastructure. Monitoring and Observability Tools : Familiarity with tools such as Prometheus, VictoriaMetrics, OpenSearch, and Kafka, as well as database systems like PostgreSQL. Experience with these is highly advantageous.
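GitOps-driven deployment, mentioned above, often comes down to small pieces of glue logic in the pipeline. One hypothetical example (a sketch, assuming image tags follow a plain MAJOR.MINOR.PATCH scheme): picking the newest tag to promote, which requires numeric rather than lexicographic comparison:

```python
def latest_tag(tags):
    """Return the newest tag, assuming plain MAJOR.MINOR.PATCH semver."""
    def key(tag):
        # Compare component-wise as integers, so "1.10.0" > "1.9.3".
        return tuple(int(part) for part in tag.split("."))
    return max(tags, key=key)

tags = ["1.9.3", "1.10.0", "1.2.11"]
print(latest_tag(tags))  # "1.10.0" sorts above "1.9.3" numerically
```

A naive string sort would rank "1.9.3" above "1.10.0", a classic bug in tag-promotion scripts.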

Posted 1 day ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. System Administrator - AI/ML Platform We are looking for a detail-oriented and technically proficient AI/ML Cloud Platform Administrator to manage, monitor, and secure our cloud-based platforms supporting machine learning and data science workloads. This role requires deep familiarity with both AWS and Azure cloud services, and strong experience in platform configuration, resource provisioning, access management, and operational automation. You will work closely with data scientists, MLOps engineers, and cloud security teams to ensure high availability, compliance, and performance of our AI/ML platforms. Your Responsibilities Will Include Provision, configure, and maintain ML infrastructure on AWS (e.g., SageMaker, Bedrock, EKS, EC2, S3) and Azure (e.g., Azure Foundry, Azure ML, AKS, ADF, Blob Storage) Manage cloud resources (VMs, containers, networking, storage) to support distributed ML workflows Deploy and manage open-source ML orchestration frameworks such as LangChain and LangGraph Implement RBAC, IAM policies, Azure AD, and Key Vault configurations to manage secure access. Monitor security events, handle vulnerabilities, and ensure data encryption and compliance (e.g., ISO, HIPAA, GDPR) Monitor and optimize performance of ML services, containers, and jobs Set up observability stacks using Fiddler, CloudWatch, Azure Monitor, Grafana, Prometheus, or ELK. 
Manage and troubleshoot issues related to container orchestration (Docker, Kubernetes – EKS/AKS) Use Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Bicep to automate environment provisioning Collaborate with MLOps teams to automate deployment pipelines and model operationalization Implement lifecycle policies, quotas, and data backups for storage optimization Required Qualifications Bachelor's/Master’s in Computer Science, Engineering, or related discipline 7 years in cloud administration, with 2+ years supporting AI/ML or data platforms Proven hands-on experience with both AWS and Azure Proficient in Terraform, Docker, Kubernetes (AKS/EKS), Git, and Python or Bash scripting Security Practices: IAM, RBAC, encryption standards, VPC/network setup Requisition ID: 611331 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
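One duty listed above is implementing lifecycle policies for storage optimization. A minimal sketch of the underlying logic (hypothetical data, not tied to any specific cloud API): selecting objects whose last-modified timestamp is older than a retention window, which is what a lifecycle expiration rule effectively evaluates:

```python
from datetime import datetime, timedelta, timezone

def expired_objects(objects, retention_days, now=None):
    """Return keys whose last_modified is older than retention_days.
    `objects` is a list of (key, last_modified) pairs."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [key for key, modified in objects if modified < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
objs = [
    ("logs/old.gz", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("logs/new.gz", datetime(2024, 5, 30, tzinfo=timezone.utc)),
]
print(expired_objects(objs, retention_days=90, now=now))  # ['logs/old.gz']
```

In practice the cloud provider evaluates such rules natively; a script like this is useful mainly for auditing what a proposed policy would delete.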

Posted 1 day ago

Apply

7.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. System Administrator - AI/ML Platform We are looking for a detail-oriented and technically proficient AI/ML Cloud Platform Administrator to manage, monitor, and secure our cloud-based platforms supporting machine learning and data science workloads. This role requires deep familiarity with both AWS and Azure cloud services, and strong experience in platform configuration, resource provisioning, access management, and operational automation. You will work closely with data scientists, MLOps engineers, and cloud security teams to ensure high availability, compliance, and performance of our AI/ML platforms. Your Responsibilities Will Include Provision, configure, and maintain ML infrastructure on AWS (e.g., SageMaker, Bedrock, EKS, EC2, S3) and Azure (e.g., Azure Foundry, Azure ML, AKS, ADF, Blob Storage) Manage cloud resources (VMs, containers, networking, storage) to support distributed ML workflows Deploy and manage open-source ML orchestration frameworks such as LangChain and LangGraph Implement RBAC, IAM policies, Azure AD, and Key Vault configurations to manage secure access. Monitor security events, handle vulnerabilities, and ensure data encryption and compliance (e.g., ISO, HIPAA, GDPR) Monitor and optimize performance of ML services, containers, and jobs Set up observability stacks using Fiddler, CloudWatch, Azure Monitor, Grafana, Prometheus, or ELK. 
Manage and troubleshoot issues related to container orchestration (Docker, Kubernetes – EKS/AKS) Use Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Bicep to automate environment provisioning Collaborate with MLOps teams to automate deployment pipelines and model operationalization Implement lifecycle policies, quotas, and data backups for storage optimization Required Qualifications Bachelor's/Master’s in Computer Science, Engineering, or related discipline 7 years in cloud administration, with 2+ years supporting AI/ML or data platforms Proven hands-on experience with both AWS and Azure Proficient in Terraform, Docker, Kubernetes (AKS/EKS), Git, and Python or Bash scripting Security Practices: IAM, RBAC, encryption standards, VPC/network setup Requisition ID: 611330 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 1 day ago

Apply

25.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

eInfochips (An Arrow Company): eInfochips, an Arrow company (A $27.9 B, NASDAQ listed (ARW); Ranked #154 on the Fortune List), is a leading global provider of product engineering and semiconductor design services. With a 25+ year proven track record and a team of over 2,500 engineers, eInfochips has been instrumental in developing over 500 products with 40M deployments in 140 countries. The company’s service offerings include Silicon Engineering, Embedded Engineering, Hardware Engineering & Digital Engineering services. eInfochips services 7 of the top 10 semiconductor companies and is recognized by NASSCOM, Zinnov and Gartner as a leading semiconductor service provider. What we are looking for: Experience: 5 to 8 years in DevOps, with a strong focus on automation, cloud infrastructure, and CI/CD practices. Terraform: Advanced knowledge of Terraform, with experience in writing, testing, and deploying modules. AWS: Extensive experience with AWS services (EC2, S3, RDS, Lambda, VPC, etc.) and best practices in cloud architecture. Docker & Kubernetes: Proven experience in containerization with Docker and orchestration with Kubernetes in production environments. CI/CD: Strong understanding of CI/CD processes, with hands-on experience in CircleCI or similar tools. Scripting: Proficient in Python and Linux shell scripting for automation and process improvement. Monitoring & Logging: Experience with Datadog or similar tools for monitoring and alerting in large-scale environments. Version Control: Proficient with Git, including branching, merging, and collaborative workflows. Configuration Management: Experience with Kustomize or similar tools for managing Kubernetes configurations. Work Location - Ahmedabad/Pune/Bangalore/Hyderabad Shift timing (Rotational) - S1: 6 AM to 2:30 PM IST, S2: 2 PM to 10:30 PM IST, S3: 10 PM to 6:30 AM IST. Interested candidates can share resumes on arti.bhimani1@einfochips.com
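The posting above mentions Kustomize for managing Kubernetes configurations. Conceptually, Kustomize layers environment-specific overlays over a shared base manifest; a rough Python sketch of that layering idea follows (this is not Kustomize's actual algorithm, which handles lists, patches, and name prefixes differently — it only illustrates the base-plus-overlay concept):

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay values win.
    Returns a new dict and leaves both inputs unmodified."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical base manifest plus a production overlay.
base = {"spec": {"replicas": 1, "image": "app:1.0"}}
prod = {"spec": {"replicas": 5}}
print(deep_merge(base, prod))  # replicas overridden, image inherited
```

The payoff of the overlay model is that the production config only states what differs from the base, so drift between environments stays visible and small.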

Posted 1 day ago

Apply

5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

MERN Developer Kadel Labs is a leading IT services company that has been delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency. Role: MERN Developer Experience: 5+ Years Location: Udaipur Job Description: We are seeking a skilled MERN Developer with 5+ years of experience in designing and developing full-stack web applications using MongoDB, Express.js, React.js, and Node.js. The ideal candidate will be proficient in building scalable APIs, integrating frontend components, optimizing performance, and collaborating with cross-functional teams to deliver high-quality solutions. Key Responsibilities: Design, develop, and maintain end-to-end web applications using the MERN stack. Write clean, modular, and scalable code following best practices. Develop RESTful APIs and integrate them with frontend components. Optimize applications for maximum speed and scalability. Implement authentication and authorization mechanisms such as JWT and OAuth. Work with MongoDB including indexing, aggregation, and performance tuning. Collaborate with UI/UX designers and backend teams for seamless integration. Troubleshoot, debug, and enhance existing applications. Stay updated with the latest web development technologies and industry trends. Required Skills: 5+ years of hands-on experience in MERN stack development. Strong proficiency in React.js, Redux, Hooks, and Context API. Expertise in Node.js, Express.js, and scalable API development. Experience with MongoDB and Mongoose. Familiarity with Docker, Kubernetes, and CI/CD pipelines. 
Exposure to cloud platforms like AWS, GCP, or Azure. Knowledge of TypeScript is a plus. Excellent problem-solving and debugging skills. Strong communication and teamwork abilities. Preferred Skills: Experience with GraphQL. Knowledge of WebSockets and real-time applications. Exposure to microservices architecture. Educational Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. Visit us: https://kadellabs.com/ https://in.linkedin.com/company/kadel-labs https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
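The responsibilities above include JWT-based authentication. Production code would use a maintained JWT library, but the core signing idea can be sketched with stdlib HMAC-SHA256; everything here (the secret, the payload, the token shape) is an illustrative placeholder, not a full JWT implementation (no header, claims validation, or expiry):

```python
import base64
import hashlib
import hmac
import json

def sign(payload: dict, secret: bytes) -> str:
    """Produce base64(payload).base64(hmac) — a JWT-like token sketch."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = base64.urlsafe_b64encode(hmac.new(secret, body, hashlib.sha256).digest())
    return body.decode() + "." + sig.decode()

def verify(token: str, secret: bytes) -> bool:
    """Recompute the HMAC and compare in constant time."""
    body, sig = token.rsplit(".", 1)
    expected = base64.urlsafe_b64encode(
        hmac.new(secret, body.encode(), hashlib.sha256).digest()
    ).decode()
    return hmac.compare_digest(sig, expected)

token = sign({"user": "demo"}, b"placeholder-secret")
print(verify(token, b"placeholder-secret"))  # True
print(verify(token, b"wrong-secret"))        # False
```

The key property this illustrates is that the server can validate tokens statelessly: any tampering with the payload invalidates the signature.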

Posted 1 day ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains. About The Role We’re hiring a Data Engineer to join our Data Platform team. You’ll help build and scale the systems that power analytics, reporting, and data-driven features across the company. This role works with engineers, analysts, and product teams to make sure our data is accurate, available, and usable. 
What You’ll Do Build and maintain reliable data pipelines and ETL/ELT workflows Develop and optimize data models for analytics and internal tools Work with team members to deliver clean, trusted datasets Support core data platform tools like Airflow, dbt, Spark, Redshift, or Snowflake Monitor data pipelines for quality, performance, and reliability Write clear documentation and contribute to test coverage and CI/CD processes Help shape our data lakehouse architecture and platform roadmap What You Need 2–4 years of experience in data engineering or a backend data-related role Strong skills in Python or another backend programming language Experience working with SQL and distributed data systems (e.g., Spark, Kafka) Familiarity with NoSQL stores like HBase or similar Comfortable writing efficient queries and building data workflows Understanding of data modeling for analytics and reporting Exposure to tools like Airflow or other workflow schedulers Bonus Points Experience with dbt, Databricks, or real-time data pipelines Familiarity with cloud infrastructure tools like Terraform or Kubernetes Interest in data governance, ML pipelines, or compliance standards Why Join Us? Work on data that supports meaningful software security outcomes Use modern tools in a cloud-first, open-source-friendly environment Join a team that values clarity, learning, and autonomy At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
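Workflow schedulers like Airflow, named above, fundamentally run tasks in dependency order. The core idea — topologically sorting a task graph — can be sketched with Python's stdlib `graphlib`, using a hypothetical four-step pipeline and assuming the graph is acyclic:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (hypothetical pipeline).
deps = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load", "transform"},
}

# static_order yields every task after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # 'extract' comes first, 'report' last
```

Real schedulers add retries, parallelism, and backfills on top, but the dependency-ordering contract is the same.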

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Sonatype is the software supply chain security company. We provide the world’s best end-to-end software supply chain security solution, combining the only proactive protection against malicious open source, the only enterprise grade SBOM management and the leading open source dependency management platform. This empowers enterprises to create and maintain secure, quality, and innovative software at scale. As founders of Nexus Repository and stewards of Maven Central, the world’s largest repository of Java open-source software, we are software pioneers and our open source expertise is unmatched. We empower innovation with an unparalleled commitment to build faster, safer software and harness AI and data intelligence to mitigate risk, maximize efficiencies, and drive powerful software development. More than 2,000 organizations, including 70% of the Fortune 100 and 15 million software developers, rely on Sonatype to optimize their software supply chains. About The Role We are seeking a Software Engineer in Test to join our Quality Engineering team. In this role, you will be responsible for designing, developing, and maintaining automation frameworks to enhance our test coverage and ensure the delivery of high-quality software. You will collaborate closely with developers, product managers, and other stakeholders to drive test automation strategies and improve software reliability Key Responsibilities Design, develop, and maintain robust test automation frameworks for web, API, and backend services. Implement automated test cases to improve software quality and test coverage. Develop and execute performance and load tests to ensure the application behaves reliably in self-hosted environments. Integrate automated tests into CI/CD pipelines to enable continuous testing. Collaborate with software engineers to define test strategies, acceptance criteria, and quality standards. Conduct performance, security, and regression testing to ensure application stability. 
Investigate test failures, debug issues, and work with development teams to resolve defects. Advocate for best practices in test automation, code quality, and software reliability. Stay updated with industry trends and emerging technologies in software testing. Qualifications & Experience Bachelor's or Master’s degree in Computer Science, Engineering, or a related field. 3+ years of experience in software test automation. Proficiency in programming languages such as Java, Python, or JavaScript. Hands-on experience with test automation tools like Selenium, Cypress, Playwright, or similar. Strong knowledge of API testing using tools such as Postman, RestAssured, or Karate. Experience with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI/CD. Understanding of containerization and cloud technologies (Docker, Kubernetes, AWS, or similar). Familiarity with performance testing tools like JMeter or Gatling is a plus. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. At Sonatype, we value diversity and inclusivity. We offer perks such as parental leave, diversity and inclusion working groups, and flexible working practices to allow our employees to show up as their whole selves. We are an equal-opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. If you have a disability or special need that requires accommodation, please do not hesitate to let us know.
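Investigating test failures, as the role above requires, often means distinguishing genuine defects from flaky tests. One common mitigation (a sketch only — retries mask flakiness rather than fix its root cause) is a small retry wrapper around a test action:

```python
import functools

def retry(times=3, exceptions=(AssertionError,)):
    """Re-run a flaky check up to `times` attempts before failing."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    last = exc  # remember the failure, try again
            raise last
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def flaky_check():
    calls["n"] += 1
    # Simulate a transient failure on the first attempt only.
    assert calls["n"] >= 2, "simulated transient failure"
    return "passed"

print(flaky_check())  # succeeds on the second attempt
```

Frameworks such as pytest have plugins offering the same behavior; teams usually pair retries with reporting, so chronically flaky tests still surface for a real fix.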

Posted 1 day ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

About Us Wayvida is a cutting-edge, all-in-one AI-powered teaching and learning platform created to empower coaches, educators, institutions, learners, and communities across the globe by helping them launch online course-selling platforms under their own brand. With a mission to transform education through technology, Wayvida eliminates barriers to teaching and learning, paving the way for professional and personal growth. Wayvida equips you with the tools to create, manage, and market your courses effortlessly, without any technical expertise. From live classes and recorded sessions to AI-powered test creators, assessments, and community engagement and marketing tools, our platform is designed to foster personalized teaching and learning experiences. Combining advanced AI with an intuitive design, Wayvida democratizes education, making it accessible to everyone, everywhere. Job Description Design & manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions). Automate infrastructure provisioning (Terraform, Ansible, Pulumi). Monitor & optimize cloud environments (AWS, GCP, Azure). Implement containerization & orchestration (Docker, Kubernetes - EKS/GKE/AKS). Maintain logging, monitoring & alerting (ELK, Prometheus, Grafana, Datadog). Ensure system security, availability & performance tuning. Manage secrets & credentials (Vault, AWS Secrets Manager). Troubleshoot infrastructure & deployment issues. Implement blue-green & canary deployments. Collaborate with developers to enhance system reliability & productivity. Requirements 6+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in Cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation. 
* Only candidates who can relocate to Trivandrum (Kerala) and who understand Malayalam need apply *
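Canary deployments, listed in the job description above, route a small, stable fraction of traffic to the new version. A hedged sketch of the routing decision (hypothetical user IDs; real systems implement this in the load balancer or service mesh): hash the user identifier into a bucket so the same user always lands on the same version:

```python
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Deterministically bucket a user into 'canary' or 'stable'."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform bucket in [0, 100)
    return "canary" if bucket < canary_percent else "stable"

# With a 10% canary, roughly a tenth of users hit the new version.
sent = sum(route(f"user-{i}", 10) == "canary" for i in range(10_000))
print(f"{sent / 100:.1f}% of users routed to the canary")
```

Hashing on a stable key (rather than picking randomly per request) matters: a user who sees the canary keeps seeing it, so session behavior stays consistent while the rollout is evaluated.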

Posted 1 day ago

Apply