3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us
Edifyer is building the next generation of corporate learning for India. We are on a mission to help L&D teams create conversational, AI-powered, hyper-personalised courses, exercises, and role-plays that adapt in real time to each learner. As a founding AI Engineer, you will own the development of our core conversational AI engine and the no-code authoring studio that lets L&D teams design and deploy training in minutes. If you love shipping 0→1 products, sweating latency and quality, and seeing your work in the hands of thousands of learners, this one's for you.

Who We're Looking For
We're seeking a Founding AI Engineer who thrives in early-stage chaos, enjoys solving complex technical challenges, and is eager to lead the AI/ML strategy from the ground up. This is more than a job—it's an opportunity to join as a founding member, build and own key technical systems, and directly influence the trajectory of the company.

Core Responsibilities
- Define and evolve the long-term technical vision, architecture, and system design for scalability
- Drive the transition from MVP prototypes to enterprise-grade, production-ready systems
- Collaborate closely with product and design leads to rapidly prototype and iterate on user-centric features
- Fine-tune and serve Large Language Models (LLMs) using Triton and TensorRT
- Integrate retrieval-augmented generation (RAG) pipelines and AI safety layers (filters, guardrails, etc.)
- Design real-time pipelines (STT → LLM → TTS) using WebSockets or gRPC (see the sketch after this posting)
- Set up spot-GPU orchestration using Kubernetes (K8s) and Terraform-based Infrastructure as Code (IaC)
- Build and manage CI/CD pipelines (blue-green deployments); set up monitoring dashboards for cost and latency
- Implement OAuth2/JWT-based authorization, secure secret management, and rate-limiting
- Lead security hardening (OWASP) and lay the groundwork for SOC 2 Type I compliance
- Engage directly with early customers and partners to gather feedback, debug live issues, and validate technical direction

What We're Offering
- Founding membership + ESOP: your contribution deserves long-term upside in the company's success; this could be a life-changing opportunity for significant wealth creation.
- Creative and technical freedom: you'll have a blank canvas to build, experiment, and ship without red tape.
- High-impact mission: your work will lay the foundation for the next generation of enterprise learning platforms from India, transforming the learning culture of organisations worldwide.
- Next-gen tech stack: you get to work with cutting-edge LLMs, STT/TTS, ASR, and scalable cloud infrastructure—and build a world-class system.

Ideal Candidate Profile
- 3+ years building distributed or real-time ML systems
- Hands-on LLM or speech experience: Triton, TensorRT, Riva, Whisper, or similar, with demonstrated <1 s latency
- Deep Python (FastAPI) expertise; comfort with microservices, gRPC, WebSockets
- Cloud-native engineering: Docker, K8s, autoscaling, Terraform/Pulumi, Prometheus/Grafana
- Security mindset: OAuth2, TLS everywhere, moderation gates, GDPR awareness
- Entrepreneurial stamina: persistence and an optimistic outlook even in the face of challenges or setbacks

Apply
Send your resume/LinkedIn/GitHub to contact@edifyer.io with the subject line: "Founding AI Engineer — Your Name". A short note on why this mission resonates with you will go a long way.
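To make the real-time pipeline bullet concrete, here is a minimal sketch of an STT → LLM → TTS loop over a WebSocket in FastAPI. This is an illustration, not Edifyer's engine: the three stage functions are stubs standing in for real Whisper/Triton/TTS calls, and the /converse route is invented. A production loop would also stream partial results rather than waiting for each full stage, to keep end-to-end latency under the posting's 1 s target.

```python
from fastapi import FastAPI, WebSocket  # pip install fastapi uvicorn

app = FastAPI()

async def transcribe(audio: bytes) -> str:
    # Stub STT stage; a real engine would call Whisper/Riva here.
    return f"<transcript of {len(audio)} bytes>"

async def generate(prompt: str) -> str:
    # Stub LLM stage; a real engine would hit a Triton/TensorRT-served model.
    return f"Coach reply to: {prompt}"

async def synthesize(text: str) -> bytes:
    # Stub TTS stage; a real engine would return synthesized speech audio.
    return text.encode("utf-8")

@app.websocket("/converse")
async def converse(ws: WebSocket) -> None:
    await ws.accept()
    while True:
        audio = await ws.receive_bytes()      # learner speech from the client
        transcript = await transcribe(audio)  # STT
        reply = await generate(transcript)    # LLM
        speech = await synthesize(reply)      # TTS
        await ws.send_bytes(speech)           # stream the reply audio back
```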
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In technology delivery at PwC, you will focus on implementing and delivering innovative technology solutions to clients, enabling seamless integration and efficient project execution. You will manage the end-to-end delivery process and collaborate with cross-functional teams to drive successful technology implementations.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
- Deploy and maintain critical applications on cloud-native microservices architecture
- Implement automation, effective monitoring, and infrastructure-as-code
- Deploy and maintain CI/CD pipelines across multiple environments
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions (see the sketch after this posting)
- Manage the continuous integration and delivery pipeline to maximize efficiency
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

Mandatory Skill Sets
- Automating repetitive tasks using scripting (e.g. Bash, Python, PowerShell, YAML)
- Practical experience with Docker containerization and clustering (Kubernetes/ECS/AKS)
- Expertise with the Azure cloud platform (e.g. ARM, App Service and Functions, autoscaling, load balancing)
- Version control system experience (e.g. Git)
- Experience implementing CI/CD (e.g. Azure DevOps, Jenkins)
- Experience with configuration management tools (e.g. Ansible, Chef)
- Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)

Preferred Skill Sets
- Good communication skills
- Securing, scaling, and managing Linux virtual environments and on-prem Windows Server environments

Certifications/Credentials
- Certification in Windows Admin/Azure DevOps/Kubernetes

Years of experience required: 2-4 years
Education qualification: Bachelor's degree in Computer Science, IT, or a related field.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Azure DevOps, Linux Bash, Microsoft PowerShell, Python (Programming Language)
Optional Skills: Linux
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
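As one illustration of the "automating repetitive tasks" skill this posting asks for, here is a hedged Python sketch (not a PwC tool) that wraps the Azure CLI to deallocate VMs carrying a given tag, the sort of nightly cost-saving chore worth scripting once. The env=dev tag name is an assumption; it would follow whatever tagging standard the environment uses.

```python
import json
import subprocess

def run_az(*args: str):
    """Run an Azure CLI command and parse its JSON output."""
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

def deallocate_tagged_dev_vms() -> None:
    """Deallocate every VM tagged env=dev (assumed tag convention),
    instead of clicking through the portal each evening."""
    vms = run_az("vm", "list", "--query", "[?tags.env=='dev']")
    for vm in vms:
        print(f"deallocating {vm['name']} in {vm['resourceGroup']}")
        subprocess.run(["az", "vm", "deallocate",
                        "--resource-group", vm["resourceGroup"],
                        "--name", vm["name"]], check=True)

if __name__ == "__main__":
    deallocate_tagged_dev_vms()
```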
Posted 1 month ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are looking for an experienced Cloud Platform Lead to spearhead the design, implementation, and governance of scalable, secure, and resilient cloud-native platforms on Azure. This role requires deep technical expertise in Azure services, Kubernetes (AKS), containers, Application Gateway, Front Door, WAF, and API management, along with the ability to lead cross-functional initiatives and define cloud platform strategy and best practices.

Key Responsibilities
- Lead the architecture, development, and operations of Azure-based cloud platforms across environments (dev, staging, production).
- Design and manage Azure Front Door, Application Gateway, and WAF to ensure global performance, availability, and security (a WAF smoke-check sketch follows this posting).
- Design and implement the Kubernetes platform (AKS), ensuring reliability, observability, and governance of containerized workloads.
- Drive adoption and standardization of Azure API Management for secure and scalable API delivery.
- Collaborate with security and DevOps teams to implement secure-by-design cloud practices, including WAF rules, RBAC, and network isolation.
- Guide and mentor engineers in Kubernetes, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC).
- Define and implement monitoring, logging, and alerting best practices using tools like Azure Monitor, ELK, and SigNoz.
- Evaluate and introduce tools, frameworks, and standards to continuously evolve the cloud platform.
- Participate in cost optimization and performance tuning initiatives for cloud services.

Required Skills & Qualifications
- 8+ years of experience in cloud infrastructure or platform engineering, including at least 4+ years in a leadership or ownership role.
- Deep hands-on expertise with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management.
- Strong experience with Kubernetes and Azure Kubernetes Service (AKS), including networking, autoscaling, and security.
- Proficiency with Docker and container orchestration principles.
- Infrastructure-as-code experience with Terraform, ARM templates, or Bicep.
- Excellent understanding of cloud security, identity (AAD, RBAC), and compliance.
- Experience building and guiding CI/CD workflows using tools like Azure DevOps, Bitbucket CI/CD, or similar.

Education: B.Tech / BE / M.Tech / MCA
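One way teams sanity-check WAF rules like those described above is to send an obviously malicious probe from a pipeline and assert it is blocked. A hedged sketch under assumptions: the URL is a placeholder, and the WAF policy runs in prevention mode, where Azure WAF returns HTTP 403 by default for blocked requests.

```python
import requests  # pip install requests

# Placeholder endpoint fronted by Front Door / Application Gateway + WAF.
BASE_URL = "https://www.example-platform.com"

def waf_blocks_sqli() -> bool:
    """Send a crude SQL-injection probe; a WAF in prevention mode is
    expected to answer 403 rather than pass it to the origin."""
    probe = {"q": "' OR 1=1 --"}
    resp = requests.get(BASE_URL + "/search", params=probe, timeout=10)
    return resp.status_code == 403

if __name__ == "__main__":
    print("WAF blocking SQLi probe:", waf_blocks_sqli())
```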
Posted 1 month ago
10.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Description

Purpose of role: Mid-level leadership role in Service Management. Maintain excellent service uptime levels for both external client-facing services and internal high-impact tools. Manage engineers and architects within the team and provide higher management with a clear high-level overview of the team's activities and progress. Seniority is based on years of experience, knowledge and skill-set. This role is also hands-on in day-to-day operations of the team.

Experience: 10+ years for Senior Manager (12+ years for Director position)
Role: Technical, Sr. Manager / Director (MC)

Knowledge And Skill-set
- Degree in Computer Science, Software Engineering, IT or related discipline
- 10+ years' professional experience in infrastructure (on-premise/cloud), Linux administration, networking and client project implementations, with experience leading an infrastructure team
- Must have a strong background in cloud infrastructure, from serverless up to containerization
- Must have a general idea about (but not limited to): cloud infrastructure, Continuous Integration/Continuous Deployment
- Must be an expert in infrastructure best practices and practise them where applicable
- Must have in-depth knowledge of AWS (or similar), including: AutoScaling, S3, CloudFront, Route53, IAM, Certificate Manager, DynamoDB/MongoDB and RDS (see the sketch after this posting)
- Must have in-depth knowledge of Jenkins or other CI/CD environments
- Must be familiar with cost optimisation for both clients' and internal projects
- Must have the ability to develop and manage a budget
- Must have, at least, the following certification(s): AWS Certified Solutions Architect - Associate (Professional will be preferred)
- Must have an understanding of software development processes and tools, and skill in at least two languages (back-end/front-end/scripting/JS)
- Strong written and verbal communication skills in English. Must also be able to explain solutions in simple terms to other team members and clients, who don't necessarily have to be technical
- Experience with containerisation and orchestration

Requirements

Responsibilities:
- Lead the Infrastructure Operations team
- Act as an escalation point for the Infrastructure Operations team
- Act as mentor and escalation point for the Support Engineering team
- Analyse system requirements
- Recommend alternative technologies where applicable
- Work closely with higher management and provide high-level reporting of the team's activities
- Document his/her work in a clear and concise manner
- Always be on the lookout for gaps in general day-to-day operations
- Provide suggestions for where things can be automated
- Work closely with internal stakeholders (Delivery Managers, Engineers, Support, Products, and QA) to implement the best solutions for clients and define clear roadmaps and milestones
- Work closely with the Engineering Directors to define architecture standards, policies and processes, and governing methodologies on aspects including (but not limited to) infrastructure, efficiency, security, and reliability
- Draft, review and manage proposals and commercial contracts
- Carry out management tasks such as resourcing, budgeting, proposal and commercial contract preparation/review, mentoring, etc.
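As a small taste of the hands-on AWS and cost-optimisation work this role describes, here is a hedged boto3 sketch (not from the posting) that audits Auto Scaling group capacity settings; groups sitting at their maximum size are a common scaling or cost smell worth reviewing.

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

def audit_autoscaling_groups(region: str = "us-east-1") -> None:
    """Print each Auto Scaling group's capacity settings, flagging
    groups already pinned at their maximum size."""
    asg = boto3.client("autoscaling", region_name=region)
    paginator = asg.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for group in page["AutoScalingGroups"]:
            name = group["AutoScalingGroupName"]
            desired, max_size = group["DesiredCapacity"], group["MaxSize"]
            flag = "  <-- at max capacity" if desired >= max_size else ""
            print(f"{name}: desired={desired}, min={group['MinSize']}, "
                  f"max={max_size}{flag}")

if __name__ == "__main__":
    audit_autoscaling_groups()
```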
Posted 1 month ago
6.0 years
0 Lacs
India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Key Responsibilities
- Design and maintain resilient deployment patterns (blue-green, canary, GitOps syncs) across services.
- Instrument and optimize logs, metrics, traces, and alerts to reduce noise and improve signal.
- Review backend code (e.g., Django, Node.js, Go, Java) with a focus on infra touchpoints like database usage, timeouts, error handling, and memory consumption.
- Tune and troubleshoot GKE workloads, HPA configs, network policies, and node pool strategies (an HPA audit sketch follows this posting).
- Improve or author Terraform modules for infrastructure resources (e.g., VPC, Cloud SQL, Secrets, Pub/Sub).
- Diagnose production issues from logs, traces, and dashboards, and lead or support incident response.
- Reduce config drift across environments and standardize secrets, naming, and resource tagging.
- Collaborate with developers to harden delivery pipelines, standardize rollout readiness, and clean up infra smells in code.

Requirements
- 4-6+ years of experience in backend or infra-focused engineering roles (e.g., SRE, platform, DevOps, or fullstack).
- Can confidently write or review production-grade code and infra-as-code (Terraform, Helm, GitHub Actions, etc.).
- Deep hands-on experience with Kubernetes in production, ideally on GKE, including workload autoscaling and ingress strategies.
- Understand cloud concepts like IAM, VPCs, secret storage, workload identity, and Cloud SQL performance characteristics.
- Think in systems: you understand cascading failure, timeout boundaries, dependency health, and blast radius.
- Regularly contribute to incident mitigation or long-term fixes (not just closing alerts).
- Can influence through well-written PRs, documentation, and thoughtful design reviews.

Tools and Expectations
- Datadog: monitor infrastructure health, capture service-level metrics, reduce alert fatigue through high-signal thresholds.
- PagerDuty: own the incident management pipeline; route alerts by severity and align with business SLAs.
- GKE / Kubernetes: improve cluster stability and workload isolation; define auto-scaling configurations and tune for efficiency.
- Helm / GitOps (ArgoCD/Flux): validate release consistency across clusters; monitor sync status and rollout safety.

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains: what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information.
Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
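A small illustration of the HPA-tuning work mentioned in the responsibilities above: listing HorizontalPodAutoscalers across a cluster and flagging ones pinned at max replicas. This is a sketch using the official Kubernetes Python client, assuming a working kubeconfig; it is not an Orion tool.

```python
from kubernetes import client, config  # pip install kubernetes

def flag_saturated_hpas() -> None:
    """List HPAs cluster-wide and flag those running at max replicas,
    where further load can no longer be absorbed by scaling out."""
    config.load_kube_config()  # uses the active kubeconfig context
    autoscaling = client.AutoscalingV1Api()
    hpas = autoscaling.list_horizontal_pod_autoscaler_for_all_namespaces()
    for hpa in hpas.items:
        current = hpa.status.current_replicas or 0
        maximum = hpa.spec.max_replicas
        marker = "  <-- saturated" if current >= maximum else ""
        print(f"{hpa.metadata.namespace}/{hpa.metadata.name}: "
              f"{current}/{maximum} replicas{marker}")

if __name__ == "__main__":
    flag_saturated_hpas()
```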
Posted 1 month ago
5.0 years
7 - 10 Lacs
Coimbatore, Tamil Nadu, India
On-site
About The Opportunity
A leader in Enterprise Software & Technology Services focused on delivering bespoke SaaS and digital-transformation solutions to enterprises across domains. The organisation builds scalable, secure, and performant applications—blending cloud-native architecture, microservices, and strong engineering practices to drive business outcomes.

Primary Title: Technical Lead
Location: Coimbatore

Role & Responsibilities
- Lead design and delivery of backend systems and microservices—own architecture decisions that balance scalability, security, and time-to-market.
- Write and review production-quality code; drive best practices in API design, data modelling, and service decomposition.
- Define and maintain CI/CD pipelines, automated testing, observability, and release processes to ensure high uptime and fast recovery.
- Collaborate with product, QA, and DevOps to translate requirements into technical specifications and realistic delivery plans.
- Coach and mentor engineers, run technical reviews, and establish engineering standards (code quality, performance, documentation).
- Engage with stakeholders and clients on technical trade-offs, estimations, and delivery risks; coordinate cross-functional teams for successful launches.

Skills & Qualifications

Must-Have
- 5+ years in software engineering with 3+ years in a lead/tech-lead role or similar responsibility.
- Strong backend development experience in Java, Node.js, or Python; solid understanding of RESTful APIs and service-oriented design.
- Hands-on experience with microservices, containerization (Docker), and orchestration (Kubernetes).
- Practical cloud experience (AWS, Azure, or GCP) and familiarity with cloud-native patterns (load balancing, autoscaling, storage).
- Proven ability to implement CI/CD, automated testing, and observability (metrics, tracing, logging) in production systems.
- Excellent problem-solving, system design, and stakeholder communication skills; willing to work on-site in India.

Preferred
- Experience with event-driven architectures, message brokers (Kafka, RabbitMQ) and caching strategies (Redis).
- Knowledge of SQL and NoSQL databases and data modelling for scale (Postgres, MySQL, MongoDB, Cassandra).
- Exposure to frontend integration, security best practices, and performance tuning at scale.

Benefits & Culture Highlights
- High impact on product architecture with opportunities to shape engineering practices and mentor teams.
- Collaborative, outcome-driven environment that values code quality, continuous improvement, and clear ownership.
- On-site role offering close collaboration with product and client stakeholders—ideal for hands-on leaders who enjoy delivery accountability.

We are looking for a pragmatic Technical Lead who combines deep engineering skills with strong people leadership to deliver reliable, scalable software. Apply if you enjoy solving complex systems problems and leading teams to operational excellence in an on-site, fast-paced environment.

Skills: tech lead, project management, artificial intelligence
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Citco
Since the 1940s Citco has provided specialist financial services to alternative investment funds, investors, multinationals and private clients worldwide. With over 6,000 employees in 45 countries we pioneer innovative solutions that meet our clients' evolving needs, and deliver exceptional service. Our continuous investment in learning means our people are among the best in the industry. And our corporate social responsibility programs provide meaningful and fulfilling work in the community. A career at Citco isn't just a job – it's an opportunity to excel in an environment that genuinely supports your personal and professional development.

About The Role
As a Cloud DevOps Engineer, you will be working in a cross-functional team responsible for designing and implementing reusable frameworks, APIs, CI/CD pipelines, infrastructure automation and test automation, leveraging modern cloud-native designs/patterns and AWS services. You will be part of a culture of innovation where you'll use AWS/Azure services to help the team solve business challenges such as rapidly releasing products/services to the market or building an elastic, scalable, cost-optimized application. You will have the opportunity to shape and execute a strategy to build knowledge and broaden use of public cloud in a dynamic professional environment.

Education, Experience and Skills
- Bachelor's degree in Engineering, Computer Science, or equivalent.
- 3 to 5 years in IT or software engineering, including 1 to 2 years in a cloud environment (AWS preferred).
- Minimum 2 years of DevOps experience.
- Experience with AWS services: CloudFormation, Terraform, EC2, Fargate, ECS, Docker, Autoscaling, ELB, Jenkins, CodePipeline, CodeDeploy, CodeBuild, CodeCommit/Git, RDS, S3, CloudWatch, Lambda, IAM, Artifactory, ECR.
- Highly proficient in Python.
- Experience in setting up and troubleshooting AWS production environments.
- Experience in implementing end-to-end CI/CD delivery pipelines.
- Experience working in an agile environment.
- Hands-on skills operating in Linux and Windows.
- Proven knowledge of application architecture, networking, security, reliability and scalability concepts; software design principles and patterns.
- Must be self-motivated and driven.

Job Duties In Brief
- Implement end-to-end, highly scalable, available and resilient cloud engineering solutions for infrastructure and application components using AWS.
- Implement CI/CD pipelines for infrastructure and applications.
- Write infrastructure automation scripts and templates and integrate them with DevOps tools.
- Automate smoke tests and integrate test automation scripts such as unit tests, integration tests and performance tests into the CI/CD process (a smoke-test sketch follows this posting).
- Troubleshoot AWS environments.

A challenging and rewarding role in an award-winning global business. The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all duties, responsibilities and skills.

About You
The position reports to the Development Lead in the Hedge Fund Accounting IT (HFAIT) department. HFAIT manages the core accounting platform (Æxeo®) and data warehouse within Citco. The platform is used by clients globally and is the first true straight-through, proprietary front-to-back solution for hedge funds that uses a single database for all activities including order capture, position and P&L reporting, and accounting.

What We Offer
- Opportunities for personal and professional career development.
- Great working environment, competitive salary and benefits, and opportunities for educational support.
- Be part of an industry-leading global team, renowned for excellence.

Confidentiality assured. Citco welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process.
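To make the smoke-test duty above concrete, here is a minimal, hedged example of the kind of post-deployment check that can gate a CodePipeline or Jenkins stage. The URL and expected health payload are placeholders, not Citco specifics; in a real pipeline they would come from stack outputs or environment variables.

```python
import sys
import requests  # pip install requests

# Placeholder service URL; in a pipeline this usually comes from an
# environment variable or a CloudFormation/Terraform stack output.
SERVICE_URL = "https://example-service.internal/health"

def smoke_test() -> bool:
    """Fail fast if the freshly deployed service is not serving traffic."""
    try:
        resp = requests.get(SERVICE_URL, timeout=10)
    except requests.RequestException as exc:
        print(f"smoke test failed: {exc}")
        return False
    ok = resp.status_code == 200 and resp.json().get("status") == "up"
    print(f"smoke test {'passed' if ok else 'failed'}: {resp.status_code}")
    return ok

if __name__ == "__main__":
    # A non-zero exit marks the deploy stage as failed so it can roll back.
    sys.exit(0 if smoke_test() else 1)
```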
Posted 1 month ago
30.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company Overview
Galaxy Office Automation Pvt. Ltd. is a trusted enterprise technology partner with 30+ years of experience in delivering secure, large-scale IT systems across India's top enterprises and government institutions. As we scale into the next frontier of AI-driven intelligence, we're building a new stack of AI-native services, products, agents, swarms, and conversational assistants that bring powerful, modular intelligence to real-world enterprise workflows – while evolving towards an AI-Factory model to enable scalable, reusable, and composable intelligence across use cases. Our goal is not buzzwords—it's capability. We focus on deploying reliable, composable, and production-grade AI systems that can plug directly into existing ecosystems and deliver immediate business value.

Role Summary
We're looking for a Data Scientist / AI Engineer / Machine Learning Engineer who can own systems end to end and contribute from the ground up. In this role, you'll architect and deploy real-world AI solutions using LLMs, predictive modeling, multimodal intelligence, and FastAPI-based microservices. You'll work across core platform modules and client-specific projects—bridging the gap between cutting-edge AI and enterprise-grade deployment.

What You'll Work On
● Multi-Agent Collaboration, Reasoning, Memory & Human Alignment: Build intelligent agents and swarms with multi-agent collaboration, reasoning, planning, memory, and alignment with human feedback. Use protocols such as MCP and A2A and frameworks such as LangChain and CrewAI.
● Retrieval-Augmented Generation (RAG): Develop hybrid pipelines using vector databases (FAISS, Qdrant, Pinecone) and transformer-based generation models (a retrieval sketch follows this posting).
● Multimodal AI (Language + Vision + Audio + Video): Build systems that process and combine intelligence across language, vision, audio, and video.
● Predictive Modeling: Build predictive pipelines using deep learning architectures (e.g., LSTMs, CNNs, RNNs), transformer-based models (e.g., OpenAI, Llama, Qwen, Mistral), and ensemble methods (e.g., XGBoost, LightGBM, Random Forests). Emphasize modeling depth, generalization, interpretability, maintainability and dynamic improvements.
● Conversational Assistants: Develop conversational assistants with advanced capabilities such as model-based recommendations, user-query-based what-if scenario analyses, and continuous improvements based on memory and human feedback.
● FastAPI-Based Backend APIs: Wrap agents and models into versioned, secure, and production-grade FastAPI microservices.
● Model Lifecycle Management: Track, evaluate, and manage model lifecycles using MLflow, DVC, and internal governance tools.
● Data Engineering & Integration: Ingest and transform data from SQL and NoSQL (e.g. MongoDB) sources, APIs, and distributed pipelines.
Required Skills and Qualifications
● 1-4 years of experience in AI/ML product development or applied data science
● Strong Python skills: Pandas, NumPy, scikit-learn, Transformers, PyTorch/TensorFlow
● Hands-on experience with LLMs (OpenAI, Mistral, Claude, Llama, DeepSeek, Gemini, etc.), LangChain, prompt engineering, and integration with real-world use cases
● Proven experience building agentic systems, including reasoning agents and multi-agent collaboration
● Deep expertise in predictive modeling using transformers, deep learning and ensemble methods
● Familiarity with image, audio, and video model development
● Familiarity with model monitoring and re-training
● Exposure to both SQL and NoSQL databases
● Experience building and deploying Python-based backend APIs using FastAPI
● Proficiency with Git workflows, CI/CD, and modular code development
● Strong communication, documentation, and architectural thinking

Desirable Skills
● Candidates with experience in reinforcement learning (including RLHF) will be preferred.
● Prior exposure to supervised fine-tuning (SFT) and parameter-efficient tuning approaches such as LoRA and QLoRA is desirable.
● Familiarity with workflow orchestration tools like Airflow, Celery, Prefect, or Dagster will be advantageous.
● Experience building autoscaling and serverless architectures using AWS Lambda, ECS, or EKS is a plus.
● Candidates with expertise in backend engineering with a focus on microservice optimization will be preferred.
● Candidates with experience in containerization and orchestration using Docker and Kubernetes will be valued.
● Hands-on knowledge of designing conversational UI or chatbot pipelines is beneficial.
● Prior experience working with big data tools like Spark, Hive, or Hadoop is desirable.
● Experience deploying AI systems in BFSI, healthcare, e-commerce, or government environments will be an added advantage.

What You'll Gain
● End-to-end ownership from design to deployment
● Agent-first ecosystem exposure
● Product + custom work across client and platform needs
● Leadership track into applied AI architecture
● Live enterprise impact across sectors

If you think in embeddings, talk in APIs, and believe human-aligned, reasoning-first multi-agent systems are the future—join Galaxy Office Automation and help build them.
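The RAG bullet above pairs a vector index with a generator. As a hedged illustration of the retrieval half only, here is a tiny FAISS example over random embeddings; a real pipeline would embed document chunks with a transformer model and hand the retrieved chunks to an LLM as prompt context. The 384-dimension width is an assumption (typical of sentence-embedding models).

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384  # assumed embedding width; matches common sentence encoders

# Stand-in corpus embeddings; in practice these come from an embedding model.
rng = np.random.default_rng(0)
corpus = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatL2(dim)  # exact L2 search; swap for IVF/HNSW at scale
index.add(corpus)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)  # five nearest chunks
print("retrieved chunk ids:", ids[0])
# The matching text chunks would then be stuffed into the LLM prompt.
```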
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Kenvue is currently recruiting for a: Staff Engineer

What we do
At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent.

Who We Are
Our global team is ~22,000 brilliant people with a workplace culture where every voice matters, and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage – and have brilliant opportunities waiting for you! Join us in shaping our future – and yours.

Role reports to: Senior Manager, Platform Engineering
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What you will do
Kenvue is seeking a Lead Full Stack Software Engineer – Cloud Native to join our innovative and forward-thinking software engineering team. You will play a pivotal role in designing, building, and delivering platforms and applications that shape the future of digital healthcare for our consumer health business. As a technical leader reporting to the Software Engineering Manager, you will drive the development, testing, and release of products that power our business. You will champion modern engineering practices, implement robust solutions in a multi-cloud, event-driven architecture, and collaborate across teams to enable high-performing, product-centric delivery.

Key Responsibilities
- Demonstrate ownership and foster a collaborative team environment.
- Lead the software development lifecycle, from design to deployment.
- Deliver high-quality features and tools aligned with reference architecture and coding standards.
- Ensure production-ready solutions for various product initiatives.
- Stay engaged with the evolving consumer health landscape.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or a related field.
- 10+ years of IT experience, including at least 7+ years of hands-on software development.
- Proficiency in frontend development; experience with JavaScript/TypeScript preferred.
- Strong background in API design and development; Node.js experience preferred.
- Experience with CI/CD pipelines in any cloud environment; AWS preferred.
- Ability to implement standards and resolve build, Sonar, or Snyk issues.
- Solid understanding of databases; experience with PostgreSQL preferred.
- Experience with Docker & Kubernetes.
- PR reviews and guiding fellow engineers on code quality.
- AI experience in workflow design, backend development, data engineering, system management, cost optimization, and autoscaling, using tools such as AWS Bedrock, Vertex AI, Azure OpenAI, and LangSmith, is a plus.

Great to Have
- Experience with Backstage or other developer portals is a plus.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
Posted 1 month ago
1.0 years
0 Lacs
Delhi, India
On-site
Key Responsibilities
- Architect, develop, and maintain high-performance, scalable web applications capable of handling rapid user growth.
- Design and implement distributed, fault-tolerant systems with load balancing, caching, and database sharding as needed (a cache-aside sketch follows this posting).
- Optimize backend services for low latency and high throughput, supporting millions of API calls or concurrent sessions.
- Set up CI/CD pipelines to automate build, test, and deployment.
- Develop robust, secure, and scalable APIs (REST/GraphQL) for web and mobile consumption.
- Deploy and manage applications on cloud platforms (GCP) with autoscaling infrastructure.
- Manage containerized deployments (Docker, Kubernetes) to scale services effectively.

Note
Applicants must have 1+ years of hands-on experience in building and deploying full-stack applications. This is not an entry-level role; fresh graduates will not be considered. This job comes with 6 months of probation (Rs 10,000 - Rs 15,000 per month during probation); the terms are negotiable if you have more than 2 years of experience.

About Company
Stirring Minds is a premier startup ecosystem in India, dedicated to helping businesses launch, scale, and succeed. As a leading incubator, we provide funding, co-working spaces, and mentorship to support the growth of innovative companies. In addition to our incubator services, we also host the largest startup event in the country, Startup Summit Live, bringing together entrepreneurs and industry leaders to connect, learn, and collaborate. Our community-driven approach extends beyond our event and incubator offerings, as we work to create communities of like-minded individuals who can support and learn from one another. We have been recognized by top media outlets both in India and internationally, including the BBC, The Guardian, Entrepreneur, and Business Insider. Our goal is to provide a comprehensive ecosystem for startups and help turn their ideas into reality.
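One common building block behind the caching responsibility above is the cache-aside pattern: read from the cache first, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch with redis-py, assuming a local Redis; the load_user_from_db helper is a placeholder, not anything from the posting.

```python
import json
import redis  # pip install redis

cache = redis.Redis(host="localhost", port=6379, db=0)
TTL_SECONDS = 300  # keep entries fresh without hammering the database

def load_user_from_db(user_id: int) -> dict:
    """Placeholder for the real database query (e.g. SQL via an ORM)."""
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int) -> dict:
    """Cache-aside: try Redis first, fall back to the DB, then populate."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = load_user_from_db(user_id)      # cache miss: hit the DB
    cache.setex(key, TTL_SECONDS, json.dumps(user))
    return user

if __name__ == "__main__":
    print(get_user(42))  # first call misses; the second hits the cache
    print(get_user(42))
```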
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities

System Architecture & Event-Driven Design
• Design and implement event-driven architectures using Apache Kafka to orchestrate distributed microservices and streaming pipelines (a producer sketch follows this posting).
• Define scalable message schemas (e.g., JSON/Avro), data contracts, and versioning strategies to support AI-powered services.
• Architect hybrid event + request-response systems to balance real-time streaming and synchronous business logic.

Backend & AI/ML Integration
• Develop Python-based microservices using FastAPI, enabling both standard business logic and AI/ML model inference endpoints.
• Collaborate with AI/ML teams to operationalize ML models (e.g., classification, recommendation, anomaly detection) via REST APIs, batch processors, or event consumers.
• Integrate model-serving platforms such as SageMaker, MLflow, or custom Flask/ONNX-based services.

Cloud-Native & Serverless Deployment (AWS)
• Design and deploy cloud-native applications using AWS Lambda, API Gateway, S3, CloudWatch, and optionally SageMaker or Fargate.
• Build AI/ML-aware pipelines that automate retraining, inference triggers, or model selection based on data events.
• Implement autoscaling, monitoring, and alerting for high-throughput AI services in production.

Data Engineering & Database Integration
• Ingest and manage high-volume structured and unstructured data across MySQL, PostgreSQL, and MongoDB.
• Enable AI/ML feedback loops by capturing usage signals, predictions, and outcomes via event streaming.
• Support data versioning, feature store integration, and caching strategies for efficient ML model input handling.

Testing, Monitoring & Documentation
• Write unit, integration, and end-to-end tests for both standard services and AI/ML pipelines.
• Implement tracing and observability for AI/ML inference latency, success/failure rates, and data drift.
• Document ML integration patterns, input/output schemas, service contracts, and fallback logic for AI systems.

Preferred Qualifications
• 6+ years of backend software development experience with 2+ years in AI/ML integration or MLOps.
• Strong experience in productionizing ML models for classification, regression, or NLP use cases.
• Experience with streaming data pipelines and real-time decision systems.
• AWS certifications (Developer Associate, Machine Learning Specialty) are a plus.
• Exposure to data versioning tools (e.g., DVC), feature stores, or vector databases is advantageous.
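As a small illustration of the event-driven design described above, here is a hedged confluent-kafka producer emitting a versioned JSON event. The topic name, schema fields, and broker address are placeholders; for the Avro-with-data-contracts variant the posting mentions, a schema registry would replace the hand-rolled JSON.

```python
import json
from confluent_kafka import Producer  # pip install confluent-kafka

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery(err, msg) -> None:
    """Log the broker's ack so failed publishes are visible."""
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]")

# A versioned JSON event; all field names here are illustrative.
event = {"schema_version": 1, "type": "prediction.scored",
         "model": "anomaly-detector", "score": 0.97}

producer.produce("ml.events",  # placeholder topic name
                 key="order-123",
                 value=json.dumps(event),
                 on_delivery=delivery)
producer.flush()  # block until the broker acknowledges
```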
Posted 1 month ago
12.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Skill: AWS Architect, Application Modernization
Experience: 12-19 years
Joining Time: Need 30-45 days joiners
Work Location: Kolkata

Job Description
• 15+ years of hands-on IT experience in the design and development of complex systems
• Minimum of 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS and SaaS platforms
• At least 4+ years of hands-on experience in cloud-native architecture design and implementation of distributed, fault-tolerant enterprise applications for the cloud
• Experience in application migration to AWS cloud using refactoring, rearchitecting and re-platforming approaches
• 3+ years of proven experience using AWS services in architecting PaaS solutions
• AWS Certified Architect

Technical Skills
• Deep understanding of cloud-native and microservices fundamentals
• Deep understanding of Gen AI usage and LLM models; hands-on experience creating agentic flows using AWS Bedrock; hands-on experience using Amazon Q for Dev/Transform
• Deep knowledge and understanding of AWS PaaS and IaaS features
• Hands-on experience with AWS services, i.e. EC2, ECS, S3, Aurora DB, DynamoDB, Lambda, SQS, SNS, RDS, API Gateway, VPC, Route 53, Kinesis, CloudFront, CloudWatch, AWS SDK/CLI, etc. (a queue-integration sketch follows this posting)
• Strong experience in designing and implementing core services like VPC, S3, EC2, RDS, IAM, Route 53, Autoscaling, CloudWatch, AWS Config, CloudTrail, ELB, AWS Migration services, VPN/Direct Connect
• Hands-on experience enabling cloud PaaS app and data services like Lambda, RDS, SQS, MQ, Step Functions, AppFlow, SNS, EMR, Kinesis, Redshift, Elasticsearch and others
• Experience in automation and provisioning of cloud environments using APIs, CLI and scripts
• Experience in deploying, managing and scaling applications using CloudFormation/AWS CLI
• Good understanding of AWS security best practices and the Well-Architected Framework
• Good knowledge of migrating on-premise applications to AWS IaaS
• Good knowledge of AWS IaaS (AMI, pricing model, VPC, subnets, etc.)
• Good to have: experience in cloud data processing and migration, and advanced analytics - AWS Redshift, Glue, AWS EMR, AWS Kinesis, Step Functions
• Creating, deploying, configuring and scaling applications on AWS PaaS
• Experience in Java programming: Spring, Spring Boot, Spring MVC, Spring Security and multi-threaded programming
• Experience working with Hibernate or other ORM technologies along with JPA
• Experience working with modern web technologies such as Angular, Bootstrap, HTML5, CSS3, React
• Experience in modernization of legacy applications to modern Java applications
• Experience with DevOps tools: Jenkins/Bamboo, Git, Maven/Gradle, Jira, SonarQube, JUnit, Selenium, automated deployments and containerization
• Knowledge of relational and NoSQL databases, i.e. MongoDB, Cassandra, etc.
• Hands-on experience with the Linux operating system
• Experience in full life-cycle agile software development
• Strong analytical & troubleshooting skills
• Experience in Python, Node and Express JS (optional)

Main Duties
• The AWS architect takes the company's business strategy and outlines the technology systems architecture needed to support that strategy.
• Responsible for analysis, evaluation and development of enterprise long-term cloud strategic and operating plans, ensuring that the EA objectives are consistent with the enterprise's long-term business objectives.
• Responsible for the development of architecture blueprints for related systems.
• Responsible for recommendations on cloud architecture strategies, processes and methodologies.
• Involved in the design and implementation of the best-fit solution with respect to the Azure and multi-cloud ecosystem.
• Recommends and participates in activities related to the design, development and maintenance of the Enterprise Architecture (EA).
• Conducts and/or actively participates in meetings related to the designated project(s).
• Participates in client pursuits and is responsible for the technical solution.
• Shares best practices and lessons learned, and constantly updates the technical system architecture requirements based on changing technologies and knowledge related to recent, current and upcoming vendor products and solutions.
• Collaborates with all relevant parties to review the objectives and constraints of each solution and determine conformance with the EA. Recommends the most suitable technical architecture and defines the solution at a high level.
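To ground the SQS/Lambda integration items above, here is a hedged boto3 sketch that publishes a domain event to a queue and drains it, the decoupling move at the heart of many modernization efforts. The queue name and message fields are placeholders; credentials are assumed to be configured.

```python
import json
import boto3  # pip install boto3; assumes AWS credentials are configured

sqs = boto3.client("sqs", region_name="ap-south-1")

# Placeholder queue; in a modernized app this is created by CloudFormation.
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]

# Producer side: emit a domain event instead of a synchronous call.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"order_id": 123, "status": "PLACED"}))

# Consumer side (e.g. a Lambda or worker): receive, process, delete.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=5)  # long polling
for msg in resp.get("Messages", []):
    print("processing:", json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```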
Posted 1 month ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Experience Required: 4-6 years
Location: Gurgaon
Department: Product and Engineering
Working Days: Alternate Saturdays working (1st and 3rd)

🔧 Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure using AWS cloud services.
- Build and manage Kubernetes clusters (EKS, self-managed) to ensure reliable deployment and scaling of microservices.
- Develop Infrastructure-as-Code using Terraform, ensuring modular, reusable, and secure provisioning.
- Containerize applications and optimize Docker images for performance and security.
- Ensure CI/CD pipelines (Jenkins, GitHub Actions, etc.) are optimized for fast and secure deployments.
- Drive SRE principles including monitoring, alerting, SLIs/SLOs, and incident response (an SLI instrumentation sketch follows this posting).
- Set up and manage observability tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate routine tasks with scripting languages (Python, Bash, etc.).
- Lead capacity planning, auto-scaling, and cost optimization efforts across cloud infrastructure.
- Collaborate closely with development teams to enable DevSecOps best practices.
- Participate in on-call rotations, handle outages calmly, and conduct postmortems.

🧰 Must-Have Technical Skills
- Kubernetes (EKS, Helm, Operators)
- Docker & Docker Compose
- Terraform (modular, state management, remote backends)
- AWS (EC2, VPC, S3, RDS, IAM, CloudWatch, ECS/EKS)
- Linux system administration
- CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions)
- Logging & monitoring tools: ELK, Prometheus, Grafana, CloudWatch
- Site Reliability Engineering practices
- Load balancing, autoscaling, and HA architectures

💡 Good-To-Have
- GCP or Azure exposure
- Service mesh (Istio, Linkerd)
- Secrets management (Vault, AWS Secrets Manager)
- Security hardening of containers and infrastructure
- Chaos engineering exposure
- Knowledge of networking (DNS, firewalls, VPNs)

👤 Soft Skills
- Strong problem-solving attitude; calm under pressure
- Good documentation and communication skills
- Ownership mindset with a drive to automate everything
- Collaborative and proactive with cross-functional teams
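To illustrate the SLI/SLO item above: services typically expose request counts and latencies that Prometheus scrapes and Grafana charts. A minimal, hedged sketch with prometheus_client follows; the metric names, port, and simulated workload are placeholders.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server
# pip install prometheus-client

REQUESTS = Counter("app_requests_total", "Requests served", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()  # observes how long each call takes
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    status = "500" if random.random() < 0.01 else "200"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        handle_request()

# An SLO such as "99% of requests under 100 ms" is then evaluated in
# Prometheus from app_request_seconds_bucket, with alerts on burn rate.
```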
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Name: Cloud Solution Architect
Experience: 10 to 15 years

Roles & Responsibilities
- Understand business objectives, current/target-state architecture, future vision and the migration path for Airtel cloud products/solutions.
- Translate business requirements/challenges into technical solutions using enterprise architecture frameworks and methodologies.
- Participate in the end-to-end RFP/bidding process along with sales and other technical teams.
- Prepare technical solutions in response to RFP/tender documents by understanding the requirements and qualifications given in the RFP.
- Present technical solutions to clients' C-level executives and technical teams.
- Devise costing estimates for the various services offered, including the resource effort required for delivery of proposed solutions.
- Requirement gathering, architecture/design (HLD), executing proofs of concept (POC), integration architecture and creation of bill of material/pricing.
- Demonstrate deep subject-matter expertise in cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability, security, DevOps, management, migration, etc.).
- Collaborate with business teams, the Enterprise Cloud Architecture team of Airtel and product teams to facilitate and deliver the right solution to the enterprise customer.
- Collaborate with Engineering, DevOps, and Security teams to ensure well-architected cloud solutions.
- Coach and mentor other team members, including mentoring on cloud standards, frameworks, etc., and be a vital enabler of the team's cultural change for cloud adoption.
- Lead the definition and development of cloud reference architecture and management systems.

Additional Skills
- Experience in designing & delivering solutions involving hyperscale cloud service providers - AWS, Azure, OCI and Google - on cloud adoption/transformation projects.
- Experience developing solutions using private cloud platforms like OpenStack/Nutanix will be an added advantage.
- Delivered a number of successful end-to-end cloud transformation projects, diving deep into the technicalities of the solution.
- Good understanding of private cloud solutions is a must (Nutanix/Red Hat/Oracle PCA).
- Delivered cloud consulting engagements involving stakeholder interviews & cloud maturity assessments to evaluate capabilities across technology architecture, automation & hybrid/public cloud operating models.
- Developed consulting work products like cloud maturity assessments, target-state blueprints and transformation roadmaps.
- Defined and delivered cloud landing zone designs covering account/organization/subscription structure and IAM, factoring in functional & non-functional requirements.
- Expertise in cloud-native services (IaaS/PaaS) such as compute, virtual networking, storage (block/object/file), NLB/ALB, cloud alerting, cloud notification, DBaaS (SQL/Postgres/NoSQL, etc.), CDN, etc.
- Kubernetes and associated technologies (autoscaling, ingress design, service mesh, etc.); should have experience with cloud-native Kubernetes platforms such as AKS, EKS, GKE.
- Infrastructure as Code (IaC), DevOps & cloud automation technologies and practices leveraging cloud-native tools, CI/CD, etc.
- Knowledge of languages & scripting frameworks leveraged for cloud automation & integration, such as Python, JavaScript, Ansible.
- Industry-standard cloud architect certifications on Azure/GCP/Red Hat OpenStack are a must.
- Excellent written and verbal communication skills.
- Good negotiation skills with logical thinking abilities.

#ADL #BAL
Posted 1 month ago
10.0 - 18.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Position: Cloud Security Lead
Experience: 10-18 years

Key Skills And Experience
- Cloud computing experience (AWS, AliCloud, Azure), and in particular CloudFormation, EC2, EMR, S3, Redshift, RDS, SQS and Auto Scaling groups
- Experience in patch management and vulnerability scanning (a bucket-encryption audit sketch follows this posting)
- Familiarity with industry standards, guidelines, and regulatory compliance requirements related to information security and cloud computing, such as ISO 27001, Cloud Security Alliance, NIST 800-53, PCI DSS, SOC 2 and FedRAMP
- Strong analytical & problem-solving skills with the ability to translate ideas into practical implementation
- Ability to manage stakeholder relationships including team members, vendors and partners
- Excellent leadership and communication skills with the ability to present and communicate effectively with both technical and non-technical audiences
- Ability to provide technical and professional leadership, guidance, and training to others

Key Responsibilities
- Possess a strong understanding of the concepts, approaches, methods, and techniques used for the implementation and effective management of a cloud security program
- Support the design, implementation and configuration of cloud security systems
- Lead cloud security assessments and provide recommendations on required configurations for client cloud platforms (such as AWS, Azure, GCP, Alibaba Cloud, Oracle Cloud) and environments based on a Cloud Cyber Risk Framework and industry-standard frameworks such as PDP, GDPR, ISO, CSA CCM and NIST
- Interface with Engineering and assist with testing or troubleshooting
- Identify security gaps or concerns pertaining to new infrastructure or application build-outs
- Troubleshoot problems with cloud infrastructure (e.g., domain name service, virtual network peering, dedicated cloud connectivity services) and resources (e.g., virtual machines, virtual networks, cloud databases) in a multi-cloud-vendor environment, and lead analysis of technical platform issues and their resolution as part of cyber-risk mitigation
- Create and maintain documentation of departmental security procedures pertaining to cloud security
- Serve as an expert on cloud cyber risk for other business and technology stakeholders
- Maintain and build effective, professional relationships with third-party vendors and service providers that result in timely delivery of requirements and the highest standards of quality and cost effectiveness
- Work with internal stakeholders to gather requirements and develop the most effective solutions
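As a taste of the configuration-assessment work described above, here is a hedged boto3 sketch that lists S3 buckets lacking a default-encryption configuration. Note the caveat in the code: S3 now encrypts new buckets by default, so on modern accounts this is mainly useful for legacy inventory; it is illustrative, not a complete posture scan.

```python
import boto3  # pip install boto3; assumes AWS credentials are configured
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    """Return buckets with no default-encryption configuration.
    (S3 encrypts new buckets by default, so expect few hits.)"""
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
    return missing

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print("no default encryption:", name)
```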
Posted 1 month ago
12.0 years
0 Lacs
Greater Kolkata Area
On-site
Mandatory Qualifications
- 12+ years of professional experience with Drupal (has worked on major enterprise projects on Drupal 9/10+).
- Proven track record designing high-traffic, multi-region platforms.
- Mastery of Drupal core APIs (Entity, Field, Plugin, Services, Cache), Composer workflows, and Symfony components.
- Expert knowledge of cloud infrastructure (AWS, Azure, or GCP) and container orchestration (Kubernetes/EKS, ECS, or OpenShift).
- Strong DevSecOps background: Docker-based local stacks, IaC (Terraform/CloudFormation), observability tooling (New Relic, Datadog, Grafana), vulnerability management.
- Deep experience integrating Drupal with external systems: CRM (Salesforce/MS Dynamics), DAM, search (Solr/Elastic), SSO/OIDC, payment gateways, marketing automation, analytics.
- Solid front-end understanding: Twig, SCSS, ES6; able to guide React/Vue/Next.js developers on headless patterns.
- Excellent communication skills - able to write architecture decision records, run whiteboard sessions, and brief executives.
- Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent professional experience).

Nice To Have
- Certifications: one or more cloud architect certifications.
- Experience with composable commerce (Drupal Commerce, BigCommerce, commercetools) and personalization (Acquia CDP, Optimizely, Adobe Target).
- Prior work in regulated industries (finance, healthcare, public sector) and familiarity with WCAG.
- Contributions to Drupal core initiatives, module maintainership, or speaking engagements at DrupalCons or camps.
- Familiarity with micro-frontend architectures, edge compute (Cloudflare Workers, AWS Lambda@Edge), and JAMstack/static site generation patterns (Gatsby, Next.js ISR).

Key Responsibilities
- Define and document end-to-end solution architecture for complex, multi-site Drupal ecosystems: content model, configuration-management strategy, custom module design, and integration topology.
- Establish platform standards (coding, branching, secrets, performance, accessibility, security) and govern adherence across internal and vendor teams.
- Select and justify contrib/custom components, third-party services, and headless or hybrid approaches (REST, JSON:API, GraphQL) in line with scalability and performance targets.
- Design cloud-native deployment patterns: containerization, autoscaling, blue-green/canary releases, disaster recovery, CDN, global edge caching.
- Own non-functional requirements: availability, performance SLAs, data residency, compliance (e.g. GDPR).
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Administration
Main location: India, Karnataka, Bangalore
Position ID: J0725-1998
Employment Type: Full Time

Position Description

Role Overview
A GKE-focused Infrastructure Engineer to manage, maintain, and scale production Kubernetes clusters in Google Cloud (GKE). This role is entirely centered on GKE infrastructure, networking, security, and performance.

Key Responsibilities
- Operate and scale GKE clusters (multi-region, multi-tenant setup)
- Implement autoscaling, pod networking, and node pool management (a node-audit sketch follows this posting)
- Handle backup and recovery strategies for GKE workloads
- Ensure high availability and zero-downtime deployments
- Manage VPC configuration, firewall rules, and service mesh (if any)
- Troubleshoot container runtime, persistent storage, and ingress issues
- Work closely with SRE/Platform teams to implement observability and compliance standards
- Document operational procedures and cluster standards

Required Skills & Experience
- 4+ years of experience in container orchestration/infrastructure
- 2+ years hands-on with GKE in production
- Strong knowledge of Kubernetes internals, pod lifecycle, volumes, taints/tolerations
- Familiar with GCP networking (VPC, subnets, load balancing, PSC, etc.)
- Comfortable managing GKE across multiple environments (Dev, QA, Prod)

Preferred
- GCP Kubernetes Engine Specialist certification / Certified Kubernetes Administrator (CKA)
- Familiarity with Anthos, Istio, or service mesh concepts

Skills: Google Cloud Platform, Kubernetes, Kubernetes Administrator

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because:
- You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction.
- Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
- You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.

Come join our team - one of the largest IT and business consulting services firms in the world.
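As a small example of the node pool management work above, here is a hedged sketch with the Kubernetes Python client that lists each node's pool and taints; GKE exposes the pool via the cloud.google.com/gke-nodepool label. It assumes a working kubeconfig and is illustrative, not a CGI tool.

```python
from kubernetes import client, config  # pip install kubernetes

def audit_node_pools() -> None:
    """Print each node's GKE node pool and any taints, useful when
    checking that workload isolation matches the intended pool layout."""
    config.load_kube_config()
    core = client.CoreV1Api()
    for node in core.list_node().items:
        pool = node.metadata.labels.get("cloud.google.com/gke-nodepool", "?")
        taints = node.spec.taints or []
        rendered = ", ".join(f"{t.key}={t.value}:{t.effect}" for t in taints)
        print(f"{node.metadata.name} pool={pool} taints=[{rendered}]")

if __name__ == "__main__":
    audit_node_pools()
```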
Posted 1 month ago
2.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Role
Industry Type: Fintech / IoT
Department: Engineering - DevOps
Employment Type: Full Time, Permanent
Role Category: DevOps / SRE
Work Mode: Onsite
Experience: 2+ years

Responsibilities
- Design, implement, and manage scalable infrastructure across AWS and GCP environments.
- Build and manage Kubernetes (EKS/GKE) clusters, including autoscaling, Helm deployments, upgrades, and observability.
- Develop and maintain CI/CD pipelines using Jenkins, GitLab CI/CD, or GitHub Actions.
- Automate infrastructure provisioning using Terraform, following best practices in infrastructure as code (IaC).
- Implement and manage secret management solutions using HashiCorp Vault or AWS Secrets Manager (a Vault read sketch follows this posting).
- Set up centralized logging and monitoring systems using the ELK stack, Prometheus, Grafana, and cloud-native tooling.
- Enforce DevSecOps practices, including vulnerability scanning, RBAC, and compliance in CI/CD workflows.
- Collaborate closely with development and QA teams to support application delivery across environments.
- Take full ownership of building and managing DevOps systems from scratch.
- Document infrastructure, workflows, standard operating procedures (SOPs), and incident responses in a clear, structured, and version-controlled format.

Required Skills & Experience
- 2+ years of hands-on experience in DevOps/SRE roles.
- Strong expertise in AWS (EKS, EC2, S3, RDS, IAM, VPC, CloudWatch) and working knowledge of GCP (GKE, IAM, GCS, etc.).
- Experience managing Kubernetes clusters and deploying using Helm.
- Proficiency in Jenkins for CI/CD pipeline creation and orchestration.
- Strong knowledge of Terraform, with experience managing IaC in production environments.
- Hands-on experience with HashiCorp Vault or equivalent secret management tools.
- Experience with Prometheus, Grafana, ELK/EFK, or OpenSearch.
- Good scripting abilities in Bash, Python, or Go.
- Strong understanding of Linux, networking, DNS, SSL/TLS, and system hardening.
- Familiarity with DevSecOps tools such as Trivy, Snyk, or Aqua Security.
- Excellent documentation skills - ability to clearly record architecture diagrams, runbooks, troubleshooting guides, and automation workflows.
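As an illustration of the Vault work mentioned above, here is a hedged sketch that reads a secret from Vault's KV v2 engine with the hvac client. The address, token handling, and secret path are placeholders; production setups typically authenticate via Kubernetes auth or AppRole rather than a raw token.

```python
import os
import hvac  # pip install hvac

# Placeholder address/token; real deployments prefer Kubernetes auth or
# AppRole over exporting a raw token into the environment.
client = hvac.Client(url="http://127.0.0.1:8200",
                     token=os.environ["VAULT_TOKEN"])

def read_db_password() -> str:
    """Fetch a secret from the KV v2 engine at secret/data/myapp/db."""
    resp = client.secrets.kv.v2.read_secret_version(path="myapp/db")
    return resp["data"]["data"]["password"]  # KV v2 nests data twice

if __name__ == "__main__":
    assert client.is_authenticated()
    print("db password length:", len(read_db_password()))
```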
Posted 1 month ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are seeking a highly skilled and experienced Platform Engineer to manage and enhance our entire application delivery platform, from CloudFront to the underlying EKS clusters and their associated components. The ideal candidate will possess deep expertise across cloud infrastructure, networking, Kubernetes, and service mesh technologies, coupled with strong programming skills. This role involves maintaining the stability, scalability, and performance of our production environment, including day-to-day operations, upgrades, troubleshooting, and developing in-house tools.
Main Responsibilities
Perform regular upgrades and patching of EKS clusters and associated components, and oversee the health, performance, and scalability of the EKS clusters (see the pre-upgrade check after this listing).
Manage and optimize related components such as Karpenter (cluster autoscaling) and ArgoCD (GitOps continuous delivery).
Implement and manage service mesh solutions (e.g., Istio, Linkerd) for enhanced traffic management, security, and observability.
Participate in an on-call rotation to provide 24/7 support for critical platform issues; monitor the platform for potential issues and implement preventative measures.
Develop, maintain, and automate in-house tools and scripts using programming languages like Python or Go to improve platform operations and efficiency.
Configure and manage CloudFront distributions and WAF policies for efficient and secure content delivery and routing.
Develop and maintain documentation for platform architecture, processes, and troubleshooting guides.
Tech Stack
AWS: VPC, EC2, ECS, EKS, Lambda, CloudFront, WAF, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK
Terraform, GitHub Actions, Prometheus, Grafana, Atlantis, ArgoCD, OpenTelemetry
Required Skills and Experience
Proven 6+ years of experience as a Platform Engineer, Site Reliability Engineer (SRE), or similar role with a focus on end-to-end platform ownership.
In-depth knowledge and at least 4 years of hands-on experience with Amazon EKS and Kubernetes.
Strong understanding and practical experience with Karpenter, ArgoCD, and Terraform.
Solid grasp of core networking concepts and at least 5 years of extensive experience with AWS networking services (VPC, Security Groups, Network ACLs, CloudFront, WAF, ALB, DNS).
Demonstrable experience with SSL/TLS certificate management.
Proficiency in programming languages such as Python or Go for developing and maintaining automation scripts and internal tools.
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
Excellent problem-solving and debugging skills across complex distributed systems.
Strong communication and collaboration abilities.
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Preferred Qualifications
Prior experience working with service mesh technologies (preferably Istio) in a production environment.
Experience building or contributing to Kubernetes controllers.
Experience with multi-cluster Kubernetes architectures.
Experience building AZ-isolated, DR architectures.
Remarks
*Please note that you cannot apply to PayPay (Japan-based) jobs and other positions in parallel or in duplicate.
PayPay 5 senses
Please refer to PayPay 5 senses to learn what we value at work.
Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work from the Gurugram office to establish a strong core team.
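Since the role leads with regular EKS upgrades and patching, the sketch below shows one plausible pre-upgrade check with boto3: read the control-plane version and enumerate managed add-on versions, which must be compatible with the target Kubernetes version before upgrading. The cluster name platform-prod and the region are hypothetical.

    import boto3

    CLUSTER = "platform-prod"  # hypothetical cluster name
    eks = boto3.client("eks", region_name="ap-south-1")

    cluster = eks.describe_cluster(name=CLUSTER)["cluster"]
    print(f"control plane: v{cluster['version']} ({cluster['status']})")

    # Managed add-ons (CoreDNS, kube-proxy, VPC CNI, ...) must support the
    # target Kubernetes version before the control plane is upgraded.
    for addon in eks.list_addons(clusterName=CLUSTER)["addons"]:
        info = eks.describe_addon(clusterName=CLUSTER, addonName=addon)["addon"]
        print(f"{addon}: {info['addonVersion']} ({info['status']})")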
Posted 1 month ago
7.0 - 9.0 years
6 - 10 Lacs
Hyderābād
On-site
General information
Country: India
State: Telangana
City: Hyderabad
Job ID: 45594
Department: Development
Experience Level: MID_SENIOR_LEVEL
Employment Status: FULL_TIME
Workplace Type: On-site
Description & Requirements
As a Senior DevOps Engineer, you will be responsible for leading the design, development, and operationalization of cloud infrastructure and CI/CD processes. You will serve as a subject matter expert (SME) for Kubernetes, AWS infrastructure, Terraform automation, and DevSecOps practices. This role also includes mentoring DevOps engineers, contributing to architecture decisions, and partnering with cross-functional engineering teams to implement best-in-class cloud and deployment solutions.
Essential Duties:
Design, architect, and automate cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform and CloudFormation.
Lead and optimize Kubernetes-based deployments, including Helm chart management, autoscaling, and custom controller integrations.
Implement and manage CI/CD pipelines for microservices and serverless applications using Jenkins, GitLab, or similar tools.
Champion DevSecOps principles, integrating security scanning (SAST/DAST) and policy enforcement into the pipeline.
Collaborate with architects and application teams to build resilient and scalable infrastructure solutions across AWS services (EC2, VPC, Lambda, EKS, S3, IAM, etc.).
Establish and maintain monitoring, alerting, and logging practices using tools like Prometheus, Grafana, CloudWatch, ELK, or Datadog (a Prometheus query sketch follows this listing).
Drive cost optimization, environment standardization, and governance across cloud environments.
Mentor junior DevOps engineers and participate in technical reviews, playbook creation, and incident postmortems.
Develop self-service infrastructure provisioning tools and contribute to internal DevOps tooling.
Actively participate in architecture design reviews, cloud governance, and capacity planning efforts.
Basic Qualifications:
7–9 years of hands-on experience in DevOps, Cloud Infrastructure, or SRE roles.
Strong expertise in AWS cloud architecture and automation using Terraform or similar IaC tools.
Solid knowledge of Kubernetes, including experience managing EKS clusters, Helm, and custom resources.
Deep experience in Linux administration, networking, and security hardening.
Advanced experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.).
Proficient in scripting with Bash, Groovy, or Python.
Strong understanding of containerization using Docker and orchestration strategies.
Experience with monitoring and logging stacks like ELK, Prometheus, and CloudWatch.
Familiarity with security, identity management, and cloud compliance frameworks.
Excellent troubleshooting skills and a proactive approach to system reliability and resilience.
Strong interpersonal skills and ability to work cross-functionally.
Bachelor's degree in Computer Science, Information Systems, or equivalent.
Preferred Qualifications:
Experience with GitOps using ArgoCD or FluxCD.
Knowledge of multi-account AWS architecture, VPC peering, and service mesh.
Exposure to DataOps, platform engineering, or large-scale data pipelines.
Familiarity with Serverless Framework, API Gateway, and event-driven designs.
Certifications such as AWS DevOps Engineer – Professional, CKA/CKAD, or equivalent.
Experience in regulated environments (e.g., SOC2, ISO27001, GDPR, HIPAA).
About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information, visit www.infor.com
Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve, now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.
Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
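The monitoring duties above typically involve querying Prometheus programmatically. Here is a minimal sketch assuming a reachable Prometheus server (the URL below is hypothetical) and kube-state-metrics installed for the example metric; /api/v1/query is Prometheus's stable instant-query endpoint.

    import requests

    PROM_URL = "http://prometheus.internal:9090"  # hypothetical Prometheus endpoint

    def instant_query(promql: str) -> list:
        """Run an instant query against Prometheus's stable HTTP API."""
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql}, timeout=10)
        resp.raise_for_status()
        body = resp.json()
        if body["status"] != "success":
            raise RuntimeError(body)
        return body["data"]["result"]

    # Example: per-node CPU requests (a kube-state-metrics metric), a common
    # capacity-planning signal.
    for sample in instant_query(
        "sum by (node) (kube_pod_container_resource_requests{resource='cpu'})"
    ):
        print(sample["metric"].get("node"), sample["value"][1])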
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Help empower our global customers to connect to culture through their passions.
Job Description: Senior Software Engineer (Search Backend)
Why you'll love this role
StockX is an established global startup headquartered in the USA with development offices in Bangalore, India. In the Search & Recommendation ML team, we work together to productionize custom machine-learning models that drive product vision and customer impact at scale. We are looking for a Senior Software Engineer experienced in large-scale search systems. This person will be responsible for the health of the search backend and will work with other ML engineers to productionize ML innovations in the search domain. If you're passionate about search performance, ranking pipelines, and search index maintenance, this role is for you.
What You'll Do
Design and maintain the infrastructure behind our core search stack.
Build scalable, fault-tolerant indexing pipelines for real-time and batch data ingestion (see the bulk-indexing sketch after this listing).
Partner with ML engineers and relevance teams to support offline/online ranking experimentation.
Optimize search latency, throughput, and uptime using observability tooling and performance profiling.
Collaborate with product and data teams to understand query patterns and evolve system design accordingly.
Drive migration to more modern indexing and vector search frameworks.
Implement safeguards and autoscaling policies to ensure SLAs under traffic spikes and failovers.
About You
6+ years of experience building scalable backend systems, ideally in search, recommendation, or large-scale data retrieval.
Strong experience with search engines.
Solid grasp of distributed systems (e.g., Kafka, Kubernetes, microservices architecture).
Proficiency in Go and Python.
Comfort with performance tuning and profiling low-latency systems.
Experience deploying and operating production systems in cloud environments (AWS, GCP, Azure).
Familiarity with Databricks, Unity Catalog, or Lakehouse architecture is highly desirable.
Bachelor's or Master's in Computer Science, Engineering, or a related technical field.
Nice To Have Skills
Familiarity with MLOps, vector databases (e.g., Faiss, Milvus, Weaviate), or ANN algorithms.
Experience with Kubernetes and Docker for productionizing models.
Experience building machine learning systems at scale.
Experience using AWS, Databricks, and/or OpenSearch or Elasticsearch.
Experience in LLM serving, OpenAI or equivalent, LangChain, agents, or RAG apps.
About StockX
StockX is proud to be a Detroit-based technology leader focused on the large and growing online market for sneakers, apparel, accessories, electronics, collectibles, trading cards, and more. StockX's powerful platform connects buyers and sellers of high-demand consumer goods from around the world using dynamic pricing mechanics. This approach affords access and market visibility powered by real-time data that empowers buyers and sellers to determine and transact based on market value. The StockX platform features hundreds of brands across verticals including Jordan Brand, adidas, Nike, Supreme, BAPE, Off-White, Louis Vuitton, Gucci; collectibles from brands including LEGO, KAWS, Bearbrick, and Pop Mart; and electronics from industry-leading manufacturers Sony, Microsoft, Meta, and Apple. Launched in 2016, StockX employs 1,000 people across offices and verification centers around the world. Learn more at www.stockx.com.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. This job description is intended to convey information essential to understanding the scope of the job and the general nature and level of work performed by job holders within this job. However, this job description is not intended to be an exhaustive list of qualifications, skills, efforts, duties, responsibilities or working conditions associated with the position. StockX reserves the right to amend this job description at any time. StockX may utilize AI to rank job applicant submissions against the position requirements to assist in determining candidate alignment.
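As a concrete illustration of the indexing-pipeline work above, here is a minimal batch-indexing sketch using the official elasticsearch Python client's bulk helper (opensearch-py exposes a near-identical helper). The endpoint, index name, and documents are hypothetical.

    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

    docs = [
        {"sku": "sneaker-001", "title": "Retro High OG", "brand": "Jordan Brand"},
        {"sku": "sneaker-002", "title": "Superstar", "brand": "adidas"},
    ]

    # Deterministic _id values keep re-runs idempotent (updates, not duplicates).
    actions = (
        {"_index": "products-v1", "_id": doc["sku"], "_source": doc}
        for doc in docs
    )

    # helpers.bulk batches the actions into _bulk API calls and reports failures.
    ok, errors = helpers.bulk(es, actions, raise_on_error=False)
    print(f"indexed={ok} errors={errors}")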
Posted 1 month ago
0 years
0 Lacs
India
On-site
Who We Are Looking For:
We are hiring a hands-on Gen-AI Developer who has strong coding experience in C#, a solid understanding of generative AI concepts like LLMs (large language models) and AI agents, and is well-versed in using Azure cloud services. This role is not for managers or theorists: we need someone who can build, test, and deploy real-world AI applications and solutions.
Must-Have Skills & Expertise
Programming & Frameworks
C# programming language: strong command and the ability to build enterprise-level applications.
.NET and .NET Aspire: experience building scalable AI applications using the .NET ecosystem.
AI & Gen-AI Development
Experience working with LLMs such as OpenAI GPT models, Azure OpenAI, or local models.
Hands-on experience with generative AI tools and frameworks.
Semantic Kernel (Microsoft): ability to use and integrate Semantic Kernel for building AI agents and orchestrating tasks.
AI Agent Concepts
Understanding of how AI agents work (multi-step reasoning, task decomposition, autonomous behavior); a language-neutral sketch of this loop follows the listing.
Ability to design, build, and optimize agentic AI systems.
Cloud Platform: Microsoft Azure
Should have deployed or worked on AI solutions using the following Azure services:
App Services
Containers (Docker on Azure)
AI Search
Bot Services
AI Foundry
Cloud-native development and serverless architectures are a strong plus.
Data Science & Machine Learning
End-to-end ML pipeline development: from data ingestion and model training to fine-tuning and deployment.
Comfortable working with ML frameworks like MLflow, Kubeflow, or TFX.
Experience with model fine-tuning and deployment, especially of LLMs.
Data & Pipelines
Knowledge of building data pipelines using:
Apache Airflow
Apache Kafka
Azure Data Factory
Experience with both structured (SQL) and unstructured (NoSQL) data.
Familiarity with data lakes, data warehouses, and ETL workflows.
Infrastructure & DevOps
Experience with:
Containerization using Docker and Kubernetes.
Infrastructure as Code tools like Terraform or Azure Resource Manager (ARM).
CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins.
Building and automating end-to-end pipelines for AI/ML models.
Cloud Security & Cost Management
Solid understanding of:
Cloud security best practices: IAM, VPCs, firewalls, encryption, etc.
Cloud cost optimization: autoscaling, efficient resource allocation, and budget tracking.
Key Responsibilities
Develop, test, and deploy intelligent applications using C# and Gen-AI technologies.
Build and optimize AI agents using Semantic Kernel and LLMs.
Create full ML/AI solutions, from data processing, model training, and evaluation to production deployment.
Integrate and manage Azure AI services in enterprise solutions.
Design and maintain data pipelines, model orchestration workflows, and automated deployments.
Work collaboratively with cross-functional teams (data scientists, DevOps engineers, backend developers).
Ensure performance optimization of deployed models and infrastructure.
Maintain cloud cost efficiency and monitor infrastructure using the right tools and strategies.
Follow Agile methodologies (Scrum/Kanban); participate in sprint planning, code reviews, and team stand-ups.
Maintain code quality, documentation, and test coverage.
Soft Skills Required
Clear communication skills: you should be able to explain technical ideas to both technical and non-technical stakeholders.
Collaborative mindset: you'll work closely with DevOps, ML engineers, and data scientists.
Strong analytical and problem-solving skills: able to break down complex problems into actionable steps.
Self-motivated and hands-on: you enjoy coding, experimenting, and deploying real systems.
Adaptable to new tools and a fast-changing Gen-AI landscape.
Ideal Candidate Summary
Someone who can code in C#, work with Azure services, and understands AI at a hands-on level. You've built or deployed Gen-AI models, worked with LLMs and AI agents, and can set up the whole stack, from data to deployment, securely and efficiently. You are not afraid to get your hands dirty with containers, pipelines, code, or model tuning.
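The role itself targets C# and Semantic Kernel; as a language-neutral illustration of the task-decomposition step in an agent loop, here is a minimal Python sketch using the official openai client. The model name and prompts are placeholder assumptions, not a prescribed setup.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def decompose(goal: str) -> list[str]:
        """Ask the model to break a goal into ordered sub-tasks (the decomposition step)."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Return one sub-task per line, nothing else."},
                {"role": "user", "content": f"Break this goal into 3-5 sub-tasks: {goal}"},
            ],
        )
        lines = resp.choices[0].message.content.splitlines()
        return [line.strip("-. ") for line in lines if line.strip()]

    # Each sub-task would then be routed to a tool or skill; Semantic Kernel's
    # planners play the analogous role in the .NET world.
    for i, task in enumerate(decompose("Summarise last quarter's support tickets"), 1):
        print(i, task)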
Posted 1 month ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Company:
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion, employs a global workforce of 234,054, is listed on NASDAQ, and operates in over 60 countries, serving clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.
· Job Title: AWS DevOps Engineer
· Location: Gurugram
· Experience: 10+ years
· Job Type: Contract to hire
· Notice Period: Immediate joiners
Mandatory Skills: AWS DevOps Engineer, Terraform, IaC, Load Balancers, CI/CD pipelines.
JOB DESCRIPTION:
• DevOps specialist supporting the client's AWS Cloud environments
• Setup and automation of high-availability clusters with AWS Auto Scaling, Load Balancers, Route 53, and SSO in the cloud (see the Auto Scaling inventory sketch after this listing)
• Estimating AWS usage costs and identifying operational cost control mechanisms
• Monitoring via Splunk, CloudWatch, and CloudTrail
Required Skills:
• 10+ years of experience with the AWS Cloud Platform and DevOps engineering
• Strong experience developing infrastructure-as-code build and deployment pipelines
• Hands-on experience with IaC using Terraform
• Previous work experience with Agile and Scrum methodologies and practices
• Previous experience designing and developing applications using most of the main services of the AWS stack (e.g., EC2, ECS, EKS, S3, RDS, VPC, IAM, ELB, CloudWatch, Route 53, Lambda, and CloudFormation)
• Working knowledge of AWS VPCs: subnets, Internet Gateways, and route tables
• Experience with Java Spring Boot and CI/CD pipeline tools would be advantageous
• Worked on deployment automation using shell scripting, with a concentration on DevOps and CI/CD with Jenkins
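As a small illustration of the Auto Scaling and cost-control duties above, here is a read-only boto3 sketch that inventories each Auto Scaling group's size bounds; the region is a placeholder. Comparing desired capacity against the min/max bounds is a common first pass in both HA reviews and cost audits.

    import boto3

    asg = boto3.client("autoscaling", region_name="ap-south-1")  # hypothetical region

    # Paginate: accounts often have more ASGs than one API page returns.
    paginator = asg.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for group in page["AutoScalingGroups"]:
            print(
                group["AutoScalingGroupName"],
                f"min={group['MinSize']} desired={group['DesiredCapacity']} max={group['MaxSize']}",
            )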
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: DevOps Engineer / Cloud Engineer
Location: Bangalore / Pune / Remote (Hybrid / Partially Remote)
Employment Type: Full-time
Experience: 4 to 8 years
Notice Period: Up to 30 days
About The Role
We are looking for an experienced DevOps / Cloud Engineer to architect, build, and maintain scalable, secure, and highly available cloud-native infrastructure. You will work closely with development and product teams to automate deployments, optimize reliability, and enable rapid delivery through modern CI/CD practices across AWS, Azure, or GCP environments.
Key Responsibilities
Design, implement, and manage cloud infrastructure on AWS, Azure, or GCP following best practices for scalability, security, and cost-efficiency.
Build, maintain, and enhance CI/CD pipelines to enable automated build, test, and deployment workflows.
Containerize applications using Docker and orchestrate them using Kubernetes (self-managed or managed services like EKS/AKS/GKE).
Define, write, and maintain infrastructure as code using Terraform (or equivalent) to provision and version cloud resources.
Monitor system health, performance, and availability; implement alerting, logging, and observability (metrics, tracing, centralized logs).
Collaborate with developers to enable efficient release processes, including blue/green or canary deployments, rollback mechanisms, and feature flag integration.
Implement security best practices at the infrastructure and deployment layers (IAM, secrets management, network segmentation, vulnerability scanning).
Manage environment configurations and secrets securely (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault).
Perform capacity planning, cost optimization, and disaster recovery planning.
Troubleshoot production incidents, perform root cause analysis, and drive postmortems with actionable remediation.
Automate routine operational tasks (backup, scaling, health checks, patching) and maintain runbooks (a health-probe sketch follows this listing).
Assist in onboarding, mentor junior engineers, and drive DevOps culture across the organization.
Required Skills & Qualifications
4 to 8 years of hands-on experience in DevOps, Cloud Engineering, or Site Reliability Engineering.
Strong experience with at least one major cloud provider: AWS, Azure, or GCP.
Proficiency with Docker and container lifecycle management.
Production experience with Kubernetes (deployment patterns, Helm charts, autoscaling; service mesh familiarity is a plus).
Solid understanding and implementation of CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, CircleCI, etc.
Infrastructure-as-code expertise, especially Terraform (writing modules, state management, workspace strategies).
Knowledge of networking, load balancing, DNS, VPN, firewalls, and cloud security configurations.
Familiarity with logging/monitoring stacks (Prometheus/Grafana, ELK/EFK, or cloud-native equivalents).
Scripting skills (Python, Bash, or equivalent) for automation.
Experience with version control systems (Git), branching strategies, and release management.
Strong problem-solving skills and the ability to operate in a fast-paced, collaborative environment.
Nice-to-Have
Experience with service meshes (Istio, Linkerd) or API gateways.
Exposure to serverless architectures (Lambda, Functions, Cloud Run).
Knowledge of policy-as-code (e.g., Open Policy Agent) and compliance automation.
Familiarity with GitOps paradigms (Flux, Argo CD).
Experience with database operations in the cloud (managed PostgreSQL, Redis, etc.).
Working knowledge of observability platforms (OpenTelemetry).
Certification(s) such as AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or Google Professional Cloud DevOps Engineer.
Working Model
Hybrid / Partially Remote: flexibility to work remotely with periodic in-office collaboration (for Bangalore / Pune-based candidates).
Core overlap hours to ensure team sync, with some flexibility in start/end times.
Cross-functional collaboration with product, development, QA, and security teams.
What We Offer
Flexible work location.
Learning and certification support.
Modern cloud-native tech stack.
Health and wellness benefits.
Inclusive culture with autonomy and ownership.
(ref:hirist.tech)
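For the "automate routine operational tasks" bullet above, here is a minimal health-probe sketch suitable for a cron job or CI gate; the endpoint URLs are hypothetical. The non-zero exit code lets the script plug into alerting or pipeline gates without extra glue.

    import sys
    import requests

    # Hypothetical endpoints; real ones would come from config or service discovery.
    ENDPOINTS = {
        "api": "https://api.example.internal/healthz",
        "web": "https://www.example.internal/healthz",
    }

    failures = []
    for name, url in ENDPOINTS.items():
        try:
            ok = requests.get(url, timeout=5).status_code == 200
            detail = "" if ok else "bad status"
        except requests.RequestException as exc:
            ok, detail = False, str(exc)
        status = "OK" if ok else f"FAIL {detail}"
        print(f"{name}: {status}")
        if not ok:
            failures.append(name)

    # A non-zero exit code wires cleanly into cron alerting or a CI gate.
    sys.exit(1 if failures else 0)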
Posted 1 month ago