0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

AWS Cloud Infrastructure:
- Design, deploy, and manage scalable, secure, and highly available systems on AWS.
- Optimize cloud costs, enforce tagging, and implement security best practices (IAM, VPC, GuardDuty, etc.).
- Automate infrastructure provisioning using Terraform or AWS CDK.
- Ensure backup, disaster recovery, and high-availability (HA) strategies are in place.

Kubernetes (EKS preferred):
- Manage and scale Kubernetes clusters (preferably Amazon EKS).
- Implement CI/CD pipelines with GitOps (e.g., ArgoCD or Flux) or traditional tools (e.g., Jenkins, GitLab).
- Enforce RBAC policies, namespace isolation, and pod security policies.
- Monitor cluster health and optimize pod scheduling, autoscaling, and resource limits/requests.

Monitoring and Observability (Datadog):
- Build and maintain Datadog dashboards for real-time visibility across systems and services.
- Set up alerting policies, SLOs, SLIs, and incident response workflows.
- Integrate Datadog with AWS, Kubernetes, and applications for full-stack observability.
- Conduct post-incident reviews using Datadog analytics to reduce MTTR.

Automation and DevOps:
- Automate manual processes (e.g., server setup, patching, scaling) using Python, Bash, or Ansible.
- Maintain and improve CI/CD pipelines (Jenkins) for faster, more reliable deployments.
- Drive Infrastructure-as-Code (IaC) practices using Terraform to manage cloud resources.
- Promote GitOps and version-controlled deployments.

Linux Systems Administration:
- Administer Linux servers (Ubuntu, RHEL, Amazon Linux) for stability and performance.
- Harden OS security, configure SELinux and firewalls, and ensure timely patching.
- Troubleshoot system-level issues: disk, memory, network, and processes.
- Optimize system performance using tools like top, htop, iotop, and netstat.

(ref: hirist.tech)
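By way of illustration of the tag-enforcement and cost-optimization work described above, here is a minimal sketch (not part of the posting) that uses boto3 to flag EC2 instances missing required cost-allocation tags. The tag keys and region are assumptions.

```python
import boto3

REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}  # assumed tag policy, not the employer's

def find_untagged_instances(region="ap-south-1"):
    """Yield (instance_id, missing_tags) for EC2 instances violating the assumed tag policy."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    yield instance["InstanceId"], sorted(missing)

if __name__ == "__main__":
    for instance_id, missing in find_untagged_instances():
        print(f"{instance_id} is missing tags: {', '.join(missing)}")
```

A report like this is typically fed into a ticketing or Slack workflow rather than printed, but the paginator pattern is the core of most boto3 inventory scripts.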
Posted 3 weeks ago
3.0 - 8.0 years
3 - 15 Lacs
Chandigarh, India
On-site
Position Overview
We are seeking a highly motivated and skilled DevOps Engineer with 3-8 years of experience to join our dynamic team. The ideal candidate will have a strong foundation in Linux, infrastructure automation, containerization, orchestration tools, and cloud platforms. This role offers an opportunity to work on cutting-edge technologies and contribute to the development and maintenance of scalable, secure, and efficient CI/CD pipelines.

Key Responsibilities
- Design, implement, and maintain scalable CI/CD pipelines to streamline software development and deployment.
- Manage, monitor, and optimize infrastructure using tools like Terraform for Infrastructure as Code (IaC).
- Deploy, configure, and manage containerized applications using Docker and orchestrate them with Kubernetes.
- Develop and maintain Helm charts for managing Kubernetes deployments.
- Automate repetitive operational tasks using scripting languages such as Python, Bash, or PowerShell.
- Collaborate with development teams to ensure seamless integration and delivery of applications.
- Monitor and troubleshoot system performance, ensuring high availability and reliability of services.
- Configure and maintain cloud infrastructure on AWS.
- Implement and maintain security best practices in cloud environments and CI/CD pipelines.
- Manage and optimize system logs and metrics using monitoring tools like Prometheus, Grafana, the ELK Stack, or cloud-native monitoring tools.

Key Requirements
- Experience: 3-8 years in a DevOps or similar role.
- Linux: Strong proficiency in Linux-based systems, including configuration, troubleshooting, and performance tuning, is a must.
- IaC Tools: Hands-on experience with Terraform for infrastructure provisioning and automation.
- Containerization: Proficient in using Docker to build, deploy, and manage containers.
- Kubernetes: Experience with Kubernetes for container orchestration, including knowledge of deployments, services, PVs, PVCs, and ingress controllers.
- Helm Charts: Familiarity with creating and managing Helm charts for Kubernetes applications.
- CI/CD Tools: Knowledge of tools like Jenkins, GitHub Actions, GitLab CI/CD, or CircleCI for continuous integration and deployment.
- Cloud Platforms: Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP).
- Scripting: Proficiency in automation scripting using Python, Bash, or similar languages.
- Monitoring: Understanding of monitoring and logging tools such as Prometheus, Grafana, or the ELK Stack.
- Version Control: Strong experience with version control tools like Git.

Preferred Qualifications
- Knowledge of networking concepts (e.g., DNS, load balancing, firewalls).
- Familiarity with security practices such as role-based access control (RBAC) and secrets management.
- Exposure to Agile/Scrum methodologies and tools like Jira.
- Certification in any of the cloud platforms (AWS Certified DevOps Engineer, Azure DevOps Expert, or GCP Professional DevOps Engineer) is a plus.

Soft Skills
- Strong problem-solving and troubleshooting skills.
- Ability to work collaboratively in a team-oriented environment.
- Excellent communication and documentation skills.
- Proactive approach to learning new tools and technologies.

Note: Experience with Linux is a must.

Skills: Amazon Web Services (AWS), CI/CD orchestration, Kubernetes, Docker, Jenkins, Terraform, CI/CD, web application security, Linux/Unix, Grafana, SonarQube, autoscaling, load balancers, IaC, Helm, and Bash
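To make the "automate repetitive operational tasks" responsibility concrete, here is a small sketch using the official Kubernetes Python client to report pods stuck outside a healthy phase. The namespace is an assumption; nothing here comes from the posting itself.

```python
from kubernetes import client, config  # pip install kubernetes

def report_unhealthy_pods(namespace="default"):
    """Print pods that are not Running/Succeeded -- a building block for automated health checks."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```

In practice a script like this would be wrapped in a CronJob or alerting hook rather than run by hand.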
Posted 3 weeks ago
10.0 years
0 Lacs
India
Remote
About Glowingbud
Glowingbud is a rapidly growing eSIM services platform that simplifies connectivity with powerful APIs, robust B2B and B2C interfaces, and seamless integrations with Telna. Our platform enables global eSIM lifecycle management, user onboarding, secure payment systems, and scalable deployments. Recently acquired by Telna, we are expanding our product offerings and team to meet increasing demand and innovation goals.

Job Summary
We are seeking a Software Architect to join Glowingbud and play a pivotal role in shaping our technology infrastructure. You will be responsible for designing scalable, robust, and secure software architectures across the application stack, from front end to back end and from databases to cloud infrastructure. You'll work closely with our product, development, and DevOps teams to ensure smooth delivery and performance of our platforms.

Key Responsibilities
- Architecture Design: Define and evolve the architectural roadmap for Glowingbud's applications, microservices, APIs, and infrastructure.
- Application Development Oversight: Guide engineering teams in implementing scalable and maintainable codebases across web and mobile applications.
- Database Architecture: Design high-performance, scalable, and secure database solutions (MongoDB, PostgreSQL, or others).
- AWS & Cloud Infrastructure: Architect and manage cloud solutions, including deployment pipelines, security, autoscaling, monitoring, and disaster recovery.
- DevOps Collaboration: Work with DevOps engineers to streamline CI/CD workflows and infrastructure automation.
- Performance & Reliability: Ensure system reliability, scalability, and performance through proactive monitoring, refactoring, and optimization.
- Technical Leadership: Provide mentorship and technical guidance to developers and participate in code/design reviews.
- Documentation: Maintain architectural documentation, tech specs, and best practices.

Qualifications
- 10+ years of hands-on software development experience, including 3+ years in a software architecture or lead engineering role.
- Proven experience designing and scaling complex full-stack applications across front-end, back-end, and mobile platforms.
- Deep expertise in cloud architecture and infrastructure, especially with AWS services like EC2, S3, RDS, Lambda, ECS, API Gateway, and CloudFormation.
- Strong knowledge of modern web technologies and frameworks (e.g., Node.js, Angular/React, Express).
- Experience with database architecture and optimization, including both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB) systems.
- Solid understanding of DevOps practices, including CI/CD, infrastructure as code (Terraform or CloudFormation), and containerization (Docker, Kubernetes).
- Familiarity with software security principles and best practices.
- Experience in telecom, eSIM, or SaaS product domains is a plus.
- Strong leadership, communication, and mentoring skills.
- Ability to translate business requirements into scalable and maintainable technical solutions.

What We Offer
- A chance to build at the intersection of telecom and cutting-edge technology.
- The opportunity to make architectural decisions from the ground up.
- A fast-paced, product-focused environment with real impact.
- Remote-friendly culture with flexible work hours.
- Competitive compensation and benefits.

Join Us
If you're a tech visionary with a passion for building reliable, scalable systems, we'd love to hear from you. Help us shape the future of connectivity at Glowingbud.
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are seeking an experienced AWS Cloud Architect with deep expertise in event-driven solutions and multi-account cloud strategy. The ideal candidate will design and implement scalable, secure, and cost-optimized AWS architectures that support real-time workloads, multiple projects, and organizational governance. This role requires both hands-on technical expertise and strategic vision to align cloud adoption with business priorities.

Key Responsibilities

Event-Driven Architecture
- Architect and deliver event-driven systems leveraging Amazon EventBridge, SNS, SQS, Kinesis, Lambda, and Step Functions.
- Apply event sourcing, CQRS, and pub/sub patterns to build scalable, decoupled, and resilient systems.
- Develop real-time data pipelines for IoT, AI/ML, analytics, and transactional applications.

Cloud Strategy & Governance
- Define and manage the AWS multi-account strategy using AWS Organizations, Control Tower, and Service Control Policies (SCPs).
- Establish account structures for dev/test, production, and shared services.
- Provide guidance on multi-project management under single AWS accounts through IAM, VPC segmentation, and tagging policies.
- Ensure alignment of cloud adoption with organizational governance and compliance frameworks.

Cost Optimization
- Implement cost management practices using AWS Cost Explorer, Budgets, and Trusted Advisor.
- Create tagging standards and cost allocation models across projects and departments.
- Optimize resources via autoscaling, right-sizing, spot instances, and savings plans.
- Establish chargeback/showback models for financial transparency and accountability.

Security & Compliance
- Enforce least-privilege IAM access, SCPs, and automated guardrails.
- Centralize logging and monitoring (CloudTrail, GuardDuty, Security Hub).
- Ensure compliance with industry standards (PCI DSS, HIPAA, SOC 2, GDPR).
- Design secure event flows with encryption, key rotation, and monitoring.

Collaboration & Leadership
- Partner with engineering, product, and operations teams to drive cloud-first, event-driven adoption.
- Lead POCs, reference architectures, and innovation initiatives for new event-driven technologies.
- Train and mentor teams on event-driven principles, multi-account best practices, and FinOps awareness.
- Act as a cloud evangelist, aligning stakeholders around a long-term AWS strategy.

Qualifications

Required Skills
- Strong expertise in AWS event-driven services (EventBridge, SNS, SQS, Kinesis, Lambda, Step Functions).
- Proven experience with AWS multi-account management (Organizations, Control Tower, SCPs, IAM).
- Solid knowledge of cost optimization strategies (tagging, chargeback/showback, reserved/spot instances).
- Proficiency in Infrastructure as Code (Terraform, CloudFormation, AWS CDK).
- Deep understanding of security, networking, and compliance in AWS environments.
- Strong communication and leadership skills, with the ability to engage both technical and executive stakeholders.

Preferred Skills
- Experience with Kafka/MSK or other event-streaming platforms.
- Familiarity with FinOps practices and cloud economics.
- Background in enterprise-scale migrations to event-driven architectures.

Certifications (Preferred)
- AWS Certified Solutions Architect – Professional
- AWS Certified DevOps Engineer – Professional
- AWS Certified Advanced Networking – Specialty
- FinOps Certified Practitioner (bonus)

Experience
- 7+ years in IT, with 4+ years in AWS cloud architecture.
- Proven experience delivering enterprise-scale event-driven solutions.
- Hands-on background in multi-account strategy, governance, and cost optimization.
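For readers unfamiliar with the pub/sub pattern this role centers on, here is a minimal boto3 sketch that publishes a domain event to a custom EventBridge bus. The bus name, source, and event shape are invented for illustration.

```python
import json
import boto3

events = boto3.client("events")

def publish_order_event(order_id: str, status: str, bus_name: str = "orders-bus"):
    """Publish a domain event to a custom EventBridge bus (all names are illustrative)."""
    response = events.put_events(
        Entries=[{
            "Source": "app.orders",              # assumed source identifier
            "DetailType": "OrderStatusChanged",  # assumed event type
            "Detail": json.dumps({"orderId": order_id, "status": status}),
            "EventBusName": bus_name,
        }]
    )
    if response["FailedEntryCount"]:
        raise RuntimeError(f"EventBridge rejected entries: {response['Entries']}")

if __name__ == "__main__":
    publish_order_event("ord-123", "SHIPPED")
```

Downstream consumers (SQS queues, Lambda functions, Step Functions) subscribe via EventBridge rules, which is what keeps producers and consumers decoupled.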
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Own and scale our AWS-based platform with Kubernetes (EKS), Terraform IaC, and GitHub Actions–driven CI/CD. You'll streamline container build/deploy, observability, security, and developer workflows (including Slack integrations) to deliver reliable, cost-efficient, and secure infrastructure.

Responsibilities
- Manage and optimize Kubernetes (EKS) on AWS, including deployments, scaling, ingress, networking, and security.
- Maintain and extend Terraform-based IaC for consistent, repeatable, multi-environment deployments (modules, remote state).
- Build, maintain, and optimize GitHub Actions pipelines for backend, frontend, and infrastructure.
- Manage Docker images and AWS ECR (build, tag/version, cache, and vulnerability scanning).
- Monitor health, performance, and costs across AWS; recommend and implement improvements.
- Implement alerting, logging, and observability using CloudWatch, Prometheus, Grafana (or equivalents).
- Automate operational tasks via scripting and internal tooling to reduce manual work.
- Partner with developers to ensure environment parity, smooth deployments, and fast incident resolution.
- Integrate Slack with CI/CD, monitoring, and incident workflows.
- Enforce security best practices across AWS, Kubernetes, CI/CD pipelines, and container images.

Requirements

Must-have skills & experience
- Cloud (AWS): IAM, VPC, EC2, S3, RDS, CloudWatch, EKS, ECR.
- Kubernetes (EKS): Deployments, autoscaling, ingress, networking, secrets, ConfigMaps, Helm.
- IaC (Terraform): Modular code, remote state, environment patterns.
- Containers: Docker image building/optimization and vulnerability scanning.
- CI/CD (GitHub Actions): Workflows, matrix builds, caching, secrets management.
- Monitoring & Logging: Hands-on with CloudWatch, Prometheus, Grafana, ELK/EFK, or Loki.
- Security: Practical knowledge of IAM policies, K8s RBAC, and hardening practices.
- Scripting: Proficiency in Bash.
- Collaboration: Experience wiring Slack for deployments, alerts, and on-call workflows.

Nice to have
- AWS cost optimization experience.
- Service mesh / advanced Kubernetes networking.
- Secrets management (AWS Secrets Manager, HashiCorp Vault).
- Familiarity with incident response processes and on-call rotations.

What You Can Expect In Return
- ESOPs based on performance
- Health insurance
- Statutory benefits like PF & Gratuity
- Remote work options
- Professional development opportunities
- Collaborative and inclusive work culture

About The Company
EduFund is India's first dedicated education-focused fintech platform, built to help Indian families plan, save, and secure their child's education. Founded in 2020, our mission is to remove financial stress from education planning. We offer a full suite of solutions, including investments, education loans, visa and immigration support, international remittance, and expert counselling, making it India's only end-to-end education financial planning platform. Whether it's saving early or funding a degree abroad, EduFund helps parents make smarter financial decisions for their child's future. With 2.5 lakh+ families, 40+ AMC partners, 15+ lending partners, and a growing presence across Tier 1 and Tier 2 cities, EduFund is becoming the go-to platform for education financing in India. In July 2025, EduFund raised $6 million in Series A funding, led by Cercano Management and MassMutual Ventures, bringing the total capital raised to $12 million. Explore more at www.edufund.in

Skills: AWS, Kubernetes, CI/CD, GitHub, Terraform
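The Slack integration responsibility above is often just an incoming-webhook call at the end of a pipeline. Here is a hedged sketch; the webhook environment variable and message format are assumptions, not the company's setup.

```python
import os
import requests

def notify_deploy(service: str, version: str, environment: str):
    """Post a deployment notice to Slack via an incoming webhook (URL assumed to be in the env)."""
    webhook_url = os.environ["SLACK_WEBHOOK_URL"]  # assumed configuration, set by the CI system
    payload = {"text": f":rocket: Deployed *{service}* `{version}` to *{environment}*"}
    response = requests.post(webhook_url, json=payload, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    notify_deploy("api-gateway", "v1.4.2", "production")
```

In a GitHub Actions pipeline this would typically run as a final step, with the webhook URL stored as a repository secret.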
Posted 3 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Own and scale our AWS-based platform with Kubernetes (EKS), Terraform IaC, and GitHub Actions–driven CI/CD. You'll streamline container build/deploy, observability, security, and developer workflows (including Slack integrations) to deliver reliable, cost-efficient, and secure infrastructure.

Responsibilities
- Manage and optimize Kubernetes (EKS) on AWS, including deployments, scaling, ingress, networking, and security.
- Maintain and extend Terraform-based IaC for consistent, repeatable, multi-environment deployments (modules, remote state).
- Build, maintain, and optimize GitHub Actions pipelines for backend, frontend, and infrastructure.
- Manage Docker images and AWS ECR (build, tag/version, cache, and vulnerability scanning).
- Monitor health, performance, and costs across AWS; recommend and implement improvements.
- Implement alerting, logging, and observability using CloudWatch, Prometheus, Grafana (or equivalents).
- Automate operational tasks via scripting and internal tooling to reduce manual work.
- Partner with developers to ensure environment parity, smooth deployments, and fast incident resolution.
- Integrate Slack with CI/CD, monitoring, and incident workflows.
- Enforce security best practices across AWS, Kubernetes, CI/CD pipelines, and container images.

Requirements

Must-have skills & experience
- Cloud (AWS): IAM, VPC, EC2, S3, RDS, CloudWatch, EKS, ECR.
- Kubernetes (EKS): Deployments, autoscaling, ingress, networking, secrets, ConfigMaps, Helm.
- IaC (Terraform): Modular code, remote state, environment patterns.
- Containers: Docker image building/optimization and vulnerability scanning.
- CI/CD (GitHub Actions): Workflows, matrix builds, caching, secrets management.
- Monitoring & Logging: Hands-on with CloudWatch, Prometheus, Grafana, ELK/EFK, or Loki.
- Security: Practical knowledge of IAM policies, K8s RBAC, and hardening practices.
- Scripting: Proficiency in Bash.
- Collaboration: Experience wiring Slack for deployments, alerts, and on-call workflows.

Nice to have
- AWS cost optimization experience.
- Service mesh / advanced Kubernetes networking.
- Secrets management (AWS Secrets Manager, HashiCorp Vault).
- Familiarity with incident response processes and on-call rotations.

What You Can Expect In Return
- ESOPs based on performance
- Health insurance
- Statutory benefits like PF & Gratuity
- Remote work options
- Professional development opportunities
- Collaborative and inclusive work culture

About The Company
EduFund is India's first dedicated education-focused fintech platform, built to help Indian families plan, save, and secure their child's education. Founded in 2020, our mission is to remove financial stress from education planning. We offer a full suite of solutions, including investments, education loans, visa and immigration support, international remittance, and expert counselling, making it India's only end-to-end education financial planning platform. Whether it's saving early or funding a degree abroad, EduFund helps parents make smarter financial decisions for their child's future. With 2.5 lakh+ families, 40+ AMC partners, 15+ lending partners, and a growing presence across Tier 1 and Tier 2 cities, EduFund is becoming the go-to platform for education financing in India. In July 2025, EduFund raised $6 million in Series A funding, led by Cercano Management and MassMutual Ventures, bringing the total capital raised to $12 million. Explore more at www.edufund.in

Skills: AWS, Kubernetes, CI/CD, GitHub, Terraform
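This listing repeats the previous one for a different location; to avoid repeating the Slack example, here is a sketch of the ECR vulnerability-scanning responsibility instead. It summarizes image scan findings by severity with boto3; the repository and tag names are placeholders.

```python
import boto3

ecr = boto3.client("ecr")

def count_findings_by_severity(repository: str, tag: str) -> dict:
    """Summarise ECR image scan findings by severity (repository/tag are illustrative)."""
    response = ecr.describe_image_scan_findings(
        repositoryName=repository,
        imageId={"imageTag": tag},
    )
    return response["imageScanFindings"].get("findingSeverityCounts", {})

if __name__ == "__main__":
    counts = count_findings_by_severity("backend-api", "latest")
    for severity, count in sorted(counts.items()):
        print(f"{severity}: {count}")
```

A CI gate might fail the build when CRITICAL or HIGH counts are nonzero, which is one common way the "vulnerability scanning" requirement gets enforced in practice.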
Posted 3 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description – Senior Data Integration Engineer (Azure Fabric + CRM Integrations)
Location: Noida
Employment Type: Full-time

About Crenovent Technologies
Crenovent is building RevAi Pro, an enterprise-grade Revenue Operations SaaS platform that integrates CRM, billing, contract, and marketing systems with AI agents and Generative AI search. Our vision is to redefine RevOps with AI-driven automation, real-time intelligence, and industry-specific workflows. We are now hiring a Senior Data Integration Engineer to lead the integration of CRM platforms (Salesforce, Microsoft Dynamics, HubSpot) into Azure Fabric and enable secure, multi-tenant ingestion pipelines for RevAi Pro.

Role Overview
You will be responsible for designing, building, and scaling data pipelines that bring CRM data into Azure Fabric (OneLake, Data Factory, Synapse-style pipelines) and transform it into RevAi Pro's standardized schema (50+ core fields, industry-specific mappings). This is a hands-on architecture-plus-build role in which you will work closely with RevOps SMEs, product engineers, and AI teams to ensure seamless data availability, governance, and performance across multi-tenant environments.

Key Responsibilities

Data Integration & Pipelines
- Design and implement data ingestion pipelines from Salesforce, Dynamics 365, and HubSpot into Azure Fabric.
- Build ETL/ELT workflows using Azure Data Factory, Fabric pipelines, and Python/SQL.
- Ensure real-time and batch sync options for CRM objects (Leads, Accounts, Opportunities, Forecasts, Contracts).

Schema & Mapping
- Map CRM fields to RevAi Pro's standardized schema (50+ fields across industries).
- Maintain schema consistency across SaaS, Banking, Insurance, and E-commerce use cases.
- Implement data transformation, validation, and enrichment logic.

Data Governance & Security
- Implement multi-tenant isolation policies in Fabric (Purview, RBAC, field-level masking).
- Ensure PII compliance and GDPR/SOC 2 readiness.
- Build audit logs, lineage tracking, and monitoring dashboards.

Performance & Reliability
- Optimize pipeline performance (latency, refresh frequency, cost efficiency).
- Implement autoscaling, retry logic, and error handling in pipelines.
- Work with DevOps to set up CI/CD for Fabric integrations.

Collaboration
- Work with RevOps SMEs to validate business logic for CRM fields.
- Partner with AI/ML engineers to expose clean data to agents and GenAI models.
- Collaborate with frontend/backend developers to provide APIs for RevAi Pro modules.

Required Skills & Experience
- 3+ years in data engineering / integration roles.
- Strong expertise in Microsoft Azure Fabric, including OneLake, Data Factory, Synapse pipelines, and Power Query.
- Hands-on experience with CRM APIs and data models: Salesforce, Dynamics 365, HubSpot.
- Strong SQL and Python for data transformations.
- Experience with ETL/ELT workflows, schema mapping, and multi-tenant SaaS data handling.
- Knowledge of data governance tools (Azure Purview, RBAC, PII controls).
- Strong grasp of cloud security and compliance (GDPR, SOC 2; HIPAA optional).

Preferred (Nice to Have)
- Prior experience building integrations for Revenue Operations, Sales, or CRM platforms.
- Knowledge of middleware (MuleSoft, Boomi, Workato, Azure Logic Apps).
- Familiarity with AI/ML data pipelines.
- Experience with multi-cloud integrations (AWS, GCP).
- Understanding of business RevOps metrics (pipeline, forecast, quota, comp plans).

Soft Skills
- Strong ownership and problem-solving ability.
- Ability to translate business needs (RevOps fields) into technical data pipelines.
- Collaborative mindset with cross-functional teams.
- Comfortable working in a fast-paced startup environment.
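The field-mapping step described above might look like this in outline. This is a pure-Python sketch; the field names and target schema are invented for illustration and are not RevAi Pro's actual 50+ field schema.

```python
# Map raw CRM records onto a standardized schema. All field names below are
# illustrative placeholders, not the platform's real schema.
SALESFORCE_FIELD_MAP = {
    "Id": "crm_record_id",
    "Name": "account_name",
    "StageName": "pipeline_stage",
    "Amount": "opportunity_amount",
}

def normalize_record(raw: dict, field_map: dict) -> dict:
    """Rename known fields, coerce amounts to float, and flag unmapped fields."""
    normalized, unmapped = {}, []
    for source_field, value in raw.items():
        target = field_map.get(source_field)
        if target is None:
            unmapped.append(source_field)  # surfaced for data-quality monitoring
            continue
        if target == "opportunity_amount" and value is not None:
            value = float(value)  # basic validation/enrichment hook
        normalized[target] = value
    normalized["_unmapped_fields"] = unmapped
    return normalized

raw = {"Id": "0065g00000ABC", "Name": "Acme", "StageName": "Closed Won", "Amount": "12000"}
print(normalize_record(raw, SALESFORCE_FIELD_MAP))
```

Tracking unmapped fields explicitly, as above, is one simple way to keep schema drift visible as the source CRMs evolve.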
Posted 3 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
Remote
We are seeking a talented individual to join our AMSI team at MMC Corporate. This role will be based in Noida/Gurugram/Pune/Mumbai. This is a hybrid role that requires working at least three days a week in the office.

Senior Principal Engineer - IT Systems Engineering - FinOps

What can you expect?
We are seeking a Cloud Cost Management professional responsible for monitoring, optimizing, and forecasting cloud spend across multiple cloud platforms (AWS, Azure, OCI, etc.). The ideal candidate will work closely with engineering, the Cloud Business Office, and procurement teams to ensure efficient cloud usage and cost transparency, while driving accountability and financial governance within the organization.

What is in it for you?
- Monitor and analyse cloud usage and spending patterns across platforms (AWS, Azure, OCI, etc.).
- Provide detailed reporting, dashboards, and insights to technical and business stakeholders.
- Partner with finance to support cloud budgeting, forecasting, and variance analysis.
- Implement and manage cost allocation tags, chargeback/showback models, and budget alerts.
- Identify and drive opportunities for cost optimization, including rightsizing, autoscaling, and eliminating idle resources.
- Collaborate with DevOps/engineering teams to implement cost-aware architectural practices.
- Stay up to date with evolving pricing models and cost management tools from cloud vendors.
- Support FinOps culture by promoting cloud cost accountability and best practices across teams.
- Manage systems as a portfolio; for example, tracking and managing the work plan for the systems to ensure key deliverables are met, and communicating progress to stakeholders.

What you need to have:
- Bachelor's degree in Computer Science, IT, or a related field.
- 5-7 years of experience in cloud operations, cloud financial management, or FinOps.
- Strong understanding of public cloud services and pricing models (AWS, Azure, OCI).
- Proficiency in cloud cost tools (e.g., AWS Cost Explorer, Azure Cost Management, Finout, CloudHealth, CloudCheckr, or similar).
- Experience with reporting and analytics tools (Excel, Power BI, Tableau, etc.).
- Excellent analytical, communication, and cross-functional collaboration skills.

What makes you stand out?
- FinOps Certified Practitioner or relevant cloud certification.
- Experience with scripting (Python, PowerShell) or IaC tools (Terraform, CloudFormation) for automation of cost monitoring.
- Familiarity with SaaS billing models and multi-cloud cost comparison.
- Prior experience building and running cost governance programs.
- Detail-oriented with strong problem-solving abilities.
- Ability to work independently and influence without authority.
- Proactive communicator and a strong advocate for cost efficiency.

Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses.
Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.

R_318759
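The spend-reporting work described in this posting frequently starts with the Cost Explorer API. Here is a hedged boto3 sketch that breaks a month's unblended cost down by service; the dates are placeholders.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

def monthly_cost_by_service(start: str, end: str):
    """Yield (service, cost) pairs for the period; dates are YYYY-MM-DD placeholders."""
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            yield group["Keys"][0], amount

if __name__ == "__main__":
    costs = sorted(monthly_cost_by_service("2025-07-01", "2025-08-01"),
                   key=lambda pair: -pair[1])
    for service, cost in costs:
        print(f"{service}: ${cost:,.2f}")
```

Grouping by a cost-allocation tag instead of SERVICE (GroupBy Type "TAG") is the usual next step for the chargeback/showback models the posting mentions.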
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 8-12 years
Role: Senior DevOps Engineer
Skills: Kafka, ELK, Jenkins, Kubernetes, Docker, CI/CD
Location: Sector 16, Noida (Onsite)
Notice Period: Immediate / Serving only

About Times Internet (across TIL)
At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India's largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig and Times Mobile, among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations. As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!

About the Role
We are looking for a highly skilled and proactive Senior DevOps Engineer to join our infrastructure team and play a critical role in enhancing the scalability, reliability, and performance of our systems. In this role, you will be responsible for managing and optimizing our platform's core components, including Apache Kafka, distributed data systems, and observability tooling. You will lead initiatives around cost optimization, automation, and CI/CD to ensure seamless and efficient operations across environments.

Work Responsibilities
- Proven expertise in managing and tuning Apache Kafka clusters, with a strong focus on cost optimization, resource efficiency, and high-throughput performance.
- Deep expertise in observability tools including ELK, Loki, Promtail, and Grafana.
- Ability to design and implement comprehensive monitoring, logging, and alerting solutions using tools such as Prometheus, Grafana, ELK, and SigNoz.
- Strong hands-on experience with distributed data systems and caching technologies, including HDFS, Redis, Apache Spark, MySQL, and Cassandra.
- Strong understanding of cost optimization strategies for virtual machines (VMs), including rightsizing, instance scheduling, reserved instances, and autoscaling best practices.
- Hands-on experience with Akamai for content delivery, caching, and web performance optimization.
- Familiarity with scripting languages (Python, Bash, Go, etc.).
- Build and optimize CI/CD pipelines for rapid, reliable deployment (Jenkins).
- Automate infrastructure provisioning, configuration, and application deployment.
- Strong problem-solving and communication skills.
- Solid understanding of networking, DNS, load balancing, firewalls, and VPCs.
- Monitor systems and respond to incidents, outages, and performance degradation.
- Hands-on coding experience.

Nice to have
- Deep understanding of container orchestration (Kubernetes and Docker).
- Expertise in cloud platforms (AWS, GCP, or Azure).
- Background in software engineering or coding practices.

Skills, Experience & Expertise
- Knowledge of Apache Kafka, ELK, Hadoop, HDFS, Redis, Cassandra, Spark, and monitoring tools
- Basic knowledge of coding
- Grip on debugging and troubleshooting in Linux environments
- Good communication and data-analytical skills
- Past experience working on multi-distributed systems

Eligibility
- A minimum of 8 years of experience in a related field of work
- M.Tech or MCA from a premier institute
- A software engineering (coding) background will serve as a major advantage
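To ground the Kafka-centric responsibilities above, here is a minimal consumer sketch using the kafka-python library. The topic, consumer group, and broker address are placeholders, not the company's actual configuration.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Topic, group, and broker addresses below are illustrative placeholders.
consumer = KafkaConsumer(
    "app-logs",
    bootstrap_servers=["localhost:9092"],
    group_id="log-indexer",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a pipeline like the one described, records would be shipped to an
    # indexer (e.g., Elasticsearch) or aggregated for alerting from here.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```

Tuning such consumers (fetch sizes, partition counts, consumer-group sizing) is where the throughput and cost-efficiency work the posting emphasizes usually happens.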
Posted 3 weeks ago
3.0 years
12 - 20 Lacs
India
Remote
Primary Title: Full-Stack AI Engineer (Remote - India)

About The Opportunity
A fast-scaling player in the Enterprise AI and Intelligent Applications sector building production-grade GenAI features and end-user applications. We deliver developer-first AI products and cloud-native services that combine modern web experiences with low-latency model inference, retrieval-augmented workflows and secure data pipelines. This is a fully remote role (India) focused on turning prototype ML/LLM research into reliable, production services.

Role & Responsibilities
- Design and implement full-stack features that integrate LLMs/ML models into web and API-driven products, end-to-end from UX to inference.
- Build and maintain backend inference services and RAG pipelines (embedding, vector search, retrieval, prompt orchestration) with strong SLAs.
- Develop responsive front-end components (React/TypeScript) and connect them to secure, well-documented APIs for AI workflows.
- Productionise models: containerise, deploy, automate CI/CD, and tune autoscaling, latency and cost for inference at scale.
- Collaborate with ML researchers and product teams to define experiments, implement evaluation metrics, and roll successful models to production.
- Implement observability, monitoring, and alerting for model performance, data drift, and system reliability; enforce security and privacy best practices.

Skills & Qualifications

Must-Have
- 3+ years building full-stack applications, with strong backend experience in Python (FastAPI/Flask) and front-end experience in React/TypeScript.
- Hands-on experience integrating and productionising LLMs/GenAI (OpenAI/Hugging Face) and implementing RAG patterns with vector search.
- Proficiency with containerisation (Docker), CI/CD pipelines, and deploying services to cloud platforms (AWS/GCP/Azure).
- Practical knowledge of ML frameworks (PyTorch or TensorFlow) and exposure to model serving frameworks or MLOps tools.
- Experience building scalable APIs (REST/gRPC), working with SQL and NoSQL datastores, and implementing secure data handling.
- Strong debugging, testing (unit/integration), Git-centred workflows and documentation skills.

Preferred
- Experience with vector DBs and libraries (FAISS, Milvus, Pinecone) and embedding pipelines.
- Familiarity with Kubernetes, autoscaling, infra-as-code (Terraform), and cloud-native monitoring (Prometheus/Grafana).
- Knowledge of prompt engineering, evaluation for RAG, and optimisation for low-latency inference.
- Prior work in SaaS or product companies shipping customer-facing AI features at scale.

Benefits & Culture Highlights
- Fully remote (India) role with flexible hours and asynchronous collaboration.
- High-impact product work: visibility to leadership and the opportunity to own end-to-end AI features.
- Learning & growth budget, access to AI tooling, and a collaborative team culture focused on engineering excellence.

Apply if you enjoy building production-grade AI systems, shipping polished user experiences, and operating reliable, scalable services.

Keywords: Full-Stack AI Engineer, GenAI, LLM, RAG, LangChain, FAISS, PyTorch, React, Node.js, MLOps, DevOps, Cloud.
Skills: RAG, vector databases, LangChain, Python, Redis
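The vector-search leg of the RAG pipelines this role describes can be sketched in a few lines with FAISS, which the posting names. Random vectors stand in for real embeddings here; the dimensionality is an assumption.

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 384  # a typical sentence-embedding size; an assumption, not a requirement
rng = np.random.default_rng(0)

# Random vectors stand in for document embeddings from a real embedding model.
doc_vectors = rng.random((1000, dim), dtype=np.float32)
index = faiss.IndexFlatL2(dim)   # exact L2 search; swap for IVF/HNSW indexes at scale
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)  # would be the embedded user query
distances, ids = index.search(query, 5)         # top-5 nearest documents
print("top-5 document ids:", ids[0], "distances:", distances[0])
```

In a full RAG flow, the retrieved documents are stuffed into the LLM prompt; the retrieval step above is what keeps that prompt grounded in the corpus.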
Posted 3 weeks ago
0.0 - 3.0 years
9 - 16 Lacs
Mumbai, Maharashtra
On-site
- Minimum of 3 years of backend development experience.
- Strong proficiency in Python (mandatory), Node.js, and JavaScript; experience with AWS services and REST APIs.
- Experience with database technologies, both relational (SQL: MySQL, PostgreSQL) and NoSQL.
- Familiarity with code versioning tools, such as Git.
- Work with other team members, such as front-end developers.
- Ensure modern security standards are implemented to prevent unauthorized entry.
- Identify and fix code where bugs have been identified.
- Write adequate test suites to ensure all functionality originally outlined by the design is being met.
- Ensure server-side code correctly interfaces with databases.
- Ensure the code written is extremely robust and provides high performance to the end user.
- Integrate server-side code with front-end components.
- High level of understanding of JavaScript.
- Understanding of modern patterns for how back-end code interacts with front-end systems.
- Design and develop APIs (REST/RESTful).
- Develop and maintain scalable server-side logic using appropriate backend frameworks and languages.
- Ensure high performance and responsiveness to requests from the front end.
- Integrate user-facing elements developed by front-end developers with server-side logic.
- Implement security and data protection solutions.
- Design and implement data storage solutions.
- Well-versed in frontend frameworks such as TypeScript/React as well as server-side frameworks such as Python/FastAPI.
- Hands-on expertise with Object-Relational Mapper (ORM) tools (SQLAlchemy).
- Proficiency in NoSQL and SQL databases and in high-throughput data-related architecture and technologies (e.g., Kafka, Spark, Hadoop).
- Familiar with the most up-to-date technologies, frameworks and best practices, such as autoscaling and serverless backends, as well as CI/CD.
- A thorough understanding of performance-related choices and optimizations.
- Passion for writing clean, well-maintainable code.
- Familiarity with database technology such as MySQL, PostgreSQL, MongoDB.
- Understanding and experience of using OOP concepts.
- Proficiency with Git (or another version control system as required).
- Database scaling and DB optimization knowledge.
- Database design and management, including staying current with the latest practices and versions.

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,600,000.00 per year
Ability to commute/relocate: Mumbai, Maharashtra: Reliably commute or plan to relocate before starting work (Required)
Application Question(s): Post selection, can you join immediately? Or what is your notice period?
Experience: Python: 3 years (Required)
Work Location: In person
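Since the posting names Python/FastAPI for the REST API work, here is a minimal sketch of the pattern. The routes, model, and in-memory store are invented for illustration.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory store standing in for the relational/NoSQL layer the posting mentions.
ITEMS: dict[int, dict] = {}

class Item(BaseModel):
    name: str
    price: float

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="item already exists")
    ITEMS[item_id] = item.model_dump()  # pydantic v2; use item.dict() on v1
    return {"id": item_id, **ITEMS[item_id]}

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return {"id": item_id, **ITEMS[item_id]}
```

Run with `uvicorn app:app --reload`; swapping the dict for a SQLAlchemy session is the usual step toward the ORM requirement listed above.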
Posted 4 weeks ago
3.0 years
20 - 28 Lacs
Hyderabad, Telangana, India
On-site
Role & Responsibilities
- Operate and improve platform reliability for cloud-native services: set SLIs/SLOs, define error budgets, and drive uptime and performance improvements.
- Design and maintain Infrastructure-as-Code and automated CI/CD pipelines (Terraform/CloudFormation, GitHub Actions/Jenkins) to ship safely and quickly.
- Build observability and alerting: instrument services with metrics, logs, and traces (Prometheus, Grafana, ELK/EFK, Jaeger) and manage alerting runbooks.
- Lead incident response and postmortems: triage, mitigate, automate remediation, and implement long-term fixes to reduce repeat incidents.
- Automate operational tasks and scaling (autoscaling policies, capacity planning, cost optimizations) to keep systems efficient and resilient.
- Collaborate with product and engineering teams to design reliable architectures, provide operational guidance, and embed reliability early in the delivery lifecycle.

Skills & Qualifications

Must-Have
- 3+ years of experience in an SRE/DevOps/platform engineering or equivalent hands-on systems engineering role.
- Strong Linux administration skills and production troubleshooting experience.
- Proven experience with containerization and orchestration (Docker & Kubernetes).
- Hands-on with at least one major cloud provider (AWS, GCP or Azure) and IaC tools (Terraform or CloudFormation).
- Practical scripting or programming skills (Python, Go, or Bash) to automate operations and build reliability tooling.
- Experience implementing monitoring, alerting and distributed tracing (Prometheus/Grafana, ELK/EFK, Jaeger) and designing SLIs/SLOs.

Preferred
- Experience with service meshes (Istio/Linkerd), Helm/Kustomize and chaos engineering tools.
- Familiarity with security hardening, cost-optimization practices and multi-cloud deployments.
- Knowledge of platform observability automation, canary releases, and progressive delivery patterns.

Benefits & Culture Highlights
- Hybrid working model with flexible hours and a focus on work-life balance (India).
- Competitive compensation, health benefits and a learning & development allowance to upskill in cloud and SRE practices.
- Collaborative, blameless postmortem culture that rewards ownership, experimentation, and continuous improvement.

Keywords: Site Reliability Engineer, SRE, Kubernetes, AWS, Terraform, CI/CD, Prometheus, Grafana, Observability, Incident Response, SLIs/SLOs, Linux, Cloud Infrastructure.
Skills: AWS, EKS, Kubernetes, Python
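The error-budget arithmetic behind the SLI/SLO responsibility is simple enough to show in a few lines. The SLO and traffic numbers below are examples only.

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget left for the window. slo is e.g. 0.999 for 99.9%."""
    allowed_failures = (1 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0
    return 1 - failed_requests / allowed_failures

# Example: 99.9% availability SLO over a 30-day window (numbers are illustrative).
total = 10_000_000   # requests served
failed = 4_200       # requests that violated the SLI
remaining = error_budget_remaining(0.999, total, failed)
print(f"error budget remaining: {remaining:.1%}")  # 58.0% here
```

A common policy is to slow or freeze feature releases once the remaining budget approaches zero, which is what makes the error budget a shared contract between product and operations.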
Posted 4 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We're looking for an MLOps Engineer to build and operate reliable, secure, and scalable ML/LLM infrastructure, from data ingestion and training pipelines to model serving, monitoring, and continuous improvement. You'll partner with Data Science, Platform, and Security teams to ship models to production with strong SLAs, observability, and cost control.

Responsibilities
- Productionize models end-to-end: automate data ingestion, feature engineering, training, evaluation, packaging, and deployment (batch & real-time).
- Model serving & orchestration: design and operate low-latency model endpoints and batch jobs using Kubernetes, Docker, job schedulers, and serving frameworks.
- CI/CD for ML: implement reproducible pipelines (code, data, features, models) with unit/integration tests, approvals, and canary/blue-green rollouts.
- Monitoring & reliability: build drift, performance, and data-quality monitors; set alerts and on-call runbooks; drive incident response and postmortems.
- Observability: instrument tracing/logging/metrics (e.g., OpenTelemetry, Prometheus, Grafana) across data flows and model requests.
- Model registry & governance: manage lineage, versioning, approvals, and audit trails; enforce security (IAM, secrets management) and compliance controls.
- Cost & capacity management: optimize GPU/CPU usage, autoscaling, caching, batching, quantization, and instance right-sizing.
- LLM & RAG pipelines (nice if applicable): stand up vector databases, retrieval flows, prompt/version management, guardrails, and evaluations.
- Collaboration & enablement: create templates, docs, and self-service tooling for data scientists and app teams.

Required Qualifications
- 3-7 years in MLOps/Platform/DevOps/SRE roles supporting ML in production.
- Strong with Python and one of Go/TypeScript/Bash; proficiency in Docker and Kubernetes.
- Experience building ML pipelines with tools like Airflow/Prefect/Kedro/Flyte/Metaflow.
- CI/CD expertise (GitHub Actions/GitLab/Jenkins/Argo), including artifact/version management and automated testing.
- Data stack: object storage (S3/GCS/Azure Blob), data warehouses/lakes, message queues/streams (Kafka/PubSub), and caching layers.
- Monitoring/observability: Prometheus, Grafana, ELK/EFK, alerting (PagerDuty/VictorOps), tracing (OpenTelemetry/Jaeger).
- Security fundamentals: IAM, network policies, secrets (Vault/SSM), image signing, SBOMs.
- Solid understanding of the ML lifecycle: data versioning, feature stores, experiment tracking, evaluation, and rollback.
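One common building block for the drift monitors mentioned above is the Population Stability Index (PSI). This is a self-contained numpy sketch; the bin count, threshold, and synthetic distributions are conventional assumptions, not the employer's method.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature distribution and live traffic."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero / log(0) in empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 50_000)   # distribution seen at training time
live_feature = rng.normal(0.3, 1.1, 5_000)     # live traffic, slightly shifted
psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f}")  # common rule of thumb: > 0.2 suggests significant drift
```

In production this calculation would run per feature on a schedule, with the PSI exported as a Prometheus metric so drift alerts ride the same observability stack as everything else.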
Posted 4 weeks ago
10.0 - 14.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You are an Individual Contributor at Adobe, one of the world's most innovative software companies, transforming digital experiences for billions of users globally. Adobe empowers individuals, businesses, and organizations to unleash their creativity, collaboration, and productivity through its cutting-edge products. As part of the 30,000+ strong global team at Adobe, you have the opportunity to shape the future by creating high-quality, performant web solutions and features. Your role involves driving solutioning, guiding the team technically, and collaborating with product management to ensure technical feasibility and enhance user experience and performance.

To succeed in this role, you must possess a strong technical background, analytical skills, and hands-on experience in Java, JavaScript, and web applications. Your ability to adapt to new technologies, work effectively in diverse teams, and lead engineering projects is crucial. With over 10 years of software engineering experience and proficiency in technologies like Web Components, TypeScript, MVC frameworks, and AWS services, you are well equipped to define APIs, integrate them into web applications, and drive innovation.

At Adobe, a culture of collaboration, shared accomplishments, and continuous learning prevails. You are encouraged to stay updated on industry trends, make data-driven decisions, and foster a fun and impactful work environment. By leveraging your technical expertise, problem-solving skills, and proactive approach, you can contribute to Adobe's mission of revolutionizing digital experiences and creating personalized digital solutions that change the world.

Join Adobe, where every employee is empowered to make a difference, and where you can unleash your potential, grow your career, and be part of a global community dedicated to driving innovation and positive change. For a rewarding career experience and the opportunity to work in a supportive and inclusive environment, Adobe is the ideal place to thrive and make a meaningful impact.
Posted 1 month ago
6.0 years
6 - 9 Lacs
Bengaluru
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- B.Tech/M.Tech from a premier institute with hands-on design/development experience in building and operating highly available services, ensuring stability, security and scalability.
- 6+ years of software development experience, preferably in product companies.
- Proficiency in the latest technologies like Web Components, React/Vue/Bootstrap, Redux, Node.js, TypeScript, dynamic web applications, Redis, Memcached, Docker, Kafka, MySQL.
- Deep understanding of the MVC framework and concepts like HTML, DOM, CSS, REST, AJAX, responsive design, and Test-Driven Development.
- Experience with AWS, with knowledge of AWS services like Autoscaling, ELB, ElastiCache, SQS, SNS, RDS, S3, serverless architecture, Lambda, Gateway, and Amazon DynamoDB, etc., or a similar technology stack.
- Experience with Operations (AWS, Terraform, scalability, high availability & security) is a big plus.
- Able to define APIs and integrate them into web applications using XML, JSON, SOAP/REST APIs.
- Knowledge of software fundamentals, including design principles and analysis of algorithms, data structure design and implementation, documentation, and unit testing, and the acumen to apply them.
- Ability to work proactively and independently with minimal supervision.

Mandatory skill sets:
- React (minimum 4 years of hands-on experience)
- Node.js (minimum 4 years of hands-on experience)
- Strong working knowledge of data structures
- XML, JSON, SOAP/REST APIs

Preferred skill sets:
- Git
- CI/CD
- Docker
- Unit Testing

Years of experience required: 6-9 years
Education qualification: BE/B.Tech/MC

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, MBA (Master of Business Administration), Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Front End Programming, Full Stack Development
Optional Skills: Accepting Feedback, Active Listening, Algorithm Development, Alteryx (Automation Platform), Analytical Thinking, Analytic Research, Big Data, Business Data Analytics, Communication, Complex Data Analysis, Conducting Research, Creativity, Customer Analysis, Customer Needs Analysis, Dashboard Creation, Data Analysis, Data Analysis Software, Data Collection, Data-Driven Insights, Data Integration, Data Integrity, Data Mining, Data Modeling, Data Pipeline (+38 more)
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, and hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Utilize the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formatting (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like the AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Capability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Autoscaling, Load Balancers, Route 53, and IAM
- Skills in scripting with Linux Shell/Python/Windows PowerShell or using Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaborative tools like Jira and Confluence
- Knowledge of in-place deployment strategies, including blue-green or canary deployment
- Demonstrated experience in ELK (Elasticsearch, Logstash, Kibana) stack development
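The API Gateway -> Lambda -> DynamoDB pattern named in this posting is easy to sketch. The posting targets Java, but the same handler is shown here in Python for brevity; the table and field names are invented placeholders.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # table name is an illustrative placeholder

def lambda_handler(event, context):
    """API Gateway (proxy integration) -> Lambda -> DynamoDB write."""
    body = json.loads(event.get("body") or "{}")
    if "orderId" not in body:
        return {"statusCode": 400, "body": json.dumps({"error": "orderId is required"})}
    table.put_item(Item={"orderId": body["orderId"], "status": body.get("status", "NEW")})
    return {"statusCode": 201, "body": json.dumps({"orderId": body["orderId"]})}
```

The equivalent Java handler implements RequestHandler from the AWS Lambda Java core library and uses the DynamoDB client from the AWS SDK for Java, as the responsibilities above describe.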
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, and hands-on experience implementing CI/CD pipelines.

Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Utilize the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while addressing technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formatting (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit test cases with JUnit or equivalent testing frameworks
- Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools like the AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, Amazon ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Capability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications employing microservices principles

Nice to have
- Proficiency with AWS core services such as Autoscaling, Load Balancers, Route 53, and IAM
- Skills in scripting with Linux Shell/Python/Windows PowerShell or using Ansible/Chef/Puppet
- Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaborative tools like Jira and Confluence
- Knowledge of in-place deployment strategies, including blue-green or canary deployment
- Demonstrated experience in ELK (Elasticsearch, Logstash, Kibana) stack development
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in Cloud Native Development, AWS, Microservices architecture, Java/J2EE, and hands-on experience in implementing CI/CD pipelines. Responsibilities Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables Design, develop, and implement REST APIs using AWS Lambda/APIGateway, JBoss, or Spring Boot Utilize AWS Java SDK to interact with various AWS services effectively Drive deployment automation through AWS Java CDK, CloudFormation, or Terraform Architect containerized applications and manage orchestrations via Kubernetes on AWS EKS or AWS ECS Apply advanced microservices concepts and adhere to best practices during development Build, test, and debug code while addressing technical setbacks effectively Expose application functionalities via APIs using Lambda and Spring Boot Manage data formatting (JSON, YAML) and handle diverse data types (String, Numbers, Arrays) Implement robust unit test cases with JUnit or equivalent testing frameworks Oversee source code management through platforms like GitLab, GitHub, or Bitbucket Ensure efficient application builds using Maven or Gradle Coordinate development requirements, schedules, and other dependencies with multiple stakeholders Requirements 5 to 12 years of experience in Java development and AWS services Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS Strong understanding of AWS Core services such as EC2, VPC, RDS, EBS, and EFS Competency in deployment tools like AWS CDK, Terraform, or CloudFormation Knowledge of NoSQL databases, storage solutions, AWS Elastic Cache, and DynamoDB Understanding of AWS Orchestration tools for automation and data processing Capability to handle production workloads, automate tasks, and manage logs effectively Experience in writing scalable applications employing microservices principles Nice to have Proficiency with AWS Core Services such as Autoscaling, Load Balancers, Route 53, and IAM Skills in scripting with Linux/Shell/Python/Windows PowerShell or using Ansible/Chef/Puppet Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI Familiarity with collaborative tools like Jira and Confluence Knowledge of in-place deployment strategies, including Blue-Green or Canary Deployment Showcase of experience in ELK (Elasticsearch, Logstash, Kibana) stack development
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Maharashtra, India
On-site
You will be responsible for developing consumer-facing web/app products using Node.js and React.js. Your primary focus will be on databases such as MongoDB (expert level) and PostgreSQL, along with queue systems like Kafka and job schedulers like Bull. You should have expertise in infrastructure technologies like Docker and Kubernetes (K8s), experience working with large datasets, and expertise in logging, tracing, and application monitoring.

You must have hands-on experience in JavaScript and Node.js, with knowledge of frameworks like Express.js, Koa.js, or Socket.io. Proficiency in async programming using callbacks, promises, and async/await is essential (see the sketch below). You should also be familiar with frontend technologies including HTML, CSS, and AJAX, along with databases like MongoDB, Redis, and MySQL. A good understanding of data structures, algorithms, and operating systems is required. Experience with AWS services such as EC2, ELB, Auto Scaling, CloudFront, and S3 is preferred.

While experience with the frontend stack and Vue.js would be beneficial, you will have the opportunity to learn new tools with guidance and resources. The role requires full-time commitment and offers permanent employment. Your experience with HTML, MongoDB, MySQL, AWS, and Express.js should be at least 4 years. The work location is in person.
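Callbacks, promises, and async/await are Node.js idioms, but the pattern the posting is testing for translates directly. A hedged sketch in Python's asyncio (used here only for illustration; fetch_user and fetch_orders are hypothetical stand-ins for real I/O):

```python
import asyncio

# Hypothetical helpers standing in for real database or API calls.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # simulate network latency
    return {"id": user_id, "name": "demo"}

async def fetch_orders(user_id: int) -> list:
    await asyncio.sleep(0.1)
    return [{"order_id": 1, "user_id": user_id}]

async def main() -> None:
    # A sequential await reads like synchronous code (the async/await style),
    # while gather() runs coroutines concurrently, mirroring Promise.all.
    user = await fetch_user(42)
    profile, orders = await asyncio.gather(fetch_user(7), fetch_orders(42))
    print(user, profile, orders)

if __name__ == "__main__":
    asyncio.run(main())
```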
Posted 1 month ago
7.0 years
0 Lacs
Gurugram, Haryana, India
Remote
We are seeking a talented individual to join our AMSI team at MMC Corporate. This role will be based in Noida/Gurugram/Pune/Mumbai. This is a hybrid role that requires working at least three days a week in the office.

Senior Principal Engineer - IT Systems Engineering - FinOps

What can you expect?
We are seeking a cloud cost management professional responsible for monitoring, optimizing, and forecasting cloud spend across multiple cloud platforms (AWS, Azure, OCI, etc.). The ideal candidate will work closely with engineering, the Cloud Business Office, and procurement teams to ensure efficient cloud usage and cost transparency, while driving accountability and financial governance within the organization.

What is in it for you?
- Monitor and analyse cloud usage and spending patterns across platforms (AWS, Azure, OCI, etc.)
- Provide detailed reporting, dashboards, and insights to technical and business stakeholders
- Partner with finance to support cloud budgeting, forecasting, and variance analysis
- Implement and manage cost allocation tags, chargeback/showback models, and budget alerts
- Identify and drive opportunities for cost optimization, including rightsizing, autoscaling, and eliminating idle resources
- Collaborate with DevOps/engineering teams to implement cost-aware architectural practices
- Stay up to date with evolving pricing models and cost management tools from cloud vendors
- Support FinOps culture by promoting cloud cost accountability and best practices across teams
- Manage systems as a portfolio; for example, tracking and managing the work plan for the systems to ensure key deliverables are met, and communicating with stakeholders

What you need to have:
- Bachelor's degree in Computer Science, IT, or a related field
- 5–7 years of experience in cloud operations, cloud financial management, or FinOps
- Strong understanding of public cloud services and pricing models (AWS, Azure, OCI)
- Proficiency in cloud cost tools (e.g., AWS Cost Explorer, Azure Cost Management, Finout, CloudHealth, CloudCheckr, or similar)
- Experience with reporting and analytics tools (Excel, Power BI, Tableau, etc.)
- Excellent analytical, communication, and cross-functional collaboration skills

What makes you stand out?
- FinOps Certified Practitioner or a relevant cloud certification
- Experience with scripting (Python, PowerShell) or IaC tools (Terraform, CloudFormation) for automating cost monitoring (see the sketch below)
- Familiarity with SaaS billing models and multi-cloud cost comparison
- Prior experience building and running cost governance programs
- Detail-oriented with strong problem-solving abilities
- Ability to work independently and influence without authority
- Proactive communicator and a strong advocate for cost efficiency

Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.

R_318759
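The stand-out criteria mention scripting cost-monitoring automation. A minimal sketch using boto3 against the AWS Cost Explorer API, assuming credentials are already configured; the grouping and printing choices are illustrative only:

```python
from datetime import date
import boto3

# Cost Explorer is served from us-east-1 regardless of where workloads run.
ce = boto3.client("ce", region_name="us-east-1")

def month_to_date_spend_by_service() -> dict:
    """Return month-to-date unblended cost per AWS service."""
    today = date.today()
    start = today.replace(day=1).isoformat()
    # Note: naive on the 1st of the month, when start == end.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    return {
        group["Keys"][0]: float(group["Metrics"]["UnblendedCost"]["Amount"])
        for group in resp["ResultsByTime"][0]["Groups"]
    }

if __name__ == "__main__":
    for service, cost in sorted(month_to_date_spend_by_service().items(),
                                key=lambda kv: -kv[1]):
        print(f"{service}: ${cost:,.2f}")
```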
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an experienced AWS Cloud Architect at our IT services company based in Chennai, Tamil Nadu, India, you will be responsible for designing and implementing AWS architectures for complex, enterprise-level applications. You will be involved in the deployment, automation, management, and maintenance of AWS cloud-based production systems to ensure the smooth operation of applications. Your role will also include configuring and fine-tuning cloud infrastructure systems, deploying and configuring AWS services according to best practices, and monitoring application performance to optimize AWS services for improved efficiency and reduced costs.

In this position, you will implement auto-scaling and load-balancing mechanisms to handle varying workloads (see the sketch below) and ensure the availability, performance, security, and scalability of AWS production systems. You will manage the creation, release, and configuration of production systems, as well as build and set up new development tools and infrastructure. Troubleshooting system issues and resolving problems across various application domains and platforms will also be part of your responsibilities.

Additionally, you will maintain reports and logs for AWS infrastructure, implement and maintain security policies using AWS security tools and best practices, and monitor AWS infrastructure for security vulnerabilities to address them promptly. Ensuring data integrity and privacy by implementing encryption and access controls will be crucial, along with developing Terraform scripts for automating infrastructure provisioning and setting up automated CI/CD pipelines using Kubernetes, Helm, Docker, and CircleCI.

Your role will involve providing backup and long-term storage solutions for the infrastructure, and setting up monitoring and log-aggregation dashboards and alerts for AWS infrastructure. You will work towards maintaining application reliability and uptime throughout the application lifecycle, identifying technical problems, and developing software updates and fixes. Leveraging best practices and cloud security solutions, you will provision critical system security and provide recommendations for architecture and process improvements. Moreover, you will define and deploy systems for metrics, logging, and monitoring on the AWS platform; design, maintain, and manage tools for automating different operational processes; and collaborate with a team to ensure the smooth operation of AWS cloud solutions.

Your qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience), along with relevant AWS certifications such as AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Proficiency in Infrastructure as Code tools, cloud security principles, AWS services, scripting, and monitoring and logging tools for AWS, as well as problem-solving abilities and excellent communication skills, will be essential for success in this role.
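The auto-scaling responsibility can be made concrete with a target-tracking policy. A minimal boto3 sketch, assuming an existing Auto Scaling group; the group name and target value are placeholders:

```python
import boto3

# Placeholder name; assumes the Auto Scaling group already exists and
# AWS credentials are configured in the environment.
ASG_NAME = "web-tier-asg"

autoscaling = boto3.client("autoscaling")

# Target tracking adds or removes instances automatically to hold average
# CPU near the target, so capacity follows the workload.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
print(f"Attached target-tracking policy to {ASG_NAME}")
```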
Posted 1 month ago
30.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Employment Type: Full-Time
Compensation: Competitive; based on experience and capability

Company Overview
Galaxy Office Automation Pvt. Ltd. is a trusted enterprise technology partner with 30+ years of experience in delivering secure, large-scale IT systems across India's top enterprises and government institutions. As we scale into the next frontier of AI-driven intelligence, we're building a new stack of AI-native services, products, agents, swarms, and conversational assistants that bring powerful, modular intelligence to real-world enterprise workflows, while evolving towards an AI-Factory model to enable scalable, reusable, and composable intelligence across use cases. Our goal is not buzzwords; it's capability. We focus on deploying reliable, composable, and production-grade AI systems that can plug directly into existing ecosystems and deliver immediate business value.

Role Summary
We're looking for a Data Scientist / AI Engineer / Machine Learning Engineer who can own systems end to end and contribute from the ground up. In this role, you'll architect and deploy real-world AI solutions using LLMs, predictive modeling, multimodal intelligence, and FastAPI-based microservices. You'll work across core platform modules and client-specific projects, bridging the gap between cutting-edge AI and enterprise-grade deployment.

What You'll Work On
- Multi-agent collaboration, reasoning, memory, and human alignment: build intelligent agents and swarms with multi-agent collaboration, reasoning, planning, memory, and alignment with human feedback, using protocols such as MCP and A2A and frameworks such as LangChain and CrewAI.
- Retrieval-Augmented Generation (RAG): develop hybrid pipelines using vector databases (FAISS, Qdrant, Pinecone) and transformer-based generation models (see the retrieval sketch below).
- Multimodal AI (language + vision + audio + video): build systems that process and combine intelligence across language, vision, audio, and video.
- Predictive modeling: build predictive pipelines using deep learning architectures (e.g., LSTMs, CNNs, RNNs), transformer-based models (e.g., OpenAI, Llama, Qwen, Mistral), and ensemble methods (e.g., XGBoost, LightGBM, random forests), emphasizing modeling depth, generalization, interpretability, maintainability, and dynamic improvement.
- Conversational assistants: develop conversational assistants with advanced capabilities such as model-based recommendations, user-query-based what-if scenario analyses, and continuous improvement based on memory and human feedback.
- FastAPI-based backend APIs: wrap agents and models into versioned, secure, and production-grade FastAPI microservices.
- Model lifecycle management: track, evaluate, and manage model lifecycles using MLflow, DVC, and internal governance tools.
- Data engineering and integration: ingest and transform data from SQL and NoSQL (e.g., MongoDB) sources, APIs, and distributed pipelines.

Required Skills and Qualifications
- 1–4 years of experience in AI/ML product development or applied data science
- Strong Python skills: Pandas, NumPy, scikit-learn, Transformers, PyTorch/TensorFlow
- Hands-on experience with LLMs (OpenAI, Mistral, Claude, Llama, DeepSeek, Gemini, etc.), LangChain, prompt engineering, and integration with real-world use cases
- Proven experience building agentic systems, including reasoning agents and multi-agent collaboration
- Deep expertise in predictive modeling using transformers, deep learning, and ensemble methods
- Familiarity with image, audio, and video model development
- Familiarity with model monitoring and retraining
- Exposure to both SQL and NoSQL databases
- Experience building and deploying Python-based backend APIs using FastAPI
- Proficiency with Git workflows, CI/CD, and modular code development
- Strong communication, documentation, and architectural thinking

Desirable Skills
- Experience in reinforcement learning (including RLHF) will be preferred
- Prior exposure to supervised fine-tuning (SFT) and parameter-efficient tuning approaches such as LoRA and QLoRA
- Familiarity with workflow orchestration tools like Airflow, Celery, Prefect, or Dagster
- Experience building autoscaling and serverless architectures using AWS Lambda, ECS, or EKS
- Expertise in backend engineering with a focus on microservice optimization
- Experience with containerization and orchestration using Docker and Kubernetes
- Hands-on knowledge of designing conversational UIs or chatbot pipelines
- Prior experience working with big data tools like Spark, Hive, or Hadoop
- Experience deploying AI systems in BFSI, healthcare, e-commerce, or government environments

What You'll Gain
- End-to-end ownership from design to deployment
- Exposure to an agent-first ecosystem
- Product and custom work across client and platform needs
- A leadership track into applied AI architecture
- Live enterprise impact across sectors

If you think in embeddings, talk in APIs, and believe human-aligned, reasoning-first multi-agent systems are the future, join Galaxy Office Automation and help build them.
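The RAG bullet pairs a vector index with a generator. A minimal retrieval sketch with FAISS and sentence-transformers (both named above); the corpus and model choice are illustrative, and the generation step is left as a stub:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative corpus; in practice these would be chunked enterprise documents.
docs = [
    "Terraform provisions AWS infrastructure declaratively.",
    "FAISS performs fast similarity search over dense vectors.",
    "Kubernetes schedules containers across a cluster.",
]

# Model choice is an assumption; any sentence-embedding model works here.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs, normalize_embeddings=True)

# Inner product over normalized vectors equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [docs[i] for i in ids[0]]

# The retrieved passages would then be placed into the LLM prompt;
# the generation call itself is omitted.
print(retrieve("How do I search vectors quickly?"))
```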
Posted 1 month ago
8.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Role Description
This is a full-time on-site role for a Back End Developer, located in Jaipur. We're hiring a Backend Engineer to build and operate the services that power our ML products. You'll design event-driven microservices, real-time data pipelines, and low-latency model-serving APIs. You'll own reliability, performance, and observability, and you'll partner closely with ML and data teams to ship measurable impact. Our stack includes Kubernetes, Kafka, Postgres/Redis, and model serving with Triton/TorchServe/Ray Serve. You care about clean interfaces, robust testing, and secure, cost-efficient systems.

Core Responsibilities
- Service architecture: design REST/gRPC microservices, event-driven workflows, and internal SDKs; define contracts, schemas, versioning, and backward compatibility.
- Model serving and ML infra: package and deploy models (batch and online) via Triton/TorchServe/Ray Serve/FastAPI (see the sketch below); implement feature retrieval, feature stores, and online/offline parity; add A/B, canary, shadow, traffic mirroring, and model rollback hooks.
- Data streaming and queues: build producers/consumers with Kafka/Pulsar/Kinesis or SQS/RabbitMQ; handle exactly-once/at-least-once semantics, retries, DLQs, and idempotency keys.
- Batch and streaming pipelines: orchestrate workflows with Airflow/Argo/Prefect; handle state management, watermarking, late data, and compaction.
- Performance and reliability: profiling, caching (Redis), rate limiting, circuit breakers; SLOs/SLIs (latency, p95/p99, availability), autoscaling, capacity planning.
- Storage and schemas: OLTP (Postgres/MySQL), OLAP (BigQuery/Redshift/Snowflake), object stores; schema evolution, migrations, CDC, time-series storage.
- Security and governance: authN/Z (OIDC/JWT), secrets management, IAM least privilege; PII handling, audit logs, data retention, encryption in transit and at rest.
- Observability: metrics, tracing, and logs (Prometheus/OpenTelemetry/Grafana/ELK); playbooks, runbooks, SRE handoff, on-call rotation readiness.
- DevEx and CI/CD: Docker/K8s, Helm, GitHub Actions/GitLab CI, artifact registries; contract tests, e2e tests, load tests, chaos experiments.
- Collaboration: partner with ML to define interfaces (input validation, schemas, SLAs); work with product to refine requirements and with data engineering on pipelines.

Stack
- Languages: Python, Go, or Node.js/TypeScript (pick 1–2 primary)
- APIs: FastAPI/gRPC/Express + OpenAPI/Buf
- Streaming/queues: Kafka/Pulsar/Kinesis/SQS
- Serving: NVIDIA Triton, TorchServe, Ray Serve, BentoML
- Orchestration: Airflow/Argo/Prefect
- Stores: Postgres, Redis, S3/GCS, feature store (Feast)
- Infra: Docker, Kubernetes, Helm, Terraform
- Observability: Prometheus, Grafana, OpenTelemetry, Loki/ELK

Experience
- 3–8+ years of backend experience designing high-throughput services
- Strong with event-driven systems, concurrency, and data modeling
- Hands-on with one major queue/stream platform and one SQL database
- Experience packaging and running ML models in production (GPU/CPU)
- Solid testing discipline (unit, contract, property, and load testing)
- Vector DBs (FAISS/pgvector/Weaviate), retrieval patterns for RAG
- Cost optimization, GPU scheduling, CUDA basics, Triton kernels
- Privacy/regulated domains (HIPAA/GxP/PCI), fine-grained access control
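The serving bullets describe wrapping models as versioned FastAPI microservices. A minimal sketch with pydantic validation at the edge; the model call is stubbed and all names are illustrative:

```python
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="model-server", version="1.0.0")

class PredictRequest(BaseModel):
    # Pydantic v2 collection constraints reject malformed inputs before
    # they reach the model; the feature count of 4 is illustrative.
    features: list[float] = Field(..., min_length=4, max_length=4)

class PredictResponse(BaseModel):
    score: float
    version: str

def run_model(features: list[float]) -> float:
    # Stub standing in for a real Triton/TorchServe/Ray Serve call.
    return sum(features) / len(features)

@app.post("/v1/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # A versioned path (/v1/...) keeps older clients working across releases.
    return PredictResponse(score=run_model(req.features), version="1.0.0")

# Run locally with: uvicorn main:app --port 8000
```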
Posted 1 month ago
4.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
DevOps Engineer (Azure | Jenkins | Cloud & Security)
Gurugram, India
Experience: 4-5 years
Employment Type: Full-time
Cloud platform: Azure only (no AWS or GCP)

About the Role
VyntraPro Innovations Pvt. Ltd. is seeking a skilled DevOps Engineer with 4-5 years of experience to join our growing technology team in Gurugram. You will be responsible for automating, managing, and securing the entire infrastructure and deployment lifecycle. This is a hands-on role for someone who thrives in an agile environment and is deeply experienced with Azure, Jenkins, and end-to-end DevOps practices.

Key Responsibilities
- Design, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure
- Build and maintain CI/CD pipelines using Jenkins
- Automate infrastructure provisioning using Terraform, ARM templates, or Bicep
- Implement and enforce cloud security best practices including RBAC, Key Vault, and network policies (see the sketch below)
- Monitor application and infrastructure health with tools like Azure Monitor, Grafana, or Prometheus
- Collaborate with development and QA teams for seamless build-test-deploy workflows
- Manage containerization using Docker and orchestrate deployments with Kubernetes (AKS)
- Oversee code repositories, branching workflows, and release cycles
- Lead incident response, conduct root cause analysis, and execute disaster recovery plans
- Perform regular security assessments and cloud compliance audits

Key Skills & Qualifications
Must-Have:
- 4-5 years of proven DevOps experience
- In-depth knowledge of Azure cloud services (App Services, VMs, Key Vault, AKS, networking)
- Expertise in Jenkins, including pipeline scripting and automation
- Proficiency with Infrastructure as Code tools (Terraform, Bicep, ARM)
- Strong scripting skills in Bash, PowerShell, or Python
- Deep understanding of CI/CD pipelines, release strategies, and cloud automation
- Hands-on experience with security configuration, firewall rules, and RBAC
- Familiarity with monitoring/logging tools like Azure Monitor, ELK, or Datadog

Nice to Have:
- Exposure to GitHub Actions or Azure DevOps Pipelines
- Working knowledge of Helm, Kubernetes, and service mesh architecture
- Experience in cost monitoring, autoscaling, and cloud optimization
- Awareness of OWASP, DevSecOps, and compliance standards (ISO, SOC 2)

Soft Skills
- Strong analytical and debugging skills
- Comfortable in agile, fast-paced environments
- Excellent verbal and written communication
- Self-driven with a sense of ownership and responsibility
- Willingness to mentor junior engineers (added advantage)
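The Key Vault and RBAC requirements pair naturally. A minimal sketch of reading a secret with the Azure SDK for Python; the vault URL and secret name are placeholders, and DefaultAzureCredential resolves managed identity, environment variables, or an az CLI login in turn:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL; RBAC on the vault must grant the running identity
# a secrets-read role (e.g., "Key Vault Secrets User").
VAULT_URL = "https://example-vault.vault.azure.net"

credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)

# Code holds only a reference by name; the secret value never lands in
# source control or pipeline definitions.
db_password = client.get_secret("db-password").value
print("Fetched secret of length", len(db_password))
```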
Posted 1 month ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier (see the sketch below).
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services:
  - Compute: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for serverless and event-driven architectures.
  - Storage and content delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking and connectivity: design and manage secure network architectures with VPCs, load balancers, security groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services such as TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring and observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Database services: deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations.
  - Security and compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
- Apply GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production, working cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Experience: 5-10 years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Benefits
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly bonus
- Gratuity
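The SFTP-to-Glacier transfer can be done in a single call by setting the storage class at upload time. A minimal boto3 sketch; the bucket and paths are placeholders, and credentials are assumed to be configured:

```python
import boto3

# Placeholder names for illustration.
BUCKET = "sftp-archive-bucket"
LOCAL_FILE = "/srv/sftp/uploads/report.csv"
KEY = "archives/report.csv"

s3 = boto3.client("s3")

# The GLACIER storage class writes the object straight to cold storage,
# so no separate lifecycle transition is needed for this upload.
s3.upload_file(
    LOCAL_FILE,
    BUCKET,
    KEY,
    ExtraArgs={"StorageClass": "GLACIER"},
)
print(f"Archived {LOCAL_FILE} to s3://{BUCKET}/{KEY}")
```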
Posted 1 month ago