6.0 years
0 Lacs
delhi, india
Remote
About Us
HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprising agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 4 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million hostnames.

Our People
With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact
As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

What You'll Do
Own services end-to-end: Design and ship multi-tenant services (checkout, payments orchestration, subscriptions, settlement & reconciliation, invoicing, tax hooks).
Integrate payments at depth: Add PSPs and local methods (cards/3DS, UPI, wallets, BNPL) with resilient webhooks and strictly idempotent APIs.
Model money correctly: Implement an append-only double-entry ledger; use event sourcing/CQRS where it meaningfully improves correctness and scale.
Raise the quality bar: Ship production-grade code with contract, integration, property-based, and performance tests; instrument everything with OpenTelemetry.
Run it like you own it: Operate on GKE (GCP) with CI/CD, canaries, autoscaling, SLIs/SLOs, incident response, and on-call participation.
Collaborate & mentor: Partner closely with PM/Design and mentor SDE I/II engineers to multiply team impact.

What You'll Lead
Cross-service designs that connect orchestration → ledger → reconciliation with clear contracts and failure-mode analysis.
Reliability strategy: Define SLIs/SLOs, capacity models, and multi-region failover plans; drive DR and chaos drills.
Data lifecycle: Establish data retention/archival strategies compliant with regulatory requirements.
Engineering excellence: Level up design reviews, postmortems, and technical strategy across the Payments & Commerce domain.

Minimum Qualifications
3–6+ years building backend services, with strong TypeScript/Node.js and NestJS experience.
Practical expertise with MongoDB and/or Firestore (transactions, indexing), plus cache design with Redis and event-driven architecture (Kafka, GCP Pub/Sub).
Strong grasp of distributed systems fundamentals: idempotency, retries/backoff & jitter, message ordering, de-dup/outbox for "exactly-once-ish" processing, race conditions, concurrency control, and eventual consistency.
Hands-on with Kubernetes and at least one cloud (GCP preferred); familiarity with secrets/KMS and PCI basics.
Deep testing discipline: unit, integration, contract, and performance testing.
Schema design: Design and plan schemas with a focus on scalability and backward compatibility.
Strong problem-solving and debugging skills.

Nice to have
Frontend: Vue 3 + Vite.
Data & analytics: BigQuery, Dataflow/Beam, Snowflake.
Multi-region design: Experience with latency budgets and consistency trade-offs.
Observability at scale and hands-on chaos/DR practice.
Prior experience at a fintech, payment processor, or e-commerce company.

What Success Looks Like (First 6–12 Months)
You own 1–2 services from design → production with P99 latency targets met and zero Sev-1s caused by your changes.
Failure retry rates drop by >30% through improved idempotency and retry/backoff policy.
Codebases are measurably cleaner (coverage up, linting strict, typed contracts) and you actively mentor 2+ engineers.
You define or sharpen SLIs/SLOs, capacity models, a multi-region failover plan, and data retention/archival standards for your area.

The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government record-keeping, reporting, and other legal requirements. Providing this information is voluntary, and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.
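The qualifications above hinge on idempotency and an append-only double-entry ledger. As a rough, hypothetical illustration of those two ideas together (not HighLevel's implementation; all names are invented), a balanced, idempotent ledger posting in Python might look like this:

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class LedgerLine:
    account: str
    amount: Decimal  # positive = debit, negative = credit

class Ledger:
    """Append-only double-entry ledger with idempotent postings (illustrative only)."""
    def __init__(self):
        self._entries = []       # list of (idempotency_key, tuple[LedgerLine, ...])
        self._seen_keys = set()  # idempotency keys already applied

    def post(self, idempotency_key: str, lines: list[LedgerLine]) -> bool:
        # Replaying the same key is a no-op, so webhook retries stay safe.
        if idempotency_key in self._seen_keys:
            return False
        # Double-entry invariant: debits and credits must net to zero.
        if sum(line.amount for line in lines) != Decimal("0"):
            raise ValueError("unbalanced entry")
        self._entries.append((idempotency_key, tuple(lines)))  # append-only, never mutated
        self._seen_keys.add(idempotency_key)
        return True

ledger = Ledger()
ledger.post("charge_abc123", [
    LedgerLine("customer_receivable", Decimal("-499.00")),
    LedgerLine("merchant_payable", Decimal("499.00")),
])
```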
Posted 13 hours ago
6.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Role summary: Be an integral part of our enterprise-scale migration from Bitbucket to GitHub Enterprise Cloud (GHEC), design and roll out GitHub Actions-based CI/CD, and establish secure, compliant, and observable build/release pipelines for a 300-developer organization in the healthcare domain. You will be the technical owner for source control strategy, build infrastructure, and release automation, with an emphasis on reliability, speed, and HIPAA/SOC 2 compliance.

What you'll do:
Plan & execute the migration: Inventory repos, pipelines, users, secrets, and integrations; define the cutover strategy and rollback plans. Migrate code, issues, and CI from Bitbucket to GHEC with minimal downtime; script repeatable migration runbooks. Normalize repository standards (branch naming, default branches, protection rules, CODEOWNERS, templates).
Design CI/CD on GitHub Actions: Architect multistage pipelines (build → test → security scans → artifact publish → deploy). Implement reusable workflows, composite actions, and organization-level workflow templates. Set up self-hosted runners and autoscaling runner fleets (containerized/ephemeral) for Linux/Windows/macOS as needed. Establish secrets management via OIDC to cloud providers; remove long-lived credentials.
Security & compliance for healthcare: Enable GitHub Advanced Security (code scanning, Dependabot, secret scanning). Enforce SSO/SAML, branch protection, required checks, signed commits, and PR review policies. Implement policy-as-code (e.g., Open Policy Agent, repo rulesets), change-management controls, and audit-ready logs. Ensure pipelines and artifacts are aligned with HIPAA, SOC 2, GDPR, and least-privilege principles; avoid PHI in logs.
Build & release engineering: Standardize build images, caching, and artifact storage; speed up CI with dependency caches and test parallelization. Create environment promotion flows (dev/stage/prod) with approvals and progressive delivery (canary/blue-green). Integrate QA automation, performance tests, and SAST/DAST into pipelines.
Observability & reliability: Define and track DORA metrics (lead time, deployment frequency, MTTR, change failure rate). Add telemetry for pipeline duration, queue times, and flake rates; publish dashboards and SLAs for CI.
Change management & enablement: Drive communications, training, and documentation; run office hours and migration pilots. Partner with security, compliance, SRE, and product teams.

Required Qualifications:
6+ years in Build/Release/DevOps/Platform Engineering; 2+ years leading large SCM/CI migrations.
Proven experience migrating code from Bitbucket to GitHub Enterprise Cloud.
Expert with Git, GitHub Enterprise Cloud, and GitHub Actions at organization scale.
Proven experience running self-hosted/ephemeral runners and tuning CI performance.
Strong CI/CD for polyglot stacks (Java/Kotlin, .NET, Node, Python, mobile).
Hands-on with artifact registries (GitHub Packages/Artifactory), IaC (Terraform), containers (Docker), and one major cloud (AWS/Azure/GCP), preferably Azure.
Security background: branch protection, CODEOWNERS, signed artifacts, SBOMs, dependency governance, secrets handling (OIDC).
Healthcare or other regulated-industry experience; understanding of HIPAA controls and audit requirements.
Excellent scripting (Bash/PowerShell) and one high-level language (Python/Go).
Bitbucket-to-GitHub migrations using enterprise importers; Jira/GitHub Projects integrations.
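The migration plan above starts with inventorying repos and pipelines. A minimal sketch of that inventory step, assuming a Bitbucket Cloud workspace and an app password (workspace and credential names are placeholders):

```python
import requests

def inventory_repos(workspace: str, username: str, app_password: str) -> list[dict]:
    """Walk the paginated Bitbucket Cloud API and collect basic repo facts for a cutover plan."""
    url = f"https://api.bitbucket.org/2.0/repositories/{workspace}"
    repos = []
    while url:
        resp = requests.get(url, auth=(username, app_password), timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for repo in page.get("values", []):
            repos.append({
                "slug": repo["slug"],
                "is_private": repo["is_private"],
                "size_bytes": repo.get("size"),
                "mainbranch": (repo.get("mainbranch") or {}).get("name"),
                "updated_on": repo["updated_on"],
            })
        url = page.get("next")  # None on the last page
    return repos

# repos = inventory_repos("my-workspace", "migration-bot", "app-password")
```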
Posted 18 hours ago
8.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Job Description – Senior DevOps Engineer
Location: Hybrid (2–3 days per week in office) (Bangalore, Hyderabad, Noida, Chennai, Pune)
Work Hours: 11:00 AM – 8:30 PM IST
Experience: 6–8 years
Employment Type: Full-Time

About the Role
We are looking for a highly skilled Senior DevOps Engineer with 6–8 years of experience in cloud infrastructure, automation, and CI/CD practices. The role requires strong expertise in AWS, Terraform, container platforms, and large-scale system operations. You will collaborate closely with developers and QA teams to deliver secure, scalable, and reliable infrastructure.

Key Responsibilities
Develop automation tooling and scripts using C# or Python for performance tuning and troubleshooting.
Design and manage AWS multi-account environments with cross-account IAM roles, VPC networking, and S3 gateway endpoints.
Implement secure cloud patterns using KMS encryption and least-privilege access.
Manage Infrastructure as Code using Terraform: reusable modules, workspaces, and pipelines with proper approvals.
Deploy and manage .NET workloads on ECS, including task definitions, autoscaling, and ECR image management.
Configure and optimize S3 buckets for large-scale migrations (policies, lifecycle rules, replication, flat GUID-based structure).
Build and maintain CI/CD pipelines using GitHub Actions or Azure DevOps for infrastructure and applications.
Set up monitoring and observability with CloudWatch and SNS/SQS, and integrate with QE for system reliability.
Collaborate with cross-functional teams, provide documentation, and ensure smooth operational handovers.

Must-Have Skills & Qualifications
6–8 years of hands-on DevOps experience.
Strong coding ability in C# or Python.
Deep expertise in AWS (IAM, VPC, S3, ECS, KMS, multi-account setup).
Proven skills in Terraform (Cloud/Enterprise, private module registries).
Solid experience with ECS & ECR container platforms.
Strong understanding of CI/CD (GitHub Actions / Azure DevOps).
Proficiency in CloudWatch monitoring, alerting, and observability tools.
Excellent communication and collaboration skills.
Must be ready for the BGV process.
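For the multi-account work described above, most automation begins by assuming a cross-account IAM role and working with short-lived credentials. A hedged boto3 sketch (role ARN, session name, and region are placeholders):

```python
import boto3

def client_in_account(role_arn: str, service: str, region: str = "ap-south-1"):
    """Assume a cross-account IAM role and return a boto3 client scoped to that account."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="devops-automation",
        DurationSeconds=900,  # short-lived credentials, least-privilege friendly
    )["Credentials"]
    return boto3.client(
        service,
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Example: list S3 buckets in a workload account from a central tooling account.
# s3 = client_in_account("arn:aws:iam::123456789012:role/DevOpsAutomation", "s3")
# print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```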
Posted 1 day ago
5.0 years
0 Lacs
mumbai, maharashtra, india
Remote
🚨 Now Hiring: Site Reliability / DevOps Engineer (Night Shifts – US Client Support) 🚨
At Nebula Tech Solutions, we're scaling our global DevOps team with a focus on night shifts only to support our US-based enterprise clients. This role is ideal for engineers who thrive in challenging environments and want to own reliability, automation, and performance at scale. You'll work directly with distributed teams, ensuring systems run smoothly around the clock. 🌎🌙

What You'll Do
✅ Build & optimize CI/CD pipelines (Harness, FluxCD, Bitbucket, Jenkins)
✅ Improve Kubernetes (AWS EKS) scalability, reliability & security
✅ Strengthen observability with Prometheus, OpenTelemetry, Grafana
✅ Automate infrastructure with Terraform, Helm, Flux
✅ Drive incident response, postmortems & reliability initiatives

What We're Looking For
🔹 5+ years in DevOps / SRE / Platform Engineering
🔹 Strong Kubernetes + AWS expertise
🔹 CI/CD pipeline mastery & automation skills (Python, Go, Bash)
🔹 Knowledge of observability, networking, autoscaling, and service meshes (Istio)

Bonus Points 💡
• Experience with .NET / Java / Node.js
• Database tuning (MSSQL, MongoDB)
• RabbitMQ / Kafka troubleshooting

📍 Remote (India) | Night Shift Only
👥 Client: US-based Enterprise (Global Scale)

If you're passionate about reliability, automation, and supporting mission-critical systems while working US hours, this is for you. 🚀
Posted 2 days ago
3.0 years
0 Lacs
gurugram, haryana, india
On-site
About Us
Edifyer is building the next generation of corporate learning for India. We are on a mission to help L&D teams create conversational, AI-powered, hyper-personalized coaching and role-plays that adapt in real time to each learner. As a founding AI Engineer, you will own the development of our core conversational AI engine and the no-code authoring studio that lets L&D teams design and deploy training in minutes. If you love shipping 0→1 products, sweating latency and quality, and seeing your work in the hands of thousands of learners, this one's for you.

Who We're Looking For
We're seeking a Founding AI Engineer who thrives in early-stage chaos, enjoys solving complex technical challenges, and is eager to lead the AI/ML strategy from the ground up. This is more than a job—it's an opportunity to join as a founding member, build and own key technical systems, and directly influence the trajectory of the company.

Core Responsibilities
Define and evolve the long-term technical vision, architecture, and system design for scalability
Drive the transition from MVP prototypes to enterprise-grade, production-ready systems
Collaborate closely with product and design leads to rapidly prototype and iterate on user-centric features
Fine-tune and serve Large Language Models (LLMs) using Triton and TensorRT
Integrate retrieval-augmented generation (RAG) pipelines and AI safety layers (filters, guardrails, etc.)
Design real-time pipelines (STT → LLM → TTS) using WebSockets or gRPC
Set up spot-GPU orchestration using Kubernetes (K8s) and Terraform-based Infrastructure as Code (IaC)
Build and manage CI/CD pipelines (blue-green deployments); set up monitoring dashboards for cost and latency
Implement OAuth2/JWT-based authorization, secure secret management, and rate-limiting
Lead security hardening (OWASP) and lay the groundwork for SOC 2 Type I compliance
Engage directly with early customers and partners to gather feedback, debug live issues, and validate technical direction.

What We're Offering
Opportunity to become a founding member + ESOP - Your contribution deserves the long-term upside in the company's success; this could be your life-changing opportunity for significant wealth creation.
Creative and Technical Freedom - You'll have a blank canvas to build, experiment, and ship without red tape.
High-Impact Mission - Your work will lay the foundation for the next generation of enterprise learning platforms from India that will transform the learning culture of organisations worldwide.
Next-gen Tech Stack - You get to work with cutting-edge LLMs, STT/TTS, ASR, scalable cloud infra, and more, and build a world-class system.

Ideal Candidate Profile
3+ years building distributed or real-time ML systems
Hands-on LLM or speech experience: Triton, TensorRT, Riva, Whisper, or similar - demonstrated < 1 s latency.
Deep Python (FastAPI) expertise; comfort with microservices, gRPC, WebSockets.
Cloud-native engineering: Docker, K8s, autoscaling, Terraform/Pulumi, Prometheus/Grafana.
Security mindset: OAuth2, TLS everywhere, moderation gates, GDPR awareness.
Entrepreneurial stamina: Persistence and an optimistic outlook even in the face of challenges or setbacks.

Apply
Send your resume/LinkedIn/GitHub to contact@edifyer.io with the subject line: "Founding AI Engineer — Your Name". A short note on why this mission resonates with you will go a long way.
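The responsibilities above include a real-time STT → LLM → TTS pipeline over WebSockets. A skeletal FastAPI sketch of one learner session follows; the three model calls are stubs standing in for whatever STT, LLM, and TTS services are actually used:

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def transcribe_chunk(audio: bytes) -> str:
    return "stub transcript"          # stand-in for a streaming STT model (e.g. Whisper/Riva)

async def generate_reply(text: str) -> str:
    return f"coach reply to: {text}"  # stand-in for the LLM served behind Triton/TensorRT

async def synthesize(text: str) -> bytes:
    return text.encode("utf-8")       # stand-in for a TTS engine returning audio bytes

@app.websocket("/coach")
async def coach_session(ws: WebSocket):
    """One learner session: audio chunks in, synthesized coach replies out, turn by turn."""
    await ws.accept()
    try:
        while True:
            audio = await ws.receive_bytes()                   # learner speech chunk
            transcript = await transcribe_chunk(audio)         # STT
            reply_text = await generate_reply(transcript)      # LLM role-play / coaching turn
            await ws.send_bytes(await synthesize(reply_text))  # TTS audio back to the client
    except WebSocketDisconnect:
        pass
```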
Posted 2 days ago
3.0 - 4.0 years
0 Lacs
pune, maharashtra, india
On-site
Role Description
Role Proficiency: Acts under minimal guidance of a DevOps Architect to set up and manage DevOps tools and pipelines.

Outcomes
Interpret the DevOps tool/feature/component design and develop/support it in accordance with specifications
Follow and contribute to existing SOPs to troubleshoot issues
Adapt existing DevOps solutions for new contexts
Code, debug, test, and document; communicate DevOps development stages and the status of develop/support issues
Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
Support users, onboarding them on existing tools with guidance from DevOps leads
Work with diverse teams using Agile methodologies
Facilitate saving measures through automation
Mentor A1 and A2 resources
Involved in code reviews for the team

Measures of Outcomes
Schedule adherence
Quality of the code
Defect injection at various stages of the lifecycle
SLAs related to level 1 and level 2 support
# of domain/product certifications obtained
Savings achieved through automation

Outputs Expected
Automated components: Deliver components that automate installation/configuration of software/tools on-premises and in the cloud; deliver components that automate parts of the build/deploy process for applications
Configured components: Configure a CI/CD pipeline that can be used by application development/support teams
Scripts: Develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks
Onboard users: Onboard and extend existing tools to new app dev/support teams
Mentoring: Mentor and provide guidance to peers
Stakeholder management: Guide the team in preparing status updates; keep management updated on status
Database: Data insertion, updates, deletes, and view creation

Skill Examples
Install, configure, and troubleshoot CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
Integrate with code/test quality analysis tools like SonarQube/Cobertura/Clover
Integrate build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
Scripting skills (Python, Linux Shell, Perl, Groovy, PowerShell)
Repository management/migration automation – Git/Bitbucket/GitHub/ClearCase
Build automation scripts – Maven/Ant
Artifact repository management – Nexus/Artifactory
Dashboard management & automation – ELK/Splunk
Configuration of cloud infrastructure (AWS/Azure/Google)
Migration of applications from on-premises to cloud infrastructure
Working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration)
Strong debugging skills in C#/.NET
Basic working knowledge of databases

Knowledge Examples
Knowledge of installation/config/build/deploy tools and DevOps processes
Knowledge of IaaS cloud providers (AWS/Azure/Google etc.) and their tool sets
Knowledge of the application development lifecycle
Knowledge of Quality Assurance processes
Knowledge of Quality Automation processes & tools
Knowledge of Agile methodologies
Knowledge of security policies and tools

Additional Comments
A DevOps engineer with 3–4 years of experience. Typical responsibilities of a DevOps Harness engineer include:
Harness Expertise: Possess a strong understanding of the Harness platform and its capabilities, including pipelines, deployments, configurations, and security features.
CI/CD Pipeline Management: Design, develop, and manage CI/CD pipelines using Harness. This involves automating tasks such as code building, testing, deployment, and configuration management.
Automation Playbook Creation: Create reusable automation scripts (playbooks) for deployments, configuration control, infrastructure provisioning, and other repetitive tasks.
Scalability and Standards: Ensure scalability of the CI/CD pipelines and adherence to organizational standards for deployment processes.
DevOps Technologies: Be familiar with various DevOps technologies such as Docker, Kubernetes, and Jenkins, especially in the context of cloud platforms.
Security: Integrate security best practices into the CI/CD pipelines (SecDevOps).
1. Candidate must have strong working experience in Kubernetes core concepts like autoscaling, RBAC, and pod placement, as well as advanced concepts like Karpenter, service mesh, etc.
2. Candidate must have strong working experience in AWS services like CloudWatch, EKS, ECS, DynamoDB, etc.
3. Candidate must have strong working experience in IaC, especially Terraform and Terragrunt, and should be able to create modules. Must have experience in infrastructure provisioning with AWS.
4. Candidate must have strong working experience in scripting languages like Shell, PowerShell, or Python.
5. Candidate must have strong working experience in CI/CD concepts like creating pipelines and automating deployments.
Skills: IaC, Jenkins, Ansible, AWS
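Item 1 above calls for hands-on Kubernetes autoscaling experience. As a small, read-only illustration using the official Kubernetes Python client (the namespace is hypothetical), reporting HPA state might look like this:

```python
from kubernetes import client, config

def report_hpa(namespace: str = "default") -> None:
    """Print current vs. desired replicas for every HPA in a namespace (read-only check)."""
    config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
    autoscaling = client.AutoscalingV1Api()
    for hpa in autoscaling.list_namespaced_horizontal_pod_autoscaler(namespace).items:
        print(
            f"{hpa.metadata.name}: "
            f"{hpa.status.current_replicas}/{hpa.status.desired_replicas} replicas "
            f"(bounds {hpa.spec.min_replicas}-{hpa.spec.max_replicas})"
        )

if __name__ == "__main__":
    report_hpa("payments")
```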
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
pune, maharashtra, india
On-site
About Citco
Since the 1940s Citco has provided specialist financial services to alternative investment funds, investors, multinationals and private clients worldwide. With over 6,000 employees in 45 countries we pioneer innovative solutions that meet our clients' evolving needs, and deliver exceptional service. Our continuous investment in learning means our people are among the best in the industry. And our corporate social responsibility programs provide meaningful and fulfilling work in the community. A career at Citco isn't just a job – it's an opportunity to excel in an environment that genuinely supports your personal and professional development.

About The Role
As a Cloud DevOps Engineer, you will work in a cross-functional team responsible for designing and implementing re-usable frameworks, APIs, CI/CD pipelines, infrastructure automation, and test automation, leveraging modern cloud-native designs/patterns and AWS services. You will be part of a culture of innovation where you'll use AWS/Azure services to help the team solve business challenges such as rapidly releasing products/services to the market or building an elastic, scalable, cost-optimized application. You will have the opportunity to shape and execute a strategy to build knowledge and broaden the use of public cloud in a dynamic professional environment.

Education, Experience and Skills
Bachelor's degree in Engineering, Computer Science, or equivalent.
3 to 5 years in IT or Software Engineering, including 1 to 2 years in a cloud environment (AWS preferred).
Minimum 2 years of DevOps experience.
Experience with AWS services: CloudFormation, Terraform, EC2, Fargate, ECS, Docker, Autoscaling, ELB, Jenkins, CodePipeline, CodeDeploy, CodeBuild, CodeCommit/Git, RDS, S3, CloudWatch, Lambda, IAM, Artifactory, ECR.
Highly proficient in Python.
Experience in setting up and troubleshooting AWS production environments.
Experience in implementing end-to-end CI/CD delivery pipelines.
Experience working in an agile environment.
Hands-on skills operating in Linux and Windows.
Proven knowledge of application architecture, networking, security, reliability and scalability concepts; software design principles and patterns.
Must be self-motivated and driven.

Job Duties In Brief
Implement end-to-end, highly scalable, available, and resilient cloud engineering solutions for infrastructure and application components using AWS.
Implement CI/CD pipelines for infrastructure and applications.
Write infrastructure automation scripts and templates and integrate them with DevOps tools.
Automate smoke tests and integrate test automation scripts such as unit tests, integration tests, and performance tests into the CI/CD process.
Troubleshoot AWS environments.
A challenging and rewarding role in an award-winning global business.
The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all duties, responsibilities and skills.

About You
The position reports to the Development Lead within the Hedge Fund Accounting IT (HFAIT) department. The HFAIT department manages the core accounting platform (Æxeo®) and data warehouse within Citco. The platform is used by clients globally and is the first true straight-through, proprietary front-to-back solution for hedge funds that uses a single database for all activities, including order capture, position and P&L reporting, and accounting.

What We Offer
Opportunities for personal and professional career development.
Great working environment, competitive salary and benefits, and opportunities for educational support. Be part of an industry leading global team, renowned for excellence. Confidentiality Assured. Citco welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process.
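Among the duties listed above is automating smoke tests into the CI/CD process. A minimal, generic sketch of such a check in Python (the endpoints and thresholds are placeholders, not Citco's):

```python
import sys
import requests

# Hypothetical post-deployment smoke checks wired into a CI/CD stage.
CHECKS = [
    ("api health", "https://api.example.internal/health", 200),
    ("auth ping",  "https://auth.example.internal/ping",  200),
]

def run_smoke_tests() -> int:
    failures = 0
    for name, url, expected in CHECKS:
        try:
            resp = requests.get(url, timeout=5)
            ok = resp.status_code == expected
        except requests.RequestException as exc:
            ok, resp = False, exc
        print(f"{'PASS' if ok else 'FAIL'} {name}: {resp}")
        failures += 0 if ok else 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if run_smoke_tests() else 0)  # non-zero exit fails the pipeline stage
```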
Posted 3 days ago
5.0 years
0 Lacs
india
On-site
Responsibilities: Lead & Mentor: Guide a team of developers, ensuring best practices in software development, clean architecture, and performance optimisation. Architect & Scale: Design and build highly scalable and reliable backend services using Node.js, MongoDB, and ElasticSearch, ensuring optimal indexing, sharding, and query performance. Frontend Development: Develop and optimise user interfaces using Vue.js (or React/Angular) for an exceptional customer experience. Event-Driven Systems: Design and implement real-time data processing pipelines using Kafka, RabbitMQ, or ActiveMQ. Optimise Performance: Implement autoscaling, database sharding, and indexing strategies to efficiently handle millions of transactions. Cross-Functional Collaboration: Work closely with Product Managers, Data Engineers, and DevOps teams to align on vision, execution, and business goals. Quality & Security: Implement secure, maintainable, and scalable codebases while adhering to industry best practices. Code Reviews & Standards: Drive high engineering standards, perform code reviews, and enforce best practices across the development team. Ownership & Delivery: Manage timelines, oversee deployments, and ensure smooth product releases with minimal downtime. Requirements: 5+ years of hands-on software development experience with at least 2+ years in a leadership role. Strong proficiency in Node.js, Vue.js (or React/Angular), MongoDB, and Elasticsearch. Experience in real-time data processing, message queues (Kafka, RabbitMQ, or ActiveMQ), and event-driven architectures. Scalability expertise: Proven track record of scaling services to 200k+ MAUs and handling high-throughput systems. Strong understanding of database sharding, indexing, and performance optimisation. Experience with distributed systems, microservices, and cloud infrastructures (AWS, GCP, or Azure). Proficiency in CI/CD pipelines, Git version control, and automated testing. Strong problem-solving, analytical, and debugging skills. Excellent communication and leadership abilities, able to guide engineers while collaborating with stakeholders.
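The role above centres on event-driven pipelines with Kafka, RabbitMQ, or ActiveMQ. Although the posting's stack is Node.js, the underlying pattern of consuming events and processing them idempotently can be sketched briefly in Python (topic, group, and the duplicate-tracking store are illustrative):

```python
import json
from kafka import KafkaConsumer

# Hypothetical consumer for an order-events topic; processed IDs are tracked
# so redelivered messages are skipped (at-least-once delivery made idempotent).
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers=["localhost:9092"],
    group_id="order-projector",
    enable_auto_commit=False,
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

processed_ids = set()  # in production this would live in a durable store (e.g. Redis/Mongo)

for message in consumer:
    event = message.value
    if event["event_id"] in processed_ids:
        consumer.commit()
        continue
    # ... update read models / ElasticSearch indexes here ...
    processed_ids.add(event["event_id"])
    consumer.commit()  # commit offsets only after successful processing
```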
Posted 3 days ago
12.0 years
0 Lacs
trivandrum, kerala, india
On-site
Senior Site Reliability Engineer (SRE II) Own availability, latency, performance, and efficiency for Zafin’s SaaS on Azure. You’ll define and enforce reliability standards, lead high-impact projects, mentor engineers, and eliminate toil at scale. Reports to the Director of SRE. What you’ll do SLIs/SLOs & contracts: Define customer-centric SLIs/SLOs for Tier-0/Tier-1 services. Publish, review quarterly, and align teams to them. Error budgeting (policy & tooling): Run the error-budget policy with multi-window, multi-burn-rate alerts; clear runbooks and paging thresholds. Gate changes by budget status (freeze/relax rules) wired into CI/CD. Maintain SLO/EB dashboards (Azure Monitor, Grafana/Prometheus, App Insights). Run weekly SLO reviews with engineering/product. Drive roadmap tradeoffs when budgets are at risk; land reliability epics. Incidents without drama: Lead SEV1/SEV2, own comms, run blameless postmortems, and make corrective actions stick. Engineer reliability in: Multi-AZ/region patterns (active-active/DR), PDBs/Pod Topology Spread, HPA/VPA/KEDA, resilient rollout/rollback. AKS at scale: Harden clusters (network, identity, policy), optimize node/pod density, ingress (AGIC/Nginx); mesh optional. Observability that works: Metrics/traces/logs with Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana, OpenTelemetry. Alert on symptoms, not noise. IaC & policy: Terraform/Bicep modules, GitOps (Flux/Argo), policy-as-code (Azure Policy/OPA Gatekeeper). No snowflakes. CI/CD reliability: Azure DevOps/GitHub Actions with canary/blue-green, progressive delivery, auto-rollback, Key Vault-backed secrets. Capacity & performance: Load testing, right-sizing, autoscaling; partner with FinOps to reduce spend without hurting SLOs. DR you can trust: Define RTO/RPO, test backups/restore, run game days/chaos drills, validate ASR and multi-region failover. Secure by default: Entra ID (Azure AD), managed identities, Key Vault rotation, VNets/NSGs/Private Link, shift-left checks in CI. Reduce toil: Automate recurring ops, build self-service runbooks/chatops, publish golden paths for product teams. Customer escalations: Be the technical owner on calls; communicate tradeoffs and recovery plans with authority. Document to scale: Architectures, runbooks, postmortems, SLIs/SLOs—kept current and discoverable. (If applicable) Streaming/ETL reliability: Apply SRE practices (SLOs, backpressure, idempotency, replay) to NiFi/Flink/Kafka/Redpanda data flows. Minimum qualifications Bachelor’s in CS/Engineering (or equivalent experience). 12+ years in production ops/platform/SRE, including 5+ years on Azure . PostgreSQL (must-have): Deep operational expertise incl. HA/DR, logical/physical replication, performance tuning (indexes/EXPLAIN/ANALYZE, pg_stat_statements), autovacuum strategy, partitioning, backup/restore testing, and connection pooling (pgBouncer). Prefer experience with Azure Database for PostgreSQL – Flexible Server . Azure core: AKS (must-have) ; Front Door/App Gateway, API Management, VNets/NSGs/Private Link, Storage, Key Vault, Redis, Service Bus/Event Hubs. Observability: Azure Monitor/App Insights, Log Analytics, Prometheus/Grafana; SLO design and error-budget operations. IaC/automation: Terraform and/or Bicep; PowerShell and Python; GitOps (Flux/Argo). Pipelines in Azure DevOps or GitHub Actions. Proven incident leadership at scale, blameless postmortems, and SLO/error-budget governance with change gating. Mentorship and crisp written/verbal communication. 
Preferred (nice to have) Apache NiFi , Apache Flink , Apache Kafka or Redpanda (self-managed on AKS or managed equivalents); schema management, exactly-once semantics, backpressure, dead-letter/replay patterns. Azure Solutions Architect Expert , CKA/CKAD. ITSM (ServiceNow), on-call tooling (PagerDuty/Opsgenie). Compliance/SecOps (SOC 2, ISO 27001), policy-as-code, workload identity. OpenTelemetry, eBPF tooling, or service mesh. Multi-tenant SaaS and cost optimization at scale.
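As context for the multi-window, multi-burn-rate alerting mentioned above: burn rate is the observed error rate divided by the error budget implied by the SLO, evaluated over a long and a short window so pages fire quickly but stop once recovery starts. A sketch using commonly cited thresholds from the SRE literature (illustrative, not Zafin's policy):

```python
def burn_rate(error_ratio: float, slo: float) -> float:
    """How many times faster than 'allowed' the error budget is being consumed."""
    budget = 1.0 - slo            # e.g. a 99.9% SLO leaves a 0.1% error budget
    return error_ratio / budget

def page(error_1h: float, error_5m: float, slo: float = 0.999) -> bool:
    # Page when both the long and short windows burn more than 14.4x budget
    # (14.4x over one hour spends 2% of a 30-day budget); the short window
    # keeps the alert from firing long after the incident has recovered.
    return burn_rate(error_1h, slo) > 14.4 and burn_rate(error_5m, slo) > 14.4

print(page(error_1h=0.02, error_5m=0.03))   # True: paging-worthy burn
print(page(error_1h=0.0005, error_5m=0.0))  # False: within budget
```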
Posted 4 days ago
6.0 years
0 Lacs
gurugram, haryana, india
On-site
About Boon Boon (by Swajal) builds IoT-enabled drinking-water systems used by leading hotels, offices, and communities. Our “UltraOsmosis™ + WaterAI™” stack monitors water quality in real time, powers predictive maintenance, and enables plastic-free hydration at scale. The Role We’re hiring an SDE II (Backend) to own and scale our IoT backend: secure device connectivity, telemetry ingestion, time-series data pipelines, APIs for apps/dashboards, and reliability at thousands of edge devices. You’ll design services, write production code, lead small projects end-to-end, and mentor junior engineers. What you’ll do Own services end-to-end: design docs, implementation, tests, IaC, and production rollout. IoT data plane: MQTT topics & policies, device registry/shadows, OTA update flows, secure provisioning. Data pipelines: high-volume ingestion → storage → query (time-series + metadata). APIs & integrations: build/maintain internal & public APIs for apps, dashboards, partner integrations. Reliability & scale: SLOs, error budgets, autoscaling, back-pressure, dead-lettering, retries. Security & compliance: least-privilege IAM, secrets, audit logging, encryption in transit/at rest. Observability: logs/metrics/traces, runbooks, dashboards, on-call participation. Quality: code reviews, unit/contract/integration tests, CI/CD, performance profiling. Tech leadership: break down work, align stakeholders, mentor SDE-I/interns. Our stack (experience with analogous tech is fine) AWS IoT Core , Device Registry/Shadow, API Gateway , Lambda , ECS/Fargate or EKS , S3 , DynamoDB , Timestream (or TimescaleDB/Influx), Kinesis/SQS/SNS , CloudWatch , Cognito , IAM . Protocols: MQTT, HTTP; familiarity with Modbus/LoRaWAN is a plus. Languages: TypeScript/Node.js or Python/Go; SQL/no-SQL data modeling. Infra: Terraform/CDK, GitHub Actions, containerization. Obs: OpenTelemetry, Grafana/CloudWatch. Must-have qualifications 3–6 years’ backend experience, with at least 2 years on AWS in production. Built and operated event-driven or time-series systems handling high-volume telemetry. Strong in API design , data modeling , and distributed systems fundamentals (idempotency, deduplication, retries, ordering, consistency). Hands-on with AWS IoT Core + MQTT (policies, certificates, shadows) OR equivalent IoT platform. Proficient with DynamoDB/Timestream (or similar) , stream processing (Kinesis/SQS/Kafka), and IaC . Solid testing discipline and CI/CD; can debug complex issues across services. Clear communication, product sensibility, and the ability to lead 1–2 engineers on a workstream. Nice-to-have OTA update pipelines, device provisioning at factory, PKI/cert management. Edge computing patterns, offline sync, back-pressure at gateways. Experience with Grafana/Loki/Tempo, Prometheus, or CloudWatch X-Ray. Security reviews/threat modeling for IoT fleets. Exposure to Next.js/React (for internal tools) is a bonus. Example projects you might own in your first 90 days Telemetry pipeline v2: Kinesis → Lambda → Timestream with partition strategy, TTLs, and cost guards. Device shadow service: unify configs/alerts across WaterCube/Refill SKUs; safe rollout guards. Fleet OTA: signed firmware/artifact hosting, staged rollouts, automatic rollback on failure signals. Partner API: rate-limited, token-scoped endpoints for enterprise customers; audit trails & webhooks. How we measure success Uptime & latency SLOs met; clear runbooks and dashboards. Cost per device and data retention within targets. 
Safe releases with automated rollbacks; low MTTR on incidents. Developer experience: faster CI, better tests, fewer regressions. Mentorship: peers learn from your reviews and docs. Why Boon Mission with tangible impact: clean water, less plastic, happier guests. Ownership of meaningful systems at real-world scale. Pragmatic engineering culture: clarity, outcomes, and craftsmanship.
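One of the example first-90-day projects above is a Kinesis → Lambda → Timestream telemetry pipeline. A rough sketch of the Lambda side, assuming JSON telemetry payloads and hypothetical database, table, and metric names:

```python
import base64
import json
import time
import boto3

timestream = boto3.client("timestream-write")

def handler(event, context):
    """Decode Kinesis records and write device telemetry to Timestream (illustrative only)."""
    records = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        records.append({
            "Dimensions": [{"Name": "device_id", "Value": payload["device_id"]}],
            "MeasureName": "tds_ppm",                      # hypothetical water-quality metric
            "MeasureValue": str(payload["tds_ppm"]),
            "MeasureValueType": "DOUBLE",
            "Time": str(payload.get("ts_ms", int(time.time() * 1000))),
            "TimeUnit": "MILLISECONDS",
        })
    if records:
        timestream.write_records(
            DatabaseName="telemetry", TableName="water_quality", Records=records
        )
    return {"written": len(records)}
```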
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Title: AWS Cloud Security Engineer
Location: Hyderabad, Pune, Coimbatore
Experience: 6 - 8 years of experience
Working Mode: 5 Days Work From Office

Job Summary:
We are looking for a Cloud Security Engineer with a minimum of 6 years of experience in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a deep understanding of cloud infrastructure and architecture, coupled with expertise in deploying, managing, and optimizing AWS services. As a Cloud Platform Engineer, you will play a crucial role in designing, implementing, and maintaining our cloud-based solutions to meet the evolving needs of the organization and its clients.

Responsibilities:
The following are the day-to-day work activities:
Using a broad range of AWS services (VPC, EC2, RDS, ELB, S3, AWS CLI, CloudWatch, CloudTrail, AWS Config, Kinesis, Route 53, DynamoDB, and SNS) to develop and maintain an Amazon AWS-based cloud solution.
Implementing identity and access management (IAM) controls to manage user and system access securely.
Collaborating with cloud architects and developers to create security solutions for cloud environments (e.g., AWS, Azure, GCP) by designing security controls, ensuring they are integrated into cloud platforms, and ensuring that cloud infrastructure adheres to relevant compliance standards (e.g., GDPR, HIPAA, PCI-DSS).
Monitoring cloud environments for suspicious activities and threats using tools like SIEM (Security Information and Event Management) systems.
Implementing security governance policies and maintaining audit logs for regulatory requirements.
Automating cloud security processes using tools such as CloudFormation, Terraform, or Ansible.
Implementing infrastructure as code (IaC) to ensure secure deployment and configuration.
Building custom Terraform modules to provision cloud infrastructure and maintaining them as they are enhanced with the latest versions.
Collaborating with DevOps, network, and software development teams to promote secure cloud practices, and training and educating employees about cloud security best practices.
Securing and encrypting data by providing secret-management solutions with versioning enabled.
Building backup solutions to cover application downtime, maintaining a parallel disaster recovery environment in the backend, and implementing disaster recovery strategies.
Designing and delivering scalable and highly available solutions for applications migrating to the cloud, using launch configurations, Auto Scaling groups, scaling policies, CloudWatch alarms, load balancers, and Route 53.
Enabling an extra layer of security for cloud root accounts.
Working with database-application migration teams to strategically and securely move data from on-premises data centers to cloud storage within an isolated environment.
Working with source code management pipelines and debugging issues caused by failed IT development deployments.
Remediating findings from the cybersecurity tools used for cloud-native application security and implementing resource/cloud-service tagging strategies by enforcing compliance standards.
Experience performing AWS operations within these areas: threat detection, threat prevention, and incident management.
Cloud-specific technologies: Control Tower and Service Control Policies; AWS security tools (AWS IAM, Detective, Inspector, Security Hub, etc.).
General understanding: identity and least privilege, networking in AWS, IaaS, and ITSM (ticketing systems/processes).

Requirements:
Candidates are required to have these mandatory skills for their profile to be assessed. The must-have requirements are:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
Minimum of 6 years of hands-on experience as a Cloud Platform Engineer, with a strong focus on AWS.
In-depth knowledge of AWS services such as EC2, S3, VPC, IAM, RDS, Lambda, ECS, and others.
Proficiency in scripting and programming languages such as Python, Bash, or PowerShell.
Experience with infrastructure as code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
Strong understanding of networking concepts, security best practices, and compliance standards in cloud environments.
Hands-on experience with containerization technologies (Docker, Kubernetes) and serverless computing.
Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems.
Strong communication skills with the ability to collaborate effectively with cross-functional teams.
AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer) are a plus.

About the Company:
ValueMomentum is among the fastest-growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital, and data transformation initiatives. Having grown consistently by 24% every year, we now have over 4,000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees.

Benefits:
We at ValueMomentum offer you the opportunity to grow by working alongside experts. Some of the benefits you can avail are:
Competitive compensation package comparable to the best in the industry.
Career Advancement: individual career development, coaching and mentoring programs for professional and leadership skill development, and comprehensive training and certification programs.
Performance Management: goal setting, continuous feedback, year-end appraisals, and rewards and recognition for extraordinary performers.
Benefits: comprehensive health benefits, wellness and fitness programs, and paid time off and holidays.
Culture: a highly transparent organization with an open-door policy and a vibrant culture.

If you are interested in the above role, kindly fill in the details below or share your updated resume with Suresh.Tadi@valuemomentum.com:
Full Name:
Overall Experience:
Relevant Experience:
Notice Period:
Current CTC (Cost to Company):
Expected CTC:
Are you open to working 5 days a week from the office? (Yes/No):
Preferred Location (if applicable):
Are you currently employed? (Yes/No):
Reason for Looking for a Change:
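One responsibility above is enforcing resource-tagging standards for compliance. A small read-only audit in boto3 that such enforcement might start from (the required tag set is a hypothetical standard, not the employer's):

```python
import boto3
from botocore.exceptions import ClientError

REQUIRED_TAGS = {"owner", "environment", "data-classification"}  # hypothetical tagging standard

def untagged_buckets() -> list[str]:
    """Return S3 buckets missing any of the required compliance tags (read-only audit)."""
    s3 = boto3.client("s3")
    offenders = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            tag_set = s3.get_bucket_tagging(Bucket=name)["TagSet"]
            keys = {tag["Key"].lower() for tag in tag_set}
        except ClientError:  # NoSuchTagSet: the bucket has no tags at all
            keys = set()
        if not REQUIRED_TAGS <= keys:
            offenders.append(name)
    return offenders

if __name__ == "__main__":
    for name in untagged_buckets():
        print(f"non-compliant: {name}")
```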
Posted 4 days ago
2.0 years
0 Lacs
india
On-site
WHO WE ARE
Sapaad (http://www.sapaad.com) is a global leader in all-in-one unified commerce platforms, dedicated to delivering world-class software solutions. Its flagship product, Sapaad, has seen tremendous success in the last decade, with thousands of customers worldwide, and many more signing on. Driven by a team of passionate developers and designers, Sapaad is constantly innovating, introducing cutting-edge features that reshape the industry. Headquartered in Singapore, with offices across five countries, Sapaad is backed by technology veterans with deep expertise in web, mobility, and e-commerce, making it a key player in the tech landscape.

THE OPPORTUNITY
Sapaad Software Private Limited is seeking passionate and reliable Associate Systems Engineers to join our Application Health and Monitoring team. If you thrive in a fast-paced environment and want to be a part of a world-class team maintaining global software systems, this role is for you. You will collaborate with engineers and developers across various layers of applications and infrastructure, ensuring high availability and performance of our suite of solutions. This is your chance to be a part of an established, global, and rapidly growing SaaS company that is shaping the future of software systems.

Role: Associate Systems Engineer
Experience: 2+ Years

KEY RESPONSIBILITIES
Cloud Infrastructure Management: Design, deploy, and manage scalable, secure, and cost-effective infrastructure on AWS. Hands-on experience with AWS services including EC2, S3, IAM, VPC, RDS, Lambda, EBS, CloudWatch, WAF, and API Gateway; familiarity with ECS/EKS, CloudFormation, and CloudTrail. Ensure high availability, autoscaling, performance monitoring, and cost optimization across environments.
Security & Compliance: Implement security best practices across infrastructure, applications, and APIs. Manage IAM roles and policies, enforce least-privilege access, and regularly audit permissions. Configure security groups, NACLs, and VPC flow logs to control and monitor traffic. Manage Cloudflare for DNS management, global CDN, SSL termination, bot protection, and DDoS mitigation. Conduct regular vulnerability assessments, patching, and security group audits. Manage WAF for protection of APIs and web applications.
Monitoring and Troubleshooting: Perform log analysis and implement monitoring tools to ensure application health. Troubleshoot production issues and provide on-call support. Create dashboards and set SLIs/SLOs/SLAs. Define and manage alerts and thresholds for proactive detection.
Database Management: Administer Postgres and Redis databases, including upgrades, backups, and performance tuning. Participate in rotating schedules and perform monthly maintenance activities.
Incident Management & RCA: Lead incident response efforts and implement preventive actions to reduce MTTR. Familiarity with incident response tools like PagerDuty or Opsgenie. Ensure clear postmortem documentation and RCA timelines.
Collaboration and Support: Work closely with development teams to optimize application infrastructure. Manage incidents, minimize downtime, and provide escalation support as required.
Documentation: Maintain detailed technical documentation, procedures, and guides.
Research & Development: Actively participate in R&D to improve systems and processes.

KEY SKILLS & EXPERTISE
Experience with IaaS and PaaS cloud platforms (AWS, Azure, Heroku).
Expertise in monitoring tools and log analysis (AWS CloudWatch, DataDog, Logtail).
Familiarity with DevOps concepts, including Docker and Kubernetes. Strong troubleshooting and problem-solving abilities. Excellent time management and communication skills. Proactive ownership and a willingness to learn new skills. PREFERRED QUALIFICATIONS Certifications in AWS or other cloud platforms. Familiarity with security best practices and compliance requirements. Experience with CI/CD pipelines. Scripting skills in Python, Ruby, or Shell. Educational Qualification: B.Tech/BE/M.Sc in Computer Science.
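The monitoring duties above include defining alerts and thresholds for proactive detection. A hedged boto3 example of one such CloudWatch alarm (the metric, dimensions, threshold, and SNS topic are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical alarm: notify on-call when average API latency stays above 1.5s
# for three consecutive 1-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="api-latency-high",
    Namespace="AWS/ApiGateway",
    MetricName="Latency",
    Dimensions=[{"Name": "ApiName", "Value": "orders-api"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1500.0,  # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],
)
```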
Posted 4 days ago
3.0 - 5.0 years
0 Lacs
pune, maharashtra, india
Remote
Job Description Job Description DevOps Engineer Business Group: Technology Primary Work Location: EEEC Pune, India Job Summary We are seeking a highly skilled and hardworking DevOps Engineer to join our dynamic team. Candidates should have a minimum of 3 to 5 years’ recent experience in Docker, Docker Compose, Kubernetes (AKS), MongoDB, SQL Server, Linux, and Azure cloud and platform administration. In This Role, Your Responsibilities Will Be Responsible for app containerization & deployments for various sprint/QA/validated releases, Customer specific deployments, Installation support for customers, support for Dev/QA/Val/Performance testing. Responsible for building and maintaining Repos/CI/CD pipelines. Maintain the enterprise infrastructure platforms on Azure cloud. Who You Are You anticipate customer needs and provide services that are beyond customer expectations. You readily distinguish between what is relevant and what is unimportant to make sense of complex situations. You achieve and are consistently known as a top performer. For This Role, You Will Need Bachelor’s degree or equivalent in Computer Science, Computer Engineering, MSc / MCA (Computer Science/ IT) is required. Overall relevant experience of around 3 to 5 Years Work experience in designing, deploying and maintaining Kubernetes clusters in production environments Implement autoscaling, rolling updates. Build secure, scalable CI/CD pipelines and enforce container security best practices across environments. Containerize applications using Docker and automate infrastructure provisioning with tools like Terraform Diagnose and resolve infrastructure issues, optimize resource usage, and implement disaster recovery strategies. Work closely with development, QA, and security teams while maintaining clear documentation of systems and processes. Working experience in Microservices based Architecture. Working experience in Platform & Cloud Administration Preferred Qualifications That Set You Apart Experience working in Agile development methodology. Working experience in enterprise SSO integration (SAML, OAuth) & Azure DevOps administration Configuration and managing Neo4j Clusters Building and deploying MLOps Solutions A drive towards automating repetitive tasks (e.g., scripting via Bash, Python etc.) Good interpersonal and communication skills Our Offer To You By joining Emerson, you will be given the opportunity to make a difference through the work you do. Emersons compensation and benefits programs are designed to be competitive within the industry and local labor markets . We also offer a comprehensive medical and insurance coverage to meet the needs of our employees. We are committed to creating a global workplace that supports diversity, equity and embraces inclusion . We welcome foreign nationals to join us through our Work Authorization Sponsorship . We attract, develop, and retain exceptional people in an inclusive environment, where all employees can reach their greatest potential . We are dedicated to the ongoing development of our employees because we know that it is critical to our success as a global company. We have established our Remote Work Policy for eligible roles to promote Work-Life Balance through a hybrid work set up where our team members can take advantage of working both from home and at the office. Safety is paramount to us, and we are relentless in our pursuit to provide a Safe Working Environment across our global network and facilities. 
Through our benefits, development opportunities, and an inclusive and safe work environment, we aim to create an organization our people are proud to represent. Our Commitment to Diversity, Equity & Inclusion At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thoughts, that inspires innovation and brings the best solutions to our customers. This philosophy is fundamental to living our company’s values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson . If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com . WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together. About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! Job Details Role Level: Mid-Level Work Type: Full-Time Country: India City: Pune ,Maharashtra Company Website: http://www.emerson.com Job Function: Engineering Company Industry/ Sector: Automation Machinery Manufacturing What We Offer About The Company Searching, interviewing and hiring are all part of the professional life. 
The TALENTMATE Portal idea is to fill and help professionals doing one of them by bringing together the requisites under One Roof. Whether you're hunting for your Next Job Opportunity or Looking for Potential Employers, we're here to lend you a Helping Hand.
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
pune, maharashtra, india
Remote
Job Description DevOps Engineer Business Group: Technology Primary Work Location: EEEC Pune, India Job Summary: We are seeking a highly skilled and hardworking DevOps Engineer to join our dynamic team. Candidates should have a minimum of 3 to 5 years’ recent experience in Docker, Docker Compose, Kubernetes (AKS), MongoDB, SQL Server, Linux, and Azure cloud and platform administration. In this Role, Your Responsibilities Will Be: Responsible for app containerization & deployments for various sprint/QA/validated releases, Customer specific deployments, Installation support for customers, support for Dev/QA/Val/Performance testing. Responsible for building and maintaining Repos/CI/CD pipelines. Maintain the enterprise infrastructure platforms on Azure cloud. Who You Are: You anticipate customer needs and provide services that are beyond customer expectations. You readily distinguish between what is relevant and what is unimportant to make sense of complex situations. You achieve and are consistently known as a top performer. For This Role, You Will Need: Bachelor’s degree or equivalent in Computer Science, Computer Engineering, MSc / MCA (Computer Science/ IT) is required. Overall relevant experience of around 3 to 5 Years Work experience in designing, deploying and maintaining Kubernetes clusters in production environments Implement autoscaling, rolling updates. Build secure, scalable CI/CD pipelines and enforce container security best practices across environments. Containerize applications using Docker and automate infrastructure provisioning with tools like Terraform Diagnose and resolve infrastructure issues, optimize resource usage, and implement disaster recovery strategies. Work closely with development, QA, and security teams while maintaining clear documentation of systems and processes. Working experience in Microservices based Architecture. Working experience in Platform & Cloud Administration Preferred Qualifications that Set You Apart: Experience working in Agile development methodology. Working experience in enterprise SSO integration (SAML, OAuth) & Azure DevOps administration Configuration and managing Neo4j Clusters Building and deploying MLOps Solutions A drive towards automating repetitive tasks (e.g., scripting via Bash, Python etc.) Good interpersonal and communication skills Our Offer to You: By joining Emerson, you will be given the opportunity to make a difference through the work you do. Emerson's compensation and benefits programs are designed to be competitive within the industry and local labor markets . We also offer a comprehensive medical and insurance coverage to meet the needs of our employees. We are committed to creating a global workplace that supports diversity, equity and embraces inclusion . We welcome foreign nationals to join us through our Work Authorization Sponsorship . We attract, develop, and retain exceptional people in an inclusive environment, where all employees can reach their greatest potential . We are dedicated to the ongoing development of our employees because we know that it is critical to our success as a global company. We have established our Remote Work Policy for eligible roles to promote Work-Life Balance through a hybrid work set up where our team members can take advantage of working both from home and at the office. Safety is paramount to us, and we are relentless in our pursuit to provide a Safe Working Environment across our global network and facilities. 
Through our benefits, development opportunities, and an inclusive and safe work environment, we aim to create an organization our people are proud to represent.
Our Commitment to Diversity, Equity & Inclusion: At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thought that inspires innovation and brings the best solutions to our customers. This philosophy is fundamental to living our company’s values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson. If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com.
WHY EMERSON
Our Commitment to Our People: At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together.
About Emerson: Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical, and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go!
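The AKS deployment and rolling-update responsibilities described in this listing can also be exercised programmatically. A minimal sketch using the official Kubernetes Python client follows; the deployment name, namespace, and image are placeholders, not details from the posting.

    from kubernetes import client, config

    config.load_kube_config()            # use load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()

    # Trigger a rolling update by patching the container image (all names are placeholders).
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:1.4.2"}]}}}}
    apps.patch_namespaced_deployment(name="web", namespace="qa", body=patch)

    # Watch rollout progress via the Deployment status.
    dep = apps.read_namespaced_deployment_status(name="web", namespace="qa")
    print("updated:", dep.status.updated_replicas, "available:", dep.status.available_replicas)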
Posted 5 days ago
3.0 years
3 - 6 Lacs
india
On-site
Job Title: DevOps Engineer (3+ Years Experience) Location: Delhi Job Type: Full-time Experience Required: Minimum 3 Years --- Job Summary: We are seeking a highly skilled and motivated DevOps Engineer with a minimum of 3 years of hands-on experience in managing CI/CD pipelines, cloud infrastructure (preferably AWS), container orchestration, configuration management, and infrastructure monitoring. You will work closely with the development, QA, and IT teams to streamline deployments, ensure system reliability, and automate operational tasks. --- Key Responsibilities: Design, implement, and maintain CI/CD pipelines using GitLab CI or similar tools. Manage and scale cloud infrastructure on AWS (EC2, S3, IAM, RDS, Route53, Lambda, CloudWatch, etc.). Containerize applications using Docker and orchestrate using Kubernetes (EKS preferred). Implement Infrastructure as Code (IaC) using Ansible, Terraform, or CloudFormation. Maintain secure and scalable Linux server environments, ensuring optimal performance and uptime. Write and maintain shell scripts or Python scripts for automation and monitoring tasks. Setup and manage monitoring, alerting, and logging systems using tools like Grafana, Prometheus, ELK Stack (Elasticsearch, Logstash, Kibana), or CloudWatch. Implement robust backup and disaster recovery strategies. Collaborate with development teams for efficient DevSecOps practices including secrets management and vulnerability scans. Troubleshoot and resolve production issues, performing root cause analysis and preventive planning. --- Required Skills and Experience: 3+ years of experience as a DevOps Engineer or similar role. Proficient in GitLab (or GitHub Actions, Jenkins), including runners and CI/CD pipelines. Strong hands-on experience with AWS services (EC2, RDS, S3, VPC, EKS, etc.). Proficient with Docker and Kubernetes, including Helm, volumes, services, autoscaling. Solid experience with Ansible for configuration management and automation. Good understanding of Linux systems administration and troubleshooting. Strong scripting skills in Bash, Shell, or Python. Experience with monitoring and alerting tools such as Grafana, Prometheus, or Zabbix. Familiar with log management tools (ELK Stack, Fluentd, or CloudWatch Logs). Familiarity with SSL/TLS, DNS, load balancers (Nginx/HAProxy), and firewall/security configurations. Knowledge of version control systems (Git), branching strategies, and GitOps practices. --- Good to Have (Optional but Preferred): Experience with Terraform or Pulumi for cloud infrastructure provisioning. Knowledge of security compliance standards (ISO, SOC2, PCI DSS). Experience with Kafka, RabbitMQ, or Redis. Familiarity with service meshes like Istio or Linkerd. Experience with cost optimization and autoscaling strategies on AWS. Exposure to incident management tools (PagerDuty, Opsgenie). Certification (e.g., AWS Certified DevOps Engineer, CKA, RHCE) is a plus. Job Type: Full-time Pay: ₹30,000.00 - ₹50,000.00 per month Work Location: In person
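Where this posting mentions setting up Prometheus/Grafana monitoring, the usual first step is exposing application metrics for Prometheus to scrape. A minimal sketch with the prometheus_client library is below; the metric names, route label, and port 9100 are illustrative assumptions.

    import random, time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests", ["route", "status"])
    LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["route"])

    def handle(route="/checkout"):
        with LATENCY.labels(route=route).time():          # observe request duration
            time.sleep(random.uniform(0.01, 0.2))         # stand-in for real work
            status = "200" if random.random() > 0.05 else "500"
        REQUESTS.labels(route=route, status=status).inc()

    if __name__ == "__main__":
        start_http_server(9100)    # Prometheus scrapes http://host:9100/metrics
        while True:
            handle()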
Posted 5 days ago
7.0 - 9.0 years
5 - 6 Lacs
ahmedabad
On-site
7 - 9 Years | 1 Opening | Ahmedabad, Pune
Role Description
Role Proficiency: Acts under the guidance of DevOps leads; leads more than one Agile team.
Outcomes: Interprets the DevOps tool/feature/component design to develop and support it in accordance with specifications. Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts. Codes, debugs, tests, documents, and communicates DevOps development stages and the status of develop/support issues. Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimises the efficiency, cost, and quality of DevOps processes, tools, and technology development. Validates results with user representatives; integrates and commissions the overall solution. Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs. Designs, installs, and troubleshoots CI/CD pipelines and software. Able to automate infrastructure provisioning on cloud or on-premises with the guidance of architects. Provides guidance to DevOps Engineers so that they can support existing components. Good understanding of Agile methodologies and able to work with diverse teams. Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source).
Measures of Outcomes: Quality of deliverables; error rate/completion rate at various stages of SDLC/PDLC; number of components reused; number of domain/technology/product certifications obtained; SLA/KPI for onboarding projects or applications; stakeholder management; percentage achievement of specification/completeness/on-time delivery.
Outputs Expected:
Automated components: Deliver components that automate the installation/configuration of software/tools on premises and in the cloud, and components that automate parts of the build/deploy for applications.
Configured components: Configure tools and the automation framework into the overall DevOps design.
Scripts: Develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks.
Training/SOPs: Create training plans/SOPs to help DevOps Engineers with DevOps activities and with onboarding users.
Measure process efficiency/effectiveness: Deployment frequency, innovation, and technology changes.
Operations: Change lead time/volume, failed deployments, defect volume and escape rate, mean time to detection and recovery.
Skill Examples: Experience in designing, installing, configuring, and troubleshooting CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes; integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover; integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit; scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell); infrastructure automation (Ansible/Puppet/Chef/PowerShell); repository management/migration automation (Git, Bitbucket, GitHub, ClearCase); build automation scripts (Maven, Ant); artefact repository management (Nexus/Artifactory); dashboard management and automation (ELK/Splunk); configuration of cloud infrastructure (AWS, Azure, Google); migration of applications from on-premises to cloud infrastructure; working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration), with strong debugging skills in C# and .NET; setting up and managing Jira projects and Git/Bitbucket repositories; skilled in containerization tools like Docker and Kubernetes.
Knowledge Examples: Installation/config/build/deploy processes and tools; IaaS cloud providers (AWS, Azure, Google, etc.) and their toolsets; the application development lifecycle; Quality Assurance processes; quality automation processes and tools; multiple tool stacks, not just one; build and release, branching/merging; containerization; Agile methodologies; software security compliance (GDPR/OWASP) and tools (Black Duck, Veracode, Checkmarx).
Additional Comments: A DevOps engineer with 8-12 years of experience. Typical responsibilities of a DevOps Harness engineer include:
Harness expertise: a strong understanding of the Harness platform and its capabilities, including pipelines, deployments, configurations, and security features.
CI/CD pipeline management: design, develop, and manage CI/CD pipelines using Harness, automating tasks such as code building, testing, deployment, and configuration management.
Automation playbook creation: create reusable automation scripts (playbooks) for deployments, configuration control, infrastructure provisioning, and other repetitive tasks.
Scalability and standards: ensure scalability of the CI/CD pipelines and adherence to organizational standards for deployment processes.
DevOps technologies: familiarity with Docker, Kubernetes, and Jenkins, especially in the context of cloud platforms.
Security: integrate security best practices into the CI/CD pipelines (SecDevOps).
1. Candidates must have strong working experience with Kubernetes core concepts such as autoscaling, RBAC, and pod placement, as well as advanced concepts such as Karpenter and service mesh.
2. Candidates must have strong working experience with AWS services such as CloudWatch, EKS, ECS, and DynamoDB.
3. Candidates must have strong working experience with IaC, especially Terraform and Terragrunt, and should be able to create modules; experience in infrastructure provisioning with AWS is required.
4. Candidates must have strong working experience with scripting languages such as Shell, PowerShell, or Python.
5. Candidates must have strong working experience with CI/CD concepts such as creating pipelines and automating deployments.
Skills: Kubernetes, IaC, DevOps, AWS Cloud
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
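For the CloudWatch and deployment-automation experience called out in this role, one common pattern is a pipeline gate that refuses to deploy while alarms are firing. A rough sketch with boto3 follows; the region and the choice to gate on every alarm are assumptions, not requirements from the listing.

    import sys
    import boto3

    # CI gate: block a deployment if any CloudWatch alarm is currently in ALARM state.
    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    firing = []
    for page in cloudwatch.get_paginator("describe_alarms").paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            dims = {d["Name"]: d["Value"] for d in alarm.get("Dimensions", [])}
            firing.append(f'{alarm["AlarmName"]} ({alarm["MetricName"]} {dims})')

    if firing:
        print("Deployment blocked; alarms in ALARM state:")
        print("\n".join(firing))
        sys.exit(1)
    print("No active alarms; safe to proceed.")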
Posted 5 days ago
0.0 - 3.0 years
0 - 0 Lacs
pandav nagar, delhi, delhi
On-site
Job Title: DevOps Engineer (3+ Years Experience) Location: Delhi Job Type: Full-time Experience Required: Minimum 3 Years --- Job Summary: We are seeking a highly skilled and motivated DevOps Engineer with a minimum of 3 years of hands-on experience in managing CI/CD pipelines, cloud infrastructure (preferably AWS), container orchestration, configuration management, and infrastructure monitoring. You will work closely with the development, QA, and IT teams to streamline deployments, ensure system reliability, and automate operational tasks. --- Key Responsibilities: Design, implement, and maintain CI/CD pipelines using GitLab CI or similar tools. Manage and scale cloud infrastructure on AWS (EC2, S3, IAM, RDS, Route53, Lambda, CloudWatch, etc.). Containerize applications using Docker and orchestrate using Kubernetes (EKS preferred). Implement Infrastructure as Code (IaC) using Ansible, Terraform, or CloudFormation. Maintain secure and scalable Linux server environments, ensuring optimal performance and uptime. Write and maintain shell scripts or Python scripts for automation and monitoring tasks. Setup and manage monitoring, alerting, and logging systems using tools like Grafana, Prometheus, ELK Stack (Elasticsearch, Logstash, Kibana), or CloudWatch. Implement robust backup and disaster recovery strategies. Collaborate with development teams for efficient DevSecOps practices including secrets management and vulnerability scans. Troubleshoot and resolve production issues, performing root cause analysis and preventive planning. --- Required Skills and Experience: 3+ years of experience as a DevOps Engineer or similar role. Proficient in GitLab (or GitHub Actions, Jenkins), including runners and CI/CD pipelines. Strong hands-on experience with AWS services (EC2, RDS, S3, VPC, EKS, etc.). Proficient with Docker and Kubernetes, including Helm, volumes, services, autoscaling. Solid experience with Ansible for configuration management and automation. Good understanding of Linux systems administration and troubleshooting. Strong scripting skills in Bash, Shell, or Python. Experience with monitoring and alerting tools such as Grafana, Prometheus, or Zabbix. Familiar with log management tools (ELK Stack, Fluentd, or CloudWatch Logs). Familiarity with SSL/TLS, DNS, load balancers (Nginx/HAProxy), and firewall/security configurations. Knowledge of version control systems (Git), branching strategies, and GitOps practices. --- Good to Have (Optional but Preferred): Experience with Terraform or Pulumi for cloud infrastructure provisioning. Knowledge of security compliance standards (ISO, SOC2, PCI DSS). Experience with Kafka, RabbitMQ, or Redis. Familiarity with service meshes like Istio or Linkerd. Experience with cost optimization and autoscaling strategies on AWS. Exposure to incident management tools (PagerDuty, Opsgenie). Certification (e.g., AWS Certified DevOps Engineer, CKA, RHCE) is a plus. Job Type: Full-time Pay: ₹30,000.00 - ₹50,000.00 per month Work Location: In person
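The backup and disaster-recovery responsibility in this listing often starts with pre-deployment database snapshots. A small boto3 sketch is shown below; the RDS instance identifier and region are hypothetical.

    import datetime
    import boto3

    rds = boto3.client("rds", region_name="ap-south-1")
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    snapshot_id = f"orders-prod-predeploy-{stamp}"

    # Take a manual snapshot before a risky release (identifiers are placeholders).
    rds.create_db_snapshot(DBInstanceIdentifier="orders-prod", DBSnapshotIdentifier=snapshot_id)

    # Block until the snapshot is usable, then let the pipeline continue.
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    print("snapshot ready:", snapshot_id)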
Posted 5 days ago
0.0 - 10.0 years
0 Lacs
chennai, tamil nadu
On-site
Job Information
Job Type: Permanent | Date Opened: 09/10/2025 | Work Shift: 24/7 | Work Experience: 7 - 10 years | Industry: IT Services | Work Location: Chennai - OMR | State/Province: Tamil Nadu | City: Chennai | Zip/Postal Code: 600113 | Country: India
Job Description
Role summary: Own and operate our AWS/Linux estate for the SaaS solutions hosted by the business. Apply strong information security practices (least-privilege IAM, patching, secrets management, TLS, vulnerability remediation). Bring serverless awareness to choose the right architecture per use case. Collaborate with engineering to plan safe releases, improve observability, and drive reliability and cost efficiency.
Responsibilities: Design, provision, and operate AWS infrastructure (EC2/ASG, ELB, MySQL, S3, IAM, VPC, CloudWatch). Build and maintain Terraform modules, backends, and promotion workflows; review infra changes via PRs. Manage Linux systems (Rocky): hardening, patching, systemd, storage/filesystems. Own Puppet configuration management (modules, manifests, environments). Containerize with Docker; maintain images, registries, and runtime configuration. Operate MySQL users/roles, backups/restores, parameter tuning, and slow-query triage; ability to build replicas and troubleshoot replication. Engineer networking: VPC/subnets, routing, SGs/NACLs, DNS (Route 53), VPN/Direct Connect, TLS/PKI. Apply information security controls: IAM least privilege, secrets (SSM/Secrets Manager), patch compliance, vulnerability remediation, access reviews; awareness of ISO 27001. Introduce serverless where it fits (Lambda jobs, event processing, light APIs); integrate with existing services. Observability: metrics/logs/traces, dashboards/alerts, runbooks, on-call participation, and incident reviews. Cost stewardship: rightsizing, autoscaling policies, storage lifecycle, monthly reviews and actions. Ability to script with Bash and Python. Administer TeamCity & Jenkins: jobs/pipelines for build, test, packaging, and controlled deployments; agent fleet hygiene and backups. Participate in the on-call rota. Documentation & knowledge transfer: playbooks, handover notes, and KT sessions.
Required skills & experience: Expert AWS and Linux administration in production. Strong networking fundamentals (TCP/IP, HTTP/TLS, routing, load balancing, firewalls). Solid MySQL operations (backup/restore, replication basics, performance). Terraform (modules, backends, testing, code review). Puppet (module design, environments). Docker (image authoring, multi-stage builds, registry usage). Scripting for automation (Bash and Python). Experience supporting Java services alongside LAMP components. Demonstrable information security mindset and practice.
Nice to have: Ansible for ad-hoc/host lifecycle tasks. Container orchestration basics (ECS/EKS/Kubernetes). Nginx/Apache tuning, reverse proxying. HashiCorp toolchain (Packer, Vault), SSM Parameter Store/Secrets Manager patterns. Monitoring stacks (Nagios, APM) and alert design.
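For the MySQL replica-building and troubleshooting duties above, a routine health check is replication status and lag on each replica. A minimal sketch with PyMySQL follows; the host and credentials are placeholders, and the fallback column names cover servers that still report the older Slave_* fields (pre-8.0.22 servers also use SHOW SLAVE STATUS instead).

    import pymysql
    from pymysql.cursors import DictCursor

    # Connection details are placeholders for the sketch.
    conn = pymysql.connect(host="replica-01", user="monitor", password="***",
                           cursorclass=DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW REPLICA STATUS")
            row = cur.fetchone() or {}
            lag = row.get("Seconds_Behind_Source", row.get("Seconds_Behind_Master"))
            io_ok = row.get("Replica_IO_Running", row.get("Slave_IO_Running")) == "Yes"
            sql_ok = row.get("Replica_SQL_Running", row.get("Slave_SQL_Running")) == "Yes"
            print(f"io={io_ok} sql={sql_ok} lag={lag}s")
    finally:
        conn.close()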
Posted 5 days ago
0 years
0 Lacs
noida, uttar pradesh, india
On-site
Title: Python Support Engineer
Must-have Skills: Monitor and maintain the availability of GKE-based applications in a high-pressure production environment. Respond to and resolve incidents and service requests related to application functionality and performance. Collaborate with development teams to troubleshoot and resolve technical issues in a timely manner. Document support processes, procedures, and troubleshooting steps for future reference. Participate in the on-call rotation and provide after-hours support as needed. Communicate effectively with stakeholders to provide updates on issue resolution and status. Should have experience with monitoring tools and incident management systems. Ability to analyze logs, identify patterns, and trace system failures. Solid experience in SQL and database querying for debugging and reporting. Experience in monitoring/alerting tools on GCP.
Good to Have: Strong in Python, with production-level experience. Strong in FastAPI development and deployment practices. Worked in Google Kubernetes Engine (GKE), including workload deployment, autoscaling, and tuning. GCP experience with Cloud Functions, Pub/Sub, Dataflow, Composer, Bigtable, and BigQuery.
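For the log-analysis expectation above (analyze logs, identify patterns, trace failures), a quick way to surface recurring failures is to group error lines by signature. A small standard-library sketch follows; the log path and line format are assumptions.

    import re
    from collections import Counter

    ERROR_RE = re.compile(r"ERROR\s+(?P<svc>[\w-]+)\s+(?P<msg>.+)")

    def top_error_signatures(path, n=10):
        """Group ERROR lines by service plus the first few words of the message."""
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = ERROR_RE.search(line)
                if m:
                    signature = (m.group("svc"), " ".join(m.group("msg").split()[:5]))
                    counts[signature] += 1
        return counts.most_common(n)

    for (svc, msg), hits in top_error_signatures("/var/log/app/app.log"):
        print(f"{hits:6d}  {svc}  {msg}")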
Posted 6 days ago
1.0 - 3.0 years
0 Lacs
hyderabad, telangana, india
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
AWS Cloud Engineer
What You Will Do: The AWS Cloud Engineer will be responsible for maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments.
AWS Infrastructure Design & Implementation: Implement and manage highly available AWS cloud environments. Maintain VPCs, subnets, security groups, and IAM policies to enforce security best practices. Optimize AWS costs using reserved instances, savings plans, and auto-scaling.
Infrastructure as Code (IaC) & Automation: Maintain and enhance Terraform & CloudFormation templates for cloud provisioning. Automate deployment, scaling, and monitoring using AWS-native tools & scripting. Implement and manage CI/CD pipelines for infrastructure and application deployments.
Cloud Security & Compliance: Enforce best practices in IAM, encryption, and network security. Ensure compliance with SOC2, ISO27001, and NIST standards. Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.
Monitoring & Performance Optimization: Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring. Implement autoscaling, load balancing, and caching strategies for performance optimization. Troubleshoot cloud infrastructure issues and conduct root cause analysis.
Collaboration & DevOps Practices: Work closely with software engineers, SREs, and DevOps teams to support deployments. Maintain GitOps standard processes for cloud infrastructure versioning. Support on-call rotation for high-priority cloud incidents.
What We Expect Of You: We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master’s degree and 1 to 3 years of computer science, IT, or related field experience; OR Bachelor’s degree and 3 to 5 years of computer science, IT, or related field experience; OR Diploma and 7 to 9 years of computer science, IT, or related field experience. Hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.). Expertise in Terraform & CloudFormation for AWS infrastructure automation. Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53). Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.). Troubleshooting and debugging skills in cloud networking, storage, and security.
Preferred Qualifications: Experience with Kubernetes (EKS) and service mesh architectures. Knowledge of AWS Lambda and event-driven architectures. Familiarity with AWS CDK, Ansible, or Packer for cloud automation. Exposure to multi-cloud environments (Azure, GCP). Familiarity with HPC, DGX Cloud.
Professional Certifications (preferred): AWS Certified Solutions Architect – Associate or Professional; AWS Certified DevOps Engineer – Professional.
Soft Skills: Strong analytical and problem-solving skills. Ability to work effectively with global, virtual teams. Effective communication and collaboration with multi-functional teams. Ability to work in a fast-paced, cloud-first environment.
Shift Information: This position is required to be onsite and participate in 24/5 and weekend on-call in a rotation fashion and may require you to work a later shift. Candidates must be willing and able to work off hours, as required based on business requirements.
What You Can Expect Of Us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
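As one concrete example of the CloudWatch monitoring and alerting work described in this role, an alarm on sustained high CPU for an autoscaled service can be created with boto3. The alarm name, Auto Scaling group, SNS topic, and thresholds below are placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm on 15 minutes of sustained high CPU for an autoscaled service.
    cloudwatch.put_metric_alarm(
        AlarmName="orders-api-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "orders-api-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=3,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
    )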
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
ahmedabad, gujarat, india
On-site
Role Description
Role Proficiency: Acts under the guidance of DevOps leads; leads more than one Agile team.
Outcomes: Interprets the DevOps tool/feature/component design to develop and support it in accordance with specifications. Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts. Codes, debugs, tests, documents, and communicates DevOps development stages and the status of develop/support issues. Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimises the efficiency, cost, and quality of DevOps processes, tools, and technology development. Validates results with user representatives; integrates and commissions the overall solution. Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs. Designs, installs, and troubleshoots CI/CD pipelines and software. Able to automate infrastructure provisioning on cloud or on-premises with the guidance of architects. Provides guidance to DevOps Engineers so that they can support existing components. Good understanding of Agile methodologies and able to work with diverse teams. Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source).
Measures of Outcomes: Quality of deliverables; error rate/completion rate at various stages of SDLC/PDLC; number of components reused; number of domain/technology/product certifications obtained; SLA/KPI for onboarding projects or applications; stakeholder management; percentage achievement of specification/completeness/on-time delivery.
Outputs Expected:
Automated components: Deliver components that automate the installation/configuration of software/tools on premises and in the cloud, and components that automate parts of the build/deploy for applications.
Configured components: Configure tools and the automation framework into the overall DevOps design.
Scripts: Develop/support scripts (e.g., PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks.
Training/SOPs: Create training plans/SOPs to help DevOps Engineers with DevOps activities and with onboarding users.
Measure process efficiency/effectiveness: Deployment frequency, innovation, and technology changes.
Operations: Change lead time/volume, failed deployments, defect volume and escape rate, mean time to detection and recovery.
Skill Examples: Experience in designing, installing, configuring, and troubleshooting CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes; integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover; integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit; scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell); infrastructure automation (Ansible/Puppet/Chef/PowerShell); repository management/migration automation (Git, Bitbucket, GitHub, ClearCase); build automation scripts (Maven, Ant); artefact repository management (Nexus/Artifactory); dashboard management and automation (ELK/Splunk); configuration of cloud infrastructure (AWS, Azure, Google); migration of applications from on-premises to cloud infrastructure; working on Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration), with strong debugging skills in C# and .NET; setting up and managing Jira projects and Git/Bitbucket repositories; skilled in containerization tools like Docker and Kubernetes.
Knowledge Examples: Installation/config/build/deploy processes and tools; IaaS cloud providers (AWS, Azure, Google, etc.) and their toolsets; the application development lifecycle; Quality Assurance processes; quality automation processes and tools; multiple tool stacks, not just one; build and release, branching/merging; containerization; Agile methodologies; software security compliance (GDPR/OWASP) and tools (Black Duck, Veracode, Checkmarx).
Additional Comments: A DevOps engineer with 8-12 years of experience. Typical responsibilities of a DevOps Harness engineer include:
Harness expertise: a strong understanding of the Harness platform and its capabilities, including pipelines, deployments, configurations, and security features.
CI/CD pipeline management: design, develop, and manage CI/CD pipelines using Harness, automating tasks such as code building, testing, deployment, and configuration management.
Automation playbook creation: create reusable automation scripts (playbooks) for deployments, configuration control, infrastructure provisioning, and other repetitive tasks.
Scalability and standards: ensure scalability of the CI/CD pipelines and adherence to organizational standards for deployment processes.
DevOps technologies: familiarity with Docker, Kubernetes, and Jenkins, especially in the context of cloud platforms.
Security: integrate security best practices into the CI/CD pipelines (SecDevOps).
1. Candidates must have strong working experience with Kubernetes core concepts such as autoscaling, RBAC, and pod placement, as well as advanced concepts such as Karpenter and service mesh.
2. Candidates must have strong working experience with AWS services such as CloudWatch, EKS, ECS, and DynamoDB.
3. Candidates must have strong working experience with IaC, especially Terraform and Terragrunt, and should be able to create modules; experience in infrastructure provisioning with AWS is required.
4. Candidates must have strong working experience with scripting languages such as Shell, PowerShell, or Python.
5. Candidates must have strong working experience with CI/CD concepts such as creating pipelines and automating deployments.
Skills: Kubernetes, IaC, DevOps, AWS Cloud
Posted 6 days ago
0 years
0 Lacs
india
Remote
Granica is redefining how enterprises prepare and optimize data at the most fundamental layer of the AI stack—where raw information becomes usable intelligence. Our technology operates deep in the data infrastructure layer, making data efficient, secure, and ready for scale. We eliminate the hidden inefficiencies in modern data platforms—slashing storage and compute costs, accelerating pipelines, and boosting platform efficiency. The result: 60%+ lower storage costs, up to 60% lower compute spend, 3× faster data processing, and 20% overall efficiency gains. Why It Matters Massive data should fuel innovation, not drain budgets. We remove the bottlenecks holding AI and analytics back—making data lighter, faster, and smarter so teams can ship breakthroughs, not babysit storage and compute bills. Who We Are World renowned researchers in compression, information theory, and data systems Elite engineers from Google, Pure Storage, Cohesity and top cloud teams Enterprise sellers who turn ROI into seven‑figure wins. Powered by World-Class Investors & Customers $65M+ raised from NEA, Bain Capital, A* Capital, and operators behind Okta, Eventbrite, Tesla, and Databricks. Our platform already processes hundreds of petabytes for industry leaders Our Mission: We’re building the default data substrate for AI, and a generational company built to endure. What We’re Looking For You’ve built systems where petabyte-scale performance, resilience, and clarity of design all matter . You thrive at the intersection of infrastructure engineering and applied research, and care deeply about both how something works and how well it works at scale. We're looking for someone with experience in: Lakehouse and Transactional Data Systems Proven expertise with formats like Delta Lake or Apache Iceberg, including ACID-compliant table design, schema evolution, and time-travel mechanics. Columnar Storage Optimization Deep knowledge of Parquet, including techniques like column ordering, dictionary encoding, bit-packing, bloom filters, and zone maps to reduce scan I/O and improve query efficiency. Metadata and Indexing Systems Experience building metadata-driven services—compaction, caching, pruning, and adaptive indexing that accelerate query planning and eliminate manual tuning. Distributed Compute at Scale Production-grade Spark/Scala pipeline development across object stores like S3, GCS, and ADLS, with an eye toward autoscaling, resilience, and observability. Programming for Scale and Longevity Strong coding skills in Java, Scala, or Go, with a focus on clean, testable code and a documented mindset that enables future engineers to build on your work, not rewrite it. Resilient Systems and Observability You’ve designed systems that survive chaos drills, avoid pager storms, and surface the right metrics to keep complex infrastructure calm and visible. Latency as a Product Metric You think in terms of human latency—how fast a dashboard feels to the analyst, not just the system. You take pride in chasing down every unnecessary millisecond. Mentorship and Engineering Rigor You publish your breakthroughs, mentor peers, and contribute to a culture of engineering excellence and continuous learning. WHY JOIN GRANICA If you’ve helped build the modern data stack at a large company—Databricks, Snowflake, Confluent, or similar—you already know how critical lakehouse infrastructure is to AI and analytics at scale. At Granica, you’ll take that knowledge and apply it where it matters most…at the most fundamental layer in the data ecosystem. 
Own the product, not just the feature. At Granica, you won’t be optimizing edge cases or maintaining legacy systems. You’ll architect and build foundational components that define how enterprises manage and optimize data for AI. Move faster, go deeper. No multi-month review cycles or layers of abstraction—just high-agency engineering work where great ideas ship weekly. You’ll work directly with the founding team, engage closely with design partners, and see your impact hit production fast. Work on hard, meaningful problems. From transaction layer design in Delta and Iceberg, to petabyte-scale compaction and schema evolution, to adaptive indexing and cost-aware query planning—this is deep systems engineering at scale. Join a team of expert builders. Our engineers have designed the core internals of cloud-scale data systems, and we maintain a culture of peer-driven learning, hands-on prototyping, and technical storytelling. Core Differentiation: We’re focused on unlocking a deeper layer of AI infrastructure. By optimizing the way data is stored, processed, and retrieved, we make platforms like Snowflake and Databricks faster, more cost-efficient, and more AI-native. Our work sits at the most fundamental layer of the AI stack: where raw data becomes usable intelligence. Be part of something early—without the chaos. Granica has already secured $65M+ from NEA, Bain Capital Ventures, A* Capital, and legendary operators from Okta, Tesla, and Databricks. Grow with the company. You’ll have the chance to grow into a technical leadership role, mentor future hires, and shape both the engineering culture and product direction as we scale. Benefits: Highly competitive compensation with uncapped commissions and meaningful equity Immigration sponsorship and counseling Premium health, dental, and vision coverage Flexible remote work and unlimited PTO Quarterly recharge days and annual team off-sites Budget for learning, development, and conferences Help build the foundational infrastructure for the AI era Granica is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
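The columnar-storage techniques mentioned in this listing (column ordering, dictionary encoding, row-group statistics for pruning) can be exercised directly with PyArrow. A small sketch follows; the schema and tuning values are illustrative assumptions, not Granica's implementation.

    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "tenant_id": ["t1", "t2", "t1", "t3"] * 25_000,
        "event":     ["click", "view", "click", "buy"] * 25_000,
        "ts":        list(range(100_000)),
    })

    # Sorting by a low-cardinality column first keeps row-group min/max ranges narrow,
    # which is what makes zone-map-style pruning and dictionary encoding effective.
    table = table.sort_by([("tenant_id", "ascending"), ("ts", "ascending")])

    pq.write_table(
        table, "events.parquet",
        compression="zstd",
        use_dictionary=["tenant_id", "event"],   # dictionary-encode repetitive columns
        row_group_size=32_000,                   # smaller groups -> finer pruning
    )
    print(pq.ParquetFile("events.parquet").metadata)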
Posted 6 days ago
6.0 years
0 Lacs
gurugram, haryana, india
On-site
🚀 We’re Hiring: DevOps Engineer (Cloud & MLOps) 📍 Gurugram (On-site) | 🕒 Full-Time | 💼 2–6 Years Experience 🗓️ Mon–Sat | ⏰ 10:30 AM – 8:00 PM | 1st & 3rd Saturdays off About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, redefining enterprise sales and customer engagement with real-time conversational intelligence. We are building high-performing teams, and strong financial operations are key to keeping our growth engine running smoothly. Role Overview We’re looking for a DevOps Engineer to design, build, and scale cloud-native infrastructure powering our AI and ML services. You’ll be the backbone of our MLOps workflows , ensuring secure, automated, and highly available systems for model training, deployment, and real-time inference at scale. What You’ll Work On 1. Cloud Infrastructure Architect and manage scalable AWS environments (EC2, S3, IAM, Lambda, SageMaker, EKS). Implement best practices for infrastructure security, secrets management & high availability. 2. CI/CD & Automation Build and maintain automated pipelines with GitHub Actions, Docker, and Terraform . Automate deployment, monitoring, and scaling workflows. 3. MLOps & Model Lifecycle Manage ML model deployments, versioning, and inference workflows using MLflow/DVC. Collaborate with data scientists to operationalize training pipelines and artifact storage. 4. Kubernetes & Workload Orchestration Deploy and monitor workloads in Amazon EKS (managed & self-managed). Optimize cluster performance, autoscaling, and failover automation. 5. Monitoring & Reliability Ensure system uptime, logging, and performance monitoring for ML services. Drive incident response playbooks and disaster recovery readiness. 6. Collaboration & Delivery Work closely with backend engineers & data teams. Align DevOps workflows with Agile sprints & product milestones. Who You Are ✅ 2–6 years of experience in DevOps / MLOps ✅ Strong hands-on experience with AWS (EC2, S3, IAM, Lambda, SageMaker, EKS) ✅ Skilled in Kubernetes orchestration & EKS deployment ✅ Hands-on with CI/CD pipelines, GitHub Actions, Terraform & Docker ✅ Proficient in scripting/automation ( Python, Shell ) ✅ Familiar with MLflow, DVC , or model lifecycle tools ✅ Strong fundamentals in cloud security, monitoring & infrastructure reliability Why Join Us? 🌟 Competitive salary + ESOPs + performance bonuses 🌟 Work with AI at the frontier of real-time voice, vision & NLP systems 🌟 Access to GPUs, cloud credits & cutting-edge DevOps/MLOps stacks 🌟 Direct collaboration with founders, IIT/BITS/IIM alums & global advisors 🌟 Ownership of critical infrastructure powering AI for 100+ enterprises globally 📩 How to Apply Send your resume to career@darwix.ai 📌 Subject: DevOps Engineer – [Your Name]
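For the MLflow-based model lifecycle work described above, a typical flow is logging a run and registering the resulting model so deployments can be gated on registry stage. A minimal sketch using the MLflow 2.x-style API follows; the tracking URI, experiment name, and model are placeholders.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, random_state=0)
    model = LogisticRegression(max_iter=200).fit(X, y)

    mlflow.set_tracking_uri("http://mlflow.internal:5000")   # placeholder tracking server
    mlflow.set_experiment("call-scoring")

    with mlflow.start_run(run_name="lr-baseline"):
        mlflow.log_params({"max_iter": 200})
        mlflow.log_metric("train_acc", model.score(X, y))
        # Log the model artifact and register it in one step for later promotion.
        mlflow.sklearn.log_model(model, "model", registered_model_name="call-scoring")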
Posted 6 days ago
0 years
0 Lacs
india
On-site
About the Role: The objectives of this assignment are to onboard a Microsoft Azure Integration and Container Specialist who focuses on designing, implementing, and managing cloud-based integration solutions and containerized applications. This role blends expertise in Azure services, containers, and integration technologies to enable scalable, secure, and efficient services. To achieve this Right Integration Pattern will be used, this primarily consists of Microsoft APIM, Azure ServiceBus and Azure Container Apps. Additional integration software components include Azure Data Factory, temporal.io and Spring workflow. The primary aim of this assignment is to develop and implement using the Right Integration Pattern (RIP) using Azure Container applications. This project is strategically important as it aims to leverage Azure’s scalable and reliable services to create a robust integration framework. The broader context of this project involves enhancing the efficiency and reliability of cloud-based applications using advanced Azure services. Key focus areas include: Development of Azure Container Apps (ACA): Utilize ACA to maintain containerized applications and microservices in a highly scalable and cost-effective manner. This involves managing server configurations, clusters, and container orchestration using open-source technologies like Distributed Application Runtime (DApR), KEDA, and Envoy. Implementation of Azure Service Bus (ASB): Employ ASB to decouple applications and services, ensuring scalable and reliable communication through message queues and publish-subscribe topics. This will facilitate point-to-point communication and event-driven architecture, enhancing the overall system’s responsiveness and reliability. API Management: Use Azure API Management (APIM) to create, publish, secure, monitor, and analyze APIs. This will act as a gateway between backend services and API consumers, ensuring efficient management of the entire API lifecycle. The significant achievements expected from this assignment include the successful deployment of a highly scalable and reliable integration pattern, improved resource management through Azure Resource Groups, and enhanced application performance and reliability through the use of ACA and ASB. There will be a focus on the development of common services using the RIP architecture. Scope of Work/Responsibilities The RIP Developer will be responsible for designing, developing, and implementing robust and scalable RIP (Right Integration Platform) solutions to support digital transformation initiatives. The consultant will work closely with the project team to understand requirements, develop technical specifications, and ensure the delivery of high-quality RIP systems. Key activities include: Analyzing project requirements and translating them into technical specifications. Designing and developing RIP software components and modules. Conducting thorough testing and debugging of RIP systems to ensure functionality and performance. Collaborating with cross-functional teams to integrate RIP solutions with existing systems. Providing technical support and troubleshooting for RIP-related issues. Documenting all development processes, code changes, and system configurations. Ensuring compliance with standards and best practices in software development. Training and mentoring junior developers on RIP technologies and methodologies. 
The consultant is expected to deliver a fully functional RIP system that meets the project objectives, along with comprehensive documentation and support materials. Detailed Tasks/Expected Output Requirement Analysis and Planning: Conduct a thorough analysis of the project requirements for developing RIP Azure container applications. Design and Architecture: Create detailed design documents that include the logical grouping of Azure services within the Azure Resource Group (RG) for particular projects or initiative. Development and Configuration: Develop and configure ACA to host containerized applications and microservices, ensuring the use of dapr for state management, service invocation, and event-driven architecture. Implement KEDA for event-driven autoscaling of Kubernetes workloads. Set up and configure ASB for message queues and publish-subscribe topics to ensure reliable and scalable communication between applications. API Management: Utilize Azure API Management (APIM) to create, publish, secure, monitor, and analyze APIs. Ensure the APIs are well-documented and accessible to API consumers, providing a seamless integration with backend services. Testing and Quality Assurance: Conduct relevant testing of the developed applications and services to ensure they meet the specified requirements and performance standards. Implement automated testing procedures to validate the functionality and reliability of the integration patterns. Deployment and Monitoring: Set-up infrastructure using Infra-as-Code (IaC) tools (e.g., Terraform, Ansible, CloudFormation). Deploy the developed applications and services to the Azure platform, ensuring proper configuration and resource allocation within the Azure Resource Group. Set up monitoring and logging mechanisms to track the performance and health of the applications and services (e.g., Datadog, Prometheus, Grafana, ELK Stack). Documentation and Training: Prepare detailed documentation covering the design, development, configuration, and deployment processes. Provide training sessions and support to the project team and stakeholders to ensure a smooth transition and effective use of the developed solutions. Deliverables The individual shall deliver the following key outputs: 1) Contribute to project plan detailing timelines, milestones, and resource allocation. 2) Detailed design documents and technical specifications for project implementation. 3) Develop and configure microservices and API related deliverables using RIP and other provided tool Requirement and Qualification (Education & Work Experience) The ideal candidate for the RIP Developer role must possess the following qualifications: Essential Skills and Experience: Proven experience in developing and deploying applications using Azure Container Apps (ACA) and Azure Service Bus (ASB). Strong understanding of serverless container services, microservices architecture, and event-driven architecture. Proficiency in using Distributed Application Runtime (dapr), Kubernetes-based Event Driven Autoscaling (KEDA), and Envoy. Demonstrated ability to manage resource provisioning, access controls, and cost management within Azure Resource Groups. Experience with API Management (APIM) for creating, publishing, securing, monitoring, and analyzing APIs. Educational Qualifications: A bachelor’s degree in Computer Science, Information Technology, or a related field is required. 
Desirable - Advanced certifications in Azure technologies, such as Microsoft Certified: Azure Developer Associate or Microsoft Certified: Azure Solutions Architect Expert, are highly desirable. Additional Experience: Experience with state management for stateful applications and services invocation to other microservices. Familiarity with message queues and publish-subscribe topics for scalable and reliable cloud-based applications. Knowledge of retry mechanisms and dead letter queues (DLQ) for message processing. These qualifications and background ensure that the candidate is well-equipped to develop and manage highly scalable and reliable integration patterns using Azure services. Work Arrangement Hybrid, requiring contractors to report onsite three times a week. Work Schedule is from 8:00AM – 5:00PM Manila Time.
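The Azure Service Bus decoupling described in this assignment (point-to-point queues, completion on success, dead-lettering of poison messages) looks roughly like the following with the azure-servicebus v7 SDK; the connection string, queue name, and handler are placeholders.

    import json
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."  # placeholder
    QUEUE = "orders-inbound"                                                # placeholder

    def process(message):
        payload = json.loads(str(message))       # raises ValueError on a malformed body
        print("handled order", payload["orderId"])

    with ServiceBusClient.from_connection_string(CONN) as sb:
        # Publisher side: decoupled point-to-point send.
        with sb.get_queue_sender(queue_name=QUEUE) as sender:
            sender.send_messages(ServiceBusMessage('{"orderId": 42}'))

        # Consumer side: complete on success, dead-letter a poison message.
        with sb.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
            for msg in receiver:
                try:
                    process(msg)
                    receiver.complete_message(msg)
                except ValueError:
                    receiver.dead_letter_message(msg, reason="unprocessable")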
Posted 6 days ago
10.0 years
0 Lacs
hyderabad, telangana, india
On-site
Job Title: Site Reliability Engineering (SRE) Manager Location : Hyderabad Employment Type: Full-Time Work Model - 3 Days from office (Hybrid) Summary: The SRE Manager at TechBlocks India will lead the reliability engineering function, ensuring infrastructure resiliency and optimal operational performance. This hybrid role blends technical leadership with team mentorship and cross-functional coordination. Roles and Responsibilities: Establish and lead the implementation of organizational reliability strategies, aligning SLAs, SLOs, and Error Budgets with business goals and customer expectations. Develop and institutionalize incident response frameworks, including escalation policies, on-call scheduling, service ownership mapping, and RCA process governance. Lead technical reviews for infrastructure reliability design, high-availability architectures, and resiliency patterns across distributed cloud services. Champion observability and monitoring culture by standardizing tooling, alert definitions, dashboard templates, and telemetry data schemas across all product teams. Drive continuous improvement through operational maturity assessments, toil elimination initiatives, and SRE OKRs aligned with product objectives. Collaborate with cloud engineering and platform teams to introduce self-healing systems, capacity-aware autoscaling, and latency-optimized service mesh patterns. Act as the principal escalation point for reliability-related concerns and ensure incident retrospectives lead to measurable improvements in uptime and MTTR. Own runbook standardization, capacity planning, failure mode analysis, and production readiness reviews for new feature launches. Mentor and develop a high-performing SRE team, fostering a proactive ownership culture, encouraging cross-functional knowledge sharing, and establishing technical career pathways. Collaborate with leadership, delivery, and customer stakeholders to define reliability goals, track performance, and demonstrate ROI on SRE investments Experience Required: 10+ years total experience, with 3+ years in a leadership role in SRE or Cloud Operations. Technical Knowledge and Skills: Mandatory: Deep understanding of Kubernetes, GKE, Prometheus, Terraform Cloud: Advanced GCP administration CI/CD: Jenkins, Argo CD, GitHub Actions Incident Management: Full lifecycle, tools like OpsGenie Nice to Have : Knowledge of service mesh and observability stacks Strong scripting skills (Python, Bash) Big Query /Dataflow exposure for telemetry Scope: Build and lead a team of SREs Standardize practices for reliability, alerting, and response Engage with Engineering and Product leaders About TechBlocks TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. At TechBlocks, we believe technology is only as powerful as the people behind it. We foster a culture of collaboration, creativity, and continuous learning, where big ideas turn into real impact. Whether you're building seamless digital experiences, optimizing enterprise platforms, or tackling complex integrations, you'll be part of a dynamic, fast-moving team that values innovation and ownership. Join us and shape the future of digital transformation.
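Since the role centres on aligning SLOs and error budgets with business goals, the underlying arithmetic is worth keeping explicit. A small illustrative sketch:

    def error_budget_report(slo: float, window_days: int, bad_minutes: float) -> dict:
        """Remaining error budget for an availability SLO over a rolling window."""
        total_minutes = window_days * 24 * 60
        budget_minutes = total_minutes * (1 - slo)      # e.g. 99.9% over 30d ~= 43.2 min
        used_fraction = bad_minutes / budget_minutes if budget_minutes else float("inf")
        return {
            "budget_minutes": round(budget_minutes, 1),
            "remaining_minutes": round(budget_minutes - bad_minutes, 1),
            "budget_used_fraction": round(used_fraction, 2),   # >1.0 means the budget is exhausted
        }

    print(error_budget_report(slo=0.999, window_days=30, bad_minutes=12.5))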
Posted 1 week ago