8.0 years
0 Lacs
Uttar Pradesh
On-site
We are looking for a highly skilled and motivated DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will be responsible for managing our infrastructure and CI/CD pipelines and automating processes to ensure smooth deployment cycles. The ideal candidate will have a strong understanding of cloud platforms (AWS, Azure, GCP), version control tools (GitHub, GitLab), CI/CD tools (GitHub Actions, Jenkins, Azure DevOps, and Argo CD with GitOps methodologies), and the ability to work in a fast-paced environment.

RESPONSIBILITIES
- Design, implement, and manage CI/CD pipelines using GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Manage and automate the deployment of applications on cloud platforms such as AWS, GCP, and Azure.
- Maintain and optimize cloud-based infrastructure, ensuring high availability, scalability, and performance.
- Utilize GitHub and GitLab for version control, branching strategies, and managing code repositories.
- Collaborate with development, QA, and operations teams to streamline the software delivery process.
- Monitor system performance and resolve issues related to automation, deployments, and infrastructure.
- Implement security best practices across CI/CD pipelines, cloud resources, and other environments.
- Troubleshoot and resolve infrastructure issues, including scaling, outages, and performance degradation.
- Automate routine tasks and infrastructure management to improve system reliability and developer productivity.
- Stay up to date with the latest DevOps practices, tools, and technologies.

REQUIRED SKILLS
- At least 8 years' experience as a DevOps Engineer.
- Proven experience as a DevOps Engineer, Cloud Engineer, or in a similar role.
- Expertise in CI/CD tools, including GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Strong proficiency with GitHub and GitLab for version control, repository management, and collaborative development.
- Extensive experience working with cloud platforms such as AWS, Azure, and Google Cloud Platform (GCP).
- Solid understanding of infrastructure-as-code (IaC) tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK stack).
- Experience in scripting languages such as Python, Bash, or PowerShell.
- Strong knowledge of networking, security, and performance optimization in cloud environments.
- Familiarity with Agile development methodologies and collaboration tools.

Education: B.Tech/M.Tech/MBA/BE/MCA degree
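For illustration, here is a minimal sketch of a post-deployment smoke test of the kind such a pipeline might run as its final stage. The health-check URL and retry thresholds are hypothetical placeholders; a non-zero exit code is what fails the surrounding CI/CD job in GitHub Actions, Jenkins, or similar tools.

```python
import sys
import time

import requests  # pip install requests

HEALTH_URL = "https://example.internal/healthz"  # hypothetical endpoint
RETRIES, DELAY_SECONDS = 5, 10

def wait_for_healthy(url: str) -> bool:
    """Poll the health endpoint until it returns 200 or retries run out."""
    for attempt in range(1, RETRIES + 1):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                print(f"healthy after attempt {attempt}")
                return True
        except requests.RequestException as exc:
            print(f"attempt {attempt}: {exc}")
        time.sleep(DELAY_SECONDS)
    return False

if __name__ == "__main__":
    # Non-zero exit fails the pipeline stage that invoked this script.
    sys.exit(0 if wait_for_healthy(HEALTH_URL) else 1)
```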
Posted 3 weeks ago
0 years
0 Lacs
Noida
On-site
Join our Team

About this opportunity:
We are seeking a Senior OpenShift Engineer to lead the migration, modernization, and management of enterprise container platforms using Red Hat OpenShift. This role involves migrating legacy applications to OpenShift, optimizing workloads, and ensuring high availability across hybrid and multi-cloud environments. The ideal candidate will be skilled in container orchestration, DevOps automation, and cloud-native transformations.

What you will do:
- Lead migration projects to move workloads from legacy platforms (on-prem KVM/VMware/OpenStack, on-prem Kubernetes, OpenShift 3.x) to OpenShift 4.x.
- Assess and optimize monolithic applications for containerization and microservices architecture.
- Develop strategies for stateful and stateless application migrations with minimal downtime.
- Work with developers and architects to refactor or replatform applications for cloud-native environments.
- Implement migration automation using Ansible, Helm, or OpenShift GitOps (ArgoCD/FluxCD).
- Design, deploy, and manage scalable, highly available OpenShift clusters across on-prem and cloud.
- Implement multi-cluster, hybrid cloud, and multi-cloud OpenShift architectures.
- Define resource quotas, auto-scaling policies, and workload optimizations for performance tuning.
- Oversee OpenShift upgrades, patching, and lifecycle management.

The skills you bring:
- Deep hands-on experience with Red Hat OpenShift (OCP 4.x+), Kubernetes, and Docker.
- Strong knowledge of application migration strategies (Lift & Shift, Replatforming, Refactoring).
- Proficiency in cloud-native application development and microservices.
- Expertise in cloud platforms (AWS, Azure, GCP) with OpenShift deployments.
- Advanced scripting and automation using Bash, Python, Ansible, or Terraform.
- Experience with GitOps methodologies (ArgoCD, FluxCD) and Infrastructure as Code (IaC).
- Strong problem-solving skills with a strategic mindset for complex migrations.
- Experience in leading technical projects and mentoring engineers.
- Excellent communication and documentation skills.

Certifications (preferred but not mandatory):
- Red Hat Certified Specialist in OpenShift Administration (EX280)
- Certified Kubernetes Administrator (CKA)
- AWS/Azure/GCP Kubernetes/OpenShift-related certifications

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 766908
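As a small illustration of the assessment step such a migration starts with, a sketch (assuming a reachable kubeconfig and a hypothetical namespace name) that inventories Deployments and their container images on a legacy cluster before planning the move:

```python
from kubernetes import client, config  # pip install kubernetes

def inventory(namespace: str) -> None:
    """List Deployments and their images as a first-pass migration inventory."""
    config.load_kube_config()  # uses the current kubeconfig context
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        images = [c.image for c in dep.spec.template.spec.containers]
        print(f"{dep.metadata.name}: replicas={dep.spec.replicas}, images={images}")

if __name__ == "__main__":
    inventory("legacy-apps")  # hypothetical namespace
```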
Posted 3 weeks ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: VP - Digital Expert Support Lead
Experience: 12+ Years
Location: Pune

Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching (see the sketch after this listing).
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficient in monitoring, logging, tracing, and alerts using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
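To make the fallback-logic responsibility above concrete, a minimal sketch of a timeout-guarded call to a GenAI endpoint with a degraded fallback path. Both endpoint URLs and the latency threshold are hypothetical; real circuit breakers would also track failure rates over time rather than reacting per-request.

```python
import requests  # pip install requests

PRIMARY = "https://genai.internal/v1/answer"   # hypothetical endpoints
FALLBACK = "https://genai.internal/v1/cached"
TIMEOUT_SECONDS = 2.0  # stands in for a latency threshold from the SLA

def ask(prompt: str) -> str:
    """Call the primary expert endpoint; fall back on timeout or error."""
    try:
        resp = requests.post(PRIMARY, json={"prompt": prompt}, timeout=TIMEOUT_SECONDS)
        resp.raise_for_status()
        return resp.json()["answer"]
    except (requests.Timeout, requests.HTTPError, requests.ConnectionError):
        # Degraded path keeps the user journey alive while ops investigates.
        resp = requests.post(FALLBACK, json={"prompt": prompt}, timeout=TIMEOUT_SECONDS)
        resp.raise_for_status()
        return resp.json()["answer"]
```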
Posted 3 weeks ago
4.0 years
0 Lacs
Thane, Maharashtra, India
On-site
DevOps Engineer - Kubernetes Specialist
Experience: 4 - 8 Years
Salary: Competitive
Preferred Notice Period: Within 30 Days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)
Must-have skills: Kubernetes, CI/CD, Google Cloud

Ripplehire (one of Uplers' clients) is looking for a DevOps Engineer - Kubernetes Specialist who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are seeking an experienced DevOps Engineer with deep expertise in Kubernetes, primarily Google Kubernetes Engine (GKE), to join our dynamic team. The ideal candidate will be responsible for designing, implementing, and maintaining scalable containerized infrastructure, with a strong focus on cost optimization and operational excellence.

Key Responsibilities & Required Skills

Kubernetes Infrastructure & Deployment
Responsibilities:
- Design, deploy, and manage production-grade Kubernetes clusters
- Perform cluster upgrades, patching, and maintenance with minimal downtime
- Deploy and manage multiple microservices with ingress controllers and networking
- Configure storage solutions and persistent volumes for stateful applications
Required Skills:
- 3+ years of hands-on Kubernetes experience in production environments, primarily on Google Kubernetes Engine (GKE)
- Strong experience with Google Cloud Platform (GCP) and GKE-specific features
- Deep understanding of Docker, container orchestration, and GCP networking concepts
- Knowledge of Helm charts, YAML/JSON configuration, and service mesh technologies

CI/CD, Monitoring & Automation
Responsibilities:
- Design and implement robust CI/CD pipelines for Kubernetes deployments
- Implement comprehensive monitoring, logging, and alerting solutions
- Leverage AI tools and automation to improve team efficiency and task speed
- Create dashboards and implement GitOps workflows
Required Skills:
- Proficiency with Jenkins, GitLab CI, GitHub Actions, or similar CI/CD platforms
- Experience with Prometheus, Grafana, the ELK stack, or similar monitoring solutions
- Knowledge of Infrastructure as Code tools (Terraform, Ansible)
- Familiarity with AI/ML tools for DevOps automation and efficiency improvements

Cost Optimization & Application Management
Responsibilities:
- Analyze and optimize resource utilization across Kubernetes workloads
- Implement right-sizing strategies for services and batch jobs (see the sketch after this listing)
- Deploy and manage Java-based applications and MySQL databases
- Configure horizontal/vertical pod autoscaling and resource management
Required Skills:
- Experience with resource management, capacity planning, and cost optimization
- Understanding of Java application deployment and MySQL database administration
- Knowledge of database operators, StatefulSets, and backup/recovery solutions
- Proficiency in scripting languages (Bash, Python, or Go)

Preferred Qualifications
- Experience with additional Google Cloud Platform services (Compute Engine, Cloud Storage, Cloud SQL, Cloud Build)
- Knowledge of GKE advanced features (Workload Identity, Binary Authorization, Config Connector)
- Experience with other cloud Kubernetes services (AWS EKS, Azure AKS) is a plus
- Knowledge of container security tools and chaos engineering
- Experience with multi-cluster GKE deployments and service mesh (Istio, Linkerd)
- Familiarity with AI-powered monitoring and predictive analytics platforms

Key Competencies
- Strong problem-solving skills with an innovative mindset toward AI-driven solutions
- Excellent communication and collaboration abilities
- Ability to work in fast-paced, agile environments with attention to detail
- Proactive approach to identifying issues using modern tools and AI assistance
- Ability to mentor team members and promote AI adoption for team efficiency

Join our team and help shape the future of our DevOps practices with cutting-edge containerized infrastructure.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Ripplehire is recruitment SaaS for companies to identify the right candidates from employees' social networks and gamify the employee referral program with contests and referral bonuses to engage employees in the recruitment process. Developed and managed by Trampoline Tech Private Limited. Recognized by InTech50 as one of the Top 50 innovative enterprise software companies coming out of India, and an NHRD (HR Association) Staff Pick for the most innovative social recruiting tool in India. Used by 7 clients as of July 2014. It is a tool available on a subscription-based pricing model.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
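The right-sizing work mentioned above usually starts by comparing autoscaler bounds with observed state. A sketch, assuming a reachable kubeconfig, that reports each HorizontalPodAutoscaler's replica bounds against its current utilization so over-provisioned workloads stand out:

```python
from kubernetes import client, config  # pip install kubernetes

def report_hpas(namespace: str) -> None:
    """Print each HPA's replica bounds vs. current state to spot over-provisioning."""
    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()
    for hpa in autoscaling.list_namespaced_horizontal_pod_autoscaler(namespace).items:
        spec, status = hpa.spec, hpa.status
        print(
            f"{hpa.metadata.name}: min={spec.min_replicas} max={spec.max_replicas} "
            f"current={status.current_replicas} "
            f"cpu%={status.current_cpu_utilization_percentage}"
        )

if __name__ == "__main__":
    report_hpas("default")  # hypothetical namespace
```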
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Role: DevOps Specialist
Experience: 7+ Years
Location: Hyderabad

Primary Skills:
- Strong experience with Docker and Kubernetes for container orchestration.
- Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows (see the sketch after this listing).
- Experience in microservices-based architecture design.
- Understanding of the SDLC, including CI and CD pipeline architecture.
- Experience with configuration management (Ansible).
- Experience with Infrastructure as Code (Terraform/Pulumi/CloudFormation).
- Experience with Git and version control systems.

Secondary Skills:
- Experience with CI/CD pipelines using Jenkins, AWS CodePipeline, or GitHub Actions.
- Experience with building and maintaining Dev, Staging, and Production environments.
- Familiarity with scripting languages (e.g., Python, Bash) for automation.
- Monitoring and logging tools like Prometheus and Grafana.
- Knowledge of Agile and DevOps methodologies.
- Incident management and root cause analysis.
- Excellent problem-solving and analytical skills.
- Excellent communication skills.

Mandatory work from office; 24x5 support.
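Since the role centers on maintaining resources via YAML manifests, here is a minimal sketch of the kind of pre-deploy lint pass a CI stage might run over a manifest file. The required-field list is deliberately simplistic and just an illustration; real validation would go through the API server's dry-run or a schema validator.

```python
import sys

import yaml  # pip install pyyaml

REQUIRED_TOP_LEVEL = ("apiVersion", "kind", "metadata")

def lint(path: str) -> list[str]:
    """Check every document in a manifest file for the basic required fields."""
    errors = []
    with open(path) as fh:
        for i, doc in enumerate(yaml.safe_load_all(fh)):
            if doc is None:
                continue  # skip empty documents between '---' separators
            for field in REQUIRED_TOP_LEVEL:
                if field not in doc:
                    errors.append(f"{path}[doc {i}]: missing {field}")
    return errors

if __name__ == "__main__":
    problems = lint(sys.argv[1])
    print("\n".join(problems) or "ok")
    sys.exit(1 if problems else 0)
```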
Posted 3 weeks ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad (Work from Office)
Experience: 15+ Years
Notice Period: Immediate to 15 days only

Hiring a Product Manager for an API Developer Portal Platform with a focus on AI demands evaluating both platform/product expertise and technical fluency, alongside strategic thinking and cross-functional leadership.

1. Product Management Skills
- 5+ years of experience in product management, preferably in developer platforms, APIs, or AI.
- Experience owning a product roadmap and lifecycle (discovery → MVP → GA → continuous iteration).
- Strong customer-centric mindset and user research abilities.
- Proven experience in cross-functional collaboration (Engineering, Design, Marketing, Developer Relations, etc.).

2. Domain Expertise
- Experience with the API lifecycle (design, publishing, documentation, analytics, monetization).
- Familiarity with developer portal platforms (Apigee, MuleSoft, WSO2, Kong, Postman, Stoplight, etc.).
- Understanding of OpenAPI/Swagger specs and API-first development.
- Experience with authentication/authorization (OAuth2, API keys, JWT, etc.).

3. AI/ML Integration Knowledge
- Understanding of LLMs, embeddings, and how GenAI can enhance developer experience.
- Experience integrating AI into developer tools (e.g., AI-assisted API documentation, smart search, chatbot integration).
- Awareness of prompt engineering, fine-tuning vs. RAG strategies, and ethical AI use.

4. Technical Literacy
- Ability to read and understand OpenAPI specs and JSON/YAML (see the sketch after this listing).
- Understanding of backend architecture and deployment models (SaaS, hybrid, on-prem).
- Experience with developer workflows (CI/CD, GitOps, Postman Collections, etc.).

5. Metrics & Business Acumen
- Defining success metrics: MAUs, time-to-first-call, API adoption, retention.
- Experience with monetization models and developer engagement KPIs.
- Knowledge of pricing models (tiered usage, pay-per-call, etc.) for APIs.
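As an example of the OpenAPI fluency this role asks for, a minimal sketch that walks the `paths` object of an OpenAPI 3.x spec (JSON form, hypothetical file name) and lists the operations a portal would catalogue:

```python
import json

def list_operations(spec_path: str) -> None:
    """Print METHOD /path pairs from an OpenAPI 3.x spec in JSON form."""
    with open(spec_path) as fh:
        spec = json.load(fh)
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if method in {"get", "post", "put", "patch", "delete"}:
                print(f"{method.upper():6} {path}  ({op.get('operationId', 'n/a')})")

if __name__ == "__main__":
    list_operations("openapi.json")  # hypothetical spec file
```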
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
Job Title: Platform & DevOps Engineer
Location: Hyderabad / Pune
Job Type: Contract to Hire
Domain: Banking and Finance

About the Client: Our client is a leading tech firm specializing in digital transformation, cloud solutions, and automation. They focus on innovation, efficiency, and continuous improvement through advanced technology and community-driven initiatives.

About the Role: iO associates is seeking an experienced Platform & DevOps Engineer to manage DevOps tools, implement CI/CD pipelines, and maintain infrastructure on Google Cloud Platform (GCP). This role requires expertise in automation, infrastructure management, and cloud-based solutions within the Banking and Finance sector.

Responsibilities:
- Develop and maintain CI/CD pipelines using Jenkins.
- Manage Terraform-based Infrastructure as Code (IaC).
- Work with GCP services: GCE, GKE, BigQuery, Pub/Sub, Monitoring.
- Implement GitOps best practices and automation.

Requirements:
- 5+ years' experience with Jenkins, Terraform, Docker, and Kubernetes (GKE).
- Strong understanding of security, monitoring, and compliance.
- Experience with image management using Packer and Docker.
- Banking industry experience is a plus.

If you're a skilled DevOps professional with a passion for automation and cloud infrastructure, apply now or send your CVs at .
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - MLOps Engineer!

In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on a public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.

Responsibilities
- Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these).
- Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS).
- Package and deploy AI/GenAI models on AWS services (SageMaker, Lambda, API Gateway); a sketch follows this listing.
- Write Python scripts for automation, deployment, and monitoring.
- Engage in the design, development, and maintenance of data pipelines for various AI use cases.
- Contribute actively to key deliverables as part of an agile development team.
- Set up model monitoring, logging, and alerting (e.g., drift, latency, failures).
- Ensure model governance, versioning, and traceability across environments.
- Collaborate with others to source, analyse, test, and deploy data processes.
- Experience in GenAI projects.

Qualifications we seek in you!

Minimum Qualifications
- Experience with MLOps practices.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Experience developing, testing, and deploying data pipelines.
- Strong Python programming skills.
- Hands-on experience deploying 2-3 AI/GenAI models in AWS.
- Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases.
- Clear and effective communication skills to interact with team members, stakeholders, and end users.

Preferred Qualifications/Skills
- Experience with Docker-based deployments.
- Exposure to model monitoring tools (Evidently, CloudWatch).
- Familiarity with RAG stacks or fine-tuning LLMs.
- Understanding of GitOps practices.
- Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
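For the model-deployment work described in the MLOps responsibilities above, a minimal sketch of invoking a deployed SageMaker real-time endpoint with boto3. The endpoint name and payload are hypothetical; the `invoke_endpoint` call and its parameters are the standard `sagemaker-runtime` API.

```python
import json

import boto3  # pip install boto3

def invoke(endpoint_name: str, features: dict) -> dict:
    """Send one JSON payload to a deployed SageMaker real-time endpoint."""
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    # The response body is a stream; read and decode it as JSON.
    return json.loads(resp["Body"].read())

if __name__ == "__main__":
    print(invoke("churn-model-prod", {"tenure": 12, "plan": "gold"}))  # hypothetical
```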
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Job Summary: We are looking for a highly experienced Technical Architect with a deep understanding of cloud-native architecture, scalable microservices, and secure SaaS platforms. The ideal candidate will have strong expertise in AWS (primary) and a solid command of backend and full-stack technologies such as Java, Python, Node.js, and TypeScript. You will drive the architectural vision across development, deployment, monitoring, and optimization, ensuring high performance, resilience, and scalability.

Key Responsibilities:
- Design and define end-to-end technical architecture for scalable, cloud-native SaaS applications.
- Lead the development of microservices, REST/GraphQL APIs, serverless solutions, and event-driven systems.
- Architect and implement secure and efficient CI/CD pipelines using GitOps and Infrastructure as Code (YAML, Terraform).
- Guide full-cycle engineering efforts: from design and development to deployment, observability, and optimization.
- Collaborate with cross-functional teams (DevOps, Product, QA) to ensure alignment with business goals.
- Promote adoption of security best practices, including OAuth, JWT, and data encryption (see the sketch after this listing).
- Stay ahead of technology trends, including AI-powered development tools like GitHub Copilot or Amazon CodeWhisperer.
- Provide architectural guidance for media processing workflows (image, 3D, video) and project collaboration platforms when applicable.
- Maintain system observability and performance using tools like CloudWatch, Prometheus, and Grafana.

Qualifications:
- 8+ years of professional experience in backend and cloud-native application architecture.
- Strong proficiency in AWS services and infrastructure; exposure to Azure/GCP is a plus.
- Expertise in Java, Python, Node.js, and TypeScript.
- Proven experience working with PostgreSQL, MongoDB, and DynamoDB.
- Deep understanding of serverless computing, microservices, and containerized platforms (e.g., Docker, Kubernetes).
- Familiarity with GitOps workflows, YAML, Terraform, and Infrastructure as Code practices.
- Strong knowledge of security protocols and authentication mechanisms.

Preferred Qualifications:
- Cloud certifications such as AWS Solutions Architect or equivalent.
- Hands-on experience with GitHub Copilot, Amazon CodeWhisperer, or similar AI-based developer tools.
- Exposure to media processing workflows (image, 3D, video) or collaboration/project management tools.
- Experience with GraphQL and event-driven architectures.
- Knowledge of system monitoring tools such as CloudWatch, Prometheus, and Grafana.
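To illustrate the JWT best practice named above, a minimal sketch of server-side token verification with PyJWT. The audience value is hypothetical; the key point is pinning the algorithm list and requiring core claims rather than trusting whatever the token declares.

```python
import jwt  # pip install PyJWT

def verify(token: str, public_key: str) -> dict:
    """Validate signature, expiry, and audience before trusting any claims."""
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],           # never let the token pick its own algorithm
        audience="my-saas-api",         # hypothetical audience identifier
        options={"require": ["exp", "iat", "sub"]},  # reject tokens missing these
    )
```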
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You’ll Be Doing…
You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.

- Leverage monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Design and configure alerts to ensure timely responses to critical events.
- Work with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
- Implement security best practices for cloud and containerized services and ensure adherence to security protocols. Configure IAM roles, VPC security, encryption, and compliance policies.
- Continuously optimize cloud infrastructure for performance, scalability, and cost-effectiveness. Use tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
- Work closely with the engineering and operations teams to design and implement cloud-based solutions. Provide mentorship and support to team members while sharing best practices for cloud engineering.
- Maintain detailed documentation of cloud architecture and platform configurations, and regularly provide status reports, performance metrics, and cost analysis to leadership.

What We’re Looking For...

You’ll need to have:
- Bachelor’s degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Four or more years of work experience in Kubernetes administration.
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
- Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards.
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python.
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices.
- Familiarity with cloud cost optimization (e.g., Kubecost).
- Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server.
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.); a small example follows this listing.
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with Service Mesh lifecycle management, and configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues.
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA & Gatekeeper policies.
- Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
- Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting.
- Certified Kubernetes Administrator (CKA).
- Demonstrated very strong troubleshooting and problem-solving skills.
- Excellent verbal and written communication skills.

Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD).
- Red Hat Certified OpenShift Administrator.
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals.
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.).

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
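As a small example of the cluster-health inspection mentioned in the requirements, a sketch (assuming a reachable kubeconfig) that flags nodes whose Ready condition is not True, which is a common first triage step before digging into workloads:

```python
from kubernetes import client, config  # pip install kubernetes

def node_health() -> None:
    """Flag nodes whose Ready condition is not True."""
    config.load_kube_config()
    core = client.CoreV1Api()
    for node in core.list_node().items:
        # Node conditions report status as the strings "True"/"False"/"Unknown".
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
        )
        marker = "OK " if ready == "True" else "BAD"
        print(f"{marker} {node.metadata.name}: Ready={ready}")

if __name__ == "__main__":
    node_health()
```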
Posted 3 weeks ago
7.0 years
10 - 20 Lacs
Hyderābād
Remote
Job Title: Senior DevOps Engineer
Experience Required: 7+ Years
Location: [Insert Location or Remote]
Employment Type: Full-time
Mandatory skills: GitLab and Terraform

Job Summary: We are seeking a highly skilled and experienced Senior DevOps Engineer with over 7 years of hands-on experience in designing, implementing, and managing scalable DevOps solutions. The ideal candidate should have strong expertise in GitLab, Terraform, and a solid background in scripting, container orchestration (Kubernetes), CI/CD, cloud platforms, and other modern DevOps tools and practices.

Key Responsibilities:
- Design, implement, and manage robust CI/CD pipelines using GitLab.
- Provision and manage infrastructure using Terraform (see the sketch after this listing).
- Collaborate with development and operations teams to streamline release cycles and ensure reliable deployment processes.
- Maintain and improve cloud infrastructure (AWS, Azure, or GCP).
- Manage containerized applications using Docker and Kubernetes.
- Automate operational processes using shell scripting, Python, or other relevant languages.
- Monitor system performance, availability, and reliability; respond to production incidents as needed.
- Maintain Infrastructure as Code (IaC) and configuration management standards.
- Implement best practices for security, scalability, and availability.
- Mentor junior DevOps engineers and help evolve the DevOps culture within the organization.

Required Skills & Qualifications:
- 7+ years of experience in DevOps or related roles.
- Strong hands-on experience with GitLab (CI/CD pipelines, runners, etc.).
- Proven expertise in Terraform for IaC.
- Proficiency in scripting languages such as Shell, Python, or Bash.
- Solid understanding of and working experience with Kubernetes and Docker.
- Experience with cloud platforms like AWS, Azure, or Google Cloud.
- Strong background in system administration, networking, and security best practices.
- Experience with monitoring/logging tools (e.g., Prometheus, Grafana, ELK).
- Knowledge of configuration management tools (e.g., Ansible, Chef, or Puppet) is a plus.
- Excellent problem-solving skills and a collaborative mindset.

Preferred Qualifications:
- Relevant certifications (e.g., AWS DevOps Engineer, CKAD, Terraform Associate).
- Experience with GitOps or progressive delivery tools like ArgoCD or Flux.
- Familiarity with the agile software development lifecycle.

Job Types: Full-time, Contractual/Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹2,000,000.00 per year
Benefits: Food provided, health insurance, paid sick time
Work Location: In person
Speak with the employer: +91 8106291379
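A sketch of how a pipeline job might summarize a Terraform plan before applying it, assuming Terraform is installed and the working directory is already initialized. It uses the standard `terraform plan -out` and `terraform show -json` commands and counts the planned actions from the JSON plan representation.

```python
import json
import subprocess

def plan_summary(workdir: str) -> dict:
    """Run terraform plan and count planned actions from the JSON plan output."""
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=workdir, check=True)
    show = subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        cwd=workdir, check=True, capture_output=True, text=True,
    )
    plan = json.loads(show.stdout)
    counts: dict = {}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            counts[action] = counts.get(action, 0) + 1
    return counts  # e.g. {"create": 3, "update": 1, "no-op": 12}

if __name__ == "__main__":
    print(plan_summary("."))
```

A gate like this can fail the job when unexpected deletes appear, which is a common guardrail before `terraform apply`.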
Posted 3 weeks ago
3.0 years
0 Lacs
Gurgaon
On-site
Job Summary: We are looking for a highly skilled DevOps Engineer with 3+ years of experience to take ownership of our infrastructure automation, CI/CD pipelines, cloud operations, and overall system reliability. You will work closely with development, QA, and IT teams to enable faster releases, ensure high availability, and improve system performance and scalability.

Roles & Responsibilities:
- Design, develop, and maintain scalable CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Azure DevOps.
- Deploy and manage cloud infrastructure on AWS / Azure / GCP, ensuring cost-efficiency and security.
- Implement Infrastructure as Code (IaC) using Terraform, Ansible, or CloudFormation.
- Monitor and manage application and infrastructure health using tools like Prometheus, Grafana, ELK, New Relic, or Datadog.
- Containerize applications using Docker and manage orchestration using Kubernetes / ECS / AKS / EKS.
- Implement security best practices, including secrets management, IAM policies, and network configurations (see the sketch after this listing).
- Automate repetitive tasks and enhance workflows using shell scripts, Python, or other scripting languages.
- Collaborate with developers and QA teams to troubleshoot issues and streamline release processes.
- Perform root cause analysis of production errors and implement improvements to prevent recurrence.

Qualifications:
- 3+ years of proven experience as a DevOps Engineer or in a similar role.
- Expertise in CI/CD tools and automation practices.
- Strong hands-on experience with Docker and Kubernetes.
- Proficiency in scripting (Shell, Bash, Python).
- In-depth experience with cloud services (AWS / Azure / GCP).
- Familiarity with GitOps practices and version control systems (Git, Bitbucket).
- Experience in monitoring, logging, and alerting systems.
- Understanding of networking concepts, DNS, firewalls, load balancers, and VPNs.
- Knowledge of DevSecOps principles and compliance tools is a plus.

Candidate Source: Referral
Experience Level: 3-5 Years
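To make the secrets-management practice concrete, a minimal sketch that fetches database credentials from AWS Secrets Manager at runtime instead of baking them into images or repositories. The secret name is hypothetical; `get_secret_value` is the standard boto3 call.

```python
import json

import boto3  # pip install boto3

def get_db_credentials(secret_id: str) -> dict:
    """Fetch credentials at runtime rather than hard-coding them."""
    sm = boto3.client("secretsmanager")
    secret = sm.get_secret_value(SecretId=secret_id)
    return json.loads(secret["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials("prod/app/db")  # hypothetical secret name
    print(sorted(creds.keys()))  # log only the key names, never the values
```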
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Groww
We are a passionate group of people focused on making financial services accessible to every Indian through a multi-product platform. Each day, we help millions of customers take charge of their financial journey. Customer obsession is in our DNA. Every product, every design, every algorithm down to the tiniest detail is executed keeping the customers’ needs and convenience in mind. Our people are our greatest strength. Everyone at Groww is driven by ownership, customer-centricity, integrity and the passion to constantly challenge the status quo. Are you as passionate about defying conventions and creating something extraordinary as we are? Let’s chat.

Our Vision
Every individual deserves the knowledge, tools, and confidence to make informed financial decisions. At Groww, we are making sure every Indian feels empowered to do so through a cutting-edge multi-product platform offering a variety of financial services. Our long-term vision is to become the trusted financial partner for millions of Indians.

Our Values
Our culture enables us to be what we are — India’s fastest-growing financial services company. It fosters an environment where collaboration, transparency, and open communication take center stage and hierarchies fade away. There is space for every individual to be themselves and feel motivated to bring their best to the table, as well as craft a promising career for themselves. The values that form our foundation are:
- Radical customer centricity
- Ownership-driven culture
- Keeping everything simple
- Long-term thinking
- Complete transparency

EXPERTISE AND QUALIFICATIONS

What you'll do:
- Provide 24x7 infra and platform support for the Data Platform infrastructure hosting the workloads for the data engineering teams, while building processes and documenting "tribal" knowledge.
- Manage application deployment and GKE platforms - automate and improve development and release processes.
- Create, manage, and maintain datastores and data platform infra using IaC.
- Own the end-to-end availability, performance, and capacity of applications and their infrastructure, and create/maintain the respective observability with Prometheus/New Relic/ELK/Loki.
- Own and onboard new applications with the production readiness review process.
- Manage the SLOs/error budgets/alerts and perform root cause analysis for production errors (a worked error-budget example follows this listing).
- Work with core infra, dev, and product teams to define SLOs/error budgets/alerts.
- Work with the dev team to build an in-depth understanding of the application architecture and its bottlenecks.
- Identify observability gaps in applications and infrastructure and work with stakeholders to fix them.
- Manage outages, do detailed RCA with developers, and identify ways to avoid such situations.
- Automate toil and repetitive work.

What we're looking for:
- 5-8 years of experience in managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Has handled and worked on distributed processing engines, distributed databases, and messaging queues (Kafka, Pub/Sub, RabbitMQ, etc.).
- Experienced in setting up and working on data platforms, data lakes, and data ingestion systems that work at scale.
- Can write core libraries (in Python and Golang) to interact with various internal data stores.
- Can define and support internal SLAs for common data infrastructure.
- Good to have familiarity with BigQuery or Trino, Pinot, Airflow, and Superset or similar tools (familiarity with Mongo and Redis is also good to have).
- Experience in troubleshooting, managing, and deploying containerized environments using Docker/containers and Kubernetes is a must.
- Extensive experience in DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps, Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, Saltstack, or Ansible.
- Expertise in Google Cloud (GCP) and/or other relevant cloud infrastructure solutions like AWS or Azure.
- Experience in building CI/CD pipelines with tools such as Jenkins, GitLab, Spinnaker, or Argo.
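The error-budget arithmetic behind the SLO work above is simple and worth seeing once. A sketch with illustrative numbers: a 99.9% availability SLO over 10 million requests allows 10,000 failures, so 2,500 failures consume 25% of the budget.

```python
def error_budget_report(slo: float, total_requests: int, failed_requests: int) -> None:
    """Report how much of the SLO's error budget a window has consumed."""
    budget = (1.0 - slo) * total_requests          # allowed failures in the window
    consumed = failed_requests / budget if budget else float("inf")
    print(f"SLO {slo:.3%}: budget={budget:.0f} failures, "
          f"used {failed_requests} ({consumed:.1%} of budget)")

if __name__ == "__main__":
    # 99.9% over 10M requests -> 10,000 allowed failures; 2,500 used -> 25.0%.
    error_budget_report(slo=0.999, total_requests=10_000_000, failed_requests=2_500)
```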
Posted 3 weeks ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Key Requirements

Experience & Skills:
- 3–5 years in DevOps or Platform Engineering roles.
- Strong understanding of cloud architecture, infrastructure as code, and automation.
- Experience working with Azure services (VMs, Blob Storage, Functions, AKS).
- Hands-on with Docker, Helm, Bash, PowerShell, and Azure DevOps Pipelines.
- Familiarity with IaC tools such as Terraform, Bicep, or ARM templates.
- Knowledge of modern DevOps, GitOps, and SecOps practices.
- Competent in using Git and Markdown for version control and documentation.
- Familiarity with performance, reliability, and security design patterns in cloud environments.

Preferred Qualifications (Nice to Have):
- Experience with Pulumi, Python scripting, Azure Cost Management, or multi-cloud setups.
- Microsoft certifications (e.g., AZ-400: DevOps Engineer Expert).

Personal Attributes:
- Passionate about platform engineering and continuous improvement.
- High level of motivation, adaptability, and problem-solving capability.
- Strong communicator and collaborator.
- Willing to learn, experiment, and adopt new tools and methodologies.
- Outcome-oriented with attention to detail and delivery quality.

Education:
- Bachelor's degree in computer science, engineering, or a related field, or equivalent practical experience.
Posted 3 weeks ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Software Engineer – Integration (Cloud)

Skills
To be successful in this role as a cloud-focused Integration "Software Engineer – OSS Platform Engineering", you should possess the following skill sets:
- Deep expertise in cloud platforms (AWS, Azure, or GCP), infrastructure design, and cost optimization.
- Expertise in containerization and orchestration using Docker and Kubernetes (deployments, service mesh, etc.).
- Hands-on expertise with platform engineering and productization (for consumption by other apps as tenants) of open-source monitoring/logging tools (Prometheus, Grafana, ELK, and similar) and cloud-native tools.
- Strong knowledge and demonstrable hands-on experience with middleware technologies (Kafka, API gateways, etc.; see the sketch after this listing) and data engineering tools/frameworks like Apache Spark, Airflow, Flink, and Hadoop ecosystems.

Some other highly valued skills include:
- Expertise building ELT pipelines and cloud/storage integrations - data lake/warehouse integrations (Redshift, BigQuery, Snowflake, etc.).
- Solid understanding of DevOps tooling, GitOps, CI/CD, config management, Jenkins, build pipelines, and source control systems.
- Working knowledge of cloud infrastructure services: compute, storage, networking, hybrid connectivity, monitoring/logging, security, and IAM.
- SRE experience.
- Expertise building and defining KPIs (SLIs/SLOs) using open-source tooling like ELK, Prometheus, and various other instrumentation, telemetry, and log analytics.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in our Pune office.
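As a small illustration of the Kafka middleware experience named above, a sketch of publishing JSON events with the kafka-python client; the broker address, topic name, and payload are all hypothetical.

```python
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(topic: str, event: dict) -> None:
    """Send one JSON event and block until the broker acknowledges it."""
    producer.send(topic, event).get(timeout=10)  # .get() surfaces delivery errors

if __name__ == "__main__":
    publish("platform.metrics", {"service": "ingest", "latency_ms": 42})
    producer.flush()
```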
Posted 3 weeks ago
25.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
We are seeking a highly experienced and hands-on VP/AVP of CloudOps, SRE, and DevOps to lead our Cloud Infrastructure, Reliability Engineering, and DevOps functions. This role is crucial in driving the scalability, security, and high availability of our Enterprise SaaS platform. The ideal candidate is a strong technical leader with deep expertise in cloud platforms (Azure, AWS, GCP), Kubernetes, Infrastructure as Code (IaC), Site Reliability Engineering (SRE), and CI/CD automation. They must have a passion for engineering excellence, automation, and operational efficiency, coupled with the leadership skills to manage and mentor large teams.

Key Responsibilities

Leadership & Strategy
- Lead and scale CloudOps, SRE, and DevOps teams (50+ members) to support global enterprise SaaS infrastructure.
- Define and execute a cloud-native, highly available, and resilient platform strategy.
- Drive a culture of automation, reliability, and DevSecOps adoption across engineering teams.
- Establish SLOs, SLIs, and KPIs for platform performance, uptime, and security.

Cloud & Infrastructure Management
- Own and optimize multi-cloud (Azure, AWS, GCP) architecture and governance.
- Lead the design and automation of cloud infrastructure using Terraform and PowerShell.
- Ensure disaster recovery (DR), high availability (HA), and fault tolerance best practices.

Site Reliability Engineering (SRE) & Observability
- Establish proactive monitoring, alerting, and observability strategies.
- Drive incident management, root cause analysis (RCA), and post-mortem processes.
- Implement auto-healing, self-recovery mechanisms, and chaos engineering for system resilience.

DevOps & Automation
- Lead CI/CD pipeline design and automation using GitHub Actions, GitLab CI, or Jenkins.
- Improve deployment velocity and reliability through progressive delivery, blue-green deployments, and feature flags.
- Champion IaC (Infrastructure as Code) and GitOps best practices.

Security & Compliance
- Ensure cloud security, zero-trust principles, and compliance with industry standards (SOC2, ISO 27001, GDPR, HIPAA).
- Drive secrets management, access control (IAM), and cloud security best practices.

Qualifications

Technical Expertise (Must Have)
- 18-25 years of experience in Cloud, DevOps, and SRE roles, with leadership exposure.
- Proven track record of leading CloudOps/SRE/DevOps teams (50+ members) in SaaS environments.
- Deep hands-on experience in: multi-cloud infrastructure management (specifically, at least 3-4 years in Azure cloud); Kubernetes and container orchestration; Infrastructure as Code (Terraform, PowerShell); observability and monitoring; CI/CD pipelines, automation, and GitOps; security and compliance in cloud environments.

Good to Have
- Experience with the FedRAMP and .gov cloud ecosystem.
- Experience with large SQL DB systems.
- Experience working with global teams in different time zones.

Leadership & Mindset
- Strong technical problem-solving skills and a hands-on engineering approach.
- Experience managing global teams, setting technical direction, and scaling CloudOps/SRE functions.
- Passion for building engineering-driven cultures and fostering DevOps excellence.

About Us
Icertis is the global leader in AI-powered contract intelligence. The Icertis platform revolutionizes contract management, equipping customers with powerful insights and automation to grow revenue, control costs, mitigate risk, and ensure compliance - the pillars of business success. Today, more than one third of the Fortune 100 trust Icertis to realize the full intent of millions of commercial agreements in 90+ countries.

About The Team
Who we are: Icertis is the only contract intelligence platform companies trust to keep them out in front, now and in the future. Our unwavering commitment to contract intelligence is grounded in our FORTE values—Fairness, Openness, Respect, Teamwork and Execution—which guide all our interactions with employees, customers, partners, and stakeholders. Because in our mission to be the contract intelligence platform of the world, we believe how we get there is as important as the destination.

Icertis, Inc. provides Equal Employment Opportunity to all employees and applicants for employment without regard to race, color, religion, gender identity or expression, sex, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Icertis, Inc. complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. If you are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to careers@icertis.com or get in touch with your recruiter.
Posted 3 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Who We Are.
Newfold Digital (with over $1b in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes: Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs.
We’re hiring for our Developer Platform team at Newfold Digital — a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you’ll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You’ll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We’re looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments.
What You'll Do & How You'll Make Your Mark.
Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments.
Implement GitOps workflows using Argo CD or Argo Workflows.
Automate infrastructure provisioning and configuration with Terraform and Ansible.
Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability.
Work with engineering teams to streamline and secure the software delivery process.
Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI).
Who You Are & What You'll Need To Succeed.
Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code.
Hands-on experience with Docker, Kubernetes, and container orchestration.
Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI).
Experience writing and managing Jenkins pipelines or similar CI/CD tools.
Comfortable working with Terraform, Ansible, or other configuration management tools.
Strong scripting skills (Bash, Python, Go) and a mindset for automation.
Familiarity with Linux-based systems and cloud-native infrastructure.
Ability to work independently and collaboratively across engineering and platform teams.
Good to Have
Experience with build tools like Gradle or Maven.
Familiarity with Bitbucket or Git-based workflows.
Prior experience with Argo CD or other GitOps tooling.
Understanding of internal developer platforms and shared libraries.
Prior experience with agile development and project management.
Why you’ll love us.
We’ve evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well.
We celebrate one another’s differences. We’re proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally.
We care about you. At Newfold, taking care of our employees is our top priority, and we make sure cutting-edge benefits are in place for you. Some of the benefits you will have: we have partnered with some of the best insurance providers to offer you excellent health insurance options, education/certification sponsorships to give you a chance to further your knowledge, flexi-leaves to take personal time off, and much more.
Building a community one domain at a time, one employee at a time. All our employees are eligible for a free domain and WordPress blog as we sponsor the domain registration costs.
Where can we take you? We’re fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers.
Unfold new possibilities with #teamnewfold
This Job Description includes the essential job functions required to perform the job described above, as well as additional duties and responsibilities. This Job Description is not an exhaustive list of all functions that the employee performing this job may be required to perform. The Company reserves the right to revise the Job Description at any time, and to require the employee to perform functions in addition to those listed above.
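To give a flavor of the "scripts and tooling in Bash, Python, or Go" this role calls for, here is a minimal sketch, assuming a reachable cluster with kubectl on the PATH and a configured kubeconfig, of a Python helper that flags Kubernetes deployments whose replicas are not fully ready. The namespace is illustrative.

#!/usr/bin/env python3
"""Report deployments that are not fully rolled out (illustrative sketch)."""
import json
import subprocess

def unavailable_deployments(namespace: str = "default") -> list[str]:
    # Shell out to kubectl; assumes it is installed and kubeconfig is set.
    raw = subprocess.check_output(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"]
    )
    lagging = []
    for dep in json.loads(raw)["items"]:
        desired = dep["spec"].get("replicas", 1)
        ready = dep["status"].get("readyReplicas", 0)
        if ready < desired:
            lagging.append(f'{dep["metadata"]["name"]}: {ready}/{desired} ready')
    return lagging

if __name__ == "__main__":
    for line in unavailable_deployments():
        print(line)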
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: We are seeking a skilled DevOps Engineer to join our team. The ideal candidate will have proven experience in a DevOps engineering role, with a strong background in software development and system administration. You will be responsible for implementing and managing CI/CD pipelines, container orchestration, and cloud services to enhance our software development lifecycle.
How You’ll Make an Impact (Key Responsibilities)
Collaborate with development and operations teams to streamline processes and improve deployment efficiency.
Implement and manage CI/CD tools such as GitLab CI, Jenkins, or CircleCI.
Utilize Docker and Kubernetes (K8s) for containerization and orchestration of applications.
Write and maintain scripts in at least one scripting language (e.g., Python, Bash) to automate tasks.
Manage and deploy applications using cloud services (e.g., AWS, Azure, GCP) and their respective management tools.
Understand and apply network protocols, IP networking, load balancing, and firewalling concepts.
Implement infrastructure-as-code (IaC) practices to automate infrastructure provisioning and management.
Utilize logging and monitoring tools (e.g., ELK stack, OpenSearch, Prometheus, Grafana) to ensure system reliability and performance.
Apply GitOps practices using tools like Flux or ArgoCD for continuous delivery.
Work with Helm and Flyte for managing Kubernetes applications and workflows.
What You Bring (Required Qualifications and Skills)
Bachelor’s or master’s degree in computer science or a related field.
Proven experience in a DevOps engineering role.
Strong background in software development and system administration.
Experience with CI/CD tools and practices.
Proficiency in Docker and Kubernetes.
Familiarity with cloud services and their management tools.
Understanding of networking concepts and protocols.
Experience with infrastructure-as-code (IaC) practices.
Familiarity with logging and monitoring tools.
Knowledge of GitOps practices and tools.
Experience with Helm and Flyte is a plus.
Preferred Qualifications:
Experience with cloud-native architectures and microservices.
Knowledge of security best practices in DevOps and cloud environments.
Understanding of database management and optimization (e.g., SQL, NoSQL).
Familiarity with Agile methodologies and practices.
Experience with performance tuning and optimization of applications.
Knowledge of backup and disaster recovery strategies.
Familiarity with emerging DevOps tools and technologies.
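As an illustration of the monitoring side of this role, here is a minimal sketch that asks a Prometheus server, over its standard /api/v1/query HTTP endpoint, which scrape targets are currently down. The server URL is a placeholder and the PromQL expression is just an example.

#!/usr/bin/env python3
"""Query Prometheus for a quick reliability signal (illustrative sketch)."""
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder address

def instant_query(promql: str) -> list[dict]:
    # Prometheus serves instant queries at /api/v1/query.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]

if __name__ == "__main__":
    # Example PromQL: scrape targets that are currently down.
    for sample in instant_query("up == 0"):
        print(sample["metric"].get("job", "?"), sample["metric"].get("instance", "?"))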
Posted 3 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Details: Job Description
HCI DevOps Cloud Engineer
DESCRIPTION
The ideal candidate will have a strong background in Google Cloud Platform along with experience in CI/CD practices, with a focus on Infrastructure as Code: GitHub, Terraform, Jenkins, and other DevOps tools. The DevOps Cloud Engineer will play a critical role in automating and optimizing our infrastructure operations, ensuring high availability, and implementing scalable and secure solutions.
Job Responsibilities
Manage, design, and support GCP cloud architecture for the customer environment.
Develop proofs of concept to validate proposed solutions.
Support operational and maintenance activities for customer cloud infrastructure.
Write documentation for the existing solutions.
Design, implement, and manage CI/CD pipelines using GitHub or similar tools.
Develop and maintain scalable and resilient cloud infrastructure using Infrastructure as Code (with a focus on Terraform).
Automate operational processes as much as possible, adhering to the principles of DevOps.
Monitor and improve system performance, reliability, scalability, and cost optimization, following best practices.
Ensure security best practices are implemented and maintained across all cloud services.
Troubleshoot and resolve complex infrastructure issues.
Stay updated with emerging technologies and industry trends, and apply this knowledge to improve our customers' infrastructure.
Qualification Requirements
Bachelor's degree in computer science, Information Technology, Engineering, or a related field.
Minimum of 7 years of experience in a DevOps Engineer role or similar.
Strong knowledge of GCP services and management (GCP App Engine, GCP Cloud Functions, GCP Kubernetes Engine, GCP Firebase).
Good understanding of, or proficiency with, CI/CD tools.
Experience with infrastructure automation using Terraform.
Familiarity with containerization and orchestration technologies (e.g., Kubernetes).
Solid understanding of networking, security, and database concepts.
Experience with scripting languages such as PowerShell.
Experience with GitHub, Jenkins, New Relic, Nagios, AppWorks, Active Batch, and Rundeck.
Excellent problem-solving skills and the ability to work under pressure.
Strong communication and collaboration skills.
Preferred Skills:
Certifications and/or proven experience with GCP is mandatory.
Experience with monitoring tools.
Familiarity with Agile methodologies and GitOps principles.
Terraform certification or experience.
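One small illustration of the Terraform-centred automation this role describes: a sketch that runs terraform plan with -detailed-exitcode and interprets its documented exit codes (0 = no changes, 1 = error, 2 = changes pending). The working directory name is a placeholder.

#!/usr/bin/env python3
"""Detect pending Terraform changes in a workspace (illustrative sketch)."""
import subprocess
import sys

def pending_changes(workdir: str) -> bool:
    # `terraform plan -detailed-exitcode` exits 0 (no changes), 1 (error), 2 (diff).
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(result.stderr)
    return result.returncode == 2

if __name__ == "__main__":
    drifted = pending_changes("./landing-zone")  # placeholder directory
    print("changes pending" if drifted else "infrastructure up to date")
    sys.exit(2 if drifted else 0)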
Posted 3 weeks ago
5.0 - 8.0 years
4 - 7 Lacs
Gurgaon
Remote
Job description
About this role
Team Overview: Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of our competitive advantage. The Data Engineering team is responsible for the data ecosystem within BlackRock. We engineer high-performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data storage capabilities. We believe in writing small, testable code with a focus on innovation. We are committed to open source, and we regularly contribute our work back to the community.
We are seeking top-tier Cloud Native DevOps Platform Engineers to augment our Enterprise Data Platform team. Our objective is to extend our data lifecycle management practices to include structured, semi-structured, and unstructured data. This role requires a breadth of individual technical capabilities and competencies, though most important is a willingness and openness to learning new things across multiple technology disciplines. This role is for practitioners, not researchers.
As a Data Platform Cloud/DevOps Engineer in the Data Engineering team you will:
Work alongside our systems engineers and UI developers to help design and build scalable, automated CI/CD pipelines.
Help prove out and productionize infrastructure and tooling to support scalable cloud-based applications.
Work on unlocking myriad generative AI/ML use cases for Aladdin Data, and thus for BlackRock.
Have fun as part of an awesome team.
Specific Responsibilities:
Work as part of a multi-disciplinary squad to establish our next generation of data pipelines and tools.
Be involved from the inception of projects: understanding requirements, designing and developing solutions, and incorporating them into the designs of our platforms.
Mentor team members on technology and best practices.
Build and maintain strong relationships between DataOps Engineering and our Technology teams.
Contribute to the open source community and maintain excellent knowledge of the technical landscape for data and cloud tooling.
Assist in troubleshooting issues and support the operation of production software.
Write technical documentation.
Desirable Skills
Data Operations and Engineering
Comfortable reading and writing Python code for data acquisition and ETL/ELT.
Experience orchestrating data pipelines with Airflow and/or Argo Workflows.
Experience implementing and operating telemetry-based monitoring, alerting, and incident response systems. We aim to follow Site Reliability Engineering (SRE) best practices.
Experience supporting databases or datastores, e.g., MongoDB, Redis, Cassandra, Ignite, Hadoop, S3, Azure Blob Store, and various messaging and streaming platforms such as NATS or Kafka.
Cloud Native DevOps Platform Engineering
Knowledge of the Kubernetes (K8s) APIs with a strong focus on stateful workloads.
Templating with Helm, ArgoCD, Ansible, and Terraform.
Understanding of the K8s Operator Pattern, with the comfort and courage to wade into (predominantly Golang-based) operator implementation code bases.
Comfortable building atop K8s-native frameworks including service mesh (Istio), secrets management (cert-manager, HashiCorp Vault), log management (Splunk), and observability (Prometheus, Grafana, AlertManager).
Experience in creating and evolving CI/CD pipelines with GitLab or GitHub following GitOps principles.
Natural/Large Language Models (Good to have)
Experience with NLP coding tasks like tokenization, chunking, tagging, embedding, and indexing supporting subsequent retrieval and enrichment.
Experience with basic prompt engineering, LLM fine-tuning, and chatbot implementations in modern Python SDKs like langchain and/or transformers.
Overall 5-8 years of hands-on experience in DevOps, Cloud, or related engineering practices.
Our benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Job Requisition # R254580
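Since the posting highlights orchestrating data pipelines with Airflow, here is a minimal sketch of a daily extract-and-load DAG. It assumes Airflow 2.x; the dag_id and the task bodies are placeholders.

"""Minimal daily ETL DAG (illustrative sketch, Airflow 2.x assumed)."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    print("extracting records")

def load():
    # Placeholder: write transformed records to the datastore.
    print("loading records")

with DAG(
    dag_id="example_daily_etl",  # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task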
Posted 3 weeks ago
1.0 years
0 Lacs
Calcutta
On-site
We are seeking a DevOps Engineer with 1-2 years of experience specializing in AWS, Git, and VPS management. The ideal candidate will be responsible for automating deployments, managing cloud infrastructure, and optimizing CI/CD pipelines for seamless development and operations.
Key Responsibilities:
✅ AWS Infrastructure Management – Deploy, configure, and optimize AWS services (EC2, S3, RDS, Lambda, etc.).
✅ Version Control & GitOps – Manage repositories, branching strategies, and workflows using Git/GitHub/GitLab.
✅ VPS Administration – Configure, maintain, and optimize VPS servers for high availability and performance.
✅ CI/CD Pipeline Development – Implement automated Git-based CI/CD workflows for smooth software releases.
✅ Containerization & Orchestration – Deploy applications using Docker and Kubernetes.
✅ Infrastructure as Code (IaC) – Automate deployments using Terraform or CloudFormation.
✅ Monitoring & Security – Implement logging, monitoring, and security best practices.
Required Skills & Experience:
1+ years of experience in AWS, Git, and VPS management.
Strong knowledge of AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.).
Expertise in Git and GitOps workflows.
Hands-on experience with VPS hosting, Nginx, Apache, and server management.
Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI).
Knowledge of Infrastructure as Code (Terraform, CloudFormation).
Strong scripting skills (Bash, Python, or Go).
Preferred Qualifications:
Experience with server security hardening on VPS servers.
Familiarity with AWS Lambda & serverless architecture.
Knowledge of DevSecOps best practices.
Bring your updated resume and be in formal attire.
Job Types: Full-time, Permanent, Contractual / Temporary
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person
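To illustrate the AWS automation this role involves, here is a minimal sketch using boto3 that lists running EC2 instances missing an Owner tag. The region and the tag key are placeholders, and it assumes AWS credentials are already configured in the environment.

#!/usr/bin/env python3
"""Flag running EC2 instances without an Owner tag (illustrative sketch)."""
import boto3

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    # Assumes credentials come from the environment or a configured profile.
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                if "Owner" not in tags:  # placeholder governance tag
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    print("\n".join(untagged_instances()) or "all running instances are tagged")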
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: We are hiring a Senior DevOps Engineer with 3-5 years of experience to join our growing engineering team. The ideal candidate is proficient in AWS and Azure, has a solid development background (Python preferred), and demonstrates strong experience in infrastructure design, automation, and DevOps practices. Exposure to GCP is a plus. You will be responsible for building, managing, and optimizing robust, secure, and scalable infrastructure solutions from scratch.
Key Responsibilities:
Design and implement cloud infrastructure using AWS, Azure, and optionally GCP.
Build and manage Infrastructure-as-Code using Terraform.
Develop and maintain CI/CD pipelines using tools such as GitHub Actions, Jenkins, or GitLab CI.
Deploy and manage containerized applications using Docker and Kubernetes (EKS/AKS).
Set up and manage Kafka for distributed streaming and event processing.
Build monitoring, logging, and alerting solutions using tools like Prometheus, Grafana, ELK, CloudWatch, and Azure Monitor.
Ensure cost optimization and security best practices across all cloud environments.
Collaborate with developers to debug application issues and improve system performance.
Lead infrastructure architecture discussions and implement scalable, resilient solutions.
Automate operational processes and drive DevOps culture and best practices across teams.
Required Skills:
3-5 years of hands-on experience in DevOps/Site Reliability Engineering.
Strong experience in multi-cloud environments (AWS + Azure); GCP exposure is a bonus.
Proficient in Terraform for IaC; experience with ARM Templates or CloudFormation is a plus.
Solid experience with Kubernetes (EKS & AKS) and container orchestration.
Proficient in Docker and container lifecycle management.
Hands-on experience with Kafka (setup, scaling, and monitoring).
Experience implementing monitoring, logging, and alerting solutions.
Expertise in cloud security, IAM, RBAC, and cost optimization.
Development experience in Python or any backend language.
Excellent problem-solving and troubleshooting skills.
Nice to Have:
Certifications: AWS DevOps Engineer, Azure DevOps Engineer, CKA/CKAD.
Experience with GitOps, Helm, and service mesh (Istio/Linkerd).
Familiarity with serverless architecture and event-driven systems.
Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
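As a small illustration of the Kafka monitoring work mentioned above, here is a sketch using the kafka-python library that sends a heartbeat message and lists the topics visible on the cluster. The broker address and topic name are placeholders.

#!/usr/bin/env python3
"""Basic Kafka connectivity check (illustrative sketch, uses kafka-python)."""
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = "kafka.example.internal:9092"  # placeholder bootstrap server

def send_heartbeat(topic: str = "ops.heartbeat") -> None:
    # Serialize the payload as JSON bytes and block until the send is acknowledged.
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(topic, {"status": "ok"}).get(timeout=10)
    producer.flush()

def list_topics() -> set[str]:
    # A consumer with no subscription can still enumerate cluster topics.
    consumer = KafkaConsumer(bootstrap_servers=BROKERS)
    try:
        return consumer.topics()
    finally:
        consumer.close()

if __name__ == "__main__":
    send_heartbeat()
    print(sorted(list_topics()))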
Posted 3 weeks ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Post: Team Lead - DevOps
Experience: 6+ years
Location: Ahmedabad (Work from office)
Key Responsibilities:
Manage, mentor, and grow a team of DevOps engineers.
Oversee the deployment and maintenance of applications like Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript).
Design and manage CI/CD pipelines for each application using tools like Jenkins, GitHub Actions, and GitLab CI.
Handle environment-specific configurations (staging, production, QA).
Containerize legacy and modern applications using Docker and deploy via Kubernetes (EKS/AKS/GKE) or Docker Swarm.
Implement and maintain Infrastructure as Code using Terraform, Ansible, or CloudFormation.
Monitor application health and infrastructure using Prometheus, Grafana, ELK, Datadog, or equivalent tools.
Ensure systems are secure, resilient, and compliant with industry standards.
Optimize cloud cost and infrastructure performance.
Collaborate with development, QA, and IT support teams for seamless delivery.
Troubleshoot performance, deployment, or scaling issues across tech stacks.
Must-Have Skills:
6+ years in DevOps/Cloud/System Engineering roles with real hands-on experience.
2+ years managing or leading DevOps teams.
Experience supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL; Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB; and Node.js with PM2/Nginx or containerized setups.
Experience with AWS / Azure / GCP infrastructure in production.
Strong scripting skills: Bash, Python, PHP CLI, or Node CLI.
Deep understanding of Linux system administration and networking fundamentals.
Experience with Git, SSH, reverse proxies (Nginx), and load balancers.
Good communication skills and exposure to managing clients.
Preferred Certifications (Highly Valued):
AWS Certified DevOps Engineer - Professional
Azure DevOps Engineer Expert
Google Cloud Professional DevOps Engineer
Bonus: Magento Cloud DevOps or Odoo deployment experience
Bonus Skills (Nice to Have):
Experience with multi-region failover, HA clusters, or RPO/RTO-based design.
Familiarity with MySQL/PostgreSQL optimization and Redis, RabbitMQ, or Celery.
Previous experience with GitOps, ArgoCD, Helm, or Ansible Tower.
Knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.
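As an illustration of the containerization work described above, here is a minimal sketch that builds and pushes a Docker image tagged with the current git commit by shelling out to the Docker CLI. The registry and image names are placeholders.

#!/usr/bin/env python3
"""Build and push a Docker image tagged with the current commit (sketch)."""
import subprocess

REGISTRY = "registry.example.internal"  # placeholder registry
IMAGE = "shop/magento"                  # placeholder image name

def run(*cmd: str) -> str:
    return subprocess.check_output(cmd, text=True).strip()

def build_and_push(context: str = ".") -> str:
    # Tag the image with the short git SHA so every build is traceable.
    sha = run("git", "rev-parse", "--short", "HEAD")
    ref = f"{REGISTRY}/{IMAGE}:{sha}"
    run("docker", "build", "-t", ref, context)
    run("docker", "push", ref)
    return ref

if __name__ == "__main__":
    print("pushed", build_and_push())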
Posted 3 weeks ago
20.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Good day,
We have an immediate opportunity for a Senior Cloud Engineer.
Job Role: Senior Cloud Engineer
Job Location: Pune
Experience: 4+ years
Notice Period: Immediate to 30 days
About Company: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honoured with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,700+ and has 55 offices in 20 countries within key global markets. For more information on the company, please visit our website or LinkedIn community.
Diversity, Equity, and Inclusion: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative-action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture, promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Job Description: We are seeking a highly skilled Senior Cloud Engineer with extensive experience in software development and DevOps to lead our efforts in automating cloud infrastructure. The ideal candidate will focus on building automation for Cloud Landing Zones and collaborate with cross-functional teams to ensure the successful implementation and maintenance of cloud solutions.
Responsibilities:
Design, implement, and maintain scalable and efficient cloud-based solutions on AWS and Azure.
Lead initiatives to automate cloud infrastructure.
Collaborate with teams to integrate best practices in development, code quality, and automation.
Guide and mentor development teams, providing expertise in DevOps and automation practices.
Contribute to the design and implementation of cloud applications using serverless architectures, Kubernetes, and event-driven patterns.
Develop and maintain CI/CD pipelines to streamline deployments, utilizing GitOps methodologies.
Apply security best practices to design and implement secure authentication and authorization mechanisms.
Monitor and optimize the performance, scalability, and reliability of cloud applications.
Stay updated with the latest cloud technologies and development trends, applying new tools and frameworks as needed.
Ensure software systems meet functional and non-functional requirements while adhering to best practices in software design, testing, and security.
Foster continuous improvement by sharing knowledge, conducting team reviews, and mentoring junior developers.
Requirements:
Proven experience as a Cloud Engineer or similar role, with a strong focus on AWS (Azure is a plus).
Solid experience in software development and DevOps practices.
Expertise in AWS/Azure infrastructure automation.
Proficiency in programming languages such as Python, Golang, or JavaScript.
Experience with serverless architectures, Kubernetes, and event-driven patterns.
Knowledge of CI/CD pipelines and GitOps methodologies.
Strong understanding of cloud security best practices.
Excellent problem-solving skills and ability to work collaboratively in a team environment.
Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders.
Preferred Qualifications:
Experience in designing and working with NoSQL databases such as DynamoDB.
Experience in leading and mentoring development teams.
Expertise in software architecture, development, and systems testing with a strong focus on cloud technologies.
Strong technical guidance and decision-making abilities to shape solutions and enforce development best practices.
Proficient in applying quality gates, including code reviews, pair programming, and team review meetings.
Experience in code management and release processes, with familiarity in monorepo and multirepo strategies.
Solid understanding of functional programming principles, including list/map/reduce/compose techniques and familiarity with monads.
Knowledge of the SDLC, and adherence to DRY, KISS, and SOLID design principles.
Proficient in managing security protocols such as ABAC, RBAC, JWT, SAML, AAD, and OIDC for authentication and authorization.
Expertise in event-driven architecture, including queues, streams, batches, and pub/sub systems.
Strong understanding of scalability, concurrency, and distributed systems.
Experience with cloud networking and proxies.
Expertise in CI/CD pipelines, GitFlow, and GitOps frameworks like Flux and ArgoCD.
Polyglot programmer with expert-level proficiency in at least two languages (e.g., Python, TypeScript, GoLang).
Experience in operating Kubernetes clusters from a developer’s perspective, including custom CRDs, operators, and controllers.
Experience in building serverless cloud applications.
Strong team player with the ability to communicate and collaborate well in a fast-paced, collaborative environment.
Proficient in using GitHub for version control, code reviews, and collaborative development.
Experience working in agile teams, participating in sprints, and collaborating effectively in cross-functional teams.
Fluency in UI development using React, Hooks, and TypeScript is a plus.
Deep knowledge of AWS cloud services, with a basic understanding of Azure as a plus.
Experience in developing and managing cloud infrastructures using Crossplane.io is a plus.
Knowledge equivalent to AWS Certified DevOps Engineer - Professional is a plus.
If you find this opportunity interesting, kindly share the below details (mandatory):
Total Experience
Experience in Cloud
Experience in Scripting
Current CTC
Expected CTC
Notice Period
Current Location
Have you gone through any interviews at Synechron before? If yes, when?
Kind Regards,
Talent Acquisition, Pune
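To illustrate the serverless, event-driven patterns this role emphasizes, here is a minimal sketch of an AWS Lambda handler reacting to an S3 object-created notification; the per-object processing is a placeholder.

"""Minimal S3-triggered AWS Lambda handler (illustrative sketch)."""
import json
import urllib.parse

def handler(event, context):
    # An S3 notification delivers one or more records per invocation.
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded, hence the unquote.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder: replace with real processing (validation, ETL, fan-out).
        print(f"received object s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}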