
1162 Prometheus Jobs - Page 24

Set up a Job Alert
JobPe aggregates job listings for easy access; you apply directly on the original job portal.

9.0 - 14.0 years

15 - 22 Lacs

Pune, Chennai, Bengaluru

Work from Office

Greetings from You & I Consulting!

About the Role: We are seeking a detail-oriented and results-driven Test Automation Engineer (Java/Selenium) to join a leading multinational organization. The ideal candidate will have hands-on experience in designing, developing, and maintaining automated test scripts, with a strong focus on ensuring the quality and reliability of web applications.

Position: Test Automation Engineer (Java/Selenium)
Experience: 5 years and above
Location: Bengaluru, Pune, Chennai (Work From Office)
CTC: Up to 22 LPA
Shift: 24x7 shifts
To schedule an interview, WhatsApp only our HR Specialist Srijita (8016499764).

Roles and responsibilities:
- Design and develop test packs for various test cycles by selecting the right strategy and tools.
- Design, manage, and report on all test phases of the Software Test Life Cycle (STLC), such as Continuous Integration, Regression, End-to-End (E2E), Functional, Production parallel, and Non-functional testing.
- Schedule, execute, and manage various test cycles.
- Understand operational processes in order to provide solutions that improve operational efficiency.

Skill requirements:
- Proven experience in Quality Assurance, managing test automation frameworks, and release and defect management, with proven experience developing frameworks using Java Spring.
- Solid understanding of the Software Development Life Cycle (SDLC) and Software Test Life Cycle (STLC), with hands-on experience in agile development methodologies.
- Solid understanding of and experience in efficient defect management and test coverage analysis to identify gaps and provide improvements.
- Solid experience developing UI automation frameworks such as Selenium, Playwright, or BDD-based frameworks.
- Nice to have: operational knowledge of Java or other coding languages, UNIX commands, and SQL queries.
- Excellent verbal and written skills; able to communicate effectively at both a technical and business level while working in teams across multiple locations/time zones.

To apply: Contact Srijita, WhatsApp only: 8016499764.
Note: Due to high call volumes, our lines may be busy at times. If you're unable to reach us directly, please WhatsApp your details in the format below:
Full Name:
Mobile Number:
Email ID:
Highest Qualification:
University Name (Highest Qualification):
Total Work Experience:
Date of Birth:
Current Organization:
Preferred Location:
Last CTC:
Expected CTC:

Referral Alert: Know someone who fits the role? Feel free to refer friends or relatives who meet the mentioned criteria. For interview coordination, WhatsApp only our HR Specialist Srijita (8016499764).
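For readers comparing toolchains, the following is a minimal, hedged sketch of the kind of UI check this listing describes. It uses Selenium's Python bindings for brevity (the role itself calls for Java), and the URL and element locators are placeholders, not details from the listing.

```python
# Minimal Selenium page-test sketch (Python bindings shown for brevity;
# the listing itself asks for Java). URL and locators are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder application URL
    wait = WebDriverWait(driver, 10)
    wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Placeholder assertion: login should land on the dashboard.
    wait.until(EC.url_contains("/dashboard"))
finally:
    driver.quit()
```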

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Chennai

Hybrid

ACV Auctions is looking for an experienced Site Reliability Engineer III with a systems and software engineering background to focus on site reliability. We believe in taking a software engineer's approach to operations by providing standards and software tools to all engineering projects. As a Site Reliability Engineer, you will split your time between developing software that improves overall reliability and providing operational support for production systems.

What you will do:
- Maintain reliability and performance for your particular infrastructure area while working with software engineers to improve service quality and health.
- Develop, design, and review new software tools in Python and Java to improve infrastructure reliability and provide services with better monitoring, automation, and product delivery.
- Practice efficient incident response through on-call rotations alongside software engineers and document incidents through postmortems.
- Support service development with capacity plans, launch/deployment plans, scalable system design, and monitoring plans.

What you will need:
- BS degree in Computer Science or a related technical discipline, or equivalent practical experience.
- Experience building and managing infrastructure deployments on Google Cloud Platform; 3+ years managing cloud infrastructure.
- Experience programming in at least one of the following: Python or Java.
- Experience in Linux/Unix systems administration, configuration management, monitoring, and troubleshooting.
- Comfort with production systems, including load balancing, distributed systems, microservice architecture, service meshes, and continuous delivery.
- Experience building and delivering software tools for monitoring, management, and automation that support production systems.
- Comfortable working with teams across multiple time zones and working flexible hours as needed.

Preferred qualifications:
- Experience maintaining and scaling Kubernetes clusters for production workloads is a plus.
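The responsibilities above mention building Python tools that give services better monitoring. A minimal sketch of that idea using the prometheus_client library follows; the metric name, port, and probe are illustrative assumptions, not details from the listing.

```python
# Sketch: expose a custom reliability metric with prometheus_client.
# The metric name, probe, and port are illustrative assumptions.
import random
import time

from prometheus_client import Gauge, start_http_server

REPLICATION_LAG = Gauge(
    "replication_lag_seconds",
    "Observed replication lag of a backing store, in seconds",
)

def collect_lag() -> float:
    # Placeholder for a real probe (e.g., querying the datastore).
    return random.uniform(0.0, 2.0)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        REPLICATION_LAG.set(collect_lag())
        time.sleep(15)
```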

Posted 1 month ago

Apply

6.0 - 11.0 years

12 - 19 Lacs

Pune

Work from Office

Job Description:

Preferred Qualifications:
• Experience: 6-8 years of experience in software development.
• Real-Time Monitoring: Familiarity with real-time monitoring solutions.
• Team Collaboration: Ability to work effectively as part of a cross-functional team.

Mandatory Skills:
• Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, Python, etc.
• Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems.

Good-to-have Skills:
• Database Knowledge: Strong understanding of Elasticsearch and other databases.
• Core Java Knowledge: Basic knowledge of Core Java is a plus.
• CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial.

Detailed JD

Position Overview: We are seeking a skilled Grafana Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining Grafana dashboards to visualize operational and business data. This role requires a deep understanding of data integration, performance optimization, and user-centric design.

Key Responsibilities:
• Design, develop, and maintain Grafana dashboards to visualize data from applications developed with Go, Flutter, and Python.
• Integrate Grafana with various data sources, including Prometheus, InfluxDB, Elasticsearch, and other relevant systems.
• Performance Optimization: Optimize dashboards for performance, scalability, and real-time insights.
• Stakeholder Collaboration: Work closely with stakeholders to understand their data visualization requirements and ensure dashboards meet their needs.
• User-Friendly Design: Ensure dashboards are user-friendly, intuitive, and aligned with organizational goals.

Required Skills:
• Grafana Expertise: Proven experience with Grafana and other data visualization tools.
• Data Integration: Proficiency in integrating various data sources into Grafana.
• Database Knowledge: Strong understanding of Elasticsearch and other databases.
• Core Java Knowledge: Basic knowledge of Core Java is a plus.
• CI/CD Processes: Experience with Continuous Integration/Continuous Deployment (CI/CD) processes is beneficial.
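As an illustration of the data-source integration work this listing describes, here is a hedged sketch that registers a Prometheus data source through Grafana's HTTP API; the Grafana URL, API token, and Prometheus address are placeholders.

```python
# Sketch: register a Prometheus data source via Grafana's HTTP API.
# The Grafana URL, API token, and Prometheus address are placeholders.
import requests

GRAFANA_URL = "http://grafana.example.internal:3000"
API_TOKEN = "glsa_xxx"  # service-account token (placeholder)

payload = {
    "name": "prometheus-main",
    "type": "prometheus",
    "url": "http://prometheus.example.internal:9090",
    "access": "proxy",
    "isDefault": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/datasources",
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```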

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Pune, Mumbai (All Areas)

Work from Office

Role & responsibilities: Kubernetes Platform Engineer / SRE
- Provide L1-L3 support for Payments services deployed on IKP clusters.
- Monitor platform health using Datadog, Splunk, Prometheus, and Grafana; act on alerts to maintain SLOs and contribute to reducing overall alert volume.
- Troubleshoot and resolve production incidents, ensuring timely communication and documentation.
- Participate in on-call rotations and provide support during planned upgrades or regulatory releases.
- Develop and implement automation scripts and self-healing solutions to reduce incident recurrence.
- Support CI/CD pipelines and enhance deployment reliability.
- Maintain up-to-date runbooks, incident retrospectives, and SOPs.

Required Skills
- Strong hands-on experience with Kubernetes (preferably GKE/Anthos).
- Experience with monitoring/observability tools: Datadog, Splunk, Prometheus, Grafana.
- Strong understanding of incident management processes, RCA, and SLO-based alerting.
- Ability to handle on-call duties and perform in high-pressure production environments.
- GCP certification is preferred.
- Experience working in regulated environments (e.g., Banking or Financial Services) is preferred.
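The SLO-focused monitoring described above can be illustrated with a small script that computes an error-ratio SLI from Prometheus' query API. This is a sketch only: the Prometheus address, metric names, labels, and SLO budget are assumptions, not details from the listing.

```python
# Sketch: compute a rolling error-ratio SLI from Prometheus' HTTP API.
# The Prometheus address, metric names, and budget are assumptions.
import requests

PROM_URL = "http://prometheus.example.internal:9090"
QUERY = (
    'sum(rate(http_requests_total{status=~"5..",service="payments"}[5m]))'
    ' / sum(rate(http_requests_total{service="payments"}[5m]))'
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]
error_ratio = float(result[0]["value"][1]) if result else 0.0

SLO_BUDGET = 0.001  # assumed 99.9% availability target
if error_ratio > SLO_BUDGET:
    print(f"ALERT: error ratio {error_ratio:.4%} exceeds budget {SLO_BUDGET:.4%}")
else:
    print(f"OK: error ratio {error_ratio:.4%}")
```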

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Senior Software Engineer
Location: Gurugram
Designation: Senior Software Engineer
Experience: 3-5 Years

Key Responsibilities:

5G Core Network Testing:
- Strong hands-on knowledge of 3G/4G signalling and user-plane flows.
- Conduct functional, performance, and regression testing for 5G Core network elements such as AMF, SMF, UPF, AUSF, NRF, UDM, PCF, NSSF, and NWDAF.
- Validate 3GPP compliance for protocols including HTTP/2, PFCP, SCTP, NGAP, and the N2/N3 interfaces.

Automation and Test Frameworks:
- Develop, execute, and maintain automated test scripts using tools like Robot Framework, Selenium, or custom Python/Go-based solutions.
- Collaborate with the DevOps team to integrate automated test suites into CI/CD pipelines.

Test Lab Setup and Maintenance:
- Set up and configure 5G Core test environments, including simulators, real-time network elements, and protocol analyzers.
- Manage lab resources, including test servers, virtual machines, Kubernetes clusters, and network traffic generators.

Performance and Load Testing:
- Utilize tools like Spirent, IXIA, or Keysight for performance testing and load generation.
- Analyze and troubleshoot issues related to scalability, throughput, latency, and resilience.

Defect Management and Reporting:
- Identify, document, and track defects using tools like JIRA.
- Provide comprehensive test reports and metrics to stakeholders.

Collaboration and Standards:
- Work closely with development, operations, and QA teams to ensure alignment on testing requirements.
- Stay updated with 3GPP specifications, testing standards, and industry trends.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Telecommunications, or a related field.
- Experience: 3-5 years in testing and validating telecom networks, preferably 5G Core or LTE EPC.
- Technical expertise: strong understanding of 3GPP specifications, especially the TS 23.501, TS 23.502, TS 23.503, and TS 29.xxx series; hands-on experience with protocol testing tools such as Wireshark, DS Tester, or TShark; proficiency in scripting languages (Python, Bash) and/or programming (Golang).
- Tools and platforms: experience with Kubernetes, Docker, and cloud platforms (AWS, Azure, or GCP); familiarity with Grafana, Prometheus, or other monitoring tools.
- Soft skills: strong analytical and troubleshooting skills; effective communication and documentation skills.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Hyderabad, Coimbatore

Work from Office

Join our team to help design, configure, implement, and maintain the ServiceHub network and servers. Communicate on current processes and suggest improvements where needed. Fix issues escalated by L1/L2 teams.

Posted 1 month ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Gurugram

Work from Office

Software Engineer
Location: Gurugram
Designation: Senior Software Engineer (Python)
Experience: 4-6 Years

Technical Expertise:
- Strong experience with OpenStack architecture and services (Nova, Neutron, Cinder, Keystone, Glance, etc.).
- Knowledge of NFV architecture, ETSI standards, and VIM (Virtualized Infrastructure Manager).
- Hands-on experience with containerization platforms like Kubernetes or OpenShift.
- Familiarity with SDN (Software-Defined Networking) solutions such as OpenDaylight or Tungsten Fabric.
- Experience with Linux-based systems and scripting languages (Python, Bash, etc.).
- Understanding of networking protocols (e.g., VXLAN, BGP, OVS, SR-IOV).
- Knowledge of Ceph or other distributed storage solutions.

Tools:
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK Stack, etc.).
- Configuration management tools like Ansible, Puppet, or Chef.
- Proficiency in CI/CD tools (Jenkins, GitLab CI, etc.).

Certifications (Preferred):
- OpenStack Certified Administrator (COA).
- Red Hat Certified Engineer (RHCE).
- VMware Certified Professional (VCP).
- Kubernetes certifications such as CKA or CKAD.
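To give a concrete flavour of the OpenStack work listed above, here is a minimal inventory check using the openstacksdk Python library; the cloud name refers to an assumed clouds.yaml entry and is a placeholder.

```python
# Sketch: basic OpenStack inventory check with the openstacksdk library.
# The cloud name refers to an entry in clouds.yaml and is a placeholder.
import openstack

conn = openstack.connect(cloud="lab-cloud")

# List Nova instances and their statuses.
for server in conn.compute.servers():
    print(f"{server.name:30} status={server.status}")

# List Neutron networks to spot-check connectivity configuration.
for network in conn.network.networks():
    print(f"network {network.name} admin_state_up={network.is_admin_state_up}")
```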

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

About the Role: We are seeking a skilled and detail-oriented Data Engineer with deep expertise in PostgreSQL and SQL to design, maintain, and optimize our database systems. As a key member of our data infrastructure team, you will work closely with developers, DevOps, and analysts to ensure data integrity, performance, and scalability of our applications.

Key Responsibilities:
- Design, implement, and maintain PostgreSQL database systems for high availability and performance.
- Write efficient, well-documented SQL queries, stored procedures, and database functions.
- Analyze and optimize slow-performing queries and database structures.
- Collaborate with software engineers to support schema design, indexing, and query optimization.
- Perform database migrations, backup strategies, and disaster recovery planning.
- Ensure data security and compliance with internal and regulatory standards.
- Monitor database performance and proactively address bottlenecks and anomalies.
- Automate routine database tasks using scripts and monitoring tools.
- Contribute to data modeling and architecture discussions for new and existing systems.
- Support ETL pipelines and data integration processes as needed.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 5 years of professional experience in a database engineering role.
- Proven expertise with PostgreSQL (version 12+ preferred).
- Strong SQL skills with the ability to write complex queries and optimize them.
- Experience with performance tuning, indexing, query plans, and execution analysis.
- Familiarity with database design best practices and normalization techniques.
- Solid understanding of ACID principles and transaction management.

Preferred Qualifications:
- Experience with cloud platforms (e.g., AWS RDS, GCP Cloud SQL, or Azure PostgreSQL).
- Familiarity with other database technologies (e.g., MySQL, NoSQL, MongoDB, Redis).
- Knowledge of scripting languages (e.g., Python, Bash) for automation.
- Experience with monitoring tools (e.g., pgBadger, pg_stat_statements, Prometheus/Grafana).
- Understanding of CI/CD processes and infrastructure as code (e.g., Terraform).
- Exposure to data warehousing or analytics platforms (e.g., Redshift, BigQuery).
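A small example of the slow-query analysis this role describes, using psycopg2 against pg_stat_statements. The DSN is a placeholder, the extension is assumed to be enabled, and the column names assume PostgreSQL 13 or newer.

```python
# Sketch: list the slowest statements from pg_stat_statements with psycopg2.
# The DSN is a placeholder; the extension is assumed to be installed.
import psycopg2

DSN = "host=db.example.internal dbname=app user=readonly password=secret"

SQL = """
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for query, calls, mean_ms, total_ms in cur.fetchall():
            print(f"{mean_ms:10.2f} ms avg  {calls:8d} calls  {query[:80]}")
```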

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in DevOps, with at least 2 years in a leadership role.
- Proficiency in CI/CD tools like Jenkins, GitLab, Azure DevOps, or similar.
- Experience with infrastructure-as-code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Expertise in cloud platforms (AWS, Azure, GCP).
- Strong knowledge of containerization and orchestration tools (Docker, Kubernetes).
- Familiarity with monitoring tools (Prometheus, Grafana, ELK Stack, etc.).
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Certifications in cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Expert).
- Hands-on experience with microservices architecture and serverless frameworks.
- Knowledge of agile and DevSecOps methodologies.

Posted 1 month ago

Apply

6.0 - 11.0 years

7 - 17 Lacs

Hyderabad

Work from Office

In this role, you will:
- Manage, coach, and develop a team or teams of experienced engineers and engineering managers in roles with moderate complexity and risk, responsible for building high-quality capabilities with modern technology.
- Ensure adherence to the Banking Platform Architecture and that non-functional requirements are met with each release.
- Partner with, engage, and influence architects and experienced engineers to incorporate Wells Fargo Technology technical strategies, while understanding next-generation domain architecture and enabling application migration paths to target architecture; for example, cloud readiness, application modernization, and data strategy.
- Function as the technical representative for the product during cross-team collaborative efforts and planning.
- Identify and recommend opportunities for driving escalated resolution of technology roadblocks including code, build, and deployment, while also managing the overall software development cycle and security standards.
- Determine appropriate strategy and actions to act as an escalation partner for scrum masters and the teams to meet moderate- to high-risk deliverables, and help remove impediments, obstacles, and friction while encouraging constant learning, experimentation, and continual improvement.
- Build engineering skills side-by-side in the codebase, conduct peer reviews to evaluate quality and solution alignment to technical direction, and guide design as needed.
- Interpret, develop, and ensure security, stability, and scalability within functions of technology with moderate complexity, as well as identify, manage, and mitigate technology and enterprise risk.
- Collaborate with, partner with, and influence Product Managers/Product Owners to drive user satisfaction, influence technology requirements and priorities in the product roadmap, promote innovative and intelligent solutions, generate corporate value, and articulate technical strategy while being a solid advocate of agile and DevOps practices.
- Interact directly with third-party vendors and technology service providers.
- Manage allocation of people and financial resources to ensure commitments are met and align with strategic objectives in technology engineering.
- Hire, build, and guide a culture of talent development to have the skills required to effectively design and deliver innovative solutions for product areas and products to meet business objectives and strategy, as well as conduct performance management for engineers and managers.

Required Qualifications:
- 6+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
- 3+ years of management or leadership experience.

Desired Qualifications:
- Strong people management experience.
- Proven ability and experience directly managing a diverse set of technology delivery resources with a formal line of accountability (at least 30+ team members).
- Ability to conduct research into emerging technologies and trends, standards, and products as required.
- Ability to present ideas in user-friendly language.
- Able to prioritize and execute tasks in a high-pressure environment.
- Experience working in a team-oriented, collaborative environment.
- Provide consultation on the use of re-engineering techniques to improve process performance for greater efficiencies.
- Ability to work in a fast-paced environment.
- Experienced in strategic process design for enterprise-scale development organizations, driving speed, stability, and quality of delivery.
- Experience working in a matrix structure across both global and regional stakeholders in an enterprise-scale setup.
- Container technologies such as Docker, Kubernetes, OpenStack.
- Cloud monitoring tools like Splunk, AppD, Dynatrace, Prometheus, Grafana, Elastic, Thousand Eyes, etc.
- Web development technologies and frameworks: Core Java, Java Enterprise Edition (JSP, RESTful web services), Spring MVC, Spring Boot, Spring Cloud (Configuration Bus, Zipkin), Hibernate, Maven, MQ, JUnit, AngularJS, React JS, jQuery, HTML, XML, CSS, Oracle, JNDI, JAAS.
- Good understanding of and hands-on exposure to Mongo, Kafka, Redis.

Job Expectations:
- Experience in application development, with at least 5+ years in senior roles participating in and driving transformation for a global organization.
- 5+ years implementing SRE concepts and leading teams towards SRE maturity and reducing toil.
- 5+ years in the observability domain with hands-on knowledge of metrics, traces, logs, and events.
- 5+ years in application performance engineering.
- 3+ years of experience with Docker and data streaming tools.
- Experienced and well versed in Agile and CI/CD tools and practices; good exposure to tools like JIRA, Jenkins, CI/CD integration with Jenkins, GitHub, Artifactory, Sonar, etc.
- Experience designing and implementing APIs, including a deep understanding of REST, SOAP, HTTP, etc.; API lifecycle exposure, designing APIs (OpenAPI, Swagger), developer platforms, and other API gateway capabilities.
- Strong technical background with experience managing engineering/development teams across geographies.
- Strong knowledge and implementation experience of high-resiliency/high-availability applications, including scaling, cloud migration, and vertical and horizontal scaling, will be an advantage.
- Experienced and well versed in Agile and Waterfall project management practices; good exposure to tools like JIRA and Confluence to drive project releases from inception to deployment.

Posted 1 month ago

Apply

4.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Android Middleware/Framework
- Proficiency in problem solving and troubleshooting technical issues.
- Willingness to take ownership and strive for the best solutions.
- Experience using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace.
- Strong understanding of Android architecture, memory management, and threading.
- Strong understanding of Android HALs, the Car Framework, the Android graphics pipeline, DRM, and codecs.
- Good knowledge of hardware abstraction layers in Android and/or Linux.
- Good understanding of Git and CI/CD workflows.
- Experience in agile-based projects.
- Experience with Linux as a development platform and target.
- Extensive experience with Jenkins and GitLab CI systems.
- Hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch.
- Experience with different testing frameworks and their implementation in a CI system.
- Programming using C/C++, Java/Kotlin, Linux.
- Yocto and its use in CI environments.
- Familiarity with ASPICE.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Their work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in their software engineering discipline to reach the standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Posted 1 month ago

Apply

6.0 - 10.0 years

6 - 11 Lacs

Mumbai

Work from Office

Primary Skills:
- Google Cloud Platform (GCP): Expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub). Strong experience with Cloud Build for CI/CD, automating deployments and managing artifacts efficiently.
- Terraform: Skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources. Proficient in modules for reusable infrastructure, state management (remote state, locking), and provider configuration. Experience in CI/CD integration with Terraform Cloud and automation pipelines.
- YAML: Proficient in writing Kubernetes manifests for deployments, services, and configurations. Experience with Cloud Build pipelines, automating builds and deployments. Strong understanding of configuration management using YAML in GitOps workflows.
- PowerShell: Expert in scripting for automation, managing GCP resources, and interacting with APIs. Skilled in cloud resource management, automating deployments, and optimizing cloud operations.

Secondary Skills:
- CI/CD pipelines: GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build
- Kubernetes (K8s): Helm, Ingress, RBAC, cluster administration
- Monitoring & logging: Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana
- Security & IAM: GCP IAM policies, service accounts, Workload Identity
- Networking: VPC, firewall rules, load balancers, Cloud DNS
- Linux & shell scripting: Bash scripting, system administration
- Version control: Git, GitHub, GitLab, Bitbucket
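As a small illustration of scripted GCP resource management (the listing mentions scripting against GCP APIs), here is a hedged Python sketch that inventories Compute Engine VMs with the google-cloud-compute client; the project and zone are placeholders, and credentials are assumed to come from Application Default Credentials.

```python
# Sketch: inventory Compute Engine VMs with the google-cloud-compute client.
# Project and zone values are placeholders; credentials come from ADC.
from google.cloud import compute_v1

PROJECT = "my-gcp-project"  # placeholder
ZONE = "asia-south1-a"      # placeholder

client = compute_v1.InstancesClient()
for instance in client.list(project=PROJECT, zone=ZONE):
    print(f"{instance.name:30} {instance.status}")
```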

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Mumbai, Pune, Bengaluru

Work from Office

We are looking for a PostgreSQL Database Administrator with over 4 years of hands-on experience to manage, maintain, and optimize our PostgreSQL database environments. The ideal candidate will be responsible for ensuring high availability, performance, and security of our databases while supporting development and operations teams.

Responsibilities:
- Install, configure, and upgrade PostgreSQL database systems.
- Monitor database performance and implement tuning strategies.
- Perform regular database maintenance tasks such as backups, restores, and indexing.
- Ensure database security, integrity, and compliance with internal and external standards.
- Automate routine tasks using scripting (e.g., Bash, Python).
- Troubleshoot and resolve database-related issues in a timely manner.
- Collaborate with development teams to optimize queries and database design.
- Implement and maintain high availability and disaster recovery solutions.
- Maintain documentation related to database configurations, procedures, and policies.
- Participate in the on-call rotation and provide support during off-hours as needed.

Primary skills:
- 4+ years of experience as a PostgreSQL DBA in production environments.
- Strong knowledge of PostgreSQL architecture, replication, and performance tuning.

Secondary skills:
- Proficiency in writing complex SQL queries and PL/pgSQL procedures.
- Familiarity with Linux/Unix systems and shell scripting.
- Experience with monitoring tools like Prometheus, Grafana, or Nagios.
- Understanding of database security best practices and access control.
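For illustration of the replication monitoring this role covers, here is a hedged sketch that reads pg_stat_replication from the primary with psycopg2. The DSN is a placeholder, and the replay_lag column assumes PostgreSQL 10 or newer.

```python
# Sketch: check streaming-replication lag from the primary with psycopg2.
# The DSN is a placeholder; replay_lag requires PostgreSQL 10 or newer.
import psycopg2

DSN = "host=pg-primary.example.internal dbname=postgres user=monitor password=secret"

SQL = """
SELECT application_name, client_addr, state, replay_lag
FROM pg_stat_replication;
"""

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for app, addr, state, lag in cur.fetchall():
            print(f"{app:20} {addr}  state={state}  replay_lag={lag}")
```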

Posted 1 month ago

Apply

10.0 - 14.0 years

13 - 18 Lacs

Pune

Work from Office

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role:
- Design and manage CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps)
- Automate infrastructure with Terraform, Ansible, or CloudFormation
- Implement Docker and Kubernetes for containerization and orchestration
- Monitor systems using Prometheus, Grafana, and ELK
- Collaborate with dev teams to embed DevOps best practices
- Ensure security and compliance, and support production issues

Your Profile:
- 6-14 years in DevOps or related roles
- Strong CI/CD and infrastructure automation experience
- Proficient in Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP)
- Skilled in monitoring tools and problem-solving
- Excellent team collaboration

What you'll love about working with us:
- Flexible work options: remote and hybrid
- Competitive salary and benefits package
- Career growth with SAP and cloud certifications
- Inclusive and collaborative work environment

Posted 1 month ago

Apply

7.0 - 10.0 years

0 Lacs

Pune

Hybrid

Job Description: EMS and Observability Consultant
Location: Bangalore

Job Summary: We are seeking a skilled IT Operations Consultant specializing in Monitoring and Observability to design, implement, and optimize monitoring solutions for our customers. The ideal candidate will have a minimum of 7 years of relevant experience, with a strong background in monitoring, observability, and IT service management, and will be responsible for ensuring system reliability, performance, and availability by creating robust observability architectures and leveraging modern monitoring tools.

Qualification/Experience needed:
• Minimum 7 years of working experience in Cyber Security Consulting or Advisory.

Primary Responsibilities:
• Design end-to-end monitoring and observability solutions to provide comprehensive visibility into infrastructure, applications, and networks.
• Implement monitoring tools and frameworks (e.g., Prometheus, Grafana, OpsRamp, Dynatrace, New Relic) to track key performance indicators and system health metrics.
• Integrate monitoring and observability solutions with IT Service Management tools.
• Develop and deploy dashboards, alerts, and reports to proactively identify and address system performance issues.
• Architect scalable observability solutions to support hybrid and multi-cloud environments.
• Collaborate with infrastructure, development, and DevOps teams to ensure seamless integration of monitoring systems into CI/CD pipelines.
• Continuously optimize monitoring configurations and thresholds to minimize noise and improve incident detection accuracy.
• Automate alerting, remediation, and reporting processes to enhance operational efficiency.
• Utilize AIOps and machine learning capabilities for intelligent incident management and predictive analytics.
• Work closely with business stakeholders to define monitoring requirements and success metrics.
• Document monitoring architectures, configurations, and operational procedures.

Required Skills:
• Strong understanding of infrastructure and platform development principles and experience with programming languages such as Python and Ansible for developing custom scripts.
• Strong knowledge of monitoring frameworks, logging systems (ELK stack, Fluentd), and tracing tools (Jaeger, Zipkin), along with open-source solutions like Prometheus and Grafana.
• Extensive experience with monitoring and observability solutions such as OpsRamp, Dynatrace, and New Relic; must have worked with ITSM integration (e.g., integration with ServiceNow, BMC Remedy, etc.).
• Working experience with RESTful APIs and understanding of API integration with monitoring tools.
• Familiarity with AIOps and machine learning techniques for anomaly detection and incident prediction.
• Knowledge of ITIL processes and Service Management frameworks.
• Familiarity with security monitoring and compliance requirements.
• Excellent analytical and problem-solving skills; ability to debug and troubleshoot complex automation issues.

About Mphasis
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2TM=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps 'shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, speed, and innovation with domain expertise and specialization are key to building strong relationships with marquee clients.

Skills:
- Primary competency: Tools; primary skill: Dynatrace (51%)
- Secondary competency: Tools; secondary skill: New Relic (25%)
- Tertiary competency: Tools; tertiary skill: Automation Tools - Chef/Puppet/Ansible/Salt Stack (24%)
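The ITSM integration requirement above (e.g., with ServiceNow) can be illustrated with a short sketch that opens an incident through ServiceNow's Table API from an alert payload; the instance URL, credentials, and alert fields are placeholders.

```python
# Sketch: open a ServiceNow incident from a monitoring alert via the Table API.
# Instance URL, credentials, and the alert payload are placeholders.
import requests

INSTANCE = "https://example.service-now.com"
AUTH = ("integration_user", "secret")  # placeholder credentials

alert = {
    "short_description": "High CPU on payments-api (Prometheus alert)",
    "description": "CPU > 90% for 15 minutes on payments-api pods.",
    "urgency": "2",
    "impact": "2",
}

resp = requests.post(
    f"{INSTANCE}/api/now/table/incident",
    json=alert,
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print("Created incident:", resp.json()["result"]["number"])
```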

Posted 1 month ago

Apply

4.0 - 6.0 years

4 - 7 Lacs

Gurugram

Work from Office

GreensTurn is seeking a highly skilled DevOps Engineer to manage and optimize our cloud infrastructure, automate deployment pipelines, and enhance the security and performance of our web-based platform. The ideal candidate will be responsible for ensuring high availability, scalability, and security of the system while working closely with developers, security teams, and product managers.

Key Responsibilities:
- Cloud Infrastructure Management: Deploy, configure, and manage cloud services on AWS or Azure for scalable, cost-efficient infrastructure.
- CI/CD Implementation: Develop and maintain CI/CD pipelines for automated deployments using GitHub Actions, Jenkins, or GitLab CI/CD.
- Containerization & Orchestration: Deploy and manage applications using Docker, Kubernetes (EKS/AKS), and Helm.
- Monitoring & Performance Optimization: Implement real-time monitoring, logging, and alerting using Prometheus, Grafana, CloudWatch, or the ELK Stack.
- Security & Compliance: Ensure best practices for IAM (Identity & Access Management), role-based access control (RBAC), encryption, firewalls, and vulnerability management.
- Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, AWS CloudFormation, or Azure Bicep.
- Networking & Load Balancing: Set up VPCs, security groups, load balancers (ALB/NLB), and CDNs (CloudFront/Azure CDN).
- Disaster Recovery & Backup: Implement automated backups, failover strategies, and disaster recovery plans.
- Database Management: Optimize database performance, backup policies, and replication for MongoDB.
- Collaboration & Documentation: Work with development teams to integrate DevOps best practices and maintain proper documentation for infrastructure and deployment workflows.
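As an example of the CloudWatch alerting mentioned above, here is a hedged boto3 sketch that creates a CPU alarm; the region, instance ID, and SNS topic ARN are placeholders.

```python
# Sketch: create a CloudWatch CPU alarm with boto3.
# The region, instance ID, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="web-01-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
    AlarmDescription="CPU above 80% for 10 minutes",
)
```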

Posted 1 month ago

Apply

7.0 - 11.0 years

9 - 12 Lacs

Mumbai, Bengaluru, Delhi

Work from Office

Experience: 7.00+ years
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Must-have skills: DevOps, PowerShell, CLI, Amazon AWS, Java, Scala, Go (Golang), Terraform

Opportunity Summary: We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer in the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems, an excellent understanding of key SaaS technologies, and a high amount of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's product and influence decisions concerning solutions and techniques within their discipline.

What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.

What are we looking for?
Experience: Overall, 7-9 years of total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.

Technical skills: To be a part of this journey, you should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.

Primary skills:
- Public cloud providers: AWS solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web servers, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, and Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route 53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, and CloudWatch monitoring/alerting.
- Automating existing manual workloads, like reporting and patching/updating servers, by writing scripts, Lambda functions, etc.
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools like Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.

Secondary skills (advantageous):
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.

Soft skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Experience with modernizing legacy applications and improving deployment processes.
- Excellent problem-solving skills and the ability to work under remote supervision.
- Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.
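The listing mentions automating reporting workloads with scripts and Lambda functions. Below is a hedged sketch of a Lambda-style handler that reports EC2 instances missing an "Owner" tag; the tag key and the idea of reporting on tags are illustrative assumptions, not requirements from the listing.

```python
# Sketch of a Lambda-style reporting handler: list EC2 instances that are
# missing an "Owner" tag. The tag key and report shape are assumptions.
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    untagged.append(instance["InstanceId"])
    # In a real pipeline this report might go to S3, SNS, or a ticketing system.
    return {"untagged_instances": untagged, "count": len(untagged)}
```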

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 12 Lacs

Pune

Work from Office

We are seeking a skilled and motivated DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in CI/CD pipelines, cloud infrastructure, containerization, and automation, along with basic programming knowledge.

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Bengaluru

Remote

- Cloud Support Operations - SaaS and AWS (Storage, Databases, IAM, ECS, EKS, and CloudWatch) - Cloud Observability and Monitoring (Datadog, Splunk, Grafana, and Prometheus) - Infrastructure Management - Kubernetes and Containerization

Posted 1 month ago

Apply

6.0 - 11.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science Engineering or a related field. 6 to 11+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials; enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration); document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and Mongo.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 pm

Posted 1 month ago

Apply

8.0 - 12.0 years

8 - 18 Lacs

Hyderabad, Bengaluru

Work from Office

**Job Title:** Confluent Kafka Engineer (Azure & GCP Focus) **Location:** [Bangalore or Hyderabad ] **Role Overview** We are seeking an experienced **Confluent Kafka Engineer** with hands-on expertise in deploying, administering, and securing Kafka clusters in **Microsoft Azure** and **Google Cloud Platform (GCP)** environments. The ideal candidate will be skilled in cluster administration, RBAC, cluster linking and setup, and monitoring using Prometheus and Grafana, with a strong understanding of cloud-native best practices. **Key Responsibilities** - **Kafka Cluster Administration (Azure & GCP):** - Deploy, configure, and manage Confluent Kafka clusters on Azure and GCP virtual machines or managed infrastructure. - Plan and execute cluster upgrades, scaling, and disaster recovery strategies in cloud environments. - Set up and manage cluster linking for cross-region and cross-cloud data replication. - Monitor and maintain the health and performance of Kafka clusters, proactively identifying and resolving issues. - **Security & RBAC:** - Implement and maintain security protocols, including SSL/TLS encryption and role-based access control (RBAC). - Configure authentication and authorization (Kafka ACLs) across Azure and GCP environments. - Set up and manage **Active Directory (AD) plain authentication** and **OAuth** for secure user and application access. - Ensure compliance with enterprise security standards and cloud provider best practices. - **Monitoring & Observability:** - Set up and maintain monitoring and alerting using Prometheus and Grafana, integrating with Azure Monitor and GCP-native monitoring as needed. - Develop and maintain dashboards and alerts for Kafka performance and reliability metrics. - Troubleshoot and resolve performance and reliability issues using cloud-native and open-source monitoring tools. - **Integration & Automation:** - Develop and maintain automation scripts (Bash, Python, Terraform, Ansible) for cluster deployment, scaling, and monitoring. - Build and maintain infrastructure as code for Kafka environments in Azure and GCP. - Configure and manage **Kafka connectors** for integration with external systems, including **BigQuery Sync connectors** and connectors for Azure and GCP data services (such as Azure Data Lake, Cosmos DB, BigQuery). - **Documentation & Knowledge Sharing:** - Document standard operating procedures, architecture, and security configurations for cloud-based Kafka deployments. - Provide technical guidance and conduct knowledge transfer sessions for internal teams. **Required Qualifications** - Bachelors degree in Computer Science, Engineering, or related field. - 5+ years of hands-on experience with Confluent Platform and Kafka in enterprise environments. - Demonstrated experience deploying and managing Kafka clusters on **Azure** and **GCP** (not just using pre-existing clusters). - Strong expertise in cloud networking, security, and RBAC in Azure and GCP. - Experience configuring **AD plain authentication** and **OAuth** for Kafka. - Proficiency with monitoring tools (Prometheus, Grafana, Azure Monitor, GCP Monitoring). - Hands-on experience with Kafka connectors, including BQ Sync connectors, Schema Registry, KSQL, and Kafka Streams. - Scripting and automation skills (Bash, Python, Terraform, Ansible). - Familiarity with infrastructure as code practices. - Excellent troubleshooting and communication skills. **Preferred Qualifications** - Confluent Certified Developer/Admin certification. 
- Experience with cross-cloud Kafka streaming and integration scenarios. - Familiarity with Azure and GCP data services (Azure Data Lake, Cosmos DB, BigQuery). - Experience with other streaming technologies (e.g., Spark Streaming, Flink). - Experience with data visualization and analytics tools.
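As a concrete illustration of administering a secured Confluent/Kafka cluster, here is a hedged sketch that creates a topic over SASL_SSL with the confluent-kafka Python client; the broker address, credentials, and topic settings are placeholders (the listing also mentions OAuth, which would use a different sasl.mechanisms setting).

```python
# Sketch: create a topic on a SASL_SSL-secured cluster with confluent-kafka.
# Broker address, credentials, and topic settings are placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

conf = {
    "bootstrap.servers": "kafka-1.example.internal:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "svc-orders",
    "sasl.password": "secret",
}

admin = AdminClient(conf)
topic = NewTopic("orders.events", num_partitions=6, replication_factor=3)

for name, future in admin.create_topics([topic]).items():
    try:
        future.result()  # raises on failure
        print(f"created topic {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```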

Posted 1 month ago

Apply

4.0 - 6.0 years

10 - 20 Lacs

Pune

Work from Office

Role Overview: We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities:
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills:
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills:
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience in cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.

Posted 1 month ago

Apply

2.0 - 4.0 years

6 - 7 Lacs

Mumbai Suburban

Work from Office

We are the PERFECT match if you...
Are a graduate with a minimum of 2-4 years of technical product support experience with the following skills:
- Clear logical thinking and good communication skills. We believe in individuals who are high on ownership and like to operate with minimum management.
- An ability to "understand" data and analyze logs to help investigate production issues and incidents.
- Hands-on experience with cloud platforms (GCP/AWS).
- Experience creating dashboards and alerts with tools like Metabase, Grafana, Prometheus.
- Hands-on experience with writing SQL queries.
- Hands-on experience with log monitoring tools (Kibana, Stackdriver, CloudWatch).
- Knowledge of a scripting language like Elixir/Python is a plus.
- Experience with Kubernetes/Docker is a plus.
- Has actively worked on documenting RCAs and creating incident reports.
- Good understanding of APIs, with hands-on experience using tools like Postman or Insomnia.
- Knowledge of ticketing tools such as Freshdesk/GitLab.

Here's what your day would look like...
- Defining monitoring events for IDfy's services and setting up the corresponding alerts.
- Responding to alerts: triaging, investigating, and resolving issues.
- Learning about various IDfy applications and understanding the events emitted.
- Creating analytical dashboards for service performance and usage monitoring.
- Responding to incidents and customer tickets in a timely manner.
- Occasionally running service recovery scripts.
- Helping improve the IDfy Platform by providing insights based on investigations and root cause analysis.

Get in touch with ankit.pant@idfy.com

Posted 1 month ago

Apply

5.0 - 9.0 years

16 - 20 Lacs

Pune

Work from Office

Job Summary: Synechron is seeking an experienced Site Reliability Engineer (SRE) / DevOps Engineer to lead the design, implementation, and management of reliable, scalable, and efficient infrastructure solutions. This role is pivotal in ensuring optimal performance, availability, and security of our applications and services through advanced automation, continuous deployment, and proactive monitoring. The ideal candidate will collaborate closely with development, operations, and security teams to foster a culture of continuous improvement and technological innovation.

Software
Required Skills:
- Proficiency with cloud platforms such as AWS, GCP, or Azure
- Expertise with container orchestration tools like Kubernetes and Docker
- Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation
- Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or similar
- Strong scripting skills in Python, Bash, or similar languages
Preferred Skills:
- Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack
- Knowledge of configuration management tools such as Ansible, Chef, or Puppet
- Experience implementing security best practices in cloud environments
- Understanding of microservices architecture and service mesh frameworks like Istio or Linkerd

Overall Responsibilities:
- Lead the development, deployment, and maintenance of scalable, resilient infrastructure solutions.
- Automate routine tasks and processes to improve efficiency and reduce manual intervention.
- Implement and refine monitoring, alerting, and incident response strategies to maintain high system availability.
- Collaborate with software development teams to integrate DevOps best practices into product development cycles.
- Guide and mentor team members on emerging technologies and industry best practices.
- Ensure compliance with security standards and manage risk through security controls and assessments.
- Stay abreast of the latest advancements in SRE, cloud computing, and automation technologies to recommend innovative solutions aligned with organizational goals.

Technical Skills (by category):
- Cloud technologies: Essential - AWS, GCP, or Azure (both infrastructure management and deployment); Preferred - multi-cloud management, cloud cost optimization
- Containers and orchestration: Essential - Docker, Kubernetes; Preferred - service mesh frameworks like Istio, Linkerd
- Automation & Infrastructure as Code: Essential - Terraform, CloudFormation, or similar; Preferred - Ansible, SaltStack
- Monitoring & logging: Essential - Prometheus, Grafana, ELK Stack; Preferred - DataDog, New Relic, Splunk
- Security & compliance: Knowledge of identity and access management (IAM), encryption, vulnerability management
- Development & scripting: Essential - Python, Bash scripting; Preferred - Go, PowerShell

Experience:
- 5-9 years of experience in software engineering, systems administration, or DevOps/SRE roles.
- Proven track record in designing and deploying large-scale, high-availability systems.
- Hands-on experience with cloud infrastructure automation and container orchestration.
- Past roles leading incident management, performance tuning, and security enhancements.
- Experience working with cross-functional teams using Agile methodologies.
- Bonus: experience with emerging technologies like Blockchain, IoT, or AI integrations.

Day-to-Day Activities:
- Architect, deploy, and maintain cloud infrastructure and containerized environments.
- Develop automation scripts and frameworks to streamline deployment and operations.
- Monitor system health, analyze logs, and troubleshoot issues proactively.
- Conduct capacity planning and performance tuning.
- Collaborate with development teams to integrate new features into production with zero downtime.
- Participate in incident response, post-mortem analysis, and continuous improvement initiatives.
- Document procedures, guidelines, and best practices for the team.
- Stay updated on evolving SRE technologies and industry trends, applying them to enhance our infrastructure.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Engineer, Google Professional Cloud Engineer) are preferred.
- Additional certifications in Kubernetes, Terraform, or security are advantageous.

Professional Competencies:
- Strong analytical and problem-solving abilities.
- Excellent collaboration and communication skills.
- Leadership qualities with an ability to mentor junior team members.
- Ability to work under pressure and manage multiple priorities.
- Commitment to best practices around automation, security, and reliability.
- Eagerness to learn emerging technologies and adapt to evolving workflows.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative Same Difference is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 30 Lacs

Thiruvananthapuram

Work from Office

Job Summary: We are seeking an experienced DevOps Architect to drive the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate should have deep expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across multiple cloud environments, along with strong leadership and mentoring capabilities.

Job Duties and Responsibilities:
- Lead and manage the DevOps team to ensure reliable infrastructure and automated deployment processes.
- Design, implement, and maintain highly available, scalable, and secure cloud infrastructure (AWS, Azure, GCP, etc.).
- Develop and optimize CI/CD pipelines for multiple applications and environments.
- Drive Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible.
- Oversee monitoring, logging, and alerting solutions to ensure system health and performance.
- Collaborate with Development, QA, and Security teams to integrate DevOps best practices across the SDLC.
- Lead incident management and root cause analysis for production issues.
- Ensure robust security practices for infrastructure and pipelines (secrets management, vulnerability scanning, etc.).
- Guide and mentor team members, fostering a culture of continuous improvement and technical excellence.
- Evaluate and recommend new tools, technologies, and processes to improve operational efficiency.

Required Qualifications

Education:
- Bachelor's degree in Computer Science, IT, or a related field; Master's preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA, Terraform, etc.).

Experience:
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Knowledge, Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficient in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Preferred Skills:
- Experience with serverless architecture and microservices deployment.
- Experience with security tools and best practices (e.g., IAM, VPNs, firewalls, cloud security posture management).
- Exposure to hybrid-cloud or multi-cloud environments.
- Knowledge of cost optimization and cloud governance strategies.
- Experience working in Agile teams and managing infrastructure in production-grade environments.
- Relevant certifications (AWS Certified DevOps Engineer, Azure DevOps Expert, CKA, etc.).

Working Conditions:
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: Light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required as needed.

Living AOT's Values
Our values guide how we work, collaborate, and grow as a team. Every role at AOT is expected to embody and promote these values:
- Innovation: We pursue true innovation by solving problems and meeting unarticulated needs.
- Integrity: We hold ourselves to high ethical standards and never compromise.
- Ownership: We are all responsible for our shared long-term success.
- Agility: We stay ready to adapt to change and deliver results.
- Collaboration: We believe collaboration and knowledge-sharing fuel innovation and success.
- Empowerment: We support our people so they can bring the best of themselves to work every day.

Posted 1 month ago

Apply