6.0 - 10.0 years
6 - 11 Lacs
Mumbai
Work from Office
Primary Skills
- Google Cloud Platform (GCP): Expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub). Strong experience in Cloud Build for CI/CD, automating deployments and managing artifacts efficiently.
- Terraform: Skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources. Proficient in modules for reusable infrastructure, state management (remote state, locking), and provider configuration. Experience in CI/CD integration with Terraform Cloud and automation pipelines.
- YAML: Proficient in writing Kubernetes manifests for deployments, services, and configurations. Experience in Cloud Build pipelines, automating builds and deployments. Strong understanding of configuration management using YAML in GitOps workflows.
- PowerShell: Expert in scripting for automation, managing GCP resources, and interacting with APIs (see the sketch after this listing). Skilled in cloud resource management, automating deployments, and optimizing cloud operations.

Secondary Skills
- CI/CD Pipelines: GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build
- Kubernetes (K8s): Helm, Ingress, RBAC, cluster administration
- Monitoring & Logging: Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana
- Security & IAM: GCP IAM policies, service accounts, Workload Identity
- Networking: VPC, firewall rules, load balancers, Cloud DNS
- Linux & Shell Scripting: Bash scripting, system administration
- Version Control: Git, GitHub, GitLab, Bitbucket
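The scripting line above calls for automated management of GCP resources; as a language-neutral illustration (Python here, with the official google-cloud-storage client, rather than the PowerShell the posting names), a minimal sketch of the pattern might look like the following. The bucket name and lifecycle rule are hypothetical.

```python
# Minimal sketch: enumerate build artifacts in a Cloud Storage bucket and
# attach a 30-day delete rule. "my-artifacts-bucket" is a placeholder.
from google.cloud import storage

client = storage.Client()  # uses Application Default Credentials
bucket = client.get_bucket("my-artifacts-bucket")

# List recent build artifacts under a prefix
for blob in client.list_blobs(bucket, prefix="builds/"):
    print(blob.name, blob.size)

# Add a lifecycle rule deleting objects older than 30 days
bucket.add_lifecycle_delete_rule(age=30)
bucket.patch()
```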
Posted 1 month ago
4.0 - 9.0 years
9 - 14 Lacs
Mumbai, Pune, Bengaluru
Work from Office
PostgreSQL Database Administrator with 4+ years of hands-on experience to manage, maintain, and optimize our PostgreSQL database environments. The ideal candidate will be responsible for ensuring high availability, performance, and security of our databases while supporting development and operations teams.

Responsibilities:
- Install, configure, and upgrade PostgreSQL database systems.
- Monitor database performance and implement tuning strategies.
- Perform regular database maintenance tasks such as backups, restores, and indexing.
- Ensure database security, integrity, and compliance with internal and external standards.
- Automate routine tasks using scripting (e.g., Bash, Python) (see the sketch after this listing).
- Troubleshoot and resolve database-related issues in a timely manner.
- Collaborate with development teams to optimize queries and database design.
- Implement and maintain high availability and disaster recovery solutions.
- Maintain documentation related to database configurations, procedures, and policies.
- Participate in on-call rotation and provide support during off-hours as needed.

Primary skills:
- 4+ years of experience as a PostgreSQL DBA in production environments.
- Strong knowledge of PostgreSQL architecture, replication, and performance tuning.

Secondary skills:
- Proficiency in writing complex SQL queries and PL/pgSQL procedures.
- Familiarity with Linux/Unix systems and shell scripting.
- Experience with monitoring tools like Prometheus, Grafana, or Nagios.
- Understanding of database security best practices and access control.
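As a hedged illustration of the scripted automation this role describes, the sketch below uses psycopg2 to surface long-running queries and replica lag, two checks a PostgreSQL DBA typically automates. Connection parameters are placeholders, and the replication query assumes PostgreSQL 10 or later.

```python
# Minimal sketch: two routine DBA health checks via psycopg2.
import psycopg2

conn = psycopg2.connect(host="db-host", dbname="postgres", user="monitor")
with conn, conn.cursor() as cur:
    # Queries running longer than 5 minutes
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, query
        FROM pg_stat_activity
        WHERE state <> 'idle' AND now() - query_start > interval '5 minutes'
        ORDER BY runtime DESC;
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(f"pid={pid} runtime={runtime} state={state}")

    # Replication lag per standby (run on the primary)
    cur.execute("""
        SELECT application_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
        FROM pg_stat_replication;
    """)
    for name, lag in cur.fetchall():
        print(f"standby={name} lag_bytes={lag}")
```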
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
3-5 years of experience in IT operations and maintenance. Hands-on experience with Grafana, Zabbix, Azure Monitor, and ELK Log Management. Experience with large-scale monitoring system setup and maintenance. Good exposure to commonly used ITSM tools, including PagerDuty and ServiceNow. Basic understanding of public cloud concepts, including IaaS, PaaS, and SaaS. Proactive approach to identifying problems, performance bottlenecks, and areas for improvement.

Primary Skills:
- Configure and implement end-to-end monitoring solutions for applications and infrastructure.
- Configure and maintain log analytics tools for applications and infrastructure (see the sketch after this listing).
- Develop mock-up views and build workable dashboards following a defined methodology based on briefings from various stakeholders.

Short Description: Open to work in a 24x7 shift. Key tools: Microsoft Azure Monitor, PagerDuty, ELK Log Management.
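A minimal sketch of the kind of log-analytics query this role automates, assuming logs are indexed in Elasticsearch with `level`, `service`, and `@timestamp` fields; the index pattern and host are placeholders.

```python
# Count ERROR-level log lines per service over the last hour via the
# Elasticsearch _search API with a terms aggregation.
import requests

query = {
    "size": 0,
    "query": {"bool": {"filter": [
        {"term": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-1h"}}},
    ]}},
    "aggs": {"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
}

resp = requests.post("http://elk-host:9200/logs-*/_search", json=query, timeout=10)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```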
Posted 1 month ago
6.0 - 11.0 years
5 - 9 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
Your Role
- Manage and administer Zabbix server (v4 and above) in enterprise environments.
- Install, configure, maintain, and upgrade Zabbix server, Zabbix proxy, and Zabbix endpoints.
- Write custom functions/scripts for discovery templates to monitor various technologies (Oracle, Linux, Wintel, etc.) (see the sketch after this listing).
- Troubleshoot and maintain Zabbix server HA environments on Service Guard clusters.
- Understand and manage the Zabbix database schema for performance and reliability, and configure and manage Zabbix agents (active/passive), trappers, and housekeeping processes.

Your Profile
- 6 to 12 years of experience in enterprise monitoring using Zabbix.
- Strong hands-on experience with Zabbix v4 and above; proficient in writing custom scripts/functions for monitoring automation.
- Deep understanding of Zabbix server and proxy installation, configuration, and troubleshooting, with experience in Zabbix HA architecture and Service Guard cluster management.
- Knowledge of the Zabbix database schema and performance tuning; skilled in Zabbix agent configuration, trappers, and housekeeping.
- Proficient in Grafana setup, dashboard creation, and alerting.

What You Will Love Working at Capgemini
- Work on enterprise-scale monitoring solutions supporting mission-critical systems.
- Expand your expertise in Zabbix, Grafana, API scripting, and cross-platform integrations.
- Clear career progression paths from L2 support to architecture and consulting roles.
- Be part of high-impact projects that ensure visibility, reliability, and performance for Fortune 500 clients.
- Thrive in a diverse, inclusive, and respectful environment that values your voice and ideas.
- Work in agile, cross-functional teams with opportunities to lead and mentor.
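For the discovery-template work above, a minimal sketch of a Zabbix low-level discovery (LLD) script follows. Zabbix 4.x expects the `{"data": [...]}` JSON shape, so item prototypes using a macro like {#DBNAME} can be generated per database; the instance list here is a stand-in for a real probe.

```python
# Minimal Zabbix LLD script: print the discovery JSON that a Zabbix 4.x
# discovery rule consumes.
import json

def discover_databases():
    # In practice this would query the host (e.g., an Oracle or
    # PostgreSQL instance list); hardcoded here for illustration.
    return ["orcl1", "orcl2"]

payload = {"data": [{"{#DBNAME}": name} for name in discover_databases()]}
print(json.dumps(payload))
```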
Posted 1 month ago
10.0 - 14.0 years
13 - 18 Lacs
Pune
Work from Office
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
- Design and manage CI/CD pipelines (Jenkins, GitLab CI, Azure DevOps)
- Automate infrastructure with Terraform, Ansible, or CloudFormation
- Implement Docker and Kubernetes for containerization and orchestration
- Monitor systems using Prometheus, Grafana, and ELK
- Collaborate with dev teams to embed DevOps best practices
- Ensure security, compliance, and support production issues

Your Profile
- 6-14 years in DevOps or related roles
- Strong CI/CD and infrastructure automation experience
- Proficient in Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP)
- Skilled in monitoring tools and problem-solving
- Excellent team collaboration

What you'll love about working with us
- Flexible work options: remote and hybrid
- Competitive salary and benefits package
- Career growth with SAP and cloud certifications
- Inclusive and collaborative work environment
Posted 1 month ago
1.0 - 3.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Overview:
We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams, including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise:
- Strong hands-on experience in Controller administration (SaaS or on-prem).
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring.
- Analytics and Business iQ.
- Service endpoints, data collectors, and custom metrics.
- Experience in AppDynamics Dash Studio and Advanced Dashboards.
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis.
- Knowledge of AppDynamics APIs for automation and custom integrations (see the sketch after this listing).
- Ability to troubleshoot agent issues, data gaps, and controller health.

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite).
- Experience in containerized environments (Kubernetes, Docker).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions).
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment.
- Experience with Infrastructure as Code (Terraform, CloudFormation).

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk.
- Experience with full-stack monitoring (APM + infra + logs + RUM + synthetic).
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets.
- Familiarity with ITSM tools and alert routing mechanisms.
- Understanding of business KPIs and mapping them to technical metrics.

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional.
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer.
- Kubernetes (CKA/CKAD) or equivalent is a plus.

Education:
- Bachelor's degree in Computer Science, IT, or a related field.
- Master's degree (optional but preferred).
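As a hedged illustration of the AppDynamics API automation listed above: the Controller exposes a REST endpoint for listing applications, and a minimal Python sketch against it might look like the following. The host and credentials are placeholders.

```python
# Minimal sketch: pull the application list from the AppDynamics
# Controller REST API, a common first step when automating health-rule
# or dashboard management.
import requests

CONTROLLER = "https://example.saas.appdynamics.com"
AUTH = ("apiuser@customer1", "secret")  # user@account, password

resp = requests.get(
    f"{CONTROLLER}/controller/rest/applications",
    params={"output": "JSON"},
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
for app in resp.json():
    print(app["id"], app["name"])
```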
Posted 1 month ago
8.0 - 12.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Job Overview:
We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams, including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise:
- Strong hands-on experience in Controller administration (SaaS or on-prem).
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring.
- Analytics and Business iQ.
- Service endpoints, data collectors, and custom metrics.
- Experience in AppDynamics Dash Studio and Advanced Dashboards.
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis.
- Knowledge of AppDynamics APIs for automation and custom integrations.
- Ability to troubleshoot agent issues, data gaps, and controller health.

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite).
- Experience in containerized environments (Kubernetes, Docker).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions).
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment.
- Experience with Infrastructure as Code (Terraform, CloudFormation).

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk.
- Experience with full-stack monitoring (APM + infra + logs + RUM + synthetic).
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets.
- Familiarity with ITSM tools and alert routing mechanisms.
- Understanding of business KPIs and mapping them to technical metrics.

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional.
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer.
- Kubernetes (CKA/CKAD) or equivalent is a plus.

Education:
- Bachelor's degree in Computer Science, IT, or a related field.
- Master's degree (optional but preferred).
Posted 1 month ago
15.0 - 20.0 years
3 - 7 Lacs
Navi Mumbai
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must have skills: Java Standard Edition
Good to have skills: Broadcom Layer 7 API Gateways, JBOSS Administration
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and solves issues within multiple components of critical business systems. Your typical day will involve collaborating with various teams to troubleshoot and resolve software-related challenges, ensuring the smooth operation of essential applications and systems. You will engage in problem-solving activities, analyze system performance, and contribute to the continuous improvement of processes and services, all while maintaining a focus on delivering high-quality support to users and stakeholders.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor system performance and proactively address potential issues.

Professional & Technical Skills:
- Must-have skills: Proficiency in Java Standard Edition.
- Good-to-have skills: Experience with JBOSS Administration, Broadcom Layer 7 API Gateways.
- Strong understanding of software debugging and troubleshooting techniques.
- Experience with application performance monitoring tools: Splunk, Grafana, Kibana, Dynatrace.
- Familiarity with version control systems such as Git.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Java Standard Edition.
- This position is based in Mumbai and has permanent night shifts. Location and shift timing are non-negotiable.
- A 15 years full time education is required.
Posted 1 month ago
1.0 - 4.0 years
2 - 5 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Experience: 5+ years
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote (New Delhi, Bengaluru, Mumbai)
Must-have skills: ActiveCampaign, AI, GPT, Juniper Square, CRM, Google Workspace, Notion, Yardi, Zapier

- Take ownership of our systems architecture and play a foundational role in operational scale
- Build the tools and automations that power a modern, data-driven investment platform
- Work closely with the executive team and gain visibility across business units
- Enjoy autonomy, flexibility, and a high-trust, results-focused team culture
- Competitive compensation based on experience and strategic impact

We are seeking a systems-driven professional to join us as Head of Systems & Workflow Automation. This is a strategic and implementation-focused role responsible for owning our internal technology stack, from process discovery and design to full deployment, integration, and automation. You will lead the effort to understand our real estate, marketing, and investor operations workflows, identify points of friction or inefficiency, and implement technology solutions that simplify execution and ensure data flows cleanly across tools. A key part of your role will be building automated data connections across systems and maintaining a centralized Notion-based company dashboard to ensure real-time visibility and team-wide coordination.

Core Mission
Own the implementation and performance of Apta's technology infrastructure by:
- Designing and deploying efficient, simplified workflows between departments and platforms
- Automating data flow between systems (e.g., CRM, investor portals, Google Workspace, Yardi, Agora) and into centralized dashboards in Notion
- Translating business processes into scalable, tech-enabled solutions that support day-to-day execution and decision-making

Key Responsibilities
Tech Stack Ownership and Implementation
- Lead implementation, integration, and ongoing management of core business platforms, including Notion, Slack, Google Workspace, Juniper Square, Yardi Breeze Premier, Agora, and our CRM
- Serve as the point person for all internal platform configuration and system enhancements
Process Mapping and Workflow Design
- Work with each team function (marketing, investor relations, acquisitions, asset management) to map operational workflows and identify opportunities to streamline processes
- Design and implement simplified, standardized workflows across platforms that reduce friction and improve handoffs
Cross-System Integration and Automation
- Build and maintain automations using Zapier or equivalent tools to eliminate manual entry, increase accuracy, and connect siloed tools
- Automate structured data transfer from external platforms into a Notion-based dashboard used across the company (see the sketch after this listing)
Documentation, Training, and Adoption
- Document systems architecture, SOPs, and platform usage guidelines for each major process
- Deliver live training and onboarding for internal users and serve as a support resource for troubleshooting system issues
Reporting, Governance, and Optimization
- Ensure system accuracy, data governance, and real-time reporting integrity across all platforms
- Regularly assess platform usage, functionality gaps, and data flow, and implement ongoing improvements
AI and Innovation Enablement
- Explore and implement intelligent tools (e.g., AI assistants, GPTs, internal automations) that accelerate business operations

What We're Looking For
Required Skills and Experience
- 5+ years in systems enablement, technical operations, or RevOps/MarketingOps roles
- Experience managing business platforms and integrating cross-functional workflows
- Proven ability to automate data movement between systems and into shared dashboards (especially using Zapier or similar tools)
- Deep familiarity with CRM tools (HubSpot, ActiveCampaign, or equivalent), platform APIs, and structured data
- Exceptional systems thinking and the ability to map, simplify, and scale operational processes
- Strong documentation and communication skills; comfortable leading internal trainings and writing SOPs
- Self-motivated and highly organized, capable of managing multiple initiatives in parallel
Preferred Qualifications
- Experience with Notion as a central operations dashboard or team knowledge hub
- Exposure to real estate tech platforms such as Yardi Breeze Premier, Juniper Square, Agora
- Background working with high-performance teams in fast-paced or entrepreneurial environments
- Familiarity with AI or GPT-based automations as applied to business process enablement
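A minimal sketch of the Notion-bound data automation described above, using the public Notion API directly rather than Zapier; the token, database ID, and property names are assumptions about the workspace schema.

```python
# Push one metric row into a Notion database, the kind of structured
# data transfer this role automates. All identifiers are placeholders.
import requests

headers = {
    "Authorization": "Bearer <integration-token>",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}
payload = {
    "parent": {"database_id": "<dashboard-database-id>"},
    "properties": {
        # Property names must match the target database's schema
        "Name": {"title": [{"text": {"content": "Weekly investor inflow"}}]},
        "Value": {"number": 125000},
    },
}
resp = requests.post("https://api.notion.com/v1/pages",
                     headers=headers, json=payload, timeout=10)
resp.raise_for_status()
print("Created page", resp.json()["id"])
```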
Posted 1 month ago
4.0 - 8.0 years
7 - 17 Lacs
Chennai
Hybrid
Dear Candidate,

Urgent: L2 Product/Application Support Engineer at Wolters Kluwer, Chennai. Kindly share your resume at jyoti.salvi@wolterskluwer.com

About the Role:
- Total 4+ years of relevant experience
- Job location: Chennai
- Education: Bachelor's degree in Computer Science or a related technical discipline, or an equivalent combination of work experience and education

Experience:
- Mid-seniority experience supporting web applications and systems
- Experience supporting Java-based applications
- Experience using ITIL-certified software to document and track issues
- Experience with Application Performance Monitoring (APM) and DB monitoring tools such as Dynatrace, AppDynamics, Datadog, AWS CloudWatch, and Microsoft Application Insights
- Experience with Kubernetes for cloud-based applications at a mid-senior level is an advantage

Other Desirable Knowledge, Skills, Abilities or Certifications:
- Excellent written and spoken English communication skills.
- Strong customer service orientation and interpersonal skills.
- Proven ability to understand logical sequences, perform root cause analysis, identify problems, and escalate issues.
- Capable of working independently.
- Must be able to document and illustrate cause and effect as part of problem-solving.
- Certifications in AWS, Azure, and ITIL are desirable.
- Proficient in writing queries to retrieve data for analysis.
- Experience with technologies such as Linux, Apache, JBOSS, J2EE, JavaScript, ELK, and MSSQL.
- Knowledge of REST API web services.
- Familiarity with Linux, Apache, JBOSS, J2EE, Bash/Perl scripting, and ELK for log analysis.
- Development-Operations (DevOps) knowledge supporting web-based applications is advantageous.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 1 month ago
7.0 - 10.0 years
0 Lacs
Pune
Hybrid
Job Description: EMS and Observability Consultant
Location: Bangalore

Job Summary:
We are seeking a skilled IT Operations Consultant specializing in monitoring and observability to design, implement, and optimize monitoring solutions for our customers. The ideal candidate will have a minimum of 7 years of relevant experience, with a strong background in monitoring, observability, and IT service management, and will be responsible for ensuring system reliability, performance, and availability by creating robust observability architectures and leveraging modern monitoring tools.

Qualification/Experience Needed
• Minimum 7 years of working experience in Cyber Security Consulting or Advisory.

Primary Responsibilities:
• Design end-to-end monitoring and observability solutions to provide comprehensive visibility into infrastructure, applications, and networks.
• Implement monitoring tools and frameworks (e.g., Prometheus, Grafana, OpsRamp, Dynatrace, New Relic) to track key performance indicators and system health metrics.
• Integrate monitoring and observability solutions with IT Service Management tools (see the sketch after this listing).
• Develop and deploy dashboards, alerts, and reports to proactively identify and address system performance issues.
• Architect scalable observability solutions to support hybrid and multi-cloud environments.
• Collaborate with infrastructure, development, and DevOps teams to ensure seamless integration of monitoring systems into CI/CD pipelines.
• Continuously optimize monitoring configurations and thresholds to minimize noise and improve incident detection accuracy.
• Automate alerting, remediation, and reporting processes to enhance operational efficiency.
• Utilize AIOps and machine learning capabilities for intelligent incident management and predictive analytics.
• Work closely with business stakeholders to define monitoring requirements and success metrics.
• Document monitoring architectures, configurations, and operational procedures.

Required Skills:
• Strong understanding of infrastructure and platform development principles, and experience with programming languages and tools such as Python and Ansible for developing custom scripts.
• Strong knowledge of monitoring frameworks, logging systems (ELK stack, Fluentd), and tracing tools (Jaeger, Zipkin), along with open-source solutions like Prometheus and Grafana.
• Extensive experience with monitoring and observability solutions such as OpsRamp, Dynatrace, and New Relic; must have worked with ITSM integration (e.g., integration with ServiceNow, BMC Remedy, etc.).
• Working experience with RESTful APIs and understanding of API integration with monitoring tools.
• Familiarity with AIOps and machine learning techniques for anomaly detection and incident prediction.
• Knowledge of ITIL processes and Service Management frameworks.
• Familiarity with security monitoring and compliance requirements.
• Excellent analytical and problem-solving skills; ability to debug and troubleshoot complex automation issues.

About Mphasis
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2TM=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps 'shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, speed and innovation with domain expertise and specialization are key to building strong relationships with marquee clients.

Skills
- Primary competency: Tools; primary skill: Dynatrace (51%)
- Secondary competency: Tools; secondary skill: New Relic (25%)
- Tertiary competency: Tools; tertiary skill: Automation Tools - Chef/Puppet/Ansible/Salt Stack (24%)
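For the ITSM integration called out above, a minimal sketch of opening a ServiceNow incident from a monitoring alert via the ServiceNow Table API; the instance URL, credentials, and field values are placeholders.

```python
# Create an incident in ServiceNow when a monitoring alert fires.
import requests

INSTANCE = "https://example.service-now.com"
payload = {
    "short_description": "High CPU on app-node-3 (Dynatrace alert)",
    "urgency": "2",
    "impact": "2",
}
resp = requests.post(
    f"{INSTANCE}/api/now/table/incident",
    auth=("svc_monitoring", "secret"),  # basic-auth service account
    headers={"Accept": "application/json"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("Incident:", resp.json()["result"]["number"])
```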
Posted 1 month ago
5.0 - 10.0 years
14 - 19 Lacs
Bengaluru
Work from Office
As Analyst / Lead / Scientist, you will play a pivotal role in optimizing our SRE team's ability to proactively identify and resolve issues within our voice and text-messaging platform infrastructure. You will leverage your expertise in AI/ML and data analysis to model system behavior, flag anomalies, and analyze large-scale datasets to drive data-driven optimization and ensure effective utilization of ML capabilities.

Qualifications/Skills/Abilities (Minimum Requirements)
Formal Education: Bachelor's degree in Computer Science, Information Technology, or a related field with specialization in data science (or equivalent experience).
Experience (type and duration): 5+ years of experience in data analysis or data science, preferably in a technical or engineering environment. Telecom domain experience is good to have.

Skills:
- Proficiency in data analysis tools (e.g., SQL, Python).
- Strong understanding of statistical concepts and techniques.
- Experience with data visualization tools (e.g., Tableau, Power BI, Kibana, Grafana, Domo).
- Familiarity with cloud-based infrastructure and applications (e.g., AWS, Azure, GCP).
- Ability to work effectively in a fast-paced, collaborative environment.
- Experience working with large datasets, log analysis, and tools like Elastic and Domo (or similar) will be a significant advantage.
- Strong knowledge of AI/ML algorithms and frameworks (e.g., TensorFlow, PyTorch).
- Experience with anomaly detection techniques and tools.

Accreditation/Certifications/Licenses (Preferred): Advanced degree in data science or a related field. Experience in the telecom domain. Certification in data science or machine learning.

Key Duties & Responsibilities
1. AI/ML Model Development: Develop and implement AI/ML models to analyze system behavior, identify anomalies, and predict potential issues.
2. Collaborative Problem Solving: Work closely with SRE and DevOps teams to identify data logs, analyze system behavior, and develop AI/ML models to address issues.
3. Data Analysis and Modeling: Conduct in-depth analysis of large-scale datasets (SQL, NoSQL) to extract valuable insights and build predictive models.
4. Anomaly Detection: Develop robust anomaly detection algorithms to flag unusual system behavior and prevent potential disruptions (see the sketch after this listing).
5. Data-Driven Optimization: Optimize system performance and resource allocation based on data-driven insights and AI/ML recommendations.
6. ML Capability Utilization: Ensure effective integration and utilization of ML capabilities across the SRE team to enhance operational efficiency and reliability.
7. Telemetry Data Analysis: Analyze large datasets of telemetry data from various sources (e.g., call logs, performance metrics, system logs) to identify patterns, trends, and anomalies.
8. Alerting Optimization: Develop and refine alerting rules based on data-driven insights to ensure timely notification of critical issues and minimize alert fatigue.
9. Proactive Issue Identification: Leverage data analysis techniques and AI/ML models to proactively identify potential system issues or outages before they occur.
10. Root Cause Analysis: Investigate and analyze incidents to identify root causes and implement preventive measures.
11. Data Visualization: Create clear and informative visualizations to communicate findings to stakeholders and facilitate decision-making.

Required Skills: 5+ years of experience in data analysis or data science, preferably in a technical or engineering environment. Telecom domain experience is good to have.
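A minimal sketch of the anomaly-detection duty above: flag telemetry points whose rolling z-score exceeds a threshold. A production pipeline would pull the series from Elastic or Domo and tune the window per metric; this is illustrative only.

```python
# Rolling z-score anomaly flagging on a telemetry series with pandas.
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 60,
                   threshold: float = 3.0) -> pd.Series:
    """Boolean mask of points > `threshold` std-devs from the mean of the
    preceding window (shifted so a spike does not mask itself)."""
    rolling = series.shift(1).rolling(window, min_periods=window // 2)
    zscore = (series - rolling.mean()) / rolling.std()
    return zscore.abs() > threshold

# Example: per-minute call-failure counts with one obvious spike
counts = pd.Series([3, 4, 2, 5, 3, 4, 48, 3, 2], dtype=float)
print(flag_anomalies(counts, window=5, threshold=2.5))
```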
Posted 1 month ago
4.0 - 6.0 years
4 - 7 Lacs
Gurugram
Work from Office
GreensTurn is seeking a highly skilled DevOps Engineer to manage and optimize our cloud infrastructure, automate deployment pipelines, and enhance the security and performance of our web-based platform. The ideal candidate will be responsible for ensuring high availability, scalability, and security of the system while working closely with developers, security teams, and product managers.

Key Responsibilities:
- Cloud Infrastructure Management: Deploy, configure, and manage cloud services on AWS or Azure for scalable, cost-efficient infrastructure.
- CI/CD Implementation: Develop and maintain CI/CD pipelines for automated deployments using GitHub Actions, Jenkins, or GitLab CI/CD.
- Containerization & Orchestration: Deploy and manage applications using Docker, Kubernetes (EKS/AKS), and Helm.
- Monitoring & Performance Optimization: Implement real-time monitoring, logging, and alerting using Prometheus, Grafana, CloudWatch, or ELK Stack.
- Security & Compliance: Ensure best practices for IAM (Identity & Access Management), role-based access control (RBAC), encryption, firewalls, and vulnerability management.
- Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, AWS CloudFormation, or Azure Bicep.
- Networking & Load Balancing: Set up VPCs, security groups, load balancers (ALB/NLB), and CDN (CloudFront/Azure CDN).
- Disaster Recovery & Backup: Implement automated backups, failover strategies, and disaster recovery plans.
- Database Management: Optimize database performance, backup policies, and replication for MongoDB.
- Collaboration & Documentation: Work with development teams to integrate DevOps best practices and maintain proper documentation for infrastructure and deployment workflows.
Posted 1 month ago
7.0 - 11.0 years
9 - 12 Lacs
Mumbai, Bengaluru, Delhi
Work from Office
Experience: 7+ years
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Must-have skills: DevOps, PowerShell, CLI, Amazon AWS, Java, Scala, Go (Golang), Terraform

Opportunity Summary:
We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer in the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems and an excellent understanding of key SaaS technologies, as well as exhibit a high amount of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's products and influence decisions concerning solutions and techniques within their discipline.

What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.

What are we looking for?
Experience: Overall, 7-9 years of total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.

Technical Skills: To be a part of this journey, you should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.

Primary Skills:
- Public cloud providers. AWS: solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, CloudWatch monitoring/alerting.
- Automating existing manual workloads like reporting and patching/updating servers by writing scripts, Lambda functions, etc. (see the sketch after this listing).
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools like Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.

Secondary Skills:
It would be advantageous if the candidate also has the following secondary skills:
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.

Soft Skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Experience with modernizing legacy applications and improving deployment processes.
- Excellent problem-solving skills and the ability to work under remote supervision.
- Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.
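As a hedged example of the reporting automation mentioned in the Primary Skills above, the boto3 sketch below lists unattached EBS volumes, a typical cleanup/cost report run from a script or Lambda; the region is a placeholder.

```python
# Report EBS volumes not attached to any instance ("available" status).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for vol in page["Volumes"]:
        print(vol["VolumeId"], vol["Size"], "GiB", vol["AvailabilityZone"])
```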
Posted 1 month ago
4.0 - 8.0 years
10 - 12 Lacs
Pune
Work from Office
We are seeking a skilled and motivated DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in CI/CD pipelines, cloud infrastructure, containerization, and automation, along with basic programming knowledge.
Posted 1 month ago
5.0 - 10.0 years
25 - 35 Lacs
Bengaluru
Remote
- Cloud Support Operations - SaaS and AWS (Storage, Databases, IAM, ECS, EKS, and CloudWatch) - Cloud Observability and Monitoring (Datadog, Splunk, Grafana, and Prometheus) - Infrastructure Management - Kubernetes and Containerization
Posted 1 month ago
6.0 - 11.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 11+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules (see the sketch after this listing). Implement disaster recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuration and management of databases such as MySQL and Mongo.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 pm
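A minimal sketch of the backup automation listed above: taking a date-stamped manual RDS snapshot with boto3. Identifiers are placeholders; a scheduled job would add retention pruning.

```python
# Create a date-stamped manual snapshot of an RDS instance.
import datetime
import boto3

rds = boto3.client("rds", region_name="ap-south-1")
stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
resp = rds.create_db_snapshot(
    DBInstanceIdentifier="orders-db",                    # placeholder
    DBSnapshotIdentifier=f"orders-db-manual-{stamp}",
)
print(resp["DBSnapshot"]["Status"])
```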
Posted 1 month ago
8.0 - 12.0 years
8 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
**Job Title:** Confluent Kafka Engineer (Azure & GCP Focus)
**Location:** [Bangalore or Hyderabad]

**Role Overview**
We are seeking an experienced **Confluent Kafka Engineer** with hands-on expertise in deploying, administering, and securing Kafka clusters in **Microsoft Azure** and **Google Cloud Platform (GCP)** environments. The ideal candidate will be skilled in cluster administration, RBAC, cluster linking and setup, and monitoring using Prometheus and Grafana, with a strong understanding of cloud-native best practices.

**Key Responsibilities**
- **Kafka Cluster Administration (Azure & GCP):** Deploy, configure, and manage Confluent Kafka clusters on Azure and GCP virtual machines or managed infrastructure. Plan and execute cluster upgrades, scaling, and disaster recovery strategies in cloud environments. Set up and manage cluster linking for cross-region and cross-cloud data replication. Monitor and maintain the health and performance of Kafka clusters, proactively identifying and resolving issues.
- **Security & RBAC:** Implement and maintain security protocols, including SSL/TLS encryption and role-based access control (RBAC). Configure authentication and authorization (Kafka ACLs) across Azure and GCP environments. Set up and manage **Active Directory (AD) plain authentication** and **OAuth** for secure user and application access (see the sketch after this listing). Ensure compliance with enterprise security standards and cloud provider best practices.
- **Monitoring & Observability:** Set up and maintain monitoring and alerting using Prometheus and Grafana, integrating with Azure Monitor and GCP-native monitoring as needed. Develop and maintain dashboards and alerts for Kafka performance and reliability metrics. Troubleshoot and resolve performance and reliability issues using cloud-native and open-source monitoring tools.
- **Integration & Automation:** Develop and maintain automation scripts (Bash, Python, Terraform, Ansible) for cluster deployment, scaling, and monitoring. Build and maintain infrastructure as code for Kafka environments in Azure and GCP. Configure and manage **Kafka connectors** for integration with external systems, including **BigQuery Sync connectors** and connectors for Azure and GCP data services (such as Azure Data Lake, Cosmos DB, BigQuery).
- **Documentation & Knowledge Sharing:** Document standard operating procedures, architecture, and security configurations for cloud-based Kafka deployments. Provide technical guidance and conduct knowledge transfer sessions for internal teams.

**Required Qualifications**
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience with Confluent Platform and Kafka in enterprise environments.
- Demonstrated experience deploying and managing Kafka clusters on **Azure** and **GCP** (not just using pre-existing clusters).
- Strong expertise in cloud networking, security, and RBAC in Azure and GCP.
- Experience configuring **AD plain authentication** and **OAuth** for Kafka.
- Proficiency with monitoring tools (Prometheus, Grafana, Azure Monitor, GCP Monitoring).
- Hands-on experience with Kafka connectors, including BQ Sync connectors, Schema Registry, KSQL, and Kafka Streams.
- Scripting and automation skills (Bash, Python, Terraform, Ansible).
- Familiarity with infrastructure as code practices.
- Excellent troubleshooting and communication skills.

**Preferred Qualifications**
- Confluent Certified Developer/Admin certification.
- Experience with cross-cloud Kafka streaming and integration scenarios. - Familiarity with Azure and GCP data services (Azure Data Lake, Cosmos DB, BigQuery). - Experience with other streaming technologies (e.g., Spark Streaming, Flink). - Experience with data visualization and analytics tools.
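As a hedged illustration of a client against a cluster secured the way this role describes, the sketch below configures a confluent-kafka producer for SASL_SSL with the PLAIN mechanism (the "AD plain authentication" case); the broker address, topic, and credentials are placeholders.

```python
# Producer against a SASL_SSL/PLAIN-secured Kafka cluster.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "broker-1.example.com:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "svc-orders",   # AD service account (placeholder)
    "sasl.password": "secret",
}
producer = Producer(conf)

def on_delivery(err, msg):
    if err is not None:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic(), msg.partition())

producer.produce("orders", key="o-123", value=b'{"status":"NEW"}',
                 on_delivery=on_delivery)
producer.flush(10)
```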
Posted 1 month ago
2.0 - 4.0 years
6 - 7 Lacs
Mumbai Suburban
Work from Office
We are the PERFECT match if you...
Are a graduate with a minimum of 2-4 years of technical product support experience with the following skills:
- Clear logical thinking and good communication skills. We believe in individuals who are high on ownership and like to operate with minimal management.
- An ability to "understand" data and analyze logs to help investigate production issues and incidents.
- Hands-on experience with cloud platforms (GCP/AWS).
- Experience creating dashboards and alerts with tools like Metabase, Grafana, Prometheus.
- Hands-on experience with writing SQL queries.
- Hands-on experience with log monitoring tools (Kibana, Stackdriver, CloudWatch).
- Knowledge of a scripting language like Elixir/Python is a plus.
- Experience in Kubernetes/Docker is a plus.
- Has actively worked on documenting RCAs and creating incident reports.
- Good understanding of APIs, with hands-on experience using tools like Postman or Insomnia.
- Knowledge of ticketing tools such as Freshdesk/GitLab.

Here's what your day would look like...
- Defining monitoring events for IDfy's services and setting up the corresponding alerts.
- Responding to alerts by triaging, investigating, and resolving issues.
- Learning about various IDfy applications and understanding the events emitted.
- Creating analytical dashboards for service performance and usage monitoring.
- Responding to incidents and customer tickets in a timely manner.
- Occasionally running service recovery scripts.
- Helping improve the IDfy Platform by providing insights based on investigations and root cause analysis.

Get in touch with ankit.pant@idfy.com
Posted 1 month ago
6.0 - 10.0 years
10 - 15 Lacs
Gurugram
Work from Office
Role & Responsibilities

Job Summary:
We are seeking a highly skilled and self-driven DevOps Engineer to manage and optimize our AWS-based infrastructure and CI/CD pipelines. The ideal candidate will be responsible for maintaining Jenkins pipelines, EC2 instances running Docker containers, and managing AWS resources including load balancers. The engineer should be proactive in monitoring systems using Prometheus and Grafana, handling alerts, and driving cost optimization initiatives. Basic scripting knowledge in Bash and Python is essential for automation and operational tasks.

Key Responsibilities:
- Independently manage and maintain Jenkins pipelines for CI/CD.
- Administer and optimize AWS infrastructure, including EC2 instances, load balancers, IAM, and networking.
- Manage Docker containers deployed on EC2 instances.
- Monitor system health and metrics using Prometheus and Grafana, and act on alerts.
- Troubleshoot and resolve infrastructure and deployment issues promptly.
- Analyze infrastructure usage and recommend cost optimization strategies.
- Develop automation scripts in Bash and Python for recurring operational tasks.
- Maintain and improve system security, scalability, and performance.
- Collaborate with development teams to streamline release processes and environment stability.

Required Skills and Qualifications:
- 3+ years of hands-on experience in DevOps and the AWS ecosystem.
- Strong expertise in Jenkins, including pipeline scripting (Declarative or Groovy).
- Good experience with EC2, Elastic Load Balancing (ELB), IAM, VPC, and other AWS services.
- Proficiency with Docker: building, running, and managing containers on EC2.
- Experience in monitoring and alerting using Prometheus and Grafana.
- Solid scripting skills in Bash and Python.
- Ability to work independently and own the full stack from build to deploy to monitoring.
- Knowledge of security best practices in cloud environments.

Preferred Qualifications:
- AWS Certification (Solutions Architect / SysOps / DevOps Engineer) is a plus.
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation) is a plus.
- Familiarity with Git workflows and source control best practices.

Note: This is a consultant role for 2-4 months.
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's in it for you:
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities:
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full lifecycle delivery of complex data projects, including: designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB); implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC; establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates; and writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define, and continuously improve, platform standards, coding guidelines, and operational runbooks
- Conduct code reviews and pair programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements:
- 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluate and adopt emerging tools, languages, and frameworks

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Posted 1 month ago
5.0 - 9.0 years
16 - 20 Lacs
Pune
Work from Office
Job Summary Synechron is seeking an experienced Site Reliability Engineer (SRE) / DevOps Engineer to lead the design, implementation, and management of reliable, scalable, and efficient infrastructure solutions. This role is pivotal in ensuring optimal performance, availability, and security of our applications and services through advanced automation, continuous deployment, and proactive monitoring. The ideal candidate will collaborate closely with development, operations, and security teams to foster a culture of continuous improvement and technological innovation. Software Required Skills: Proficiency with cloud platforms such as AWS, GCP, or Azure Expertise with container orchestration tools like Kubernetes and Docker Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or similar Strong scripting skills in Python, Bash, or similar languages Preferred Skills: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack Knowledge of configuration management tools such as Ansible, Chef, or Puppet Experience implementing security best practices in cloud environments Understanding of microservices architecture and service mesh frameworks like Istio or Linkerd Overall Responsibilities Lead the development, deployment, and maintenance of scalable, resilient infrastructure solutions. Automate routine tasks and processes to improve efficiency and reduce manual intervention. Implement and refine monitoring, alerting, and incident response strategies to maintain high system availability. Collaborate with software development teams to integrate DevOps best practices into product development cycles. Guide and mentor team members on emerging technologies and industry best practices. Ensure compliance with security standards and manage risk through security controls and assessments. Stay abreast of the latest advancements in SRE, cloud computing, and automation technologies to recommend innovative solutions aligned with organizational goals. Technical Skills (By Category) Cloud Technologies: EssentialAWS, GCP, or Azure (both infrastructure management and deployment) PreferredMulti-cloud management, cloud cost optimization Containers and Orchestration: EssentialDocker, Kubernetes PreferredService mesh frameworks like Istio, Linkerd Automation & Infrastructure as Code: EssentialTerraform, CloudFormation, or similar PreferredAnsible, SaltStack Monitoring & Logging: EssentialPrometheus, Grafana, ELK Stack PreferredDataDog, New Relic, Splunk Security & Compliance: Knowledge of identity and access management (IAM), encryption, vulnerability management Development & Scripting: EssentialPython, Bash scripting PreferredGo, PowerShell Experience 5-9 years of experience in software engineering, systems administration, or DevOps/SRE roles. Proven track record in designing and deploying large-scale, high-availability systems. Hands-on experience with cloud infrastructure automation and container orchestration. Past roles leading incident management, performance tuning, and security enhancements. Experience in working with cross-functional teams using Agile methodologies. BonusExperience with emerging technologies like Blockchain, IoT, or AI integrations. Day-to-Day Activities Architect, deploy, and maintain cloud infrastructure and containerized environments. Develop automation scripts and frameworks to streamline deployment and operations. 
Monitor system health, analyze logs, and troubleshoot issues proactively.
Conduct capacity planning and performance tuning.
Collaborate with development teams to integrate new features into production with zero downtime.
Participate in incident response, post-mortem analysis, and continuous improvement initiatives.
Document procedures, guidelines, and best practices for the team.
Stay updated on evolving SRE technologies and industry trends, applying them to enhance our infrastructure.
Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Certifications in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Engineer, Google Professional Cloud Engineer) are preferred.
Additional certifications in Kubernetes, Terraform, or security are advantageous.
Professional Competencies
Strong analytical and problem-solving abilities.
Excellent collaboration and communication skills.
Leadership qualities with an ability to mentor junior team members.
Ability to work under pressure and manage multiple priorities.
Commitment to best practices around automation, security, and reliability.
Eagerness to learn emerging technologies and adapt to evolving workflows.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
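To make the CI/CD expectations above concrete, here is a minimal sketch of a GitLab CI pipeline that validates, plans, and applies Terraform changes. It is illustrative only: the image tag, branch name, and manual production gate are assumptions rather than details from the posting, and backend configuration is presumed to come from CI variables.

# Illustrative .gitlab-ci.yml: a minimal Terraform plan/apply pipeline.
# Image tag and branch name are assumptions for this sketch.
stages:
  - validate
  - plan
  - apply

default:
  image:
    name: hashicorp/terraform:1.7
    entrypoint: [""]                # clear the image's terraform entrypoint so CI scripts run

validate:
  stage: validate
  script:
    - terraform init -backend=false
    - terraform fmt -check          # fail on unformatted code
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init                # backend settings supplied via CI variables
    - terraform plan -out=tfplan
  artifacts:
    paths: [tfplan]                 # hand the saved plan to the apply job

apply:
  stage: apply
  script:
    - terraform init
    - terraform apply -auto-approve tfplan
  when: manual                      # keep a human gate before changes land
  only: [main]

The manual apply stage reflects a common SRE practice: automated plans on every change, but a deliberate human decision before mutating shared infrastructure.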
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Thiruvananthapuram
Work from Office
Job Summary:
We are seeking an experienced DevOps Architect to drive the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate should have deep expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across multiple cloud environments, along with strong leadership and mentoring capabilities.
Job Duties and Responsibilities
Lead and manage the DevOps team to ensure reliable infrastructure and automated deployment processes.
Design, implement, and maintain highly available, scalable, and secure cloud infrastructure (AWS, Azure, GCP, etc.).
Develop and optimize CI/CD pipelines for multiple applications and environments.
Drive Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible.
Oversee monitoring, logging, and alerting solutions to ensure system health and performance (a sample alerting rule follows this posting).
Collaborate with Development, QA, and Security teams to integrate DevOps best practices across the SDLC.
Lead incident management and root cause analysis for production issues.
Ensure robust security practices for infrastructure and pipelines (secrets management, vulnerability scanning, etc.).
Guide and mentor team members, fostering a culture of continuous improvement and technical excellence.
Evaluate and recommend new tools, technologies, and processes to improve operational efficiency.
Required Qualifications
Education
Bachelor's degree in Computer Science, IT, or a related field; Master's preferred.
At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA, Terraform, etc.)
Experience:
10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
5+ years of experience in a technical leadership or team lead role.
Knowledge, Skills & Abilities
Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
Proficient in containerization and orchestration using Docker and Kubernetes.
Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
Scripting knowledge in languages like Python, Bash, or Go.
Solid understanding of networking, security, and system administration.
Experience implementing security best practices across DevOps pipelines.
Proven ability to mentor, coach, and lead technical teams.
Preferred Skills
Experience with serverless architecture and microservices deployment.
Experience with security tools and best practices (e.g., IAM, VPNs, firewalls, cloud security posture management).
Exposure to hybrid cloud or multi-cloud environments.
Knowledge of cost optimization and cloud governance strategies.
Experience working in Agile teams and managing infrastructure in production-grade environments.
Relevant certifications (AWS Certified DevOps Engineer, Azure DevOps Expert, CKA, etc.).
Working Conditions
Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
On-Call Requirements: Light on-call rotation may be required depending on operational needs.
Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST as needed.
Living AOT's Values
Our values guide how we work, collaborate, and grow as a team. Every role at AOT is expected to embody and promote these values:
Innovation: We pursue true innovation by solving problems and meeting unarticulated needs.
Integrity: We hold ourselves to high ethical standards and never compromise.
Ownership: We are all responsible for our shared long-term success.
Agility: We stay ready to adapt to change and deliver results.
Collaboration: We believe collaboration and knowledge-sharing fuel innovation and success.
Empowerment: We support our people so they can bring the best of themselves to work every day.
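Since this role owns monitoring and alerting, the following is a brief illustrative sketch of a Prometheus alerting rule of the kind such a stack would manage. The metric name, threshold, and runbook URL are hypothetical placeholders, not values from any real environment.

# Illustrative Prometheus alerting rule; names and thresholds are placeholders.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        # Share of 5xx responses among all HTTP responses over 5 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m                    # condition must persist before paging
        labels:
          severity: page
        annotations:
          summary: "Error rate above 5% for 10 minutes"
          runbook: "https://example.com/runbooks/high-error-rate"

The for: clause is the detail worth emphasizing when mentoring a team: it suppresses pages for transient spikes, trading a few minutes of detection latency for far fewer false alarms.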
Posted 1 month ago
3.0 - 7.0 years
3 - 7 Lacs
Mohali
Work from Office
The Cloud Computing Training Expert will be responsible for delivering high-quality training sessions, developing curriculum, and guiding students toward industry certifications and career opportunities.
Key Responsibilities
1. Training Delivery
Design, develop, and deliver high-quality cloud computing training through courses, workshops, boot camps, and webinars.
Cover a broad range of cloud topics, including but not limited to:
Cloud Fundamentals (AWS, Azure, Google Cloud)
Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Serverless Computing
Cloud Security, Identity & Access Management (IAM), Compliance
DevOps & CI/CD Pipelines (Jenkins, Docker, Kubernetes, Terraform, Ansible)
Networking in the Cloud, Virtualization, and Storage Solutions
Multi-cloud Strategies & Cost Optimization
2. Curriculum Development
Develop and continuously update training materials, hands-on labs, and real-world projects.
Align curriculum with cloud certification programs (AWS Certified Solutions Architect, Azure Administrator, Google Cloud Professional, etc.).
3. Training Management
Organize and manage cloud computing training sessions, ensuring smooth delivery and active student engagement.
Track student progress and provide guidance, feedback, and additional learning resources.
4. Technical Support & Mentorship
Assist students with technical queries and troubleshooting related to cloud platforms.
Provide career guidance, helping students pursue cloud certifications and job placements in cloud computing and DevOps roles.
5. Industry Engagement
Stay updated on emerging cloud technologies, trends, and best practices.
Represent ASB at cloud computing conferences, industry events, and tech forums.
6. Assessment & Evaluation
Develop and administer hands-on labs, quizzes, and real-world cloud deployment projects (a sample lab manifest follows this posting).
Evaluate learner performance and provide constructive feedback.
Required Qualifications & Skills
> Educational Background
Bachelor's or Master's degree in Computer Science, Information Technology, Cloud Computing, or a related field.
> Hands-on Cloud Experience
3+ years of experience in cloud computing, DevOps, or cloud security roles.
Strong expertise in AWS, Azure, and Google Cloud, including cloud architecture, storage, and security.
Experience in Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible.
Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines.
> Teaching & Communication Skills
2+ years of experience in training, mentoring, or delivering cloud computing courses.
Ability to explain complex cloud concepts in a clear and engaging way.
> Cloud Computing Tools & Platforms
Experience with AWS services (EC2, S3, Lambda, RDS, IAM, CloudWatch, etc.).
Hands-on experience with Azure and Google Cloud solutions.
Familiarity with DevOps tools (Jenkins, GitHub Actions, Kubernetes, Docker, Prometheus, Grafana, etc.).
> Passion for Education
A strong desire to train and mentor future cloud professionals.
Preferred Qualifications
> Cloud Certifications (AWS, Azure, Google Cloud)
AWS Certified Solutions Architect, AWS DevOps Engineer, Azure Administrator, Google Cloud Professional Architect, or similar.
> Experience in Online Teaching
Prior experience in delivering online training (Udemy, Coursera, or LMS platforms).
> Knowledge of Multi-Cloud & Cloud Security
Understanding of multi-cloud strategies, cloud cost optimization, and cloud-native security practices.
> Experience in Hybrid Cloud & Edge Computing
Familiarity with hybrid cloud deployment, cloud automation, and emerging edge computing trends.
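As a flavor of the hands-on labs mentioned above, here is a minimal Kubernetes Deployment manifest a trainee might write in a first containerization lab. The name, image, and resource figures are placeholder lab values, not part of any curriculum requirement.

# Illustrative lab manifest: a two-replica Deployment of a public image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 2                       # two pods for basic availability
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
        - name: web
          image: nginx:1.27         # any small public image works for a lab
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 100m             # modest requests keep lab clusters cheap
              memory: 64Mi

A natural lab exercise is to apply this with kubectl apply -f deployment.yaml, scale it with kubectl scale deployment demo-web --replicas=4, and watch the rollout, which exercises most of the Kubernetes fundamentals listed in the curriculum.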
Posted 1 month ago
8.0 - 10.0 years
25 - 30 Lacs
Bengaluru, Indiranagar
Work from Office
Years of Experience: 8 to 10 years
PD1
Project-specific prerequisite skills: Candidate will work from the customer location, Bangalore (Indiranagar)
No. of contractors required: 1
Detailed JD:
Extensive hands-on experience with OpenShift (Azure Red Hat OpenShift) - installation, upgrades, administration, and troubleshooting.
Strong expertise in Kubernetes, containerization (Docker), and cloud-native development.
Deep knowledge of Terraform for infrastructure automation and ArgoCD for GitOps workflows (a sample Application manifest follows this JD).
Experience in CI/CD pipelines, automation, and security integration within a DevSecOps framework.
Strong understanding of cybersecurity principles, including vulnerability management, policy enforcement, and access control.
Proficiency in Microsoft Azure and its services related to networking, security, and compute.
Hands-on experience with monitoring and observability tools (Splunk, Prometheus, Grafana, or similar).
Agile mindset, preferably with SAFe Agile experience.
Strong communication skills and the ability to work with global teams across time zones.
Experience with Helm charts and Kubernetes operators.
Knowledge of Service Mesh (Istio, Linkerd) for OpenShift (Azure Red Hat OpenShift) environments preferred.
Hands-on exposure to Terraform Cloud & Enterprise features.
Prior experience in automotive embedded software environments.
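For the GitOps portion of this JD, here is an illustrative Argo CD Application manifest of the kind used to drive deployments from Git. The repository URL, path, and namespaces are hypothetical; an actual ARO environment would substitute its own values.

# Illustrative Argo CD Application; repo, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/git/platform-manifests.git
    targetRevision: main
    path: apps/payments            # Helm chart or plain manifests live here
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert out-of-band cluster drift

With prune and selfHeal enabled, the Git repository is the single source of truth: manual changes to the cluster are reverted automatically, which is the core discipline behind the GitOps workflows this role oversees.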
Posted 1 month ago