6.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 9+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Collaborate cross-functionally. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and Mongo. Excellent troubleshooting. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm.
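For readers unfamiliar with the backup automation this posting describes, the following is a minimal sketch of taking a manual RDS snapshot with boto3. It assumes boto3 is installed and AWS credentials are configured; the instance identifier and tag values are hypothetical placeholders, not part of the posting.

```python
# Sketch: automate an RDS snapshot, the kind of backup task described above.
# Assumes boto3 is installed and AWS credentials are configured; the
# instance identifier and tag values are hypothetical placeholders.
import datetime

import boto3


def snapshot_rds_instance(instance_id: str) -> str:
    """Create a manual snapshot of an RDS instance and return its identifier."""
    rds = boto3.client("rds")
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
        Tags=[{"Key": "created-by", "Value": "backup-automation"}],
    )
    # Block until the snapshot is available so a pipeline step can rely on it.
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    return snapshot_id


if __name__ == "__main__":
    print(snapshot_rds_instance("orders-db"))  # hypothetical instance name
```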
Posted 2 weeks ago
3.0 - 8.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Location: Bangalore - Carina. Posted 14 days ago. Job requisition ID: R-045305. The IT AI Application Platform team is seeking a Senior Site Reliability Engineer (SRE) to develop, scale, and operate our AI Application Platform based on Red Hat technologies, including OpenShift AI (RHOAI) and Red Hat Enterprise Linux AI (RHEL AI). As an SRE, you will contribute to running core AI services at scale by enabling customer self-service, making our monitoring system more sustainable, and eliminating toil through automation. On the IT AI Application Platform team, you will have the opportunity to influence the complex challenges of scale which are unique to Red Hat IT managed AI platform services, while using your skills in coding, operations, and large-scale distributed system design. We develop, deploy, and maintain Red Hat's next-generation AI application deployment environment for custom applications and services across a range of hybrid cloud infrastructures. We are a global team operating on-premise and in the public cloud, using the latest technologies from Red Hat and beyond. Red Hat relies on teamwork and openness for its success. We are a global team and strive to cultivate a transparent environment that makes room for different voices. We learn from our failures in a blameless environment to support the continuous improvement of the team. At Red Hat, your individual contributions have more visibility than at most large companies, and visibility means career opportunities and growth. What you will do: The day-to-day responsibilities of an SRE involve working with live systems and coding automation. As an SRE you will be expected to: Build and manage our large-scale infrastructure and platform services, including public cloud, private cloud, and datacenter-based. Automate cloud infrastructure through the use of technologies (e.g. auto scaling, load balancing), scripting (Bash, Python, and Golang), and monitoring and alerting solutions (e.g. Splunk, Splunk IM, Prometheus, Grafana, Catchpoint). Design, develop, and become expert in AI capabilities leveraging emerging industry standards. Participate in the design and development of software like Kubernetes operators, webhooks, and CLI tools. Implement and maintain intelligent infrastructure and application monitoring designed to enable application engineering teams. Ensure the production environment is operating in accordance with established procedures and best practices. Provide escalation support for high-severity and critical platform-impacting events. Provide feedback around bugs and feature improvements to the various Red Hat Product Engineering teams. Contribute software tests and participate in peer review to increase the quality of our codebase. Help develop peers' capabilities through knowledge sharing, mentoring, and collaboration. Participate in a regular on-call schedule, supporting the operational needs of our tenants. Practice sustainable incident response and blameless postmortems. Work within a small agile team to develop and improve SRE methodologies, support your peers, plan, and self-improve. What you will bring: A bachelor's degree in Computer Science or a related technical field involving software or systems engineering is required. However, hands-on experience that demonstrates your ability and interest in Site Reliability Engineering is valuable to us, and may be considered in lieu of degree requirements.
You must have some experience programming in at least one of these languages: Python, Golang, Java, C, C++, or another object-oriented language. You must have experience working with public clouds such as AWS, GCP, or Azure. You must also have the ability to collaboratively troubleshoot and solve problems in a team setting. As an SRE you will be most successful if you have some experience troubleshooting an as-a-service offering (SaaS, PaaS, etc.) and some experience working with complex distributed systems. We like to see a demonstrated ability to debug, optimize code, and automate routine tasks. We are Red Hat, so you need a basic understanding of Unix/Linux operating systems. Desired skills: 3+ years of experience using cloud providers and technologies (Google, Azure, Amazon, OpenStack, etc.) 1+ years of experience administering a Kubernetes-based production environment 2+ years of experience with enterprise systems monitoring 2+ years of experience with enterprise configuration management software like Ansible by Red Hat, Puppet, or Chef 2+ years of experience programming with at least one object-oriented language; Golang, Java, or Python are preferred 2+ years of experience delivering a hosted service Demonstrated ability to quickly and accurately troubleshoot system issues Solid understanding of standard TCP/IP networking and common protocols like DNS and HTTP Demonstrated comfort with collaboration, open communication, and reaching across functional boundaries Passion for understanding users' needs and delivering outstanding user experiences Independent problem-solving and self-direction Works well alone and as part of a global team Experience working with Agile development methodologies #LI-SH4 About Red Hat: Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
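As a rough illustration of the monitoring and alerting automation this role describes, here is a minimal sketch that queries a Prometheus server's HTTP API for scrape targets that are down. The server URL is a hypothetical placeholder and the requests library is assumed.

```python
# Sketch: query a Prometheus server for scrape targets that are down, the kind
# of monitoring automation an SRE on this team might write. The server URL is a
# hypothetical placeholder; assumes the requests library is installed.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical


def down_targets() -> list[dict]:
    """Return label sets for every target currently reporting up == 0."""
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {body}")
    return [sample["metric"] for sample in body["data"]["result"]]


if __name__ == "__main__":
    for target in down_targets():
        print(f"DOWN: {target.get('job', '?')} / {target.get('instance', '?')}")
```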
Posted 2 weeks ago
7.0 - 8.0 years
17 - 22 Lacs
Pune
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. ZS’s Cloud Center of Excellence team defines and implements cloud best practices that ensure secure and resilient enterprise-grade systems architecture for client-facing/delivery software solutions. The Cloud team at ZS is a casual, collaborative, and smart group with offices in Evanston, Illinois, and Pune, India. Cloud Engineering Lead: This role will involve managing and scaling Docker and Kubernetes infrastructure, designing and implementing cloud architectures, and leading containerization and infrastructure automation across various projects. You will work with a broader set of DevOps and CNCF tools, applying deep expertise in CI/CD, security, and infrastructure-as-code to support high-availability applications across diverse cloud environments. What You’ll Do: Design, implement, and manage Kubernetes clusters (EKS) across AWS environments, maintaining secure, scalable, and resilient solutions. Lead the development and automation of CI/CD pipelines using tools such as ArgoCD, Cilium, TeamCity, CodeBuild, CodeDeploy, and CodePipeline to streamline application deployment and configuration. Expertly manage cloud resources using Terraform, and develop reusable, version-controlled IaC modules, promoting modular, scalable infrastructure deployment. Strong understanding and experience with Helm charts and CNCF applications such as Cilium, Karpenter, and Prometheus. Configure and optimize Kubernetes clusters, ensuring compliance with container security standards. Oversee Docker image creation, tagging, and management, including maintaining secure, efficient Docker repositories (ECR, JFrog). Utilize monitoring tools (Prometheus, Grafana, Splunk, CloudWatch Container Insights) to ensure system performance, detect issues, and proactively address performance concerns. Act as an escalation point for critical issues, conduct root cause analyses, and maintain SOPs and documentation for efficient incident response and knowledge sharing. Provide training, mentorship, and technical guidance to junior team members, fostering a culture of continuous learning within the CCoE team.
What You’ll Bring: BE/B.Tech or higher in CS, IT, or EE. Hands-on experience of 7-8 years in delivering container-based deployments using Docker and Kubernetes. Good exposure to writing Helm charts and configuring Kubernetes CI/CD pipelines. Experience in writing manifest files for Deployment, Service, Pod, DaemonSets, Persistent Volume (PV), Persistent Volume Claim (PVC), Storage, and Namespaces. Experience in Cloud, DevOps, and Linux, and experience with DevOps tools like Git, Helm, Terraform, Docker, and Kubernetes. Strong hands-on experience in Python, YAML, or similar languages. Build and deploy Docker containers to break up monolithic apps into microservices, improving developer workflow. Strong understanding of container security and relevant tool experience, such as Sysdig or CrowdStrike. Strong knowledge of container performance monitoring and scaling policies. Deep Linux knowledge with an understanding of the container ecosystem. Good experience developing images using Docker container technology. Should have good communication skills and a can-do attitude. Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At www.zs.com
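To illustrate the kind of cluster health checking implied by the Kubernetes and monitoring requirements above, here is a minimal sketch using the official kubernetes Python client to flag Deployments with fewer ready replicas than desired. It assumes the client library is installed and a kubeconfig is available; namespace handling is deliberately simple.

```python
# Sketch: flag Deployments whose ready replica count is below spec, a small
# health check of the kind used alongside Prometheus/Grafana monitoring.
# Assumes the official `kubernetes` Python client is installed and a valid
# kubeconfig is present.
from kubernetes import client, config


def degraded_deployments() -> list[str]:
    """Return 'namespace/name' for Deployments with fewer ready replicas than desired."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    degraded = []
    for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            degraded.append(f"{dep.metadata.namespace}/{dep.metadata.name} ({ready}/{desired})")
    return degraded


if __name__ == "__main__":
    for entry in degraded_deployments():
        print("Degraded:", entry)
```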
Posted 2 weeks ago
6.0 - 10.0 years
11 - 15 Lacs
Mumbai, Bengaluru
Work from Office
Locations: New Delhi, Mumbai, Bangalore - Carina. Posted 30+ days ago. Job requisition ID: R-047126. About the job: Red Hat's Services team is seeking an experienced and highly skilled support engineer or systems administrator, with an overall 6-10 years of experience, to join us as Technical Account Manager for our Telco customers covering Red Hat OpenStack and Red Hat OpenShift Container Platform. In this role, you'll provide personalized, proactive technology engagement and guidance, and cultivate high-value relationships with clients as you seek to understand and meet their needs with the complete Red Hat portfolio of products. As a Technical Account Manager, you will provide a level of premium, advisory-based support that builds, maintains, and grows long-lasting customer loyalty by tailoring support for each of our customers' environments, facilitating collaboration with their other vendors, and advocating on the customer's behalf. At the same time, you'll work closely with our Engineering, R&D, Product Management, Global Support, and Sales & Services teams to debug, test, and resolve issues. What will you do: Perform technical reviews and share knowledge to proactively identify and prevent issues. Understand your customers' technical infrastructures, hardware, processes, and offerings. Perform initial or secondary investigations and respond to online and phone support requests. Deliver key portfolio updates and assist customers with upgrades. Manage customer cases and maintain clear and concise case documentation. Create customer engagement plans and keep documentation on customer environments updated. Ensure a high level of customer satisfaction with each qualified engagement through the complete adoption life cycle of our offerings. Engage with Red Hat's field teams and customers to ensure a positive platform and cloud technology experience and a successful outcome resulting in long-term enterprise success. Communicate how specific Red Hat platform and cloud solutions and our cloud road map align to customer use cases. Capture Red Hat product capabilities and identify gaps as related to customer use cases through a closed-loop process for each step of the engagement life cycle. Engage with Red Hat's product engineering teams to help develop solution patterns, based on customer engagements and personal experience, that guide platform adoption. Establish and maintain parity with Red Hat's platform and cloud technologies strategy. Contribute internally to the Red Hat team, share knowledge and best practices with team members, contribute to internal projects and initiatives, and serve as a subject matter expert (SME) and mentor for specific technical or process areas. Travel to visit customers, partners, conferences, and other events as needed. What will you bring: Bachelor's degree in science or a technical field, preferably engineering or computer science. Competent reading and writing skills in English. Ability to effectively manage and grow existing enterprise customers by delivering proactive, relationship-based, best-in-class support. Upstream involvement in open source projects is a plus. RHOSP: Proven and strong system administration and troubleshooting experience with Red Hat OpenStack. Experience in a support, development, engineering, or quality assurance organization. RHOCP: Experience in Red Hat OpenShift Container Platform (RHOCP), Kubernetes, or Docker cluster management. Understanding of RHOCP architecture and the different types of RHOCP installations. Experience in RHOCP troubleshooting and data/log collection.
Strong working knowledge of the Prometheus and Grafana open source monitoring solutions will be considered a plus. Administration experience of Red Hat OpenShift v4.x will be considered a plus. About Red Hat: Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Posted 2 weeks ago
7.0 - 10.0 years
22 - 27 Lacs
Gurugram
Work from Office
Roles and Responsibilities: As the Technical Policy entry point for project teams, you coordinate the study activities for the solution to implement. In this respect, you: Promote the Technical Policy and make the project team adopt it. Depending on the skills of the project team, you design, or coach the project team on, the software architecture and technical architecture. Coordinate and set up a collaborative framework with the project team and the experts in each architecture design field (security, network, cloud offers, DevOps, databases, monitoring, big data, Kubernetes, APIs, etc.) in order to define a global solution compliant with the enterprise technical policy. Coordinate and define, with the experts in charge, the services in build and run phases (OLA, SLA, support chain, cloud hosting entity, deployment process in DevOps mode, etc.), resulting in a clear and consistent assessment of the roles and commitments in the project. You are accountable for the consistency of the complete solution. Establish the hosting budget of the implemented solution. Build a roadmap with the project team up to service stabilization. Coach project teams during the build phase by providing contact names and processes. Make sure that the implemented solution is the one initially defined. As a technical / software architect: Stay informed about the evolution of possible solutions, as per the innovations in the domain or the enterprise policy, with a permanent concern to optimize practices and reduce costs. Propose studies on enablers, solutions, and architecture templates, so as to adapt the Technical Policy to new challenges. Animate a community with the representatives of IT Portfolios, or technical architects within these portfolios, in order to share and explain the IT Technical Policy. With a master's degree in computer science, you have significant experience as a technical architect and project manager, so as to design the technical target and coach the project for hosting the apps in a cloud context. You also have software architecture basics allowing you to coach the project in designing the apps in a cloud-native model. You wish to stay close to technical topics, as well as develop your leadership in an Agile / DevOps context. IT skills: Technical architectures: OpenStack clouds, hyperscaler clouds (Azure), Kubernetes, Service-Oriented Architecture, databases; network and security architecture skills (load balancing, firewalls, reverse proxies, DNS, certificates); Prometheus, Grafana. Software architectures: cloud-native applications based on microservices, Domain-Driven Design. Application architecture understanding: APIs, microservices, middleware, messaging systems. Tools / Methods: management of transversal projects; ITIL; DevOps CI/CD chain: GitLab CI; Agility: JIRA, Confluence. Professional skills: Leadership. Meeting management. Capacity for analysis and synthesis. Capacity to challenge. Curiosity, hunger for IT techniques, capacity to learn. Very good English level, spoken and written. Negotiation capacity. Creativity, proposal-oriented. Taste for teamwork and transversal work. Result-oriented. Rigour. Capacity to document.
Posted 2 weeks ago
6.0 - 10.0 years
16 Lacs
Pune
Work from Office
MAJOR RESPONSIBILITIES: Serve as the primary support contact and a technical support liaison to specified customers, and monitor their email team lists and new Support Service requests. Maintain information on allocated customers, including contact points, deployment data, remote access method(s), and other information requested by management. Maintain administration of the allocated cases/tickets, ensuring that case detail and status are accurate and up to date at all times. Where necessary, escalate issues to other team members and the manager, or to other teams, in accordance with relevant procedures. Serve as a technical expert within the team and assist and guide engineers in the execution of their duties and problem resolution. Install, deploy, and test the company's software. May travel to customer sites to perform project or support work. Log issues (when a software bug is discovered) in the bug tracking system, reproduce the bug, and provide all reasonable data, including instructions on how the bug is reproduced, to the Product Group to assist them in resolving the issue. Create knowledge base documentation for all resolved issues. May serve as a technical support liaison and the primary support contact to specified partners. Work toward certification in one or more relevant non-company technologies. Write tools and scripts to assist in troubleshooting and support activities. Technically engage in, and often lead, the technical resolution of crisis management situations as requested by their manager and/or the Crisis Management team. Participate in an on-call rotation by being available by pager 12 hours per day, 7 days per week, including public holidays. Respond to pager alerts immediately and be no more than 15 minutes away from being able to actively engage; log any technical support issues raised in the call tracking system and begin resolution. Install, configure, and test new patches and services on laboratory, pre-production, and production environments, creating all the needed documentation, including the cutovers and traffic migrations at the customer site. Participate in new Service Line rollouts and the implementation/deployment of new Mobility products. Flexibility to work UK/US hours and on weekends. REQUIRED KNOWLEDGE, SKILLS, AND EXPERIENCE: Telecom support domain experience is preferable; product support experience is a plus. A scientific degree with at least 4-10 years of experience in the technical support arena in a software and/or Telco environment, preferably in a multi-national company dealing with customers and colleagues around the world. Strong practical Unix/Linux operations, administration, and troubleshooting skills. TCP/IP and knowledge of networking. UNIX scripting and maintaining databases. Very good debugging of packet/network captures with Wireshark, tcpdump, tshark, etc. Linux system administration or DevOps experience with an emphasis on system and application support, deployment, and automation. Deep networking experience from OSI layer 2 to layer 7+ (TCP/UDP/IP, HTTP, HTTPS, load balancers, firewalls, routers, switches). Experience with SQL, LDAP, or RDBMS. Good understanding of monitoring and reporting tools: Grafana, Kibana, Pentaho, Splunk, Nagios, etc. Public/private clouds and virtualization (AWS, OpenStack, VMware). Understanding of CI/CD, Kubernetes, Ansible, Jenkins, and DevOps concepts. Proven skills in writing scripts (Shell, Perl, Python) to automate routine tasks or issue troubleshooting techniques, and debugging skills to understand existing ones too.
Proven technical expertise in Mobility, Internet, and/or mobile technologies, with strong problem-solving skills and a demonstrated ability to articulate and present technical solutions to address business problems. Strong interpersonal and communication skills, both written and verbal, with the ability to develop and maintain strong working relationships at all levels, both with the customer and internally. Demonstrated ability to work under pressure, manage critical situations, and influence without direct authority. Politically astute, with an understanding of commercial impact and principles. Operationally driven, with proven experience in a results-driven environment. Customer-focused and self-motivated, with strong teamwork skills and a flexible approach.
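As an example of the scripting this posting asks for (writing scripts to automate routine tasks or issue troubleshooting), here is a minimal log-triage sketch that counts ERROR lines per component. The log path and line format are hypothetical assumptions; a real log would need its own parsing rules.

```python
# Sketch: the kind of small troubleshooting script the posting mentions,
# counting ERROR lines per component in a plain-text log. The log path and
# line format are hypothetical; real logs would need their own parsing rules.
import re
import sys
from collections import Counter

LINE_RE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<component>[\w.-]+):")


def summarize_errors(path: str) -> Counter:
    """Count ERROR lines per component in the given log file."""
    counts: Counter = Counter()
    with open(path, "r", errors="replace") as handle:
        for line in handle:
            match = LINE_RE.match(line)
            if match and match.group("level") == "ERROR":
                counts[match.group("component")] += 1
    return counts


if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/app/service.log"  # hypothetical
    for component, count in summarize_errors(log_path).most_common():
        print(f"{component}: {count} errors")
```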
Posted 2 weeks ago
1.0 - 3.0 years
3 - 6 Lacs
Pune
Work from Office
About the Role: As a TechOps Engineer, you will troubleshoot, debug, evaluate, and resolve customer-impacting issues, with a focus on detecting patterns and working with the engineering development and/or product teams to eliminate defects. The position requires a combination of strong troubleshooting, technical, communication, and problem-solving skills. This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. Key Responsibilities: • Deployment of new releases and environments for applications. • Responding to emails and incident tickets, maintaining issue ownership. • Build and maintain highly scalable, large-scale deployments globally. • Co-create and maintain architecture for 100% uptime, e.g. creating alternate connectivity. • Practice sustainable incident response/management and blameless post-mortems. • Monitor and maintain production environment stability. • Perform production support activities, which involve the assignment of issues and issue analysis and resolution within the specified SLAs. • Coordinate with the Application Development Team to resolve issues on production. • Suggest fixes to complex issues by doing a thorough analysis of the root cause and impact of the defect. • Provide daily support with the resolution of escalated tickets and act as a liaison to business and technical leads to ensure issues are resolved in a timely manner. • Technical hands-on troubleshooting, including parsing logs and following stack traces. • Efficiently multi-task, as the job holder will have to handle multiple customer requests from various sources. • Identify and document technical problems, ensuring timely resolution. • Prioritize workload, providing timely and accurate resolutions. • Be highly collaborative with the team and other stakeholders. Experience and Skills: • Self-motivated, with the ability to multi-task efficiently. • Experience executing database queries in any DB (MySQL, Postgres, or Mongo). • Basic Linux OS knowledge. • Hands-on experience with Shell/UNIX commands. • Experience with monitoring tools like Grafana and logging tools like ELK. • Working experience with REST APIs: executing curl, analysing requests and responses, HTTP status codes, etc. • Knowledge of incident and escalation practices. • Ability to troubleshoot issues and handle different types of customer inquiries. • Should have worked with incident management tools like ServiceNow.
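To illustrate the REST API troubleshooting skills listed above (curl, request/response analysis, HTTP status codes), here is a minimal health-check sketch using the requests library. The endpoint URL is a hypothetical placeholder.

```python
# Sketch: a simple REST health check of the kind this role does by hand with
# curl, automated with the requests library. The endpoint URL is a
# hypothetical placeholder; real services will have their own health paths.
import requests

HEALTH_URL = "https://api.example.internal/healthz"  # hypothetical


def check_health(url: str) -> bool:
    """Return True if the endpoint answers with a 2xx status within the timeout."""
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")
        return False
    print(f"HTTP {resp.status_code}: {resp.text[:200]}")
    # 2xx means healthy; 5xx usually warrants an incident ticket.
    return resp.ok


if __name__ == "__main__":
    raise SystemExit(0 if check_health(HEALTH_URL) else 1)
```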
Posted 2 weeks ago
10.0 - 13.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary: Site Reliability Engineer. Responsibilities: Ensure security automation across our entire platform, collaborating with developers, security, and operations teams to ensure platform integrity. Have a passion for Security, Agile, and DevOps, and promote a shift-left and shift-right culture that integrates security analysis into each CI/CD stage. Implement new tools and processes to enable security in a cloud environment. Automate audits and implement security controls in the DevOps CI/CD pipeline, ensuring processes are followed, maintained, reviewed, and updated regularly. Contribute to SRE operations (production support, incident response, and on-call rota). Passion for observability. The skills you will need: Strong experience in SRE practice, with knowledge of conducting security checks and mitigation (static and dynamic code analysis, SAST, DAST, IAST, vulnerability analysis / penetration tests, security component analysis). Hands-on experience with Azure DevOps is a must, including Repos, advanced pipelines, and package management. Must have knowledge of Azure Cloud and its solutions. Hands-on experience in IaC, JSON/YAML, Azure Bicep, Azure policies, Azure DevOps, OpenTelemetry, Azure Monitoring, Azure Sentinel, Azure Defender, Grafana, Kusto queries, Kubernetes, AKS, Azure Arc, Bicep, Azure Function Apps, Azure Synapse, Power BI, Azure Data Factory, Dynamics 365, Azure ML, and MLflow. Programming skills in PowerShell. Knowledge of building and testing .NET and C# applications and APIs. Experience in cloud networking (TCP/IP, SSL, SMTP, HTTP, FTP, DNS), WAF, IPS/IDS, and Azure Front Door. Experience working on large-scale distributed systems with a deep understanding of design impacts on performance, reliability, operations, and security. Working experience with monitoring tools and their implementation, preferably with the Azure Monitoring suite. Knowledge of securing APIs and security in microservices is beneficial. Should have demonstrated ability to work in an Agile environment. Strong communication and teamwork skills. Certifications Required: Azure DevOps
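The shift-left security gating described above can be sketched as a small pipeline step that fails the build on high-severity scanner findings. The report filename and JSON layout below are assumptions for illustration, not the format of any specific SAST tool.

```python
# Sketch: a shift-left security gate of the kind described above, failing a
# CI stage when a scanner report contains high-severity findings. The report
# filename and JSON layout are hypothetical; adapt them to the real SAST tool.
import json
import sys


def high_severity_findings(report_path: str) -> list[dict]:
    """Return findings whose severity is HIGH or CRITICAL."""
    with open(report_path) as handle:
        report = json.load(handle)
    return [
        finding
        for finding in report.get("findings", [])
        if finding.get("severity", "").upper() in {"HIGH", "CRITICAL"}
    ]


if __name__ == "__main__":
    blockers = high_severity_findings("sast-report.json")  # hypothetical file name
    for finding in blockers:
        print(f"{finding.get('severity')}: {finding.get('rule')} in {finding.get('file')}")
    # Non-zero exit fails the pipeline stage, blocking the deployment.
    sys.exit(1 if blockers else 0)
```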
Posted 2 weeks ago
9.0 - 10.0 years
5 - 7 Lacs
Noida, Bengaluru
Work from Office
Requirements: 5+ years of experience in DevOps or Cloud Engineering. Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, Networking, etc.). Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep. Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps. Proficiency in scripting languages like Python, Bash, or PowerShell. Experience with Kubernetes (EKS, AKS) and containerization (Docker). Knowledge of monitoring and logging tools like Prometheus, Grafana, ELK Stack, CloudWatch, and Azure Monitor. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Strong understanding of security best practices in cloud environments. Experience with version control systems like Git. Excellent problem-solving skills and the ability to work in a fast-paced environment.
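As a small illustration of the AWS monitoring work listed above, here is a sketch that creates a CloudWatch CPU alarm with boto3. It assumes boto3 and AWS credentials are configured; the instance ID and SNS topic ARN are hypothetical placeholders.

```python
# Sketch: creating a CloudWatch CPU alarm with boto3, one small piece of the
# AWS monitoring work listed above. Assumes boto3 and AWS credentials are
# configured; the instance ID and SNS topic ARN are hypothetical placeholders.
import boto3


def create_cpu_alarm(instance_id: str, topic_arn: str) -> None:
    """Alarm when average CPU stays above 80% for two 5-minute periods."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )


if __name__ == "__main__":
    # Hypothetical identifiers for illustration only.
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:us-east-1:123456789012:ops-alerts")
```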
Posted 2 weeks ago
8.0 - 9.0 years
5 - 7 Lacs
Noida, Bengaluru
Work from Office
Requirements: 5+ years of experience in DevOps or Cloud Engineering. Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, Networking, etc.). Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep. Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps. Proficiency in scripting languages like Python, Bash, or PowerShell. Experience with Kubernetes (EKS, AKS) and containerization (Docker). Knowledge of monitoring and logging tools like Prometheus, Grafana, ELK Stack, CloudWatch, and Azure Monitor. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Strong understanding of security best practices in cloud environments. Experience with version control systems like Git. Excellent problem-solving skills and the ability to work in a fast-paced environment.
Posted 2 weeks ago
8.0 - 12.0 years
35 - 50 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled Principal Infra Developer with 8 to 12 years of experience to join our team. The ideal candidate will have expertise in Splunk administration, SRE, Grafana, ELK, and Dynatrace AppMon. This hybrid role requires a proactive individual who can contribute to our infrastructure development projects and ensure the reliability and performance of our systems. The position does not require travel and operates during day shifts. Responsibilities (Systems Engineer - Splunk or Elasticsearch Admin): Build, deploy, and manage the enterprise Lucene-based DB systems (Splunk, Elastic) to ensure that the legacy physical/virtual systems and container infrastructure for business-critical services are rigorously and effectively served with high-quality, highly available logging services. Support periodic observability and infrastructure monitoring tool releases, tool upgrades, environment creation, and performance tuning of large-scale Prometheus systems. Serve as DevOps SRE for the internal observability systems in Visa's various data centers across the globe, including cloud environments. Lead the evaluation, selection, design, deployment, and advancement of the portfolio of tools used to provide infrastructure and service monitoring. Ensure the tools utilized can provide critical visibility on modern architectures leveraging technologies such as cloud, containers, etc. Maintain, upgrade, and troubleshoot issues with Splunk clusters. Monitor and audit configurations and participate in the Change Management process to ensure that unauthorized changes do not occur. Manage patching and updates of Splunk hosts and/or Splunk application software. Design, develop, recommend, and implement Splunk dashboards and alerts in support of the Incident Response team. Ensure the monitoring team increases its use of automation and adopts a DevOps/SRE mentality. Qualification: 6+ years of experience with enterprise system logging and monitoring tools, with a desired 5+ years in relevant critical infrastructure of enterprise Splunk and Elasticsearch. 5+ years of working experience as a Splunk administrator, covering cluster building, data ingestion management, user role management, and search configuration and optimization. Strong knowledge of open-source logging and monitoring tools. Experience with container logging and monitoring solutions. Experience with Linux operating system management and administration. Familiarity with LAN/WAN technologies and a clear understanding of basic network concepts and services. Strong understanding of multi-tier application architectures and application runtime environments. Monitoring the health and performance of the Splunk environment and troubleshooting any issues that arise. Worked in a 24/7 on-call environment. Knowledge of Python and other scripting languages and infrastructure automation technologies such as Ansible is desired. Splunk Admin certification is a plus.
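For illustration of the routine cluster health checks behind this Splunk/Elasticsearch administration role, here is a minimal sketch that polls Elasticsearch's _cluster/health REST endpoint. The cluster URL is a hypothetical placeholder and authentication is omitted.

```python
# Sketch: polling Elasticsearch cluster health over its REST API, a routine
# check for the Splunk/Elasticsearch administration work described above.
# The cluster URL is a hypothetical placeholder; authentication is omitted.
import requests

ES_URL = "http://elasticsearch.example.internal:9200"  # hypothetical


def cluster_health() -> dict:
    """Return the _cluster/health document (status, shard counts, node count)."""
    resp = requests.get(f"{ES_URL}/_cluster/health", timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    health = cluster_health()
    status = health.get("status")  # "green", "yellow", or "red"
    print(f"status={status} nodes={health.get('number_of_nodes')} "
          f"unassigned_shards={health.get('unassigned_shards')}")
    if status == "red":
        print("Cluster is red: at least one primary shard is unassigned.")
```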
Posted 2 weeks ago
6.0 - 10.0 years
27 - 42 Lacs
Chennai
Work from Office
Job Summary: We are seeking an experienced Infra Dev Specialist with 6 to 10 years of experience to join our team. The ideal candidate will have expertise in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk. This role involves working in a hybrid model with day shifts. The candidate will play a crucial role in ensuring the reliability and performance of our infrastructure, contributing to the overall success of our projects and their positive impact on society. Responsibilities: Lead the design, implementation, and maintenance of infrastructure solutions to ensure high availability and performance. Oversee the monitoring and alerting systems using tools like Grafana, ELK, Dynatrace AppMon, and Splunk. Provide expertise in Site Reliability Engineering (SRE) to enhance system reliability and scalability. Collaborate with cross-functional teams to identify and resolve infrastructure issues promptly. Develop and maintain automation scripts to streamline infrastructure management tasks. Implement best practices for infrastructure security and compliance. Conduct regular performance tuning and optimization of infrastructure components. Monitor system health and performance and proactively address potential issues. Create and maintain detailed documentation of infrastructure configurations and procedures. Participate in on-call rotations to provide 24/7 support for critical infrastructure components. Drive continuous improvement initiatives to enhance infrastructure reliability and efficiency. Mentor and guide junior team members in best practices and technical skills. Contribute to the overall success of the company by ensuring the reliability and performance of our infrastructure. Qualifications: Possess strong expertise in SRE principles and practices. Have extensive experience with monitoring and alerting tools such as Grafana, ELK, Dynatrace AppMon, and Splunk. Demonstrate proficiency in scripting languages for automation purposes. Exhibit strong problem-solving skills and the ability to work under pressure. Show excellent communication and collaboration skills. Have a solid understanding of infrastructure security and compliance requirements. Display a proactive approach to identifying and addressing potential issues. Hold a relevant certification in SRE or related fields. Possess a strong commitment to continuous learning and improvement. Demonstrate the ability to mentor and guide junior team members. Have a proven track record of successfully managing and optimizing infrastructure components. Show a strong commitment to contributing to the overall success of the company. Exhibit a passion for ensuring the reliability and performance of infrastructure solutions. Certifications Required: Certified SRE Practitioner, Grafana Certified, ELK Stack Certification, Dynatrace Certified Associate, Splunk Core Certified User
Posted 2 weeks ago
5.0 - 9.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Location: Bangalore, India. Posted 30+ days ago. Job requisition ID: 30553. FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity: "A DevOps role at FICO is an opportunity to work with cutting-edge cloud technologies with a team focused on delivery of secure cloud solutions and products to enterprise customers." - VP, DevOps Engineering. What You'll Contribute: Design, implement, and maintain Kubernetes clusters in AWS environments. Develop and manage CI/CD pipelines using Tekton, ArgoCD, Flux, or similar tools. Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications. Collaborate with development teams to optimize application deployments and performance on Kubernetes. Automate infrastructure provisioning and configuration management using AWS services and tools. Ensure security and compliance in the cloud infrastructure. What We're Seeking: Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS). Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, and ECR. Experience with Tekton for building CI/CD pipelines. Strong understanding of observability tools like Prometheus, Grafana, or similar. Scripting and automation skills (e.g., Bash, GitHub workflows). Knowledge of cloud platforms and container orchestration. Experience with infrastructure as code tools (Terraform, CloudFormation). Knowledge of Helm. Understanding of security best practices in cloud and Kubernetes environments. Proven experience in delivering microservices and Kubernetes-based systems. Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO: At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide. Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems. Lending: 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career.
Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer, and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy.
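As a small illustration of the ECR and security-best-practice requirements in this posting, here is a sketch that checks an image's ECR scan results with boto3 before promotion. It assumes boto3 and AWS credentials; the repository name and tag are hypothetical placeholders.

```python
# Sketch: checking Amazon ECR image scan results before promoting an image,
# a small security-and-compliance check in the spirit of the requirements
# above. Assumes boto3 and AWS credentials; repository and tag are hypothetical.
import boto3


def critical_findings(repository: str, tag: str) -> int:
    """Return the number of CRITICAL findings for the given image tag."""
    ecr = boto3.client("ecr")
    response = ecr.describe_image_scan_findings(
        repositoryName=repository,
        imageId={"imageTag": tag},
    )
    severity_counts = response["imageScanFindings"].get("findingSeverityCounts", {})
    return severity_counts.get("CRITICAL", 0)


if __name__ == "__main__":
    # Hypothetical repository and tag for illustration only.
    count = critical_findings("payments-service", "1.4.2")
    print(f"CRITICAL findings: {count}")
    if count:
        raise SystemExit("Blocking promotion: image has critical vulnerabilities.")
```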
Posted 2 weeks ago
5.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Collaborate cross-functionally. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and Mongo. Excellent troubleshooting. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm (Monday, 26th May to Friday, 30th May).
Posted 2 weeks ago
15.0 - 20.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Software Development Lead. Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity. Must have skills: Elastic Stack (ELK). Good to have skills: Elastic Path, Grafana. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a Software Development Lead, you will engage in the development and configuration of software systems, either managing the entire process or focusing on specific stages of the product lifecycle. Your day will involve applying your knowledge of various technologies, methodologies, and tools to support projects and clients effectively, ensuring that the software meets the required standards and specifications. You will collaborate with team members to drive project success and contribute to the overall improvement of software development practices within the organization. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge. Continuously evaluate and improve development processes to increase efficiency. Professional & Technical Skills: Must-have skills: proficiency in Elastic Stack (ELK). Good-to-have skills: experience with Elastic Path and Grafana. Strong understanding of software development methodologies. Experience with cloud technologies and deployment strategies. Familiarity with version control systems such as Git. Additional Information: The candidate should have a minimum of 5 years of experience in Elastic Stack (ELK). This position is based at our Bengaluru office. A 15 years full time education is required.
Posted 2 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills: SAP Hybris Commerce. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with business objectives, ensuring that the solutions provided are effective and efficient. Your role will require you to stay updated with industry trends and best practices to continuously improve application performance and user experience. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate training and knowledge-sharing sessions to enhance team capabilities. Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: Strong understanding of e-commerce platforms and their architecture. Experience with integration of third-party services and APIs. Familiarity with agile methodologies and project management tools. Ability to troubleshoot and resolve technical issues efficiently. Performance Engineering Fundamentals: In-depth knowledge of latency, throughput, concurrency, scalability, and resource utilization. Performance metrics: CPU usage, memory consumption, disk I/O, network latency. Understanding of bottlenecks in multi-tiered architectures. JVM tuning (GC optimization, thread pools). Database tuning (indexing, query optimization, DB connection pools). Monitoring & Observability: knowledge of Dynatrace, New Relic, Prometheus, and Grafana. Resource tuning: pods, autoscaling, memory/CPU optimization, load balancing, cluster configuration. Knowledge of Akamai caching and APG caching. SAP Commerce Cloud (CCv2) experience is good to have. Additional Information: The candidate should have a minimum of 7.5 years of experience in SAP Hybris Commerce. This position is based at our Gurugram office. A 15 years full time education is required.
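As a rough illustration of the latency/throughput/concurrency fundamentals listed above, Little's Law (concurrency = throughput x latency) gives a quick way to size thread or connection pools; the figures in the sketch below are made-up examples.

```python
# Sketch: applying Little's Law (concurrency = throughput x latency) to size a
# thread or connection pool, tying together the latency/throughput/concurrency
# fundamentals listed above. The throughput and latency figures are made up.
import math


def required_concurrency(requests_per_second: float, avg_latency_seconds: float) -> int:
    """Average in-flight requests needed to sustain the given load (Little's Law)."""
    return math.ceil(requests_per_second * avg_latency_seconds)


if __name__ == "__main__":
    # Hypothetical load profile: 500 req/s at 120 ms average response time.
    rps, latency = 500.0, 0.120
    concurrency = required_concurrency(rps, latency)
    print(f"{rps} req/s x {latency * 1000:.0f} ms => about {concurrency} concurrent requests")
    # A pool sized below ~60 here would queue requests and inflate latency;
    # headroom above the average (e.g. 1.5-2x) is usually added for bursts.
```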
Posted 2 weeks ago
7.0 - 12.0 years
5 - 10 Lacs
Hyderabad
Work from Office
Project Role: DevOps Engineer. Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats. Must have skills: DevOps. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: Bachelor's degree with a computer science background. Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. You will build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Implement and maintain CI/CD pipelines. Automate infrastructure provisioning and configuration. Monitor system performance and implement security measures. Professional & Technical Skills: Must-have skills: proficiency in DevOps. Strong understanding of cloud technologies. Strong understanding of tools like Jira and ServiceNow. Experience with container orchestration tools like Kubernetes. Knowledge of security best practices in CI/CD pipelines. Hands-on experience with infrastructure as code tools like Terraform. Proficiency in Jira, ServiceNow, and Confluence. Additional Information: The candidate should have a minimum of 7.5 years of experience in DevOps. This position is based at our Hyderabad office. A Bachelor's degree with a computer science background is required.
Posted 2 weeks ago
4.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Proficiency in problem solving and troubleshooting technical issues.
Willingness to take ownership and strive for the best solutions.
Experience in using performance analysis tools such as Android Profiler, Traceview, Perfetto, and Systrace.
Strong understanding of Android architecture, memory management, and threading.
Strong understanding of Android HALs, the Car Framework, the Android graphics pipeline, DRM, and codecs.
Good knowledge of hardware abstraction layers in Android and/or Linux.
Good understanding of Git and CI/CD workflows.
Experience in agile-based projects.
Experience with Linux as a development platform and target.
Extensive experience with Jenkins and GitLab CI systems.
Hands-on experience with GitLab, Jenkins, Artifactory, Grafana, Prometheus, and/or Elasticsearch.
Experience with different testing frameworks and their implementation in a CI system.
Programming using C/C++, Java/Kotlin, and Linux.
Yocto and its use in CI environments.
Familiarity with ASPICE.
1. The Software Engineering Leader oversees and guides teams to deliver high-quality software solutions aligned with organizational goals and industry best practices.
2. Is a professional in technology, proficient in strategic planning, decision-making, and mentoring, with an extensive background in software development and leadership.
3. Is typically responsible for setting the strategic direction of software development efforts, managing project portfolios, and ensuring effective execution of software engineering initiatives to meet organizational objectives.
4. Builds skills and expertise in leadership, staying abreast of industry trends, and cultivating a collaborative and high-performance culture within the software engineering team.
5. Collaborates and acts as a team player with cross-functional teams, executives, and stakeholders, fostering a positive and productive environment for successful software development initiatives.
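For illustration, a hedged sketch of capturing an Android memory snapshot from a CI job by shelling out to adb; the package name is a placeholder, and the dumpsys output format varies across Android versions.

```python
# Minimal sketch: read the TOTAL PSS reported by `adb shell dumpsys meminfo <package>`.
# Assumes adb is on PATH with a device or emulator attached; the package name is a placeholder,
# and the regex covers both "TOTAL <kb>" and "TOTAL PSS: <kb>" output variants.
import re
import subprocess

def total_pss_kb(package: str = "com.example.app") -> int:
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", package],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"TOTAL(?:\s+PSS)?:?\s+(\d+)", out)
    if not match:
        raise RuntimeError("Could not find a TOTAL PSS value in dumpsys output")
    return int(match.group(1))

if __name__ == "__main__":
    print("Total PSS (KB):", total_pss_kb())
```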
Posted 2 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
About the Role:
This role is responsible for managing and maintaining complex, distributed big data ecosystems. It ensures the reliability, scalability, and security of large-scale production infrastructure. Key responsibilities include automating processes, optimizing workflows, troubleshooting production issues, and driving system improvements across multiple business verticals.

Roles and Responsibilities:
Manage, maintain, and support incremental changes to Linux/Unix environments.
Lead on-call rotations and incident responses, conducting root cause analysis and driving postmortem processes.
Design and implement automation systems for managing big data infrastructure, including provisioning, scaling, upgrades, and patching clusters.
Troubleshoot and resolve complex production issues while identifying root causes and implementing mitigating strategies.
Design and review scalable and reliable system architectures.
Collaborate with teams to optimize overall system performance.
Enforce security standards across systems and infrastructure.
Set technical direction, drive standardization, and operate independently.
Ensure availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning.
Resolve, analyze, and respond to system outages and disruptions, and implement measures to prevent similar incidents from recurring.
Develop tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience.
Monitor and optimize system performance and resource usage, identify and address bottlenecks, and implement best practices for performance tuning.
Collaborate with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle.
Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities.
Develop and enforce SRE best practices and principles.
Align across functional teams on priorities and deliverables.
Drive automation to enhance operational efficiency.

Skills Required:
Over 6 years of experience managing and maintaining distributed big data ecosystems.
Strong expertise in Linux, including IP, iptables, and IPsec.
Proficiency in scripting/programming with languages like Perl, Golang, or Python.
Hands-on experience with the Hadoop stack (HDFS, HBase, Airflow, YARN, Ranger, Kafka, Pinot).
Familiarity with open-source configuration management and deployment tools such as Puppet, Salt, Chef, or Ansible.
Solid understanding of networking, open-source technologies, and related tools.
Excellent communication and collaboration skills.
DevOps tools: SaltStack, Ansible, Docker, Git.
SRE logging and monitoring tools: ELK stack, Grafana, Prometheus, OpenTSDB, OpenTelemetry.

Good to Have:
Experience managing infrastructure on public cloud platforms (AWS, Azure, GCP).
Experience in designing and reviewing system architectures for scalability and reliability.
Experience with observability tools to visualize and alert on system performance.
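For illustration, a minimal sketch of the proactive-monitoring automation described above: querying the Prometheus HTTP API for scrape targets that are down. The Prometheus URL is a placeholder for an internal endpoint.

```python
# Minimal sketch: list scrape targets reporting down via Prometheus's /api/v1/query endpoint.
# Assumes `pip install requests`; the Prometheus URL is a placeholder.
import requests

def down_targets(prometheus_url: str = "http://prometheus.internal:9090") -> list[str]:
    resp = requests.get(
        f"{prometheus_url}/api/v1/query",
        params={"query": "up == 0"},   # instant vector of targets whose scrape is failing
        timeout=10,
    )
    resp.raise_for_status()
    series = resp.json()["data"]["result"]
    # Each series carries labels identifying the failing job/instance.
    return [f'{s["metric"].get("job", "?")}/{s["metric"].get("instance", "?")}' for s in series]

if __name__ == "__main__":
    for target in down_targets():
        print("DOWN:", target)
```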
Posted 2 weeks ago
3.0 - 6.0 years
5 - 15 Lacs
Chennai
Work from Office
We are looking for a Network Automation Engineer with a strong foundation in Python, DevOps, and GUI automation to transform our network operations. This role is central to automating key functions like service provisioning, monitoring, and fault resolution using advanced frameworks and tools. You will work closely with Network, SRE, and DevOps teams to build robust automation solutions that enhance efficiency and reliability across the organization.

Key Responsibilities
Network Automation & Scripting
Develop and maintain Python scripts for network provisioning, configuration, and monitoring.
Automate workflows using APIs, CLI, Netconf, REST, and Ansible.
DevOps & CI/CD Integration
Design CI/CD pipelines using Jenkins, GitLab, or Ansible AWX.
Manage containerized applications with Docker and Kubernetes.
Use Terraform and other Infrastructure-as-Code (IaC) tools.
GUI Automation & RPA
Automate GUI-based tasks with tools like Selenium, PyAutoGUI, and AutoIt.
Develop RPA scripts for repetitive manual processes.
Monitoring & Observability
Implement observability tools: Prometheus, Grafana, ELK stack.
Automate alerts and incident responses using Python scripting.
Collaboration & Documentation
Work closely with cross-functional teams.
Create and maintain technical documentation, playbooks, and standard operating procedures.

Required Skills & Experience
Programming & Scripting
Python (advanced scripting, APIs, multithreading, libraries)
Bash, PowerShell, YAML
Network & Systems Knowledge
Protocols: TCP/IP, SDH, VoIP, SIP, routing & switching
Experience with routers, switches, and firewalls
Familiarity with BSS/OSS, NMS/EMS systems, 5G networks, and virtualized platforms (vBlock, CNIS)
DevOps Tools & Platforms
Ansible, Terraform, Docker, Kubernetes
CI/CD: Jenkins, GitLab, Git
APIs: REST, SNMP, Netconf, gRPC
GUI & RPA Automation
Tools: Selenium, PyAutoGUI, Pywinauto, AutoIt
Integration with APIs, data sources, and enterprise tools
Monitoring & Logging
Prometheus, Grafana, ELK, Splunk, OpenTelemetry

Preferred Qualifications
Experience with cloud networking (AWS, GCP, Azure).
Knowledge of AI/ML-based network automation.
Exposure to orchestration tools (ONAP, Cisco NSO, OpenStack).

What We Offer
Technically challenging projects in a dynamic environment.
Opportunity to work on cutting-edge network infrastructure.
Competitive compensation and benefits.
Culture of innovation, collaboration, and continuous learning.
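For illustration, a hedged Netmiko sketch of the Python CLI automation this role centres on; the device type, address, and credentials are placeholders, not details from the posting.

```python
# Minimal sketch: pull interface state from a router over SSH with Netmiko.
# Assumes `pip install netmiko`; host, credentials and device_type are placeholders.
from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_ios",   # placeholder platform
    "host": "192.0.2.10",         # documentation/test address
    "username": "automation",
    "password": "change-me",
}

def interface_brief() -> str:
    conn = ConnectHandler(**DEVICE)
    try:
        return conn.send_command("show ip interface brief")
    finally:
        conn.disconnect()         # always close the SSH session

if __name__ == "__main__":
    print(interface_brief())
```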
Posted 2 weeks ago
6.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
We are looking for a Kubernetes Platform Engineer; details are below.

Skills Required:
i) On-prem Kubernetes Platform Engineer with strong OpenShift and Mirantis Kubernetes experience (strictly on Unix OS, not in the cloud)
ii) Strong Unix experience
iii) Automation expert with Prometheus and Grafana experience
iv) Go language, Python
v) Istio

Establish proper deployment architecture with appropriate networking, storage, and security controls.
Platform & component life-cycle management - vulnerability management, patching - TSR compliance - configuration management - infrastructure monitoring, capacity and performance management.
Integrations with CMDB, IAM, CV(?), image scanning (Prisma Cloud), and other support services.
Ongoing L2 support.

Job Summary
The Infra Dev Specialist role focuses on developing and maintaining infrastructure solutions with a primary emphasis on Kubernetes. The candidate will work in a hybrid model, ensuring seamless integration and optimization of infrastructure components. With a focus on Cards & Payments, the role aims to enhance system reliability and efficiency, contributing to the company's mission of providing secure and innovative solutions.

Responsibilities
Kubernetes cluster build skillset on the infrastructure side, including automation and migration of applications across Kubernetes clusters, including Mirantis and OpenShift.
Develop and maintain infrastructure solutions with a focus on Kubernetes to ensure optimal performance and scalability.
Collaborate with cross-functional teams to integrate infrastructure components seamlessly into existing systems.
Implement best practices for infrastructure management to enhance system reliability and efficiency.
Monitor and troubleshoot infrastructure issues to minimize downtime and ensure continuous operation.
Provide technical expertise in Kubernetes to guide development and deployment processes.
Optimize infrastructure solutions to support the company's mission of providing secure and innovative services.
Analyze system performance metrics to identify areas for improvement and implement necessary changes.
Ensure compliance with industry standards and regulations in the Cards & Payments domain.
Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
Participate in code reviews and provide constructive feedback to improve overall code quality.
Document infrastructure processes and solutions to maintain transparency and facilitate knowledge sharing.
Stay updated with the latest trends and technologies in infrastructure development to drive innovation.
Support the hybrid work model by ensuring infrastructure solutions are accessible and efficient for remote teams.

Qualifications
Demonstrated expertise in Kubernetes with a proven track record of successful implementations.
Strong analytical skills to assess system performance and identify improvement opportunities.
Knowledge of the Cards & Payments domain to ensure compliance and enhance service offerings.
Proficiency in infrastructure management best practices to optimize system reliability.
Excellent communication skills to collaborate effectively with cross-functional teams.
A proactive approach to problem-solving and troubleshooting infrastructure issues.
A commitment to continuous learning and staying updated with industry trends.
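For illustration, a minimal sketch of the kind of platform automation described above, using the official Kubernetes Python client to flag pods that are not healthy; it assumes a kubeconfig with read access to the cluster.

```python
# Minimal sketch: flag pods not in the Running/Succeeded phase across all namespaces.
# Assumes `pip install kubernetes` and a kubeconfig with cluster read access.
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()   # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```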
Posted 2 weeks ago
6.0 - 8.0 years
27 - 42 Lacs
Hyderabad
Work from Office
Job Summary
Observability Engineer: responsible for ensuring the reliability, availability, and performance of our systems and applications, as well as building and maintaining the observability platform, including monitoring, logging, and tracing of applications.

About this role
Wells Fargo is seeking a highly skilled Observability Senior Software Engineer to join the Wells Fargo Commercial Banking Digital team. The Observability Engineer will play a critical role in ensuring the reliability, availability, and performance of the applications, with a focus on building and maintaining the observability platform, including monitoring, logging, and tracing of applications.

In this role you will:
Enhance system reliability through automation, redundancy, and best practices.
Monitor system performance and health, and respond to incidents to minimize downtime.
Conduct root cause analysis of incidents and implement measures to prevent recurrence.
Collaborate with development teams to design and implement scalable and reliable services.
Optimize performance and resource utilization across various systems and services.
Design, implement, and manage observability platforms such as Splunk, Grafana, and AppDynamics.
Develop and maintain dashboards, alerts, and reports to provide visibility into application performance and health.
Utilize Application Performance Management (APM) tools like AppDynamics and Elastic APM.
Collaborate with development, operations, and other tech teams to define and implement monitoring strategies.
Ensure end-to-end traceability of all transactions and operations within the applications.
Identify and resolve issues proactively before they impact users, ensuring high availability and reliability of services.
Optimize performance monitoring and troubleshooting processes to improve response times and system reliability.
Stay updated with the latest trends and technologies in observability and monitoring tools.
Provide training and guidance to team members on best practices in observability and monitoring.

Required Qualifications
6+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of work experience and education.
Bachelor's degree in Computer Science, Information Technology, or a related field.
Familiarity with monitoring and observability tools (e.g., Grafana, Splunk, AppDynamics).
Understanding of networking, security best practices, and performance tuning.
Proven experience as an Observability Engineer or in a similar role.
Strong expertise in Splunk, including creating complex queries, dashboards, and reports.
Experience with AppDynamics or similar application performance monitoring tools.
Proficiency in end-to-end traceability and observability best practices.
Familiarity with cloud platforms (AWS, Azure, GCP) and their monitoring tools.
Strong analytical and problem-solving skills.
Excellent communication and teamwork abilities.
Experience working in Python.
Knowledge of writing queries in SQL.

Desired Qualifications
An industry-standard technology certification.
Strong verbal, written, and interpersonal communication skills.
5+ years of Splunk experience.
3+ years of Tomcat development or implementation experience.
3+ years of Agile experience.
2+ years of database experience.
5+ years of financial services experience.
A BS/BA degree or higher in information technology.
Experience with cloud technologies.
Knowledge and understanding of DevOps principles.
Knowledge and understanding of application and web security.
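For illustration, a hedged sketch of driving a Splunk search from Python over the REST API, the sort of query automation this role involves; the host, credentials, and search string are placeholders, and the exact response shape can vary by Splunk version.

```python
# Minimal sketch: run a blocking ("oneshot") Splunk search via the REST API with requests.
# Assumes `pip install requests`; host, credentials and the query are placeholders,
# and certificate verification is disabled only to keep the sketch short.
import requests

SPLUNK = "https://splunk.internal:8089"   # placeholder management endpoint

def oneshot_search(query: str) -> list[dict]:
    resp = requests.post(
        f"{SPLUNK}/services/search/jobs",
        auth=("svc_observability", "change-me"),
        data={
            "search": f"search {query}",   # REST searches must start with the 'search' command
            "exec_mode": "oneshot",        # run synchronously and return results directly
            "output_mode": "json",
        },
        verify=False,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for row in oneshot_search("index=_internal log_level=ERROR | head 5"):
        print(row)
```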
Posted 2 weeks ago
6.0 - 10.0 years
27 - 42 Lacs
Bengaluru
Work from Office
SRE - CI/CD pipelines; Docker, Kubernetes; Prometheus, Grafana, Splunk and ELK; Linux and Windows

Job Summary
We are seeking an experienced Infra Dev Specialist with 6 to 10 years of experience to join our team. The ideal candidate will have a strong background in Jenkins, Jenkins X, Azure DevOps, AWS DevOps, Jenkins/CloudBees, CircleCI, and Bamboo. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will play a crucial role in developing and maintaining our infrastructure automation processes.

Responsibilities
Architect, implement, and manage infrastructure to support Kubernetes clusters in all environments.
Experience with CI/CD tools like Jenkins, Ansible, Chef, or similar tools.
Experience with automation tools and scripting using Shell, PowerShell, or similar scripting technology.
Experience with Windows, the .NET 8 framework, and build tools.
Provide support for all application environments as well as the continuous integration build environment.
Develop and implement a container monitoring strategy.
Act as the Docker technical SME for our Continuous Integration team.
Participate in requirements gathering sessions.
Evaluate and/or recommend purchases of network hardware, software, and peripheral equipment.
Coordinate and conduct project architecture and infrastructure review meetings.
Develop and implement a container scaling strategy.
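For illustration, a minimal building block for the container monitoring strategy mentioned above: a one-off memory report via the Docker SDK for Python. It assumes access to the local Docker daemon socket; the reporting format is an illustrative choice.

```python
# Minimal sketch: one-off memory report for running containers via the Docker SDK.
# Assumes `pip install docker` and access to the local Docker daemon.
import docker

def memory_report() -> list[tuple[str, float]]:
    client = docker.from_env()
    report = []
    for container in client.containers.list():      # running containers only
        stats = container.stats(stream=False)       # single snapshot, not a stream
        usage_mb = stats["memory_stats"].get("usage", 0) / 1_000_000
        report.append((container.name, usage_mb))
    return report

if __name__ == "__main__":
    for name, mb in memory_report():
        print(f"{name}: {mb:.1f} MB")
```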
Posted 2 weeks ago
10.0 - 14.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary
We are seeking a highly skilled Principal Infra Developer with 10 to 14 years of experience to join our team. The ideal candidate will have expertise in SRE, Grafana, EKS, JBoss, and managing teams with client interaction. Experience in Property & Casualty Insurance is a plus. This hybrid role involves rotational shifts and does not require travel.

Responsibilities
Strong experience in AWS EKS.
Good knowledge of creating Kubernetes clusters, pods, namespaces, replicas, daemon sets, and replication controllers, and of setting up kubectl.
Working knowledge of AWS EKS, EC2, IAM, and MSK.
Good working knowledge of Docker and GitHub, setting up pipelines, and troubleshooting related issues.
Working knowledge of monitoring tools such as AppDynamics, ELK, Grafana, and Nagios.
Working knowledge of Rancher, Vault, and Argo CD.
Good knowledge of networking concepts.
Strong troubleshooting skills for triaging and fixing application issues on Kubernetes clusters.
Hands-on experience in installing, configuring, and maintaining JBoss EAP 6.x/7.x in various environments, in both domain-based and standalone setups.
Strong experience in configuring and administering connection pools for JDBC connections and JMS queues in JBoss EAP.
Strong experience in deploying applications (JAR, WAR, EAR) and maintaining load balancing, high availability, and failover functionality in clustered environments through the command line in JBoss EAP.
Extensive experience in troubleshooting JBoss server issues using thread dumps and heap dumps.
Good experience in SSL certificate creation for JBoss 5.x/6.x/7.x.
Experience in providing technical assistance for performance tuning and troubleshooting of Java applications.
Good to have: deployment procedures for J2EE applications and code to JBoss Application Server.
Good knowledge of installation, maintenance, and integration of web servers like Apache Web Server, OHS, and Nginx.
Good knowledge of scripting and automation using Ansible, Bash, and Terraform.
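For illustration, a hedged boto3 sketch of basic EKS housekeeping: reporting the status and Kubernetes version of each cluster in a region. The region is a placeholder, and AWS credentials are expected in the environment.

```python
# Minimal sketch: report the status and Kubernetes version of every EKS cluster in a region.
# Assumes `pip install boto3` and configured AWS credentials; the region is a placeholder.
import boto3

def eks_cluster_report(region: str = "ap-south-1") -> list[dict]:
    eks = boto3.client("eks", region_name=region)
    report = []
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        report.append(
            {"name": name, "status": cluster["status"], "version": cluster["version"]}
        )
    return report

if __name__ == "__main__":
    for item in eks_cluster_report():
        print(item)
```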
Posted 2 weeks ago
4.0 - 8.0 years
5 - 9 Lacs
Hyderabad, Bengaluru
Work from Office
What's in it for you?
Pay above market standards.
The role is going to be contract-based, with project timelines from 2-12 months, or freelancing.
Be a part of an elite community of professionals who can solve complex AI challenges.
Work location could be:
Remote (highly likely)
Onsite at the client location
Deccan AI's office: Hyderabad or Bangalore

Responsibilities:
Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
Develop real-time and batch data pipelines to support analytics and machine learning.
Define and enforce data governance strategies to ensure security, integrity, and compliance, along with optimizing data pipelines for high performance, scalability, and cost efficiency in cloud environments.
Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
Contributions to open-source data engineering communities.

What are the next steps?
Register on our Soul AI website.
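For illustration, a minimal kafka-python sketch of publishing JSON events to a topic, the smallest unit of the real-time streaming pipelines described above; the broker address and topic name are placeholders.

```python
# Minimal sketch: publish JSON events to a Kafka topic with kafka-python.
# Assumes `pip install kafka-python`; broker address and topic name are placeholders.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_event(topic: str, event: dict) -> None:
    producer.send(topic, event)

if __name__ == "__main__":
    for i in range(5):
        publish_event("orders", {"order_id": i, "ts": time.time()})
    producer.flush()   # make sure buffered events reach the broker before exit
```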
Posted 2 weeks ago
Grafana is a popular tool used for monitoring and visualizing metrics, logs, and other data. In India, the demand for Grafana professionals is on the rise as more companies are adopting this tool for their monitoring and analytics needs.
The average salary range for Grafana professionals in India varies based on experience level:
- Entry-level: ₹4-6 lakhs per annum
- Mid-level: ₹8-12 lakhs per annum
- Experienced: ₹15-20 lakhs per annum

A typical career path in Grafana may include roles such as:
1. Junior Grafana Developer
2. Grafana Developer
3. Senior Grafana Developer
4. Grafana Tech Lead

In addition to Grafana expertise, professionals in this field often benefit from having knowledge or experience in:
- Monitoring tools such as Prometheus
- Data visualization tools like Tableau
- Scripting languages (e.g., Python, Bash)
- Understanding of databases (e.g., SQL, NoSQL)
As the demand for Grafana professionals continues to grow in India, it is essential to stay updated with the latest trends and technologies in this field. Prepare thoroughly for interviews and showcase your skills confidently to land your dream Grafana role. Good luck!