3.0 - 7.0 years
13 - 17 Lacs
Pune
Work from Office
Job Overview
We are looking for a detail-oriented and experienced Site Reliability Engineer to join our team. The Site Reliability Engineer will be responsible for creating and implementing scalable software solutions to meet system and application performance goals. You will also be responsible for troubleshooting system errors and resolving any relevant issues.
Roles and Responsibilities
System Monitoring and Incident Response: SREs are responsible for implementing monitoring solutions to track system health, performance, and availability. They proactively monitor systems, identify issues, and respond to incidents promptly, working to minimize downtime and mitigate impacts.
Post-Incident Analysis: Lead incident response efforts, coordinate with cross-functional teams, and conduct post-incident analysis to identify root causes and implement preventive measures.
Continuous Improvement and Reliability Engineering: SREs drive continuous improvement efforts by identifying areas for enhancement, implementing best practices, and fostering a culture of reliability engineering. They participate in post-mortems, conduct blameless retrospectives, and drive initiatives to improve system reliability, stability, and maintainability.
Collaboration and Knowledge Sharing: SREs collaborate closely with software engineers, operations teams, and other stakeholders to ensure smooth coordination and effective communication. They share knowledge, provide technical guidance, and contribute to the development of a strong engineering culture.
Support and maintain configuration management for various applications and systems. Implement comprehensive service monitoring, including dashboards, metrics, and alerts. Define, measure, and meet key service level objectives, such as uptime, performance, incidents, and chronic problems. Partner with application and business stakeholders to ensure high-quality product development and release. Collaborate with the development team to enhance system reliability and performance.
Qualifications
Bachelor's degree in Information Technology, Computer Science, or a related field. Strong knowledge of software development processes and procedures. Strong problem-solving abilities. Excellent understanding of computer systems, servers, and network systems. Ability to work under pressure and manage multiple tasks simultaneously. Strong communication and interpersonal skills. Strong knowledge of coding languages like Python, Java, Go, etc. Ability to program (structured and OOP) using one or more high-level languages, such as Python, Java, C/C++, Ruby, and JavaScript. Experience with distributed storage technologies such as NFS, HDFS, Ceph, and Amazon S3, as well as dynamic resource management frameworks (Apache Mesos, Kubernetes, YARN). Experience with cloud computing platforms such as AWS, Azure, or Google Cloud. Experience with DevOps tools such as Git, Jenkins, Ansible, Terraform, Docker, etc.
Experience with monitoring tools such as Splunk and Prometheus.
Skills: problem solving, post-incident analysis, AWS, monitoring tools, cloud computing, key service level objectives, reliability engineering, configuration management, DevOps practices, coding languages, monitoring tools (Splunk, Prometheus), continuous improvement, site reliability engineering, service monitoring, incident response, reliability, software development processes, system monitoring, Splunk, DevOps tools (Git, Jenkins, Ansible, Terraform, Docker), Kubernetes, cloud computing (AWS, Azure, Google Cloud), DevOps, Ansible, programming (Python, Java, Go, C/C++, Ruby, JavaScript), Prometheus, cloud infrastructure, monitoring services
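By way of illustration, the service-level-objective work described above often comes down to simple error-budget arithmetic. The sketch below (Python) assumes a 99.9% monthly availability target, which is illustrative and not stated in the posting.

```python
# Error-budget arithmetic for an availability SLO (illustrative values only).

def error_budget_minutes(slo: float, period_minutes: float = 30 * 24 * 60) -> float:
    """Return the allowed downtime (in minutes) for a given SLO over a period."""
    return (1.0 - slo) * period_minutes

def budget_remaining(slo: float, observed_downtime_minutes: float) -> float:
    """Return the fraction of the error budget still unspent (can go negative)."""
    budget = error_budget_minutes(slo)
    return (budget - observed_downtime_minutes) / budget

if __name__ == "__main__":
    slo = 0.999  # assumed 99.9% monthly availability target
    print(f"Monthly error budget: {error_budget_minutes(slo):.1f} minutes")
    print(f"Budget remaining after 20 min of downtime: {budget_remaining(slo, 20):.1%}")
```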
Posted recently
0.0 - 5.0 years
2 - 7 Lacs
Bengaluru
Work from Office
Ensemble Energy is an exciting startup in the industrial IoT space focused on energy. Our mission is to accelerate the clean energy revolution by making it more competitive using the power of data. Ensemble's AI-enabled SaaS platform provides prescriptive analytics to power plant operators by combining the power of machine learning, big data, and deep domain expertise. As a Full Stack/IoT Intern, you will participate in developing and deploying frontend/backend applications, creating visualization dashboards, and developing ways to integrate high-frequency data from devices onto our platform.
Required Skills & Experience: React/Redux, HTML5, CSS3, JavaScript, Python, Django, and REST APIs. BS or MS in Computer Science or a related field. Strong foundation in Computer Science, with deep knowledge of data structures, algorithms, and software design. Experience with Git, CI/CD tools, Sentry, Atlassian software, and AWS CodeDeploy a plus. Contribute ideas to overall product strategy and roadmap. Improve the codebase with continuous refactoring. Self-starter able to take ownership of platform engineering and application development. Work on multiple projects simultaneously and get things done. Take products from prototype to production. Collaborate with the team in Sunnyvale, CA to lead 24x7 product development.
Bonus: If you have worked on one or more of the below, highlight those projects when applying: experience with time series databases (M3DB, Prometheus, InfluxDB, OpenTSDB, ELK Stack); experience with visualization tools like Tableau, KeplerGL, etc.; experience with MQTT or other IoT communication protocols a plus.
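As a rough sketch of the device-data integration mentioned above, the Python snippet below batches sensor readings and pushes them to a REST endpoint; the endpoint URL and payload shape are assumptions made purely for illustration.

```python
# Minimal sketch: batch high-frequency device readings and push them to a REST API.
import time
import random
import requests

INGEST_URL = "https://example.com/api/v1/readings"   # hypothetical endpoint
BATCH_SIZE = 50

def read_sensor() -> dict:
    """Stand-in for a real device read; returns one timestamped sample."""
    return {"ts": time.time(), "power_kw": round(random.uniform(0.0, 2.5), 3)}

def push_batch(batch: list[dict]) -> None:
    """POST one batch; a production version would add retries and auth."""
    resp = requests.post(INGEST_URL, json={"readings": batch}, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    batch = [read_sensor() for _ in range(BATCH_SIZE)]
    push_batch(batch)
```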
Posted recently
3.0 - 7.0 years
17 - 25 Lacs
Hyderabad
Work from Office
Project description
ACQA is built on Microsoft Azure cloud computing technology. It aims to deliver: scalable, cost-efficient infrastructure using cloud PaaS components; a single core platform with an open architecture, designed for change, with itemised cost metrics and automated data lineage; a platform shared across Front Office, Finance and Risk, improving regulatory compliance; One-Platform / One-Experience - fast to train, easy to operate, retaining talent. The ACQA platform is made up of a series of components providing the next-generation valuation and risk management services.
Responsibilities
Perform functional/configuration changes to improve automation and reduce maintenance effort. Build and maintain CI/CD pipeline automation. Management of monitoring systems (Nagios, Prometheus, Grafana). Migration of applications between two banking organizations. Certificate renewals. Cleanups, removal of redundant applications, functions, and data.
Skills
Must have: Azure Cloud, FinOps / cloud cost efficiencies, Azure CosmosDB / SQL, Terraform / IaC, PowerShell / Bash, Linux, DevOps skills, CI/CD, Automation, UBS processes / tooling, Grid (DataSynapse).
Nice to have: SDLC.
Other
Languages: English (C1 Advanced). Seniority: Senior.
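For the monitoring-system management mentioned above, here is a minimal Python sketch that checks a Prometheus server for down scrape targets over its HTTP query API; the server address is an assumption for illustration only.

```python
# Minimal sketch: query Prometheus' HTTP API for targets whose `up` metric is 0.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical address

def down_targets() -> list[str]:
    """Return instance labels of scrape targets currently reporting down."""
    resp = requests.get(f"{PROM_URL}/api/v1/query",
                        params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    body = resp.json()
    if body.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {body}")
    return [r["metric"].get("instance", "<unknown>") for r in body["data"]["result"]]

if __name__ == "__main__":
    for instance in down_targets():
        print(f"Target down: {instance}")
```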
Posted recently
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer Location: Bangalore, KA Mode of Work: Work From Office (5 Days a Week) Job Type: Full-Time Department: Engineering/Operations : We are looking for a skilled DevOps Engineer to join our team in Bangalore . The ideal candidate will have hands-on experience with a range of technologies including Docker , Kubernetes (K8s) , JFrog Artifactory , SonarQube , CI/CD tools , monitoring tools , Ansible , and auto-scaling strategies. This role is key to driving automation, improving the deployment pipeline, and optimizing infrastructure for seamless development and production operations. You will collaborate with development teams to design, implement, and manage systems that improve the software development lifecycle and ensure a high level of reliability, scalability, and performance. Responsibilities: Containerization & Orchestration: Design, deploy, and manage containerized applications using Docker . Manage, scale, and optimize Kubernetes (K8s) clusters for container orchestration. Troubleshoot and resolve issues related to Kubernetes clusters, ensuring high availability and fault tolerance. Collaborate with the development team to containerize new applications and microservices. CI/CD Pipeline Development & Maintenance: Implement and optimize CI/CD pipelines using tools such as Jenkins , GitLab CI , or similar. Integrate SonarQube for continuous code quality checks within the pipeline. Ensure seamless integration of JFrog Artifactory for managing build artifacts and repositories. Automate and streamline build, test, and deployment processes to support continuous delivery. Monitoring & Alerts: Implement and maintain monitoring solutions using tools like Prometheus , Grafana , or others. Set up real-time monitoring, logging, and alerting systems to proactively identify and address issues. Create and manage dashboards for operational insights into application health, performance, and system metrics. Automation & Infrastructure as Code: Automate infrastructure provisioning and management using Ansible or similar tools. Implement Auto-Scaling solutions to ensure the infrastructure dynamically adjusts to workload demands, ensuring optimal performance and cost efficiency. Define, deploy, and maintain infrastructure-as-code practices for consistent and reproducible environments. Collaboration & Best Practices: Work closely with development and QA teams to integrate DevOps best practices into the software development lifecycle. Ensure a high standard of security and compliance within the CI/CD pipelines. Provide technical leadership and mentorship for junior team members on DevOps practices and tools. Participate in cross-functional teams to define, design, and deliver scalable software solutions. Debugging & Issue Resolution: Troubleshoot complex application and infrastructure issues across development, staging, and production environments. Apply root cause analysis to incidents and implement long-term fixes to prevent recurrence. Continuously improve monitoring and debugging tools for faster issue resolution.
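As a rough illustration of the auto-scaling strategies this role covers, the Python sketch below reproduces the replica-count formula documented for Kubernetes' Horizontal Pod Autoscaler; the CPU figures and replica bounds are illustrative assumptions, not values from the posting.

```python
# Sketch of the scaling decision behind Kubernetes' Horizontal Pod Autoscaler:
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric).
import math

def desired_replicas(current_replicas: int, current_cpu: float, target_cpu: float,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Return the replica count an HPA-style controller would aim for."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

if __name__ == "__main__":
    # 4 pods averaging 85% CPU against a 60% target -> scale out to 6 pods.
    print(desired_replicas(current_replicas=4, current_cpu=85.0, target_cpu=60.0))
```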
Posted recently
8.0 - 12.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Date: 10 Jun 2025 Location: Bangalore, KA, IN Company: Alstom Req ID: 486423
Alstom is now looking for a Cloud DevOps Architect, who will ensure the program is able to deliver fast with a good level of quality, support software development and validation with as much automation as possible, and manage testing & preview environments for the team. Responsible for supporting all automation activities like Continuous Integration and Continuous Deployment linked to software production and delivery to on-premise and cloud environments. Responsible for elaboration of pipelines, integration of static/vulnerability analysis tools, and managing different environments. Shall support DevOps engineers and verification and validation engineers in establishing the pipelines and integration tests. Provide and manage testing & preview environments working in a representative context to anticipate solution & project integrations. Design and build technical solutions for deployment on cloud-enabled and on-prem infrastructure with microservices. Create a DevOps strategy and manage the adoption process (Azure and on-premise). Set up best DevOps practices, including making technology choices based on existing constraints and automation. Support the team in the application of the processes with Alstom-proposed tools in the most efficient way, with a good level of simplification & automation. Define the strategy for long-term, efficient software platform upgrades and maintenance. Write and maintain product technical documentation and technical architectures, including network diagrams, sequence diagrams, etc. Design and maintain highly available and distributed applications. Define and set up application monitoring best practices. Capacity planning for applications, including performance tuning. Contribute to identification of tools and technologies that can improve performance KPIs of the software. Adept with automation frameworks and technologies that can run within the pipeline and perform chaos and dependency validation. Keen on identifying how to automate the integration of the multiple tools used in the process.
QUALIFICATIONS & SKILLS:
EDUCATION: BE/B.Tech/M.Tech in computer science & information systems or related engineering.
BEHAVIORAL COMPETENCIES: Demonstrate excellent communication skills and be able to guide, influence and convince others in a matrix organization. Team player; prior experience working with European customers is not mandatory but preferable. Innovative and aligned to new product development technologies and methods. Proven capabilities with global teams.
TECHNICAL COMPETENCIES & EXPERIENCE: 8 to 12 years of experience in CI/CD on on-premises and cloud environments. Programming languages: C#, Python. Technology/framework: DotNet, WPF, SignalR, event-driven development, Robot Framework. Version control tool: GitHub. Automation: Ansible, Helm charts, Kubernetes operators, Bash/Python. Containerization: Linux, Kubernetes, Azure (AKS), Rancher, distributed filesystems. Virtualization: Harvester/VMware/OpenShift. Data technologies: Kafka, RabbitMQ, Postgres, Elasticsearch. Observability: Grafana, Prometheus. Experience in designing a blue/green deployment strategy. Proficiency in designing and managing Kubernetes-based large-scale distributed applications. Experience in security protocols, digital certificates, SSL/TLS, key and secrets management. Deep knowledge of Linux and virtualization concepts. Platform engineering and microservices experience is a plus.
You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!
Important to note: As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone. Job Type: Experienced
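For the blue/green deployment strategy named above, a minimal Python sketch of one possible cut-over step: repointing a Kubernetes Service selector from the "blue" track to the "green" one. The service, namespace, and label names are hypothetical, and the snippet assumes the kubernetes Python client plus a working kubeconfig.

```python
# Minimal blue/green cut-over sketch: patch a Service selector to a new track.
from kubernetes import client, config

def switch_traffic(service: str, namespace: str, track: str) -> None:
    """Patch the Service selector so traffic targets the given colour track."""
    config.load_kube_config()
    core = client.CoreV1Api()
    patch = {"spec": {"selector": {"app": service, "track": track}}}
    core.patch_namespaced_service(name=service, namespace=namespace, body=patch)
    print(f"{service} in {namespace} now routes to the '{track}' pods")

if __name__ == "__main__":
    switch_traffic(service="signalling-api", namespace="prod", track="green")  # hypothetical names
```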
Posted recently
4.0 - 6.0 years
27 - 42 Lacs
Chennai
Work from Office
Skills: AKS, Istio service mesh, CI/CD. Shift timing: Afternoon shift. Location: Chennai, Kolkata, Bangalore.
Excellent AKS, GKE or Kubernetes admin experience. Good troubleshooting experience with Istio service mesh and connectivity issues. Experience with GitHub Actions or a similar CI/CD tool to build pipelines. Working experience on any cloud, preferably Azure or Google, with good networking knowledge. Experience in Python or shell scripting. Experience in building dashboards and configuring alerts using Prometheus and Grafana.
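A minimal Python sketch of one common Istio triage step implied above: flagging pods whose istio-proxy sidecar is missing or not ready. The namespace is an assumption, and the snippet requires the kubernetes Python client and cluster access.

```python
# Minimal sketch for Istio connectivity triage: find pods with sidecar problems.
from kubernetes import client, config

def sidecar_problems(namespace: str = "default") -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        statuses = {c.name: c.ready for c in (pod.status.container_statuses or [])}
        if "istio-proxy" not in statuses:
            print(f"{pod.metadata.name}: no istio-proxy sidecar injected")
        elif not statuses["istio-proxy"]:
            print(f"{pod.metadata.name}: istio-proxy sidecar not ready")

if __name__ == "__main__":
    sidecar_problems("payments")  # hypothetical namespace
```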
Posted 19 hours ago
3.0 - 5.0 years
5 - 7 Lacs
Noida
Work from Office
"Ensure platform reliability and performance: Monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS). Automate operations: Develop and maintain automation for infrastructure provisioning, scaling, and incident response. Incident response & on-call support: Participate in on-call rotations to quickly detect, mitigate, and resolve production incidents. Kubernetes upgrades & management: Own and drive Kubernetes version upgrades, node pool scaling, and security patches. Observability & monitoring: Implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting. Infrastructure as Code (IaC): Manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests. Cross-functional collaboration: Work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability. Performance tuning: Analyze and optimize cloud and containerized workloads for cost efficiency and high availability. Security & compliance: Ensure platform security best practices, incident response, and compliance adherence.." Required education None Preferred education Bachelor's Degree Required technical and professional expertise Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration. Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure. Proficiency in Terraform, Helm, and Infrastructure as Code (IaC). Strong understanding of Linux systems, networking, and security best practices. Experience with monitoring & logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.). Hands-on experience with automation & scripting (Python, Bash, or Go). Preferred technical and professional experience Experience in incident management & debugging complex distributed systems. Familiarity with CI/CD pipelines and release automation.
Posted 20 hours ago
7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.
How will you make an impact? Interfacing with various R&D groups, Customer Support teams, Business Partners and Customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations. Prioritize daily missions/cases and manage critical issues and situations. Contribute to the Knowledge Base, document troubleshooting and problem resolution steps, and participate in educating/mentoring other support engineers. Willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders.
Have you got what it takes? Minimum of 8 to 12 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines – advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters. Troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML.
Knowledge of Windows/Linux hardening such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache and ActiveMQ. Working and troubleshooting knowledge of SVN/version control applications. Knowledge of DB schema, structure, SQL queries (DML, DDL) and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues. Knowledge of terminal servers (Citrix) – advantage. Basic understanding of AWS Cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support – advantage. NICE Certification: knowledge of RTI/RTS/APA products – advantage. Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data. Shift: 24x7 rotational shift (includes night shifts).
Other Required Skills: Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player: ability to work well in a team-oriented, collaborative environment.
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7326 Reporting into: Tech Manager Role Type: Individual Contributor
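As an illustration of the log collection and analysis mentioned above, a minimal Python sketch that scans a Tomcat catalina.out for SEVERE/ERROR lines; the log path is an assumption for illustration.

```python
# Minimal log-triage sketch: count SEVERE/ERROR lines and show the latest ones.
import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("/opt/tomcat/logs/catalina.out")  # hypothetical location
PATTERN = re.compile(r"\b(SEVERE|ERROR)\b")

def summarize(path: Path, tail: int = 5) -> None:
    hits = [line.rstrip() for line in path.read_text(errors="replace").splitlines()
            if PATTERN.search(line)]
    counts = Counter(PATTERN.search(line).group(1) for line in hits)
    print(f"Severity counts: {dict(counts)}")
    for line in hits[-tail:]:  # most recent matching lines
        print(line)

if __name__ == "__main__":
    summarize(LOG_PATH)
```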
Posted 21 hours ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what’s the role all about? NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as Neva Discover. NICE APA is more than just RPA; it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It’s widely used in industries like banking, insurance, telecom, healthcare, and customer service.
We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures, handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you’ll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.
How will you make an impact? Interfacing with various R&D groups, Customer Support teams, Business Partners and Customers globally to address and resolve product issues. Maintain quality and ongoing internal and external communication throughout your investigation. Provide a high level of support and minimize R&D escalations. Prioritize daily missions/cases and manage critical issues and situations. Contribute to the Knowledge Base, document troubleshooting and problem resolution steps, and participate in educating/mentoring other support engineers. Willing to perform on-call duties as required. Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions. Good communication skills with the ability to interact with technical and non-technical stakeholders.
Have you got what it takes? Minimum of 5 to 7 years of experience in supporting global enterprise customers. Monitor, troubleshoot, and maintain RPA bots in production environments. Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar. Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs. Familiarity with ETL processes and data pipelines – advantage. Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests. Familiarity with applications running on Linux-based Kubernetes clusters. Troubleshoot and resolve incidents related to pods, services, and deployments. Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance. Familiarity with authentication methods like WinSSO and SAML.
Knowledge of Windows/Linux hardening such as TLS enforcement, encryption enforcement, and certificate configuration. Working and troubleshooting knowledge of Apache software components like Tomcat, Apache and ActiveMQ. Working and troubleshooting knowledge of SVN/version control applications. Knowledge of DB schema, structure, SQL queries (DML, DDL) and troubleshooting. Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues. Knowledge of terminal servers (Citrix) – advantage. Basic understanding of AWS Cloud systems. Network troubleshooting skills (working with different tools). Certification in RPA platforms and working knowledge of RPA application development/support – advantage. NICE Certification: knowledge of RTI/RTS/APA products – advantage. Integrate NICE's applications with customers' on-prem and cloud-based 3rd-party tools and applications to ingest/transform/store/validate data. Shift: 24x7 rotational shift (includes night shifts).
Other Required Skills: Excellent verbal and written communication skills. Strong troubleshooting and problem-solving skills. Self-motivated and directed, with keen attention to detail. Team player: ability to work well in a team-oriented, collaborative environment.
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7556 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 21 hours ago
4.0 - 9.0 years
6 - 14 Lacs
Hyderabad
Work from Office
Title: .NET Developer (.NET + OpenShift or Kubernetes) | 4 to 12 years | Bengaluru & Hyderabad
Assess and understand the application implementation while working with architects and business experts. Analyse business and technology challenges and suggest solutions to meet strategic objectives. Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes. Migrate Dot Net Core and/or Framework Web/API/Batch components deployed in PCF Cloud to OpenShift, working independently. Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues. Design and implement unit test scripts and automation for the same using NUnit to achieve 80% code coverage. Perform back-end code reviews and ensure compliance with Sonar scans, CheckMarx and BlackDuck to maintain code quality. Write functional automation test cases for system integration using Selenium. Coordinate with architects and business experts across the application to translate key requirements.
Required Qualifications: 4+ years of experience in Dot Net Core (3.1 and above) and/or Framework (4.0 and above) development (coding, unit testing, functional automation) implementing microservices, REST API/Batch/Web components, reusable libraries, etc. Proficiency in C# with a good knowledge of VB.NET. Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years in OpenShift. Familiarity with cloud-native patterns, microservices, and application modernization strategies. Experience with monitoring and logging tools like Splunk, Log4J, Prometheus, Grafana, ELK Stack, AppDynamics, etc. Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, UDeploy). Proficiency in databases like MS SQL Server, Oracle 11g, 12c, Mongo, DB2. Experience in integrating front-end with back-end services. Experience working with code versioning methodologies using Git, GitHub. Familiarity with job scheduling through Autosys, PCF batch jobs. Familiarity with scripting languages like shell / Helm chart modules.
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Posted 22 hours ago
12.0 - 17.0 years
14 - 19 Lacs
Mysuru
Work from Office
The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform & infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Overall 12+ years of experience required. Good exposure to operational aspects (monitoring, automation, remediation), with exposure to monitoring tools like New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc. Troubleshooting, documenting root cause analysis, and automating incident handling. Understands the architecture, the SRE mindset, and the data model. Platform architecture and engineering: ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design a cloud platform.
Preferred technical and professional experience: Effectively communicate with business and technical team members. Creative problem-solving skills and superb communication skills. Telecom domain experience is an added plus.
Posted 23 hours ago
3.0 - 8.0 years
5 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS Operations. Good to have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute in providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.
- Quickly identify, troubleshoot, and fix failures to minimize downtime.
- Ensure the SLAs and OLAs are met within the timelines such that operational excellence is met.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in AWS Operations.
- Strong understanding of cloud architecture and services.
- Experience with application development frameworks and tools.
- Familiarity with DevOps practices and CI/CD pipelines.
- Ability to troubleshoot and resolve application issues efficiently.
- Strong understanding of cloud networking concepts including VPC design, subnets, routing, security groups, and implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB).
- Practical experience in setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, ELK stack for proactive system monitoring and alerting.
- Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS.
- Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments.
- Good problem-solving skills: the ability to quickly identify, analyze, and resolve issues is vital.
- Effective communication: strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes.
- Time management: efficiently managing time and prioritizing tasks is vital in operations support.
- The candidate should have a minimum of 3 years of experience in AWS Operations.
Additional Information:
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification: 15 years full time education
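For the ALB/NLB work listed above, a minimal Python sketch that reports targets in a target group that are not healthy, using boto3; the target group ARN is a placeholder and AWS credentials are assumed.

```python
# Minimal AWS operations sketch: list unhealthy targets behind an ALB/NLB.
import boto3

def unhealthy_targets(target_group_arn: str) -> None:
    elbv2 = boto3.client("elbv2")
    resp = elbv2.describe_target_health(TargetGroupArn=target_group_arn)
    for desc in resp["TargetHealthDescriptions"]:
        state = desc["TargetHealth"]["State"]
        if state != "healthy":
            print(f"{desc['Target']['Id']}:{desc['Target'].get('Port')} -> {state} "
                  f"({desc['TargetHealth'].get('Reason', 'n/a')})")

if __name__ == "__main__":
    unhealthy_targets("arn:aws:elasticloadbalancing:ap-south-1:123456789012:"
                      "targetgroup/app-tg/0123456789abcdef")  # placeholder ARN
```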
Posted 23 hours ago
1.0 - 4.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Title: Front End Developer - React.js, TypeScript, Git, CSS, CI/CD, Kubernetes, AWS
Develop and maintain user-friendly web applications with React.js. Write clean, maintainable, and efficient code using HTML, CSS, JavaScript (ES6+), and TypeScript. Work closely with UX/UI designers to bring mockups to life with responsive and accessible designs. Optimize applications for speed, scalability, and cross-browser compatibility. Implement and maintain front-end state management solutions such as Redux. Collaborate with back-end developers to integrate APIs and ensure smooth data flow. Debug and resolve front-end issues, improving performance and usability. Stay updated with the latest front-end technologies and industry trends.
Required education: Bachelor's Degree. Preferred education: Bachelor's Degree.
Required technical and professional expertise: 1-4 years of experience in front-end development. Strong proficiency in React.js and ecosystem tools. Experience with TypeScript. Proficiency in modern CSS frameworks like SCSS. Familiarity with version control systems like Git and CI/CD pipelines. Understanding of performance optimization techniques (lazy loading, caching, etc.). Knowledge of testing frameworks such as Cypress or React Testing Library. Knowledge of monitoring tools (Prometheus) and logging frameworks. Experience with Agile methodologies and working in a collaborative team environment.
Preferred technical and professional experience: Knowledge of open-source development and working experience in open-source projects. Familiarity with cloud platforms (AWS, Azure, GCP) and their storage services. Experience with container orchestration tools such as Kubernetes. Ability to work effectively in a collaborative, cross-functional team environment.
Posted 23 hours ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.
Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems, the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
The Platform and Product Team is shaping one of the key growth vector areas for ZS, our engagement, comprising clients from industries like quick service restaurants, technology, food & beverage, hospitality, travel, insurance, consumer packaged goods and other such industries across the North America, Europe & South East Asia regions. The Platform and Product India team currently has a presence across the New Delhi, Pune and Bengaluru offices and is continuously expanding further at a great pace. The Platform and Product India team works with colleagues across clients and geographies to create and deliver real-world pragmatic solutions leveraging AI SaaS products & platforms, Generative AI applications, and other advanced analytics solutions at scale.
What You'll Do: Experience with cloud technologies AWS, Azure or GCP. Create container images and maintain container registries. Create, update, and maintain production-grade applications on Kubernetes clusters and cloud. Inculcate a GitOps approach to maintain deployments. Create YAML scripts, Helm charts for Kubernetes deployments as required. Take part in cloud design and architecture decisions and support lead architects in building cloud-agnostic applications. Create and maintain Infrastructure-as-Code templates to automate cloud infrastructure deployment. Create and manage CI/CD pipelines to automate containerized deployments to cloud and K8s. Maintain git repositories, establish proper branching strategy, and release management processes. Support and maintain source code management and build tools. Monitor applications on cloud and Kubernetes using tools like ELK, Grafana, Prometheus etc. Automate day-to-day activities using scripting. Work closely with the development team to implement new build processes and strategies to meet new product requirements. Troubleshooting, problem solving, root cause analysis, and documentation related to build, release, and deployments. Ensure that systems are secure and compliant with industry standards.
What You'll Bring: A master's or bachelor's degree in computer science or a related field from a top university. 2-4+ years of hands-on experience in DevOps. Hands-on experience designing and deploying applications to cloud (AWS / Azure / GCP). Expertise in deploying and maintaining applications on Kubernetes. Technical expertise in release automation engineering, CI/CD or related roles. Hands-on experience in writing Terraform templates as IaC, Helm charts, Kubernetes manifests. Should have a strong hold on Linux commands and script automation. Technical understanding of development tools, source control, and continuous integration build systems, e.g. Azure DevOps, Jenkins, GitLab, TeamCity etc. Knowledge of deploying LLM models and toolchains. Configuration management of various environments. Experience working in agile teams with short release cycles. Good to have programming experience in Python / Go. Characteristics of a forward thinker and self-starter that thrives on new challenges and adapts quickly to learning new knowledge.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
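For the "create container images and maintain container registries" responsibility above, a minimal Python sketch using the Docker SDK for Python; the registry address and tag are placeholders, and it assumes a local Docker daemon with an existing registry login.

```python
# Minimal sketch: build an image from a local Dockerfile and push it to a registry.
import docker

REPOSITORY = "registry.example.com/platform/web-api"   # hypothetical registry/repo
VERSION = "1.4.2"

def build_and_push(context_dir: str = ".") -> None:
    client = docker.from_env()
    image, build_logs = client.images.build(path=context_dir,
                                            tag=f"{REPOSITORY}:{VERSION}", rm=True)
    for chunk in build_logs:            # stream the build output
        if "stream" in chunk:
            print(chunk["stream"], end="")
    client.images.push(REPOSITORY, tag=VERSION)
    print(f"pushed {REPOSITORY}:{VERSION}")

if __name__ == "__main__":
    build_and_push()
```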
Posted 3 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to Azure SAAS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure. How will you make an impact? Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions in on-prem and public cloud. Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, Bash to automate and optimize cloud onboarding workflows. Create integrations with REST APIs and other services to ingest and process external/internal data. Query and analyze data from various sources such as, SQL databases, Elastic Search indices and Log files (structured and unstructured) Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers. Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance. Build automated data pipelines with data consistency and reconciliation checks using tools like PowerBI/Grafana for collecting metrics from multiple endpoints and generating centralized and actionable dashboards. Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions Experience in building Infrastructure-as-code (IaC) solutions using tools like Terraform, Bicep, or ARM templates Develop end-to-end workflow automation in customer onboarding journey that spans from Day 1 to Day 2 with minimal manual intervention Have you got what it takes? Bachelor’s degree in computer science, Engineering, or related field (or equivalent experience). Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash). Experience working with and integrating REST APIs Experience with IaC and configuration management tools (e.g., Terraform, Ansible) Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana). Familiarity with modern version control systems (e.g., GitHub). Excellent problem-solving skills and attention to detail. Ability to work with development and operations teams, to achieve desired results, on common projects Strategic thinker and capable of learning new technologies quickly Good communication with peers, subordinates and managers You will have an advantage if you also have: Experience with AKS infrastructure administration. Experience orchestrating automation with Azure Automation tools like Logic Apps. Experience working in a secure, compliance driven environment (e.g. CJIS/PCI/SOX/ISO) Certifications in vendor or industry specific technologies. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. 
If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7454 Reporting into: Director Role Type: Individual Contributor
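As one small example of the data querying described in this role (SQL databases, Elasticsearch indices, log files), here is a minimal Python sketch that pulls recent failure documents from an Elasticsearch index over its REST API; the host, index, and field names are assumptions for illustration, and authentication is omitted.

```python
# Minimal sketch: fetch recent "failed" documents from an Elasticsearch index.
import requests

ES_URL = "http://elastic.example.internal:9200"   # hypothetical host
INDEX = "onboarding-events"                        # hypothetical index

def recent_failures(size: int = 10) -> list[dict]:
    query = {
        "size": size,
        "sort": [{"@timestamp": "desc"}],
        "query": {"match": {"status": "failed"}},
    }
    resp = requests.get(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
    resp.raise_for_status()
    return [hit["_source"] for hit in resp.json()["hits"]["hits"]]

if __name__ == "__main__":
    for doc in recent_failures():
        print(doc)
```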
Posted 3 days ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what’s the role all about? At NiCE, you will work as a Senior Software professional specializing in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.
How will you make an impact? Working knowledge of unit testing. Working knowledge of user stories or use cases. Working knowledge of design patterns or equivalent experience. Working knowledge of object-oriented software design. Team player.
Have you got what it takes? Bachelor’s degree in computer science, Business Information Systems or a related field, or equivalent work experience, is required. 4+ years of (SE) experience in software development. Well-established technical problem-solving skills. Experience in Java, Spring Boot and microservices. Experience with Kafka, Kinesis, KDA, Apache Flink. Experience in Kubernetes operators, Grafana, Prometheus. Experience with AWS technology including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc.
You will have an advantage if you also have: Experience with Snowflake or any DWH solution. Excellent communication skills, problem-solving skills, decision-making skills. Experience in databases. Experience in CI/CD, Git, GitHub Actions, Jenkins-based pipeline deployments. Strong experience in SQL.
What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor
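For the Kafka-based streaming mentioned above, a minimal consumer sketch written in Python (kafka-python) purely for illustration; the topic name, broker address, and message format are assumptions, not details from the posting.

```python
# Minimal Kafka consumer sketch: read JSON events from a topic and print them.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "interaction-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",        # hypothetical broker
    group_id="analytics-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each record carries partition/offset metadata plus the decoded payload.
    print(message.partition, message.offset, message.value)
```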
Posted 3 days ago
5.0 - 8.0 years
15 - 19 Lacs
Pune
Hybrid
So, what’s the role all about? We are seeking a skilled and experienced DevOps Engineer to design, produce, and test high-quality software that meets specified functional and non-functional requirements within the given time and resource constraints.
How will you make an impact? Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments. Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness. Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows. Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment. Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance. Implement and maintain robust configuration management practices and infrastructure-as-code principles. Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency. Perform ongoing maintenance and upgrades (production & non-production). Occasional weekend or after-hours work as needed.
Have you got what it takes? Experience: 5-8 years in DevOps or a similar role. Cloud Expertise: Proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar. CI/CD Tools: Hands-on experience with Jenkins pipelines (declarative and scripted). Scripting Skills: Proficiency in either shell scripting or PowerShell. Programming Knowledge: Familiarity with at least one programming language (e.g., Python, Java, or Go). IMP: Scripting/programming is integral to this role and will be a key focus in the interview process. Version Control: Experience with Git and Git-based workflows. Monitoring Tools: Familiarity with tools like CloudWatch, Prometheus, or similar. Problem-solving: Strong analytical and troubleshooting skills in a fast-paced environment. CDK knowledge in AWS DevOps.
You will have an advantage if you also have: Prior experience in development or automation is a significant advantage. Windows system administration is a significant advantage. Experience with monitoring and log analysis tools is an advantage. Jenkins pipeline knowledge.
What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7318 Reporting into: Tech Manager Role Type: Individual Contributor
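As a small example of the CloudWatch monitoring this role automates, a minimal Python/boto3 sketch fetching the last hour of average CPU for one EC2 instance; the instance ID is a placeholder and AWS credentials are assumed.

```python
# Minimal CloudWatch sketch: print the last hour of average EC2 CPU utilization.
from datetime import datetime, timedelta, timezone
import boto3

def recent_cpu(instance_id: str) -> None:
    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f"{point['Average']:.1f}%")

if __name__ == "__main__":
    recent_cpu("i-0123456789abcdef0")  # placeholder instance ID
```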
Posted 4 days ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you would be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl’s clients. This role is responsible for developing efficient and effective data & log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.
Responsibilities: Design, implement, and maintain scalable data pipelines using the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats for monitoring and analytics. Develop data processing workflows to handle real-time and batch data ingestion, transformation and visualization. Implement techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures. Configure and optimize Elasticsearch clusters for efficient indexing, searching, and performance tuning. Collaborate with business users to understand their data integration & visualization needs and translate them into technical solutions. Create dynamic and interactive dashboards in Kibana for data visualization and insights that help detect the root cause of issues. Leverage open-source tools such as Beats and Python to integrate and process data from multiple sources. Collaborate with cross-functional teams to implement ITSM solutions integrating ELK with tools like ServiceNow and other ITSM platforms. Perform anomaly detection using Elastic ML and create alerts using Watcher functionality. Extract data via APIs using Python programming. Build and deploy solutions in containerized environments using Kubernetes. Monitor Elasticsearch clusters for health, performance, and resource utilization. Automate routine tasks and data workflows using scripting languages such as Python or shell scripting. Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.
If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.
Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are: You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Experience: Minimum of 5 years of experience in the ELK Stack and Python programming. Graduate/Postgraduate in computer science, computer engineering, or equivalent with a minimum of 10 years of experience in the IT industry. ELK Stack: Deep expertise in Elasticsearch, Logstash, Kibana, and Beats. Programming: Proficiency in Python for scripting and automation. ITSM Platforms: Hands-on experience with ServiceNow or similar ITSM tools. Containerization: Experience with Kubernetes and containerized applications. Operating Systems: Strong working knowledge of Windows, Linux, and AIX environments. Open-Source Tools: Familiarity with various open-source data integration and monitoring tools. Knowledge of network protocols, log management, and system performance optimization. Experience in integrating ELK solutions with enterprise IT environments. Strong analytical and problem-solving skills with attention to detail. Knowledge of MySQL or NoSQL databases will be an added advantage. Fluent in English (written and spoken).
Preferred Technical and Professional Experience: “Elastic Certified Analyst” or “Elastic Certified Engineer” certification is preferable. Familiarity with additional monitoring tools like Prometheus, Grafana, or Splunk. Knowledge of cloud platforms (AWS, Azure, or GCP). Experience with DevOps methodologies and tools.
Being You: Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.
What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
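As an illustration of the ingestion work described in this listing, a minimal Python sketch that bulk-indexes parsed log records into Elasticsearch with the official client; the host, index name, and record shape are assumptions for illustration only.

```python
# Minimal ingestion sketch: bulk-index parsed log records into Elasticsearch.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")   # hypothetical cluster endpoint

def to_actions(records, index="app-logs"):
    """Wrap parsed records as bulk actions targeting the given index."""
    for rec in records:
        yield {"_index": index, "_source": rec}

if __name__ == "__main__":
    sample = [
        {"@timestamp": datetime.now(timezone.utc).isoformat(),
         "level": "ERROR", "service": "billing", "message": "timeout calling ledger"},
    ]
    ok, errors = helpers.bulk(es, to_actions(sample))
    print(f"indexed {ok} document(s); errors: {errors}")
```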
Posted 4 days ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, the Silicon Slopes Hall of Fame - Software Company - 2022, and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. Our product isn't just desirable; it's industry essential. At Entrata, we passionately refine living experiences and uphold collective excellence.

Job Summary
Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply!

Key Responsibilities
Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments.
Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security.
Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP.
Monitor and enhance system observability using Prometheus, Grafana, and the ELK Stack to ensure proactive issue detection and resolution (a brief sketch follows the qualifications below).
Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows.
Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner.
Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives.
Optimize AWS cloud resource utilization and cost while maintaining high availability and performance.
Establish and maintain disaster recovery and high-availability strategies to ensure system resilience.
Improve incident response and on-call processes by following SRE principles and automating issue resolution.
Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations.
Stay up-to-date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies.
Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure.
Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle.
Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams.

Minimum Qualifications
3+ years of experience as a DevOps Engineer or similar role.
Strong proficiency in Kubernetes, Docker, and AWS.
Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD).
Solid scripting and automation skills with Bash, Python, PHP, or Ansible.
Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack.
Understanding of DevSecOps principles, security best practices, and vulnerability management.
Strong problem-solving skills and the ability to troubleshoot cloud infrastructure and deployment issues effectively.

Preferred Qualifications
Experience with GitOps methodologies using ArgoCD or Flux.
Familiarity with SRE principles and managing incident response for high-availability applications.
Knowledge of serverless architectures and AWS cost optimization strategies.
Hands-on experience with compliance and governance automation for cloud security.
Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation.
Strong communication skills and the ability to mentor junior engineers on DevOps best practices.

If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we encourage you to apply!
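To make the observability responsibility concrete, here is a minimal, hypothetical sketch (not Entrata's actual setup) that exposes custom deployment metrics with the Python prometheus_client library so Prometheus can scrape them and Grafana can chart them; the metric names and port are illustrative.

```python
# Minimal sketch: expose custom CI/CD metrics that Prometheus can scrape
# and Grafana can chart. Metric names and the port are assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

DEPLOYS = Counter("ci_deployments_total", "Completed CI/CD deployments")
QUEUE_DEPTH = Gauge("ci_pipeline_queue_depth", "Pending pipeline jobs")

if __name__ == "__main__":
    start_http_server(8000)                    # scrape target at :8000/metrics
    while True:
        DEPLOYS.inc()                          # pretend a deployment finished
        QUEUE_DEPTH.set(random.randint(0, 5))  # pretend a queue-depth sample
        time.sleep(15)
```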
Posted 4 days ago
8.0 - 12.0 years
11 - 15 Lacs
Kochi
Work from Office
Job Title - Cloud Platform Engineer Associate Manager, ACS Song
Management Level: Level 8 - Associate Manager
Location: Kochi, Coimbatore, Trivandrum
Must have skills: AWS, Terraform
Good to have skills: Hybrid Cloud
Experience: 8-12 years of experience is required
Educational Qualification: Graduation

Job Summary
Within our Cloud Platforms & Managed Services Solution Line, we apply an agile approach to provide true on-demand cloud platforms. We implement and operate secure cloud and hybrid global infrastructures using automation techniques for our clients' business-critical application landscape. As a Cloud Platform Engineer, you are responsible for implementing cloud and hybrid global infrastructures using infrastructure-as-code.

Roles and Responsibilities
Implement Cloud and Hybrid Infrastructures using Infrastructure-as-Code.
Automate Provisioning and Maintenance for streamlined operations.
Design and Estimate Infrastructure with an emphasis on observability and security.
Establish CI/CD Pipelines for seamless application deployment.
Ensure Data Integrity and Security through robust mechanisms.
Implement Backup and Recovery Procedures for data protection.
Build Self-Service Systems for enhanced developer autonomy.
Collaborate with Development and Operations Teams for platform optimization.

Professional and Technical Skills
Customer-Focused Communicator adept at engaging cross-functional teams.
Cloud Infrastructure Expert in AWS, Azure, or GCP.
Proficient in Infrastructure as Code with tools like Terraform.
Experienced in Container Orchestration (Kubernetes, OpenShift, Docker Swarm).
Skilled in Observability Tools like Prometheus and Grafana (a brief sketch follows this posting).
Competent in Log Aggregation tools (Loki, ELK, Graylog).
Familiar with Tracing Systems such as Tempo.
CI/CD and GitOps savvy, ideally with knowledge of Argo CD or Flux.
Automation proficiency in Bash and high-level languages (Python, Golang).
Linux, Networking, and Database knowledge for robust infrastructure management.
Hybrid Cloud experience is a plus.

Additional Information
About Our Company | Accenture

Qualification
Experience: 3-5 years of experience is required
Educational Qualification: Graduation
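As a small illustration of the observability skills listed above (not an Accenture deliverable), the sketch below queries a Prometheus server's HTTP API from Python to evaluate an availability-style expression; the Prometheus URL, PromQL query, and threshold are assumptions.

```python
# Minimal sketch: query a Prometheus server's HTTP API to check service
# availability, the kind of observability automation described above.
# The Prometheus URL, PromQL expression, and threshold are assumptions.
import requests

PROM_URL = "http://prometheus.internal:9090"   # assumed endpoint
QUERY = 'avg(up{job="web"})'                   # hypothetical job label

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

result = resp.json()["data"]["result"]
availability = float(result[0]["value"][1]) if result else 0.0
print(f"web availability: {availability:.1%}")

if availability < 0.99:   # example threshold, not from the posting
    print("ALERT: availability below target")
```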
Posted 4 days ago
15.0 - 20.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: DevOps
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements in a fast-paced environment, ensuring seamless integration and functionality.

Roles & Responsibilities:
- Expected to be an SME, collaborate, and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the development and implementation of software solutions.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the best possible performance, quality, and responsiveness of applications.
- Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.

Professional & Technical Skills:
- Must Have Skills: Proficiency in DevOps.
- Strong understanding of continuous integration and continuous deployment (CI/CD) pipelines.
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of containerization technologies such as Docker and Kubernetes (a brief sketch follows this posting).
- Hands-on experience with monitoring and logging tools like Prometheus and the ELK stack.

Additional Information:
- The candidate should have a minimum of 12 years of experience in DevOps.
- This position is based at our Chennai office.
- A 15 years full-time education is required.

Qualification
15 years full time education
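For the container-orchestration skills above, here is a minimal, hypothetical Python sketch using the official Kubernetes client to flag pods that are not in a healthy phase; it assumes a local kubeconfig is available and is illustrative only.

```python
# Minimal sketch: use the official Kubernetes Python client to flag pods
# that are not Running or Succeeded, a small health check in the spirit
# of the monitoring skills above. Assumes a local kubeconfig is present.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```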
Posted 4 days ago
7.0 - 10.0 years
11 - 16 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Key Responsibilities:
Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment.
Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring.
Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud).
Implement model versioning, reproducibility, and governance best practices.
Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle.
Ensure security, compliance, and scalability of ML/LLM infrastructure.
Troubleshoot and resolve issues related to ML model deployment and serving.
Evaluate and integrate new MLOps/LLMOps tools and technologies.
Mentor junior engineers and contribute to best-practices documentation.

Required Skills & Qualifications:
8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps.
Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
Proficient in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, Hugging Face, custom models) used for developing solutions.
Experience with model serving tools such as TGI, vLLM, BentoML, etc. (a brief sketch follows this posting).
Solid scripting and programming skills (Python, Bash, etc.).
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack).
Strong understanding of security and compliance in ML environments.

Preferred Skills:
Knowledge of model explainability, drift detection, and model monitoring.
Familiarity with data engineering tools (Spark, Kafka, etc.).
Knowledge of data privacy, security, and compliance in AI systems.
Strong communication skills to effectively collaborate with various stakeholders.
Critical thinking and problem-solving skills are essential.
Proven ability to lead and manage projects with cross-functional teams.
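For the model-serving item above, here is a hedged sketch of the kind of smoke test a CI/CD pipeline might run against an OpenAI-compatible endpoint such as the one vLLM can serve; the URL, model name, and prompt are assumptions, not details from the posting.

```python
# Minimal sketch: smoke-test a locally served LLM over an OpenAI-compatible
# HTTP endpoint (vLLM and similar servers expose one). The URL, model name,
# and prompt are illustrative assumptions.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"   # assumed serving endpoint
payload = {
    "model": "my-org/served-model",                       # hypothetical model id
    "messages": [{"role": "user", "content": "Reply with the word OK."}],
    "max_tokens": 5,
}

resp = requests.post(ENDPOINT, json=payload, timeout=60)
resp.raise_for_status()
answer = resp.json()["choices"][0]["message"]["content"]
print("model responded:", answer.strip())
```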
Posted 4 days ago
7.0 - 10.0 years
8 - 13 Lacs
Mumbai, Hyderabad, Pune
Work from Office
Key Responsibilities:
Design, build, and maintain CI/CD pipelines for ML model training, validation, and deployment.
Automate and optimize ML workflows, including data ingestion, feature engineering, model training, and monitoring.
Deploy, monitor, and manage LLMs and other ML models in production (on-premises and/or cloud).
Implement model versioning, reproducibility, and governance best practices.
Collaborate with data scientists, ML engineers, and software engineers to streamline the end-to-end ML lifecycle.
Ensure security, compliance, and scalability of ML/LLM infrastructure.
Troubleshoot and resolve issues related to ML model deployment and serving.
Evaluate and integrate new MLOps/LLMOps tools and technologies.
Mentor junior engineers and contribute to best-practices documentation.

Required Skills & Qualifications:
8+ years of experience in DevOps, with at least 3 years in MLOps/LLMOps.
Strong experience with cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
Proficient in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
Hands-on experience deploying and managing different types of AI models (e.g., OpenAI, Hugging Face, custom models) used for developing solutions.
Experience with model serving tools such as TGI, vLLM, BentoML, etc.
Solid scripting and programming skills (Python, Bash, etc.).
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK stack).
Strong understanding of security and compliance in ML environments.

Preferred Skills:
Knowledge of model explainability, drift detection, and model monitoring.
Familiarity with data engineering tools (Spark, Kafka, etc.).
Knowledge of data privacy, security, and compliance in AI systems.
Strong communication skills to effectively collaborate with various stakeholders.
Critical thinking and problem-solving skills are essential.
Proven ability to lead and manage projects with cross-functional teams.
Posted 4 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about?
As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to the Azure SaaS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure.

How will you make an impact?
Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions on-prem and in the public cloud.
Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows.
Create integrations with REST APIs and other services to ingest and process external/internal data (a brief sketch appears at the end of this posting).
Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured).
Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers.
Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance.
Build automated data pipelines with data consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards.
Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions.
Build Infrastructure-as-Code (IaC) solutions using tools like Terraform, Bicep, or ARM templates.
Develop end-to-end workflow automation for the customer onboarding journey, spanning Day 1 to Day 2 with minimal manual intervention.

Have you got what it takes?
Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience).
Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash).
Experience working with and integrating REST APIs.
Experience with IaC and configuration management tools (e.g., Terraform, Ansible).
Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana).
Familiarity with modern version control systems (e.g., GitHub).
Excellent problem-solving skills and attention to detail.
Ability to work with development and operations teams to achieve desired results on common projects.
Strategic thinker, capable of learning new technologies quickly.
Good communication with peers, subordinates, and managers.

You will have an advantage if you also have:
Experience with AKS infrastructure administration.
Experience orchestrating automation with Azure Automation tools like Logic Apps.
Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO).
Certifications in vendor- or industry-specific technologies.

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations.
If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7454
Reporting into: Director of Cloud Services
Role Type: Individual Contributor
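Relating to the REST API integration and SQL reconciliation duties described in this posting, here is a minimal, assumption-laden Python sketch that pulls records from a hypothetical REST endpoint and loads them into a local SQLite table for a simple reconciliation check; the URL, field names, and schema are illustrative, not from the posting.

```python
# Minimal sketch: ingest records from a REST API and load them into a SQL
# table for a simple reconciliation check. The API URL, field names, and
# table layout are illustrative assumptions.
import sqlite3

import requests

API_URL = "https://api.example.internal/v1/tenants"   # hypothetical endpoint

rows = requests.get(API_URL, timeout=30).json()       # assumes a JSON list of records

conn = sqlite3.connect("onboarding.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS tenants (id TEXT PRIMARY KEY, name TEXT, status TEXT)"
)
conn.executemany(
    "INSERT OR REPLACE INTO tenants (id, name, status) VALUES (?, ?, ?)",
    [(r["id"], r["name"], r["status"]) for r in rows],  # assumed field names
)
conn.commit()

# Simple reconciliation: how many tenants are not yet active?
pending = conn.execute(
    "SELECT COUNT(*) FROM tenants WHERE status != 'active'"
).fetchone()[0]
print(f"tenants pending onboarding: {pending}")
```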
Posted 5 days ago
3.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Overview
We are looking for a Full-stack Developer and Automation Engineer with knowledge of cloud, DevOps tools, and automation, plus excellent analytical, problem-solving, and communication skills.

You'll need to have:
A Bachelor’s degree or two or more years of work experience.
Experience working with front-end and back-end technologies for building, enhancing, and managing applications.
Experience working with technologies like Python, Django, Java, ReactJS, NodeJS, and Spring Boot.
Experience working with client-side scripting technologies like JavaScript, jQuery, etc.
Experience in advanced SQL/procedures on MySQL/MongoDB/MariaDB/Oracle.
Experience using AWS cloud infrastructure services such as EC2, ALB, RDS, etc.
Experience working with serverless technologies like AWS Lambda and Google/Azure Functions (a brief sketch follows this posting).
Knowledge of the SDLC with DevOps tools and Agile development.

Even better if you have:
Experience in monitoring/alerting tools and platforms such as Prometheus, Grafana, Catchpoint, New Relic, etc.
Experience with agile practices and tools used in development (Jira, Confluence, Jenkins, etc.).
Experience in code review, quality, and performance tuning, with problem-solving and debugging skills.
Experience with unit testing frameworks like JUnit and Mockito.
Good communication and interpersonal skills, with the ability to clearly articulate and influence stakeholders.
Very good problem-solving skills.

Responsibilities
Build, enhance, and manage full-stack applications and automation across the cloud, DevOps, and monitoring tooling listed above.
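For the serverless item above, here is a minimal Python AWS Lambda handler sketch in the API Gateway proxy style; the event fields and response shape are illustrative assumptions rather than a specification from the posting.

```python
# Minimal sketch: a Python AWS Lambda handler of the sort the serverless
# bullet refers to. The event shape assumes an API Gateway proxy-style
# request and the response fields are illustrative.
import json


def lambda_handler(event, context):
    # Echo back a query parameter if present; otherwise return a default.
    params = (event or {}).get("queryStringParameters") or {}
    name = params.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```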
Posted 5 days ago