8.0 - 13.0 years
25 - 30 Lacs
Mumbai
Work from Office
Job Summary
This position provides input and support for, and performs, full systems life cycle management activities (e.g., analysis, technical requirements, design, coding, testing, and implementation of systems and applications software). He/She participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. This position provides input to applications development project plans and integrations, collaborates with teams, and supports emerging technologies to ensure effective communication and achievement of objectives. He/She provides knowledge and support for applications development, integration, and maintenance, and provides input to department and project teams on decisions supporting projects.

Technical Skills:
- Strong proficiency in .NET, .NET Core, C#, and REST APIs.
- Strong expertise in PostgreSQL.
- Additional preferred skills: Docker, Kubernetes.
- Cloud: GCP and services such as Google Cloud Storage and Pub/Sub.
- Monitoring tools: Dynatrace, Grafana; API security and tooling (SonarQube).
- Agile methodology.

Key Responsibilities:
- Design, develop, and maintain scalable C# applications and microservice implementations.
- Implement RESTful APIs for efficient communication between client and server applications.
- Collaborate with product owners to understand requirements and create technical specifications.
- Build robust database solutions and ensure efficient data retrieval.
- Write clean, maintainable, and efficient code.
- Conduct unit testing, integration testing, and code reviews to maintain code quality.
- Implement industry-standard API security protocols, including OAuth.
- Implement scalable, high-performance solutions that integrate with Pub/Sub messaging systems and other GCP services (BigQuery, Dataflow, Spanner, etc.).
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Integrate and manage data flows between systems using Kafka, Pub/Sub, and other middleware technologies.

Qualifications:
- Bachelor's Degree or international equivalent.
- 8+ years of IT experience in .NET.
- Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field - preferred.
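The role above calls for implementing industry-standard API security, including OAuth. As a minimal, language-agnostic sketch (shown in Python; the endpoint, client ID, and scope below are hypothetical), this builds the form-encoded request for an OAuth 2.0 client-credentials token grant as defined in RFC 6749, section 4.4:

```python
import base64
import urllib.parse

def build_token_request(token_url: str, client_id: str, client_secret: str,
                        scope: str) -> dict:
    """Build an OAuth 2.0 client-credentials token request (RFC 6749, sec. 4.4).

    Returns the URL, headers, and form-encoded body a caller would POST.
    """
    # Client authentication via HTTP Basic: base64("client_id:client_secret")
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": token_url,
        "headers": {
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "body": urllib.parse.urlencode({"grant_type": "client_credentials",
                                        "scope": scope}),
    }

# Hypothetical service credentials, for illustration only.
req = build_token_request("https://auth.example.com/token",
                          "svc-app", "s3cret", "orders.read")
print(req["body"])
```

The same shape applies in .NET via `HttpClient` with `FormUrlEncodedContent`; the point is that the grant type, client authentication header, and content type are fixed by the spec.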
Posted 1 day ago
8.0 - 13.0 years
6 - 10 Lacs
Pune
Work from Office
Job Title: Senior Infrastructure Specialist
Experience: 8+ years
Skills: Kubernetes, Infrastructure, Containers, Linux, Cloud, Infrastructure as Code (IaC)
Department: IT
Reports To: Tech Lead
Location: Pune (Hybrid)

Role Summary:
We are seeking a highly experienced Senior Infrastructure Specialist to lead and manage scalable infrastructure and container-based environments. This role focuses on Kubernetes orchestration, automation, and maintaining secure, reliable, and efficient platform services. You'll play a critical role in evolving infrastructure systems using modern DevOps practices and driving the adoption of containerization and cloud-native technologies across the organization.

Key Responsibilities:
- Design, automate, and maintain CI/CD pipelines for OS image creation.
- Independently create, manage, and deploy hypervisor templates.
- Manage and scale the Vanderlande Container Platform across test, acceptance, and production environments.
- Administer and optimize 150+ downstream Kubernetes clusters, ensuring performance, reliability, and uptime.
- Improve solutions related to container platforms, edge computing, and virtualization.
- Lead the transition from VMware OVAs to a Kubernetes-based virtualization architecture, incorporating insights from Proof of Concept (PoC) initiatives.
- Focus on platform automation, using Infrastructure as Code to minimize manual tasks.
- Ensure security hardening and compliance for all infrastructure components.
- Collaborate closely with development, DevOps, and security teams to drive container adoption and lifecycle management.

Required Qualifications & Skills:
- 8+ years of infrastructure engineering experience.
- Deep expertise in Kubernetes architecture, deployment, and management.
- Strong background in Linux systems administration and troubleshooting.
- Proficiency with cloud platforms (AWS, Azure, GCP).
- Hands-on experience with Infrastructure as Code tools like Terraform and Ansible.
- CI/CD development experience (GitLab, Jenkins, ArgoCD, etc.).
- Familiarity with virtualization technologies (VMware, KVM).

Key Skills (Core):
- Kubernetes (cluster management, Helm, operators, upgrades)
- Containerization (Docker, container runtimes, image security)
- Cloud-native infrastructure
- Linux system engineering
- Infrastructure as Code (IaC)
- DevOps & automation tools
- Security & compliance in container platforms

Soft Skills:
- Proactive and solution-oriented mindset.
- Strong communication and cross-functional collaboration.
- Analytical thinking with the ability to troubleshoot complex issues.
- Time management and the ability to deliver under pressure.

Preferred Qualifications:
- CKA / CKAD certification (Kubernetes)
- Cloud certifications (AWS/Azure/GCP)
- Experience implementing container security and compliance tools (e.g., Aqua, Prisma, Trivy)
- Exposure to GitOps tools like ArgoCD or Flux
- Monitoring and alerting experience (Prometheus, Grafana, ELK stack)

Key Relationships:
- Internal: DevOps, Platform, and Development teams; Cloud Infrastructure teams; Cybersecurity and Governance groups
- External: technology vendors and third-party platform providers; external consultants and cloud service partners

Role Dimensions:
- Ownership of high-scale Kubernetes infrastructure
- Strategic modernization of infrastructure environments
- Coordination of multi-cluster container platforms

Success Measures (KPIs):
- Uptime and reliability of container platforms
- Reduction in manual deployment and provisioning tasks
- Successful Kubernetes migration from legacy systems
- Cluster performance and security compliance
- Team enablement and automation adoption

Competency Framework Alignment:
- Kubernetes Mastery: deep expertise in managing and optimizing Kubernetes clusters.
- Infrastructure Automation: creating reliable, repeatable infrastructure workflows.
- Containerization Leadership: driving adoption and best practices for container platforms.
- Strategic Execution: aligning infrastructure strategy with enterprise goals.
- Collaboration: building bridges between development, operations, and security.
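Administering 150+ downstream clusters usually means upgrading them in bounded waves rather than all at once, so a failed rollout disrupts only a slice of the fleet. A minimal sketch of that wave-planning step (the cluster names and wave size are illustrative, not from any real platform):

```python
def upgrade_waves(clusters: list[str], wave_size: int) -> list[list[str]]:
    """Split a cluster fleet into fixed-size upgrade waves so only a bounded
    number of clusters is upgraded (and potentially disrupted) at a time."""
    if wave_size < 1:
        raise ValueError("wave_size must be >= 1")
    return [clusters[i:i + wave_size] for i in range(0, len(clusters), wave_size)]

# Hypothetical fleet of 7 clusters, upgraded 3 at a time -> 3 waves.
fleet = [f"cluster-{n:03d}" for n in range(7)]
for wave in upgrade_waves(fleet, 3):
    print(wave)
```

In practice each wave would be validated (health checks, canary workloads) before the next one starts; test and acceptance clusters would typically form the earliest waves.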
Posted 1 day ago
3.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Job Title: DevOps Engineer
Location: Bangalore, KA
Mode of Work: Work From Office (5 days a week)
Job Type: Full-Time
Department: Engineering/Operations

We are looking for a skilled DevOps Engineer to join our team in Bangalore. The ideal candidate will have hands-on experience with a range of technologies including Docker, Kubernetes (K8s), JFrog Artifactory, SonarQube, CI/CD tools, monitoring tools, Ansible, and auto-scaling strategies. This role is key to driving automation, improving the deployment pipeline, and optimizing infrastructure for seamless development and production operations. You will collaborate with development teams to design, implement, and manage systems that improve the software development lifecycle and ensure a high level of reliability, scalability, and performance.

Responsibilities:

Containerization & Orchestration:
- Design, deploy, and manage containerized applications using Docker.
- Manage, scale, and optimize Kubernetes (K8s) clusters for container orchestration.
- Troubleshoot and resolve issues related to Kubernetes clusters, ensuring high availability and fault tolerance.
- Collaborate with the development team to containerize new applications and microservices.

CI/CD Pipeline Development & Maintenance:
- Implement and optimize CI/CD pipelines using tools such as Jenkins, GitLab CI, or similar.
- Integrate SonarQube for continuous code quality checks within the pipeline.
- Ensure seamless integration of JFrog Artifactory for managing build artifacts and repositories.
- Automate and streamline build, test, and deployment processes to support continuous delivery.

Monitoring & Alerts:
- Implement and maintain monitoring solutions using tools like Prometheus, Grafana, or others.
- Set up real-time monitoring, logging, and alerting systems to proactively identify and address issues.
- Create and manage dashboards for operational insights into application health, performance, and system metrics.

Automation & Infrastructure as Code:
- Automate infrastructure provisioning and management using Ansible or similar tools.
- Implement auto-scaling solutions so the infrastructure dynamically adjusts to workload demands, ensuring optimal performance and cost efficiency.
- Define, deploy, and maintain infrastructure-as-code practices for consistent and reproducible environments.

Collaboration & Best Practices:
- Work closely with development and QA teams to integrate DevOps best practices into the software development lifecycle.
- Ensure a high standard of security and compliance within the CI/CD pipelines.
- Provide technical leadership and mentorship for junior team members on DevOps practices and tools.
- Participate in cross-functional teams to define, design, and deliver scalable software solutions.

Debugging & Issue Resolution:
- Troubleshoot complex application and infrastructure issues across development, staging, and production environments.
- Apply root cause analysis to incidents and implement long-term fixes to prevent recurrence.
- Continuously improve monitoring and debugging tools for faster issue resolution.
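The auto-scaling responsibility above is, on Kubernetes, what the HorizontalPodAutoscaler does: it scales replicas proportionally to how far a metric is from its target. A sketch of the documented HPA formula, desired = ceil(currentReplicas x currentMetric / targetMetric), clamped to min/max bounds (the bounds and metric values below are illustrative):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float, min_r: int = 1, max_r: int = 10) -> int:
    """Replica count per the Kubernetes HorizontalPodAutoscaler formula:
    desired = ceil(current * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_r, min(max_r, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, 90.0, 60.0))
```

The real controller adds tolerances and stabilization windows around this core, but the proportional formula is what makes the scaling converge: once utilization matches the target, desired equals current.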
Posted 1 day ago
3.0 - 6.0 years
6 - 9 Lacs
Mumbai
Work from Office
About The Role

The Company: The World of Kotak product suite encompasses a powerful suite of cross-banking assets; all-in-one banking services, securities, and investment banking; and insights across a wide spectrum of the major financial and banking markets.

The Team: The ITSM team is a group of experts managing ITIL practices. We are looking for highly motivated, hands-on individuals to take on the role of Major Incident Manager, leading and managing a team of professionals to ensure the smooth functioning of the bank's applications and processes. The incumbent will be responsible for MIM operations and for ensuring that standard processes are followed for incident, service request, change, and problem management, and that agreed SLAs for the service are met.

The Impact: The Major Incident Manager is responsible for the end-to-end management of all IT major incidents.

Basic Qualifications:
- Bachelor of Engineering/Technology, Master's in Computers, or Bachelor of Computer Science, with 4 to 7 years of experience as a Major Incident Manager or in a similar role within a banking or financial services institution.
- Major incident management experience (crisis and P1 management).
- Previous experience liaising with vendor, infrastructure, and application teams for root cause analysis and post-incident reviews.
- Strong knowledge of ITIL processes.
- Self-sufficient, driven, and energetic, with a passion for technology and the patience to work with users at different levels of technical knowledge.
- Able to operate under pressure and in time-sensitive support environments.

Preferred Qualifications:
- Great attitude to learn, respect for fellow employees, out-of-the-box thinking, the ability to respectfully challenge ideas, and a hunger for innovation.
- Good leadership skills; capable of leading a team.
- Good communication skills and a sense of ownership and drive.
- A process-oriented mindset and the ability to understand the various technologies involved.
- Embrace automation over manual effort.
- Able to gel with the company's culture and effectively collaborate with other technology and business stakeholders.

Responsibilities:
- Evaluate whether a reported incident is a Major Incident and run it through the MIM cycle.
- Prioritize incidents according to their urgency and influence on the business.
- Log incident tickets in the ticketing tool and manage their lifecycle, documenting chronology, action items, and resolution details.
- Lead, drive, facilitate, and chair all investigation activities, meetings, and conference calls until the Major Incident is resolved.
- Oversee the incident management process and the team members involved in resolving the incident.
- Form collaborative action plans with specific actions, roles, and deadlines, and ensure these are completed.
- Matrix-manage people, processes, and resources, including third parties, resolving conflict to move toward resolution.
- Conduct a thorough analysis and prepare a Major Incident Report for every MI after it is closed.
- Ensure that all resolution procedures are updated in the knowledge base.
- Ensure all administration and reports are maintained and up to date, including contact information, technical diagrams, and post-major-incident reviews.
- Ensure that the causes of all Major Incidents are analysed and the root cause is identified.
- Maintain the Corrective Action and Preventive Action tracker and coordinate until closure.
- Support and nurture process improvements and knowledge base improvements.
- Continually maintain and develop tools and resources to manage major incidents effectively.
- Provide periodic major incident metrics reports.
- Ensure that SLAs meet or exceed agreed targets.
- Problem management: identify incident trends and create problem tickets to ensure root causes are identified.
- Approve and review technical knowledge base documents to be used within the team.
- Work with internal stakeholders to identify and implement process improvements.
- Work with external vendors to resolve service-related incidents and review published root cause analysis reports.
- Work closely with other IT teams, CTB teams, and BSG teams to implement systems.
- Maintain accurate and up-to-date documentation of system processes and procedures.
- Coordinate change and release management.
- Ensure compliance with regulatory requirements and internal policies and procedures.

Experience and proficiency with a variety of system tools, including:

Core Expertise:
- Strong knowledge of ITIL methodology (ITIL certification is a must) with proven operational experience in previous roles.
- Exposure to industry-standard ITSM tools (ServiceNow and Remedy strongly preferred).
- Experience with monitoring and observability tools such as AppDynamics, Dynatrace, Splunk, Grafana, or similar.
- Understanding of various domains and their functioning, i.e., Linux/Windows server OS, middleware, databases, networks, and security.
- Microsoft Office / Office 365, especially Excel (macros, worksheets, and add-ins).

Soft Skills:
- Communication is core to the success of this role.
- Evangelize adoption and use of tools, processes, and technologies.
- Lead engagements to encourage collaboration within and across teams.
- Showcase the roadmap and engagement model to relevant stakeholders through write-ups, Teams groups, and webinars.
- Documentation is core to maintaining up-to-date information on the use of tools, processes, and methodologies (e.g., wiki posts, white papers).
- Create internal training programs for onboarding new staff and upskilling the existing team.
- Demonstrate humility, trust, and transparency in the way we interact with individuals.
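Prioritizing incidents "according to their urgency and influence on the business" is usually formalized as an ITIL-style impact-urgency matrix. A minimal sketch of that classification step (the specific matrix values and the "P1 means major incident" rule are illustrative assumptions; real organizations tune both):

```python
# Hypothetical ITIL-style priority matrix: (impact, urgency) -> priority band,
# where 1 = high and 3 = low on both axes.
PRIORITY = {
    (1, 1): "P1", (1, 2): "P2", (1, 3): "P3",
    (2, 1): "P2", (2, 2): "P3", (2, 3): "P4",
    (3, 1): "P3", (3, 2): "P4", (3, 3): "P5",
}

def classify(impact: int, urgency: int) -> str:
    """Map impact and urgency (1=high .. 3=low) to a priority band."""
    return PRIORITY[(impact, urgency)]

def is_major_incident(impact: int, urgency: int) -> bool:
    """Sketch rule: only the top band enters the MIM cycle."""
    return classify(impact, urgency) == "P1"

print(classify(1, 1), is_major_incident(1, 1))
```

Encoding the matrix as data (rather than nested conditionals) keeps it auditable and easy to align with whatever the ITSM tool (ServiceNow, Remedy) is configured to use.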
Posted 13 hours ago
4.0 - 6.0 years
27 - 42 Lacs
Chennai
Work from Office
Skills: AKS, Istio service mesh, CI/CD
Shift timing: Afternoon shift
Location: Chennai, Kolkata, Bangalore

- Excellent AKS, GKE, or Kubernetes administration experience.
- Good troubleshooting experience with Istio service mesh and connectivity issues.
- Experience with GitHub Actions or a similar CI/CD tool to build pipelines.
- Working experience on any cloud, preferably Azure or Google Cloud, with good networking knowledge.
- Experience with Python or shell scripting.
- Experience building dashboards and configuring alerts using Prometheus and Grafana.
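Building Grafana dashboards and alerts over Prometheus metrics ultimately rests on `rate()`: the per-second increase of a monotonically increasing counter between samples. A minimal sketch of that calculation over two samples, including the counter-reset case Prometheus has to handle when a pod restarts (sample values are illustrative):

```python
def counter_rate(v0: float, v1: float, t0: float, t1: float) -> float:
    """Per-second increase of a monotonic counter between two samples,
    handling a counter reset (what Prometheus's rate() approximates;
    the real function extrapolates over a window of many samples)."""
    if t1 <= t0:
        raise ValueError("samples must be time-ordered")
    # If the counter went down, the process restarted: count from zero.
    increase = v1 - v0 if v1 >= v0 else v1
    return increase / (t1 - t0)

# A 5xx counter rose from 120 to 180 over 60 s -> 1 error per second.
print(counter_rate(120, 180, 0, 60))
```

An alert like "error rate above 0.5/s for 5 minutes" is then just a threshold on this value held over a duration, which is exactly what a PromQL expression such as a `rate(...) > 0.5` rule with a `for:` clause encodes.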
Posted 17 hours ago
3.0 - 5.0 years
5 - 7 Lacs
Noida
Work from Office
Responsibilities:
- Ensure platform reliability and performance: monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS).
- Automate operations: develop and maintain automation for infrastructure provisioning, scaling, and incident response.
- Incident response & on-call support: participate in on-call rotations to quickly detect, mitigate, and resolve production incidents.
- Kubernetes upgrades & management: own and drive Kubernetes version upgrades, node pool scaling, and security patches.
- Observability & monitoring: implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting.
- Infrastructure as Code (IaC): manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests.
- Cross-functional collaboration: work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability.
- Performance tuning: analyze and optimize cloud and containerized workloads for cost efficiency and high availability.
- Security & compliance: ensure platform security best practices, incident response, and compliance adherence.

Required education: None
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration.
- Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure.
- Proficiency in Terraform, Helm, and Infrastructure as Code (IaC).
- Strong understanding of Linux systems, networking, and security best practices.
- Experience with monitoring & logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.).
- Hands-on experience with automation & scripting (Python, Bash, or Go).

Preferred technical and professional experience:
- Experience in incident management and debugging complex distributed systems.
- Familiarity with CI/CD pipelines and release automation.
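Platform-reliability work like the above is typically framed around SLOs and error budgets: an availability target fixes how much downtime the on-call rotation can "spend" per window before reliability work preempts feature work. The arithmetic is a one-liner (the 30-day window is the conventional choice, used here for illustration):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for an availability SLO over a window.
    E.g. a 99.9% SLO over 30 days leaves (1 - 0.999) * 43200 = 43.2 minutes."""
    if not 0.0 < slo <= 1.0:
        raise ValueError("slo must be in (0, 1]")
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

print(round(error_budget_minutes(0.999), 1))   # three nines
print(round(error_budget_minutes(0.99), 1))    # two nines
```

The budget also bounds how aggressive upgrades and node-pool changes can be: a maintenance plan that risks more downtime than the remaining budget gets deferred or de-risked first.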
Posted 19 hours ago
7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what’s the role all about?

NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support; the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal); and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, customer support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
- Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 8 to 12 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version control applications.
- Knowledge of DB schemas and structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
- Shift: 24x7 rotational shift (includes night shifts).

Other Required Skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor
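The log-analysis duty above ("collect and analyze logs ... to identify environment/application issues") often starts with a simple triage pass: parse each line, tally severities, and see where the errors cluster. A minimal sketch, assuming a hypothetical `DATE TIME LEVEL message` line format (real collectors and app servers vary):

```python
import re
from collections import Counter

# Assumed line shape: "2024-05-01 10:00:05 ERROR connection refused".
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<msg>.*)$")

def level_counts(lines: list[str]) -> Counter:
    """Tally log severities; lines that don't match the format are skipped."""
    counts: Counter = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

sample = [
    "2024-05-01 10:00:01 INFO bot started",
    "2024-05-01 10:00:05 ERROR connection refused",
    "2024-05-01 10:00:09 ERROR connection refused",
]
print(level_counts(sample))
```

A repeated identical ERROR message, as in the sample, is the cue to pivot from counting to correlating: the same connectivity failure across bots usually points at an environment issue (certificate, DNS, firewall) rather than a product bug.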
Posted 20 hours ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what’s the role all about?

NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it's a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It's widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in application support; the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal); and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, customer support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills with the ability to analyze complex issues and implement effective solutions.
- Good communication skills with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 5 to 7 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components like Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version control applications.
- Knowledge of DB schemas and structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.
- Shift: 24x7 rotational shift (includes night shifts).

Other Required Skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 20 hours ago
4.0 - 9.0 years
6 - 14 Lacs
Hyderabad
Work from Office
Title : .Net Developer(.net+openshift OR Kubernetes) | 4 to 12 years | Bengaluru & Hyderabad : Assess and understand the application implementation while working with architects and business experts Analyse business and technology challenges and suggest solutions to meet strategic objectives Build cloud native applications meeting 12/15 factor principles on OpenShift or Kubernetes Migrate Dot Net Core and/ or Framework Web/ API/ Batch Components deployed in PCF Cloud to OpenShift, working independently Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues Design and Implement unit test scripts and automation for the same using Nunit to achieve 80% code coverage Perform back end code reviews and ensure compliance to Sonar Scans, CheckMarx and BlackDuck to maintain code quality Write Functional Automation test cases for system integration using Selenium. Coordinate with architects and business experts across the application to translate key Required Qualifications: 4+ years of experience in Dot Net Core (3.1 and above) and/or Framework (4.0 and above) development (Coding, Unit Testing, Functional Automation) implementing Micro Services, REST API/ Batch/ Web Components/ Reusable Libraries etc Proficiency in C# with a good knowledge of VB.NET Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies with at least 3 years in Open Shift Familiarity with cloud-native patterns, microservices, and application modernization strategies. Experience with monitoring and logging tools like Splunk, Log4J, Prometheus, Grafana, ELK Stack, AppDynamics, etc. Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, UDeploy). 
Proficiency in databases like MS SQL Server, Oracle 11g/12c, MongoDB, DB2. Experience in integrating front-end with back-end services. Experience working with code versioning methodology as followed with Git and GitHub. Familiarity with job scheduling through Autosys and PCF batch jobs. Familiarity with scripting languages like shell, and with Helm chart modules.
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Posted 20 hours ago
12.0 - 17.0 years
14 - 19 Lacs
Mysuru
Work from Office
The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Overall 12+ years of experience required. Good exposure to operational aspects (monitoring, automation, remediation), including monitoring tools like New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc. Troubleshooting, documenting root cause analysis, and automating incident response. Understands the architecture, the SRE mindset, and the data model. Platform architecture and engineering: ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design a cloud platform.
Preferred technical and professional experience: Effectively communicate with business and technical team members. Creative problem-solving skills and superb communication skills. Telecom domain experience is an added plus.
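The availability SLAs an SRE designs for reduce to simple arithmetic; a minimal Python sketch of the monthly downtime "error budget" implied by an availability target:

```python
# Error-budget arithmetic for availability SLAs: a 99.9% monthly target
# allows roughly 43 minutes of downtime in a 30-day month.
def error_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Allowed downtime in minutes for a period at a given availability SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

def availability_percent(total_minutes: int, downtime_minutes: float) -> float:
    """Measured availability as a percentage of the period."""
    return 100 * (total_minutes - downtime_minutes) / total_minutes
```

The same two functions work for any window (a quarter, a year) by changing `days`.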
Posted 21 hours ago
3.0 - 8.0 years
5 Lacs
Hyderabad
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: AWS Operations. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in the documentation of application processes and workflows. Engage in continuous learning to stay updated with the latest technologies and methodologies. Quickly identify, troubleshoot, and fix failures to minimize downtime. Ensure SLAs and OLAs are met within the timelines so that operational excellence is achieved.
Professional & Technical Skills: Must-have skills: Proficiency in AWS Operations. Strong understanding of cloud architecture and services. Experience with application development frameworks and tools. Familiarity with DevOps practices and CI/CD pipelines. Ability to troubleshoot and resolve application issues efficiently. Strong understanding of cloud networking concepts including VPC design, subnets, routing, security groups, and implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB). Practical experience in setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, and the ELK stack for proactive system monitoring and alerting. Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS. Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments. Good problem-solving skills: the ability to quickly identify, analyze, and resolve issues is vital. Effective communication: strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes. Time management: efficiently managing time and prioritizing tasks is vital in operations support. The candidate should have a minimum of 3 years of experience in AWS Operations.
Additional Information: This position is based at our Hyderabad office. A 15-year full-time education is required. Qualification: 15 years full-time education.
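The proactive monitoring and alerting mentioned above ultimately reduces to a thresholding decision over recent health-check results; a minimal Python sketch (the window size and failure threshold are illustrative, not tied to CloudWatch or any specific tool):

```python
# Sliding-window alerting decision: page only when failures in the most
# recent window reach a threshold, to avoid alerting on a single blip.
def should_alert(results: list[bool], window: int = 5, max_failures: int = 3) -> bool:
    """True when the most recent `window` checks contain >= max_failures failures."""
    recent = results[-window:]
    return recent.count(False) >= max_failures
```

Real systems (CloudWatch alarms, Prometheus alert rules) express the same idea declaratively as "N datapoints out of M breaching".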
Posted 21 hours ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Contribute to backend feature development in a microservices-based application using Java or GoLang. Develop and integrate RESTful APIs, connecting backend systems to frontend or external services. Collaborate with senior engineers to understand technical requirements and implement maintainable solutions. Participate in code reviews, write unit/integration tests, and support debugging efforts. Gain hands-on experience with CI/CD pipelines and containerized deployments (Docker, basic Kubernetes exposure). Support backend operations including basic monitoring, logging, and troubleshooting under guidance. Engage in Agile development practices, including daily stand-ups, sprint planning, and retrospectives. Demonstrate a growth mindset by learning cloud technologies, tools, and coding best practices from senior team members.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 3+ years of backend development experience using Java, J2EE, and/or GoLang. Hands-on experience building or supporting RESTful APIs and integrating backend services. Foundational understanding of Postgres or other relational databases, including basic query writing and data access patterns. Exposure to microservices principles and containerization using Docker. Basic experience with CI/CD pipelines using tools like Git, GitHub Actions, or Jenkins. Familiarity with backend monitoring/logging tools such as the ELK Stack or Grafana is a plus. Exposure to cloud platforms like AWS or Azure, and the ability to deploy/test services in cloud environments under guidance. Knowledge of writing unit tests and basic use of testing tools like JUnit or RestAssured. Exposure to Agile software development processes like Scrum or Kanban. Good communication skills, strong problem-solving skills, and willingness to collaborate with team members and learn from senior developers.
Preferred technical and professional experience Exposure to microservices architecture and understanding of modular backend service design. Basic understanding of secure coding practices and awareness of common vulnerabilities (e.g., OWASP Top 10). Familiarity with API security concepts like OAuth2, JWT, or simple authentication mechanisms. Awareness of DevSecOps principles, including interest in integrating security into CI/CD workflows. Introductory knowledge of cryptographic concepts (e.g., TLS, basic encryption) and how they're applied in backend systems. Willingness to learn and work with Java security libraries and compliance-aware coding practices. Exposure to scripting with Shell, Python, or Node.js for backend automation or tooling is a plus. Enthusiasm for working on scalable systems, learning cloud-native patterns, and improving backend reliability.
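As an illustration of the JWT concept listed above, a stdlib-only Python sketch of how an HS256 token is signed and verified under the hood; production services should use a maintained library (e.g. PyJWT) rather than hand-rolled code like this:

```python
# A JWT is base64url(header) + "." + base64url(payload) + "." + base64url(signature),
# where the signature is an HMAC-SHA256 over the first two parts (for HS256).
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> bool:
    signing_input, _, sig = token.rpartition(".")
    expected = _b64url(
        hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    ).decode()
    # compare_digest avoids timing side channels on signature comparison
    return hmac.compare_digest(sig, expected)
```

A real verifier must additionally check the `alg` header and registered claims such as `exp`; this sketch only shows the signature mechanics.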
Posted 21 hours ago
1.0 - 6.0 years
6 - 13 Lacs
Bengaluru
Work from Office
Position Summary: We are seeking an experienced and highly skilled Lead LogicMonitor Administrator to architect, deploy, and manage scalable observability solutions across hybrid IT environments. This role demands deep expertise in LogicMonitor and a strong understanding of modern IT infrastructure and application ecosystems, including on-premises, cloud-native, and hybrid environments. The ideal candidate will play a critical role in designing real-time service availability dashboards, optimizing performance visibility, and ensuring comprehensive monitoring coverage for business-critical services.
Role & Responsibilities:
Monitoring Architecture & Implementation: Serve as the subject matter expert (SME) for LogicMonitor, overseeing design, implementation, and continuous optimization. Lead the development and deployment of monitoring solutions that integrate on-premises infrastructure, public cloud (AWS, Azure, GCP), and hybrid environments. Develop and maintain monitoring templates, escalation chains, and alerting policies that align with business service SLAs. Ensure monitoring solutions adhere to industry standards and compliance requirements.
Real-Time Dashboards & Visualization: Design and build real-time service availability dashboards to provide actionable insights for operations and leadership teams. Leverage LogicMonitor's APIs and data sources to develop custom visualizations, ensuring a single-pane-of-glass view for multi-layered service components. Collaborate with application and service owners to define KPIs, thresholds, and health metrics. Proficient in interpreting monitoring data and metrics related to uptime and performance.
Automation & Integration: Automate onboarding/offboarding of monitored resources using LogicMonitor's REST API, Groovy scripts, and configuration modules. Integrate LogicMonitor with ITSM tools (e.g., ServiceNow, Jira), collaboration platforms (e.g., Slack, Teams), and CI/CD pipelines.
Enable proactive monitoring through synthetic transactions and anomaly detection capabilities. Streamline processes through automation and integrate monitoring with DevOps practices.
Operations & Optimization: Perform ongoing health checks, capacity planning, tool version upgrades, and tuning of monitoring thresholds to reduce alert fatigue. Establish and enforce monitoring standards, best practices, and governance models across the organization. Lead incident response investigations, root cause analysis, and post-mortem reviews from a monitoring perspective. Optimize monitoring strategies for effective resource utilization and cost efficiency.
Qualification: Minimum Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Required Skills & Qualifications: 8+ years of total experience. 5+ years of hands-on experience with LogicMonitor, including custom DataSources, PropertySources, dashboards, and alert tuning. Proven expertise in IT infrastructure monitoring: networks, servers, storage, virtualization (VMware, Nutanix), and containerization (Kubernetes, Docker). Strong understanding of cloud platforms (AWS, Azure, GCP) and their native monitoring tools (e.g., CloudWatch, Azure Monitor). Experience in scripting and automation (e.g., Python, PowerShell, Groovy, Bash). Familiarity with observability stacks (ELK, Grafana) is a strong plus. Proficient with ITSM and incident management processes, including integrations with ServiceNow. Excellent problem-solving, communication, and documentation skills. Ability to work collaboratively in cross-functional teams and lead initiatives.
Preferred Qualifications: LogicMonitor certification (LMCA/LMCP) or similar. Experience with APM tools (e.g., SolarWinds, AppDynamics, Dynatrace, Datadog), log analytics platforms, and LogicMonitor Observability. Knowledge of DevOps practices and CI/CD pipelines.
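Automating resource onboarding through LogicMonitor's REST API starts with building its LMv1 authentication header; the Python sketch below follows LogicMonitor's documented LMv1 HMAC scheme, but the account ID and keys are placeholders, and the exact format should be verified against the current API documentation:

```python
# LMv1 auth (per LogicMonitor's documented scheme): HMAC-SHA256 over
# verb + epoch-ms + request body + resource path, hex-encoded then base64'd.
# Access IDs/keys below are placeholders, not real credentials.
import base64
import hashlib
import hmac

def lmv1_auth(access_id: str, access_key: str, verb: str,
              resource_path: str, body: str, epoch_ms: int) -> str:
    message = f"{verb}{epoch_ms}{body}{resource_path}"
    digest = hmac.new(access_key.encode(), message.encode(),
                      hashlib.sha256).hexdigest()
    signature = base64.b64encode(digest.encode()).decode()
    return f"LMv1 {access_id}:{signature}:{epoch_ms}"
```

The returned string goes into the `Authorization` header of each request, e.g. a GET against `/device/devices` to enumerate monitored resources.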
Exposure to regulatory/compliance monitoring (e.g., HIPAA, PCI, SOC 2). Experience with machine learning or AI-based monitoring solutions. Additional Information Intuitive is an Equal Employment Opportunity Employer. We provide equal employment opportunities to all qualified applicants and employees, and prohibit discrimination and harassment of any type, without regard to race, sex, pregnancy, sexual orientation, gender identity, national origin, color, age, religion, protected veteran or disability status, genetic information or any other status protected under federal, state, or local applicable laws. We will consider for employment qualified applicants with arrest and conviction records in accordance with fair chance laws.
Posted 1 day ago
2.0 - 7.0 years
4 - 9 Lacs
Hyderabad
Work from Office
Job Description: Arcadis Development teams within our Intelligence division deliver complex solutions and push the limits of technology solutions. Our talented groups of systems professionals do more than just write code and debug; they make a significant impact on the design and development of state-of-the-art projects. We are looking for a DevOps Engineer to join our growing and dynamic product team.
Responsibilities: Ensuring availability, performance, security, and scalability of production systems. Troubleshooting system issues causing downtime or performance degradation, with expertise in Agile software development methodologies. Implementing CI/CD pipelines, automating configuration management, and using Ansible playbooks. Enforcing DevOps practices in collaboration with software developers. Enhancing development and release processes through automation. Automating alerts for system availability and performance monitoring. Collaborating on defining security requirements and conducting tests to identify weaknesses. Building, securing, and maintaining on-premises and cloud infrastructures. Prototyping solutions, evaluating new tools, and engaging in incident handling and root cause analysis. Leading the automation effort and maintaining servers to the latest security standards. Understanding source code security vulnerabilities and maintaining infrastructure code bases using Puppet. Supporting and improving Docker-based development practices. Contributing to a maturing DevOps culture, showcasing a methodical approach to problem-solving, and following agile practices.
Qualifications: 2+ years of hands-on experience in DevOps on Linux-based systems. Proficiency in cloud technologies (AWS, Azure, GCP), CI/CD tools (Jenkins, Ansible, GitHub, Docker, etc.), and Linux user administration.
Expertise in OpenShift, along with managing infrastructure as code using Ansible and Docker. Experience in setup and management of DB2 databases. Experience with Maximo Application Suite is desirable. Experience in setting up application and infrastructure monitoring tools like Prometheus, Grafana, cAdvisor, Node Exporter, and Sentry. Experience working with log analysis and monitoring tools in a distributed application scenario, with independent analysis of problems and implementation of solutions. Experience with Change Management and Release Management in Agile methodology; DNS management is desirable. Experience in routine security scanning for malicious software and suspicious network activity, along with protocol analysis to identify and remedy network performance issues.
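The log analysis mentioned above can be as simple as computing per-service error rates from structured log lines; a minimal Python sketch with an invented `service level message` line format:

```python
# Per-service error-rate computation over structured log lines.
# The "service level message" line format here is invented for illustration.
from collections import Counter

def error_rates(lines: list[str]) -> dict[str, float]:
    """Return the fraction of ERROR-level lines per service."""
    total, errors = Counter(), Counter()
    for line in lines:
        service, level, _message = line.split(" ", 2)
        total[service] += 1
        if level == "ERROR":
            errors[service] += 1
    return {service: errors[service] / total[service] for service in total}
```

The same aggregation is what an ELK or Grafana dashboard panel computes, just expressed as a query instead of code.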
Posted 3 days ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems, the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. The Platform and Product Team is shaping one of the key growth vector areas for ZS. Our engagements comprise clients from industries like quick service restaurants, technology, food & beverage, hospitality, travel, insurance, and consumer packaged goods, across the North America, Europe, and South East Asia regions. The Platform and Product India team currently has a presence across the New Delhi, Pune, and Bengaluru offices and is continuously expanding at a great pace.
The Platform and Product India team works with colleagues across clients and geographies to create and deliver real-world pragmatic solutions leveraging AI SaaS products and platforms, generative AI applications, and other advanced analytics solutions at scale.
What You'll Do: Work with cloud technologies: AWS, Azure, or GCP. Create container images and maintain container registries. Create, update, and maintain production-grade applications on Kubernetes clusters and cloud. Adopt a GitOps approach to maintain deployments. Create YAML scripts and Helm charts for Kubernetes deployments as required. Take part in cloud design and architecture decisions and support lead architects in building cloud-agnostic applications. Create and maintain Infrastructure-as-Code templates to automate cloud infrastructure deployment. Create and manage CI/CD pipelines to automate containerized deployments to cloud and Kubernetes. Maintain Git repositories, and establish a proper branching strategy and release management processes. Support and maintain source code management and build tools. Monitor applications on cloud and Kubernetes using tools like ELK, Grafana, Prometheus, etc. Automate day-to-day activities using scripting. Work closely with the development team to implement new build processes and strategies to meet new product requirements. Troubleshooting, problem solving, root cause analysis, and documentation related to build, release, and deployments. Ensure that systems are secure and compliant with industry standards.
What You'll Bring: A master's or bachelor's degree in computer science or a related field from a top university. 2-4+ years of hands-on experience in DevOps. Hands-on experience designing and deploying applications to the cloud (AWS/Azure/GCP). Expertise in deploying and maintaining applications on Kubernetes. Technical expertise in release automation engineering, CI/CD, or related roles.
Hands-on experience writing Terraform templates as IaC, Helm charts, and Kubernetes manifests. Strong command of Linux commands and script automation. Technical understanding of development tools, source control, and continuous integration build systems, e.g., Azure DevOps, Jenkins, GitLab, TeamCity, etc. Knowledge of deploying LLM models and toolchains. Configuration management of various environments. Experience working in agile teams with short release cycles. Good to have: programming experience in Python/Go. Characteristics of a forward thinker and self-starter who thrives on new challenges and adapts quickly to learning new knowledge.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.
Travel: Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.
Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
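The Helm/YAML templating this posting describes boils down to generating Kubernetes manifests from a few parameters; a hedged Python sketch producing a minimal Deployment document (the names and defaults are illustrative, and a real chart would template far more fields):

```python
# Generate a minimal Kubernetes apps/v1 Deployment manifest as a dict;
# serializing it to YAML/JSON yields what `kubectl apply -f` consumes.
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector labels must match the pod template labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Helm's `values.yaml` plays the role of the function parameters here; the chart templates play the role of the dict literal.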
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.
To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
Posted 3 days ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Job Title: Site Reliability Engineer. Department: Engineering / Infrastructure. Reports To: SRE Manager / DevOps Lead. Location: Bangalore, India.
Role Summary: The Site Reliability Engineer (SRE) will be responsible for ensuring the availability, performance, and scalability of critical systems. This role involves managing CI/CD pipelines, monitoring production environments, automating operations, and driving platform reliability improvements in collaboration with development and infrastructure teams.
Key Responsibilities: Manage alerts and monitoring of critical production systems. Operate and enhance CI/CD pipelines and improve deployment and rollback strategies. Work with central platform teams on reliability initiatives. Automate testing, regression, and build tooling for operational efficiency. Execute NFR testing on production systems. Plan and implement Debian version migrations with minimal disruption.
Required Qualifications & Skills: CI/CD and Packaging Tools: hands-on experience with Jenkins, Docker, and JFrog for packaging and deployment. Operating System Expertise: experience in Debian OS migration and upgrade processes. Monitoring Systems: knowledge of Grafana, Nagios, and other observability tools. Configuration Management: proficiency with Ansible, Puppet, or Chef. Version Control: working knowledge of Git and related version control systems. Kubernetes: deep understanding of Kubernetes architecture, deployment pipelines, and debugging; ability to deploy components with detailed insights into configuration parameters and system requirements, monitoring and alerting needs, performance tuning, and designing for high availability and fault tolerance. Networking: understanding of TCP/IP, UDP, multicast, and broadcast; experience with tcpdump and Wireshark for network diagnostics. Linux & Databases: strong skills in Linux tools and scripting; familiarity with MySQL and NoSQL database systems.
Soft Skills Strong problem-solving and analytical skills Effective communication and collaboration with cross-functional teams Ownership mindset and accountability Adaptability to fast-paced and dynamic environments Detail-oriented and proactive approach Preferred Qualifications Bachelor’s degree in Computer Science, Engineering, or related technical field Certifications in Kubernetes (CKA/CKAD), Linux, or DevOps practices Experience with cloud platforms (AWS, GCP, Azure) Exposure to service mesh, observability stacks, or SRE toolkits Key Relationships Internal: DevOps, Infrastructure, Software Development, QA, Security Teams External: Tool vendors, platform service providers (if applicable) Role Dimensions Impact on uptime and reliability of business-critical services Ownership of CI/CD and production deployment processes Contributor to cross-team reliability and scalability initiatives Success Measures (KPIs) System uptime and availability (SLA adherence) Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) incidents Deployment success rate and rollback frequency Automation coverage of operational tasks Completion of OS migration and infrastructure upgrade projects Competency Framework Alignment Technical Mastery: Infrastructure, automation, CI/CD, Kubernetes, monitoring Execution Excellence: Timely project delivery, process improvements Collaboration: Cross-functional team engagement and support Resilience: Problem solving under pressure and incident response Innovation: Continuous improvement of operational reliability and performance
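Two of the KPIs above, MTTD and MTTR, are straightforward to compute from incident records; a Python sketch where the record shape (epoch-second `started`/`detected`/`resolved` timestamps) is an assumption for illustration:

```python
# Incident KPI arithmetic: each incident record carries epoch-second
# timestamps for when the fault started, was detected, and was resolved.
def mttd_minutes(incidents: list[dict]) -> float:
    """Mean Time to Detect: average of (detected - started), in minutes."""
    gaps = [(i["detected"] - i["started"]) / 60 for i in incidents]
    return sum(gaps) / len(gaps)

def mttr_minutes(incidents: list[dict]) -> float:
    """Mean Time to Resolve: average of (resolved - detected), in minutes."""
    gaps = [(i["resolved"] - i["detected"]) / 60 for i in incidents]
    return sum(gaps) / len(gaps)
```

Tracking these two numbers per sprint or per quarter is how the "Success Measures" above become a trend line rather than a slogan.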
Posted 3 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to the Azure SaaS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure.
How will you make an impact? Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions on-prem and in the public cloud. Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows. Create integrations with REST APIs and other services to ingest and process external/internal data. Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured). Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers. Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance. Build automated data pipelines with data consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards. Automate resource provisioning across Azure services including AKS, Web Apps, and storage solutions. Experience in building Infrastructure-as-Code (IaC) solutions using tools like Terraform, Bicep, or ARM templates. Develop end-to-end workflow automation in the customer onboarding journey that spans from Day 1 to Day 2 with minimal manual intervention. Have you got what it takes?
Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience). Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash). Experience working with and integrating REST APIs. Experience with IaC and configuration management tools (e.g., Terraform, Ansible). Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana). Familiarity with modern version control systems (e.g., GitHub). Excellent problem-solving skills and attention to detail. Ability to work with development and operations teams to achieve desired results on common projects. Strategic thinker, capable of learning new technologies quickly. Good communication with peers, subordinates, and managers.
You will have an advantage if you also have: Experience with AKS infrastructure administration. Experience orchestrating automation with Azure Automation tools like Logic Apps. Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO). Certifications in vendor- or industry-specific technologies.
What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
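The data consistency and reconciliation checks described in this role can be sketched as a comparison of id-to-checksum maps between a source and a target system; a minimal Python illustration (the row shape is an assumption, and real pipelines would compute checksums from row contents):

```python
# Reconciliation check between a source and a target system, each summarized
# as an id -> checksum map; reports missing, extra, and mismatched rows.
def reconcile(source: dict[str, str], target: dict[str, str]) -> dict:
    missing = sorted(set(source) - set(target))       # in source, not migrated
    extra = sorted(set(target) - set(source))         # in target only
    mismatched = sorted(
        key for key in set(source) & set(target) if source[key] != target[key]
    )
    return {
        "missing": missing,
        "extra": extra,
        "mismatched": mismatched,
        "in_sync": not (missing or extra or mismatched),
    }
```

The output dict is exactly the kind of per-batch summary that feeds a Power BI or Grafana dashboard panel.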
Requisition ID: 7454 Reporting into: Director Role Type: Individual Contributor
Posted 3 days ago
4.0 - 7.0 years
11 - 16 Lacs
Pune
Hybrid
So, what’s the role all about? In this position we are looking for a strong DevOps Engineer to work with Professional Services teams, Solution Architects, and Engineering teams, managing on-prem to Azure cloud onboarding and cloud infrastructure & DevOps solutions. The engineer will work with the US and Pune Cloud Services and Operations teams as well as other support teams across the globe. We are seeking a talented DevOps Engineer with strong PowerShell scripting skills to join our team. As a DevOps Engineer, you will be responsible for developing and implementing cloud automation workflows and enhancing our cloud monitoring and self-healing capabilities, as well as managing our infrastructure and ensuring its reliability, scalability, and security. We encourage innovative ideas, flexible work methods, knowledge collaboration, and good vibes!
How will you make an impact? Define, build, and manage automated cloud workflows enhancing overall customer experience in the Azure SaaS environment, saving time, cost, and resources. Automate pre-/post-host/tenant upgrade checklists and processes in the Azure SaaS environment. Implement and manage the continuous integration and delivery pipeline to automate software delivery processes. Collaborate with software developers to ensure that new features and applications are deployed in a reliable and scalable manner. Automate the DevOps pipeline and provisioning of environments. Manage and maintain our cloud infrastructure, including provisioning, configuration, and monitoring of servers and services. Provide technical guidance and support to other members of the team. Manage Docker containers and Kubernetes clusters to support our microservices architecture and containerized applications. Implement and manage networking, storage, security, and monitoring solutions for Docker and Kubernetes environments. Experience with integration of service management, monitoring, logging, and reporting tools like ServiceNow, Grafana, Splunk, Power BI, etc.
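The pre-/post-upgrade checklist automation described above amounts to running a set of named checks and summarizing pass/fail; a minimal Python sketch where the check names and shapes are illustrative (real checks would probe disk space, backups, service health, and so on):

```python
# Run a dict of named checks (each a zero-argument callable returning bool)
# and summarize the results; a single failure fails the whole checklist.
from typing import Callable

def run_checklist(checks: dict[str, Callable[[], bool]]) -> dict:
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            # a check that crashes counts as a failure, not a pipeline crash
            results[name] = False
    return {"results": results, "passed": all(results.values())}
```

In a real pipeline the summary would gate the upgrade step and be posted to the ticketing/monitoring tools the posting lists.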
Have you got what it takes?
4-7 years of experience as a DevOps engineer, preferably with Azure.
Strong understanding of Kubernetes & Docker, Ansible, Terraform, and Azure SaaS infrastructure.
Strong understanding of DevOps tools such as AKS, Azure DevOps, GitHub, GitHub Actions, and logging mechanisms.
Working knowledge of Azure services and compliance standards such as CJIS, PCI, and SOC.
Exposure to enterprise software architectures, infrastructures, and integration with Azure (or any other cloud solution).
Experience with application monitoring metrics.
Hands-on experience with PowerShell, Bash, and Python.
Good knowledge of Linux and Windows servers.
Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices.
Consistently demonstrates clear and concise written and verbal communication.
Passionately enthusiastic about DevOps & cloud technologies.
Ability to work independently, multi-task, and take ownership of various parts of a project or initiative.
Azure certifications in DevOps and Architecture are good to have.
What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7452 Reporting into: Director Role Type: Individual Contributor
Posted 3 days ago
3.0 - 5.0 years
3 - 6 Lacs
Pune
Work from Office
What You'll Do:
CI/CD Pipeline Management: Design, implement, and maintain robust CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps, CircleCI) to automate the build, test, and deployment processes across various environments (Dev, QA, Staging, Production).
Infrastructure as Code (IaC): Develop and manage infrastructure using IaC tools (e.g., Terraform, Ansible, CloudFormation, Puppet, Chef) to ensure consistency, repeatability, and scalability of our cloud and on-premise environments.
Cloud Platform Management: Administer, monitor, and optimize resources on cloud platforms (e.g., AWS, Azure, GCP), including compute, storage, networking, and security services.
Containerization & Orchestration: Implement and manage containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for efficient application deployment, scaling, and management.
Monitoring & Alerting: Set up and maintain comprehensive monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK Stack, Nagios, Splunk, Datadog) to proactively identify and resolve performance bottlenecks and issues.
Scripting & Automation: Write and maintain scripts (e.g., Python, Bash, PowerShell, Go, Ruby) to automate repetitive tasks, improve operational efficiency, and integrate various tools.
Version Control: Manage source code repositories (e.g., Git, GitHub, GitLab, Bitbucket) and implement branching strategies to facilitate collaborative development and version control.
Security & Compliance (DevSecOps): Integrate security best practices into the CI/CD pipeline and infrastructure, ensuring compliance with relevant security policies and industry standards.
Troubleshooting & Support: Provide Level 2 support, perform root cause analysis for production incidents, and collaborate with development teams to implement timely fixes and preventive measures.
Collaboration: Work closely with software developers, QA engineers, and other stakeholders to understand their needs, provide technical guidance, and foster a collaborative and efficient development lifecycle.
Documentation: Create and maintain detailed documentation for infrastructure, processes, and tools.
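The Monitoring & Alerting and Scripting & Automation duties above can be illustrated with a minimal, hedged Python sketch: a helper (all names hypothetical, for illustration only) that parses Prometheus-style exposition text and flags metrics that cross an alert threshold.

```python
# Illustrative sketch (hypothetical names): parse Prometheus-style
# "name value" exposition text and flag metrics above a threshold.

def parse_metrics(text: str) -> dict[str, float]:
    """Parse 'name value' lines, ignoring comments and blank lines."""
    metrics: dict[str, float] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue  # skip malformed lines rather than failing the whole scrape
    return metrics


def breached(metrics: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the metric names whose current value exceeds their threshold."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0.0) > limit]


if __name__ == "__main__":
    sample = """
    # HELP node_cpu_usage CPU usage ratio
    node_cpu_usage 0.93
    node_memory_usage 0.41
    """
    print(breached(parse_metrics(sample),
                   {"node_cpu_usage": 0.9, "node_memory_usage": 0.8}))
```

In practice a tool like Prometheus Alertmanager would handle thresholds; a script like this only sketches the kind of glue automation the role describes.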
Posted 3 days ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what’s the role all about?
At NiCE, a Senior Software professional specializes in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.
How will you make an impact?
Working knowledge of unit testing.
Working knowledge of user stories or use cases.
Working knowledge of design patterns or equivalent experience.
Working knowledge of object-oriented software design.
Team player.
Have you got what it takes?
Bachelor’s degree in computer science, Business Information Systems, or a related field, or equivalent work experience, is required.
4+ years (SE) of experience in software development.
Well-established technical problem-solving skills.
Experience in Java, Spring Boot, and microservices.
Experience with Kafka, Kinesis, KDA, and Apache Flink.
Experience with Kubernetes operators, Grafana, and Prometheus.
Experience with AWS technology, including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc.
You will have an advantage if you also have:
Experience with Snowflake or any DWH solution.
Excellent communication, problem-solving, and decision-making skills.
Experience with databases.
Experience with CI/CD, Git, and GitHub Actions/Jenkins-based pipeline deployments.
Strong experience in SQL.
What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 3 days ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you would be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl’s clients. This role covers efficient and effective data and log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes.
Responsibilities:
Design, implement, and maintain scalable data pipelines using the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats for monitoring and analytics.
Develop data processing workflows to handle real-time and batch data ingestion, transformation, and visualization.
Apply techniques such as grok patterns, regular expressions, and plugins to handle complex log formats and structures.
Configure and optimize Elasticsearch clusters for efficient indexing, searching, and performance tuning.
Collaborate with business users to understand their data integration and visualization needs and translate them into technical solutions.
Create dynamic and interactive dashboards in Kibana for data visualization and insights that help detect the root cause of issues.
Leverage open-source tools such as Beats and Python to integrate and process data from multiple sources.
Collaborate with cross-functional teams to implement ITSM solutions integrating ELK with tools like ServiceNow and other ITSM platforms.
Perform anomaly detection using Elastic ML and create alerts using the Watcher functionality.
Extract data programmatically in Python via APIs.
Build and deploy solutions in containerized environments using Kubernetes.
Monitor Elasticsearch clusters for health, performance, and resource utilization.
Automate routine tasks and data workflows using scripting languages such as Python or shell scripting.
Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues.
Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides.
If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Experience:
Minimum of 5 years of experience with the ELK Stack and Python programming.
Graduate/postgraduate in computer science, computer engineering, or equivalent, with a minimum of 10 years of experience in the IT industry.
ELK Stack: Deep expertise in Elasticsearch, Logstash, Kibana, and Beats.
Programming: Proficiency in Python for scripting and automation.
ITSM Platforms: Hands-on experience with ServiceNow or similar ITSM tools.
Containerization: Experience with Kubernetes and containerized applications.
Operating Systems: Strong working knowledge of Windows, Linux, and AIX environments.
Open-Source Tools: Familiarity with various open-source data integration and monitoring tools.
Knowledge of network protocols, log management, and system performance optimization.
Experience in integrating ELK solutions with enterprise IT environments.
Strong analytical and problem-solving skills with attention to detail.
Knowledge of MySQL or NoSQL databases is an added advantage.
Fluent in English (written and spoken).
Preferred Technical and Professional Experience
“Elastic Certified Analyst” or “Elastic Certified Engineer” certification is preferable.
Familiarity with additional monitoring tools like Prometheus, Grafana, or Splunk.
Knowledge of cloud platforms (AWS, Azure, or GCP).
Experience with DevOps methodologies and tools.
Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey.
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 4 days ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, the Silicon Slopes Hall of Fame (Software Company, 2022), and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. Our product isn't just desirable; it's industry essential. At Entrata, we passionately refine living experiences and uphold collective excellence.
Job Summary
Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply!
Key Responsibilities
Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments.
Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security.
Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP.
Monitor and enhance system observability using Prometheus, Grafana, and the ELK Stack to ensure proactive issue detection and resolution.
Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows.
Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner.
Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives.
Optimize AWS cloud resource utilization and cost while maintaining high availability and performance.
Establish and maintain disaster recovery and high-availability strategies to ensure system resilience.
Improve incident response and on-call processes by following SRE principles and automating issue resolution.
Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations.
Stay up-to-date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies.
Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure.
Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle.
Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams.
Minimum Qualifications
3+ years of experience as a DevOps Engineer or in a similar role.
Strong proficiency in Kubernetes, Docker, and AWS.
Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD).
Solid scripting and automation skills with Bash, Python, PHP, or Ansible.
Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack.
Understanding of DevSecOps principles, security best practices, and vulnerability management.
Strong problem-solving skills and the ability to troubleshoot cloud infrastructure and deployment issues effectively.
Preferred Qualifications
Experience with GitOps methodologies using ArgoCD or Flux.
Familiarity with SRE principles and managing incident response for high-availability applications.
Knowledge of serverless architectures and AWS cost optimization strategies.
Hands-on experience with compliance and governance automation for cloud security.
Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation.
Strong communication skills and the ability to mentor junior engineers on DevOps best practices.
If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we encourage you to apply!
Posted 4 days ago
6.0 - 11.0 years
11 - 16 Lacs
Pune
Work from Office
Project description
We are looking for a seasoned Performance Test Engineer to join our dynamic team. Your role will involve working closely with a group of talented software developers to create new APIs and ensure the smooth functioning of existing ones within the Azure environment.
Responsibilities
Understand the non-functional requirements (NFRs) from NFR documents and meetings with business and platform owners.
Understand the business and the infrastructure involved in the project.
Understand the critical business scenarios from developers and the business.
Prepare the Performance Test Strategy and Test Plan.
Communicate with the business/development team manager regularly through daily/weekly reports.
Develop the test scripts and workload modelling.
Execute sanity tests, load tests, soak tests, and stress tests (as required by the project).
Organise meetings with all the relevant teams (developers, infrastructure, etc.) to monitor core applications during test execution.
Execute the tests and analyse the test results.
Prepare the test summary report.
Skills
Must have
6+ years of experience in performance engineering.
Expert in Micro Focus LoadRunner and Apache JMeter, with programming/scripting experience in C/C++, Java, Perl, Python, and SQL.
Proven performance testing experience across multiple platform architectures and technologies such as microservices and REST APIs is advantageous, as is exposure to projects moving workloads to cloud environments (AWS or Azure).
Exposure to open-source data visualisation tools.
Experience working with APM tools like AppDynamics.
Nice to have
Core Banking, Jira, Agile, Grafana.
Banking domain experience.
Other Languages
English: C1 Advanced
Seniority
Senior
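Analysing load-test results, as described above, typically means reducing raw response times to the percentile figures reported against NFR targets. A minimal, hedged Python sketch (function names hypothetical; real projects would read these numbers from LoadRunner or JMeter result files):

```python
import math

# Illustrative sketch: summarise response-time samples (in ms) into the
# percentile figures usually compared against non-functional requirements.

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample at or above p% of the data."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]


def summarise(samples: list[float]) -> dict[str, float]:
    """Report the headline latency figures for a test run."""
    return {
        "p50": percentile(samples, 50),
        "p90": percentile(samples, 90),
        "p99": percentile(samples, 99),
        "max": max(samples),
    }
```

Nearest-rank is only one of several percentile definitions; JMeter and LoadRunner each have their own aggregation rules, so figures from this sketch may differ slightly from tool-reported values.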
Posted 4 days ago
6.0 - 11.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Project description
We are looking for a seasoned Performance Test Engineer to join our dynamic team. Your role will involve working closely with a group of talented software developers to create new APIs and ensure the smooth functioning of existing ones within the Azure environment.
Responsibilities
Understand the non-functional requirements (NFRs) from NFR documents and meetings with business and platform owners.
Understand the business and the infrastructure involved in the project.
Understand the critical business scenarios from developers and the business.
Prepare the Performance Test Strategy and Test Plan.
Communicate with the business/development team manager regularly through daily/weekly reports.
Develop the test scripts and workload modelling.
Execute sanity tests, load tests, soak tests, and stress tests (as required by the project).
Organise meetings with all the relevant teams (developers, infrastructure, etc.) to monitor core applications during test execution.
Execute the tests and analyse the test results.
Prepare the test summary report.
Skills
Must have
6+ years of experience in performance engineering.
Expert in Micro Focus LoadRunner and Apache JMeter, with programming/scripting experience in C/C++, Java, Perl, Python, and SQL.
Proven performance testing experience across multiple platform architectures and technologies such as microservices and REST APIs is advantageous, as is exposure to projects moving workloads to cloud environments (AWS or Azure).
Exposure to open-source data visualisation tools.
Experience working with APM tools like AppDynamics.
Nice to have
Core Banking, Jira, Agile, Grafana.
Banking domain experience.
Other Languages
English: C1 Advanced
Seniority
Senior
Posted 4 days ago