4.0 - 9.0 years
11 - 21 Lacs
Pune, Bengaluru
Hybrid
Job Role & responsibilities:
- Manage cluster installations and configurations on OpenShift.
- Lead the development of robust solutions for Jenkins pipelines and task automation.
- Drive backup/restore activities and ensure optimal application delivery.
- Optimize resource deployment, including nodes and certificates.
- Mentor junior engineers and oversee GitHub repository management.
- Oversee team operations with a focus on daily activities and failure analysis.
- Design and implement secure, scalable systems tailored to business needs.
- Research emerging trends and apply innovative solutions.
Technical Skills & Experience required:
- 4-10 years of experience.
- Must be flexible for 24/7 rotational shifts.
- Cloud Platforms: advanced expertise in AWS and services such as CloudFront and Lambda.
- Operating Systems: extensive experience with Linux and security patch deployment.
- Automation & Orchestration: expertise in Terraform, Ansible, and Kubernetes/Docker.
- Database Skills: proficiency in Oracle, MS SQL, DynamoDB, or Redis.
- Networking Concepts: advanced understanding of networking protocols and systems.
- Programming Skills: strong scripting experience in Python, NodeJS, and OpenAPI.
- Monitoring Tools: expertise in tools like Grafana and Nginx.
- Clear and effective communication.
- Agile methodology orientation.
- Ability to work autonomously and in collaboration with international teams.
Only immediate joiners who can join on 17th July are preferred.
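As a rough illustration of the Python-plus-AWS scripting this listing asks for, here is a minimal, hedged sketch that invokes an AWS Lambda function with boto3. The function name, region, and payload are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: invoking a Lambda function from a Python automation script.
# Function name, region, and payload are illustrative placeholders.
import json
import boto3

lambda_client = boto3.client("lambda", region_name="ap-south-1")

response = lambda_client.invoke(
    FunctionName="cache-invalidation-helper",   # hypothetical function name
    InvocationType="RequestResponse",           # synchronous call, wait for the result
    Payload=json.dumps({"distribution_id": "EXAMPLE123"}).encode("utf-8"),
)

result = json.loads(response["Payload"].read())
print(result)
```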
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Cloud Services Developer at SAP, you will be an integral part of our dynamic team working on building a cutting-edge Database as a Service. Your role will involve designing, developing, and delivering a scalable, secure, and highly available database solution that caters to the evolving needs of our customers. Your responsibilities will include collaborating closely with internal teams like product management, designers, and end-users to ensure the success of the product. Leading a team of software developers, you will contribute to the development of new products and features based on customer requirements for a wide range of use cases. Your technical expertise will be crucial in ensuring adherence to design principles, coding standards, and best practices. Troubleshooting and resolving complex issues related to cloud services performance, scalability, and security will be part of your day-to-day tasks. Additionally, you will develop and maintain automated testing frameworks to ensure the quality and reliability of the services. Staying updated with the latest advancements in cloud computing, database technologies, and distributed systems is essential for this role. To be successful in this position, you should have a bachelor's or master's degree in computer science or equivalent, along with at least 6 years of hands-on development experience in programming languages such as Go, Python, or Java. Good expertise in data structures/algorithms, experience with cloud computing platforms like AWS, Azure, or Google Cloud, and familiarity with container and orchestration technologies such as Docker and Kubernetes are necessary qualifications. Knowledge of monitoring tools like Grafana & Prometheus, cloud security best practices, and compliance requirements will also be beneficial. Your passion for solving distributed systems challenges, experience in large-scale data architecture, data modeling, database design, and information systems implementation, coupled with excellent communication and collaboration skills, will make you a valuable asset to our team. Join us at SAP, where we foster a culture of inclusion, health, well-being, and flexible working models to ensure that everyone, regardless of background, feels included and can perform at their best. We are committed to providing accessibility accommodations to applicants with physical and/or mental disabilities and believe in unleashing all talent to create a better and more equitable world. SAP is an equal opportunity workplace and an affirmative action employer. We value Equal Employment Opportunity and strive to create a diverse and inclusive workforce. If you are interested in applying for a role at SAP, please reach out to our Recruiting Operations Team at Careers@sap.com for any accommodation or special assistance needed during the application process. Please note that successful candidates may be required to undergo a background verification with an external vendor. Requisition ID: 396933 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .,
Posted 3 weeks ago
12.0 - 16.0 years
0 Lacs
pune, maharashtra
On-site
Success in the role requires agility and results orientation, strategic and innovative thinking, a proven track record of delivering new customer-facing software products at scale, rigorous analytical skills, and a passion for automation and data-driven approaches to solving problems. As a Director of eCommerce Engineering, your responsibilities include overseeing and leading the engineering project delivery for the ECommerce Global Multi-Tenant Platform. You will ensure high availability, scalability, and performance to support global business operations. Defining and executing the engineering strategy that aligns with the company's business goals and long-term vision for omnichannel retail is crucial. Establishing robust processes for code reviews, testing, and deployment to ensure high-quality deliverables is also part of your role. You will actively collaborate with Product Management, Business Stakeholders, and other Engineering Teams to define project requirements and deliver customer-centric solutions. Serving as a key point of contact for resolving technical challenges and ensuring alignment between business needs and technical capabilities. Promoting seamless communication between teams to deliver cross-functional initiatives on time and within budget is essential. Building a strong and diverse engineering team by attracting, recruiting, and retaining top talent is a key responsibility. Designing and implementing a robust onboarding program to ensure new hires are set up for success. Coaching team members to enhance technical expertise, problem-solving skills, and leadership abilities, fostering a culture of continuous learning and improvement. Maintaining a strong pipeline of talent by building relationships with local universities, engineering communities, and industry professionals is also part of your role. You will define clear, measurable goals for individual contributors and teams to ensure alignment with broader organizational objectives. Conducting regular one-on-one meetings to provide personalized feedback, career guidance, and development opportunities. Managing performance reviews and recognizing high-performing individuals, while providing coaching and support to those needing improvement. Fostering a culture of accountability, where team members take ownership of their work and deliver results. Championing the adoption of best practices in software engineering, including agile methodologies, DevOps, and automation is crucial. Facilitating and encouraging knowledge sharing and expertise in critical technologies, such as cloud computing, microservices, and AI/ML. Evaluating and introducing emerging technologies that align with business goals, driving innovation and competitive advantage is part of your responsibility. Developing and executing a continuous education program to upskill team members on key technologies and the Williams-Sonoma business domain is essential. Organizing training sessions, workshops, and certifications to keep the team updated on the latest industry trends. Encouraging team members to actively participate in tech conferences, hackathons, and seminars to broaden their knowledge and network is also important. Accurately estimating development efforts for projects, considering complexity, risks, and resource availability. Developing and implementing project plans, timelines, and budgets to deliver initiatives on schedule. 
Overseeing system rollouts and implementation efforts to ensure smooth transitions and minimal disruptions to business operations. Optimizing resource allocation to maximize team productivity and ensure proper workload distribution is a key responsibility. Championing initiatives to improve the engineering organization's culture, focusing on collaboration, transparency, and inclusivity. Continuously evaluating and refining engineering processes to increase efficiency and reduce bottlenecks. Promoting team well-being by fostering a positive and supportive work environment where engineers feel valued and motivated. Leading efforts to make the organization a "Great Place to Work," including regular engagement activities, mentorship programs, and open communication. Developing a deep understanding of critical systems and processes, including platform architecture, APIs, data pipelines, and DevOps practices. Providing technical guidance to the team, addressing complex challenges, and ensuring alignment with architectural best practices. Partnering with senior leaders to align technology decisions with business priorities and future-proof the company's systems. Playing a pivotal role in transforming Williams-Sonoma into a leading technology organization by implementing cutting-edge solutions in eCommerce, Platform Engineering, AI, ML, and Data Science. Driving the future of omnichannel retail by conceptualizing and delivering innovative products and features that enhance customer experiences. Actively representing the organization in the technology community, building a strong presence through speaking engagements, partnerships, and contributions to open-source projects. Identifying opportunities for process automation and optimization to improve operational efficiency. Being adaptable to perform other duties as required, addressing unforeseen challenges, and contributing to organizational goals. Staying updated on industry trends and competitive landscapes to ensure the company remains ahead of the curve. Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Founded in 1956, it is now one of the United States" largest e-commerce retailers with well-known brands in home furnishings. The India Technology Center serves as a critical hub for innovation, focusing on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. Through advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.,
Posted 3 weeks ago
3.0 - 8.0 years
0 Lacs
karnataka
On-site
About Groww Groww is a team of dedicated individuals committed to providing financial services to every Indian through a diverse product platform. Our focus is on empowering millions of customers to take control of their financial journey. At Groww, customer obsession drives our actions, ensuring that every product, design, and algorithm is tailored to meet the needs and convenience of our customers. Our team is our greatest asset, embodying qualities of ownership, customer-centricity, integrity, and a drive to challenge the status quo constantly. Vision Our vision at Groww is to equip every individual with the knowledge, tools, and confidence to make informed financial decisions. Through our cutting-edge multi-product platform, we aim to empower every Indian to take charge of their financial well-being. Our ultimate goal is to become the trusted financial partner for millions of Indians. Values Our culture at Groww has played a pivotal role in establishing us as India's fastest-growing financial services company. It fosters an environment of collaboration, transparency, and open communication, where hierarchies diminish, and every individual is encouraged to bring their best selves to work, shaping a promising career path. The foundational values that guide us are radical customer-centricity, ownership-driven culture, simplicity in approach, long-term thinking, and complete transparency. Job Requirement As a prospective team member at Groww, you will be tasked with the following responsibilities: - Collaborating with developers to ensure new environments meet requirements and adhere to best practices. - Playing a crucial role in product scoping, roadmap discussions, and architecture decisions. - Ensuring team cohesion and alignment towards common goals. - Tracking external project dependencies effectively. - Leading organizational improvement efforts. - Cultivating a positive engineering culture focused on reducing technical debt. - Overseeing the team's sprint execution. - Demonstrating a clear understanding of the project domain and engaging in discussions with other team members effectively. - Staying informed and skilled in the latest cloud, infrastructure, orchestration, and automation technologies. - Reviewing the current environment for deficiencies and proposing solutions for enhancement. - Aligning all delivered capabilities with business objectives, IT strategies, and design intent. - Recommending new technologies for improved DevOps services. - Driving project scope definition and backlog management. - Collaborating with architecture and infrastructure delivery teams for consistent solution design and integration. - Working with the engineering team to implement infrastructure lifecycle practices. Qualifications: - 8+ years of experience in planning and implementing cloud infrastructure and DevOps solutions. - Minimum of 3 years of experience in leading DevOps teams. - Proficiency in working with development teams on microservice architectures. - Degree in Computer Science. - Hands-on experience in designing and operating DevOps systems. - Experience in deploying high-performance systems with robust monitoring and logging practices. - Strong communication skills, both written and verbal. - Expertise in Cloud Infrastructure solutions like Microsoft Azure, Google Cloud, or AWS. - Experience in managing containerized environments using Docker, Mesos/Kubernetes. - Familiarity with multiple data stores such as MySQL, MongoDB, Cassandra, Elasticsearch. 
- Experience in designing observability platforms using open-source tools like Mimir, Thanos, Prometheus, and Grafana. - Knowledge of cloud infrastructure security and Kubernetes security.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
navi mumbai, maharashtra
On-site
As a skilled ELK (ElasticSearch, Logstash, and Kibana) Stack Engineer, you will be responsible for designing and implementing ELK stack solutions to create and manage large-scale elastic search clusters on Production and DR environments. Your primary focus will be on designing end-to-end solutions that emphasize performance, reliability, scalability, and maintainability. You will collaborate with subject matter experts (SMEs) to create prototypes and adopt agile and DevOps practices to align with the product delivery lifecycle. Automation of processes using relevant tools and frameworks will be a key aspect of your role. Additionally, you will work closely with Infrastructure and development teams for capacity planning and deployment strategy to achieve a highly available and scalable architecture. Your proficiency in developing ELK stack solutions, including Elasticsearch, Logstash, and Kibana, will be crucial. Experience in upgrading Elasticsearch across major versions, managing large applications in production environments, and proficiency in Python is required. Familiarity with Elasticsearch Painless scripting language, Linux/Unix operating systems (preferably CentOS/RHEL), Oracle PL/SQL, scripting technologies, Git, Jenkins, Ansible, Docker, ITIL, Agile, Jira, Confluence, and security best practices will be advantageous. You should be well versed in applications/infrastructure logging and monitoring tools like SolarWinds, Splunk, Grafana, and Prometheus. Your skills should include configuring, maintaining, tuning, administering, and troubleshooting Elasticsearch clusters in a cloud environment, understanding Elastic cluster architecture, design, and deployment, and handling JSON data ingest proficiently. Agile development experience, proficiency in source control using Git, and excellent communication skills to collaborate with DevOps, Product, and Project Management teams are essential. Your initiative and problem-solving abilities will be crucial in this role, along with the ability to work in a dynamic, fast-moving environment, prioritize tasks effectively, and manage time optimally.,
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
You should have hands-on experience working with Elasticsearch, Logstash, Kibana, Prometheus, and Grafana monitoring systems. Your responsibilities will include installation, upgrade, and management of ELK, Prometheus, and Grafana systems. You should be proficient in ELK, Prometheus, and Grafana Administration, Configuration, Performance Tuning, and Troubleshooting. Additionally, you must have knowledge of various clustering topologies such as Redundant Assignments, Active-Passive setups, and experience in deploying clusters on multiple Cloud Platforms like AWS EC2 & Azure. Experience in Logstash pipeline design, search index optimization, and tuning is required. You will be responsible for implementing security measures and ensuring compliance with security policies and procedures like CIS benchmark. Collaboration with other teams to ensure seamless integration of the environment with other systems is essential. Creating and maintaining documentation related to the environment is also part of the role. Key Skills required for this position include certification in monitoring systems like ELK, RHCSA/RHCE, experience on the Linux Platform, and knowledge of Monitoring tools such as Prometheus, Grafana, ELK stack, ManageEngine, or any APM tool. Educational Qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field. The ideal candidate should have 4-7 years of relevant experience and the work location for this position is Mumbai.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
You will be responsible for: - Delivering complex Java-based solutions and preferably having experience in Fintech product development - Demonstrating a strong understanding of microservices architectures and RESTful APIs - Developing cloud-native applications and being familiar with containerization and orchestration tools such as Docker and Kubernetes - Having experience with at least one major cloud platform like AWS, Azure, or Google Cloud, with knowledge of Oracle Cloud being preferred - Utilizing DevOps tools like Jenkins and GitLab CI/CD for continuous integration and deployment - Understanding monitoring tools like Prometheus and Grafana, as well as event-driven architecture and message brokers like Kafka - Monitoring and troubleshooting the performance and reliability of Cloud-native applications in production environments - Possessing excellent verbal and written communication skills and the ability to collaborate effectively within a team. About Us: Oracle is a global leader in cloud solutions that leverages cutting-edge technology to address current challenges. With over 40 years of experience, we partner with industry leaders across various sectors, maintaining integrity amidst ongoing change. We believe in fostering innovation by empowering all individuals to contribute, striving to build an inclusive workforce that offers opportunities for everyone. Oracle careers provide a gateway to international opportunities where a healthy work-life balance is encouraged. We provide competitive benefits, including flexible medical, life insurance, and retirement options to support our employees. Furthermore, we promote community engagement through volunteer programs. We are dedicated to integrating people with disabilities into all phases of the employment process. If you require assistance or accommodation due to a disability, please contact us at accommodation-request_mb@oracle.com or call +1 888 404 2494 in the United States.,
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You will be joining Appice, a machine learning-based mobile marketing automation platform dedicated to helping businesses establish reliable connections with their customers. Our comprehensive solution covers various mobile aspects including acquisition, engagement, retention, and monetization. From crafting in-app messages to sending push notifications, conducting email marketing, performing mobile A/B testing, collecting data with mobile analytics, we equip businesses with effective tools to engage their mobile audience seamlessly. As a DevOps Engineer (Openshift/ OCP Expert) at Appice, your primary responsibility will involve managing and maintaining our infrastructure in Mumbai. You will play a crucial role in optimizing system performance, ensuring smooth deployment and operation of applications. Collaborating closely with development and operations teams, your tasks will revolve around automating processes to enhance efficiency. You will also be accountable for monitoring, troubleshooting, and resolving production issues to uphold the high availability and reliability of our platform. Your expertise should include independently setting up a Kubernetes cluster from scratch on a RHEL VM 9.3/9.4/9.5 Cluster, monitoring it via Prometheus/Grafana, administering the cluster, and deploying Spark Application on the Kubernetes cluster. Documenting these steps comprehensively with screenshots and descriptive text for replication purposes is essential. Required qualifications for this role include: - Extensive experience with Kubernetes in a bare metal environment (on-prem deployment) - Proficiency in cloud technologies and platforms, preferably Azure - Knowledge of containerization technologies like Docker - Familiarity with infrastructure as code tools such as Terraform and Ansible - Experience with monitoring and logging tools like Prometheus and Grafana - Strong problem-solving and troubleshooting abilities - Excellent teamwork and communication skills - Ability to excel in a fast-paced, collaborative environment - Hands-on experience working on Production Servers and independently creating Production servers/Clusters If you are passionate about DevOps, infrastructure management, and ensuring the seamless operation of applications, this role at Appice offers a dynamic environment where you can leverage your skills and contribute to the success of our platform.,
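To make the Kubernetes administration and monitoring duties above more concrete, below is a minimal sketch, assuming cluster access via a local kubeconfig, that lists node readiness and pending pods with the official Kubernetes Python client; it is illustrative only and not taken from the posting.

```python
# Minimal sketch: a scripted cluster health check with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # local kubeconfig; in-cluster scripts would use load_incluster_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"node {node.metadata.name}: Ready={ready}")

pending = [
    p.metadata.name
    for p in v1.list_pod_for_all_namespaces().items
    if p.status.phase == "Pending"
]
print("pending pods:", pending)
```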
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As an SMO-OSS Integration Consultant specializing in 5G Networks, you will be responsible for designing and implementing integration workflows between the SMO platform and OSS systems to effectively manage 5G network resources and operations. Your role will involve configuring OSS-SMO interfaces for tasks such as fault management, performance monitoring, and lifecycle management to ensure seamless data exchange between SMO and OSS for service orchestration, provisioning, and assurance. You will play a crucial role in enabling automated service provisioning and lifecycle management using SMO, as well as developing workflows and processes for end-to-end network orchestration, including MEC, network slicing, and hybrid cloud integration. Additionally, you will be expected to conduct end-to-end testing of SMO-OSS integration in both lab and production environments, validating integration of specific use cases such as fault detection, service assurance, and anomaly detection. Collaboration with cross-functional teams, including OSS developers, cloud engineers, and AI/ML experts, will be an essential part of your role as you work together to define integration requirements. Furthermore, you will engage with vendors and third-party providers to ensure compatibility and system performance. In terms of required skills and experience, you should possess strong technical expertise in SMO platforms and their integration with OSS systems, along with familiarity with OSS functions like inventory management, fault management, and performance monitoring. Hands-on experience with O-RAN interfaces such as A1, E2, and O1 is essential. Deep knowledge of 5G standards, including 3GPP, O-RAN, and TM Forum, as well as proficiency in protocols like HTTP/REST APIs, NETCONF, and YANG is required. Programming skills in languages like Python, Bash, or similar for scripting and automation, along with experience in AI/ML frameworks and their application in network optimization, will be beneficial. Familiarity with tools like Prometheus, Grafana, Kubernetes, Helm, and Ansible for monitoring and deployment, as well as cloud-native deployments (e.g., OpenShift, AWS, Azure), is desired. Strong problem-solving abilities for debugging and troubleshooting integration and configuration issues are also important for this role.,
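As a hedged illustration of the fault-detection scripting described above, the sketch below polls a Prometheus server over its HTTP query API from Python; the endpoint URL and PromQL query are assumed placeholders rather than details from the posting.

```python
# Minimal sketch: querying Prometheus for down targets as a fault-detection building block.
import requests

PROMETHEUS_URL = "http://prometheus.example.local:9090"  # hypothetical endpoint
QUERY = 'up{job="o-ran-du"} == 0'                         # hypothetical PromQL query

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    print("target down:", series["metric"].get("instance", "unknown"))
```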
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
The ideal candidate should have 2-3 years of experience in DevOps along with a mandatory 1 year of experience in DevSecOps. The role requires working onsite in Pune and following a second shift from 2 PM to 10 PM IST. Key skills for this position include proficiency in cloud technology (Azure), automation tools (Azure Kubernetes and Terraform), CI/CD pipelines (Jenkins and Azure DevOps), a scripting language (Python), monitoring tools (Prometheus / Grafana / Splunk / ELK), and security tools (Azure Active Directory). Additionally, experience in AI and GenAI would be considered a strong advantage. The selected candidate should be available to join immediately, within two weeks at most.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
You will have the opportunity to work with a wide range of cutting-edge technologies in a high-volume, real-time, mission-critical environment. Your ability to skillfully manage critical situations and earn customer accolades will be crucial. You should be excited about enhancing your communication skills for interactions with CxO level customers and providing Day-2 services to SaaS customers. Join our dynamic team of specialists who are making a global impact. Effective issue triaging, strong debugging skills, a responsive attitude, and the ability to handle pressure are essential qualities for this role. Technical expertise in multiple areas such as Oracle Database, PL/SQL, leading Application Servers, popular Web Technologies, and REST API is required. Experience in Banking Product Support or Development, or working in Bank IT, is highly preferred. Good communication skills for customer interactions and critical situation management are a significant advantage. Familiarity with Spring Framework, Spring Boot, Oracle JET, and Microservices architecture will be beneficial. Technical Skillset: - Core Java, J2EE, Oracle - Basic knowledge of Performance Tuning and Oracle internals, interpreting AWR reports - Application Servers: WebLogic / WebSphere - Familiarity with additional areas is advantageous: - REST/Web Services/ORM - Logging and monitoring tools like OCI Monitoring, Prometheus, Grafana, ELK Stack, Splunk - Kubernetes, Docker, and container orchestration platforms - UI technologies: Knockout JS, OJET, RequireJS, AJAX, jQuery, JavaScript, CSS3, HTML5, JSON, SaaS This position is at Career Level IC3.,
Posted 3 weeks ago
4.0 - 9.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Position: GCP DevOps Engineer
Location: Bangalore, Bellandur
Work Mode: Work from office (5 days working)
Employment Type: Full Time / Permanent
Job Summary: We are seeking a skilled GCP DevOps Engineer to join our team. The ideal candidate will have hands-on experience with Google Cloud Platform (GCP) services, CI/CD pipelines, automation, infrastructure as code (IaC), and monitoring tools. You will be responsible for deploying, managing, and optimizing cloud infrastructure, ensuring system reliability, scalability, and security.
Key Responsibilities:
- Design, implement, and maintain cloud infrastructure using GCP services (Compute Engine, Kubernetes Engine, Cloud Functions, Cloud Storage, BigQuery, etc.).
- Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or Cloud Build.
- Automate infrastructure deployment using Terraform, Ansible, or Helm.
- Manage Kubernetes (GKE) clusters, including deployment, scaling, and monitoring.
- Implement logging, monitoring, and alerting solutions using Stackdriver (Cloud Operations), Prometheus, and Grafana.
- Ensure cloud security best practices, IAM policies, and compliance with industry standards.
- Optimize cloud costs and resource utilization.
- Troubleshoot infrastructure and deployment issues, ensuring high availability and performance.
Required Skills & Qualifications:
- 4 years of experience in DevOps, Cloud Engineering, or SRE roles.
- Strong knowledge of Google Cloud Platform (GCP) services.
- Experience with Kubernetes (GKE) and container orchestration.
- Proficiency in Terraform for infrastructure as code (IaC).
- Experience with CI/CD tools (Jenkins, GitHub Actions, Cloud Build, etc.).
- Scripting experience in Python, Bash, or Go.
- Familiarity with monitoring and logging tools (Stackdriver, Prometheus, ELK, Grafana).
- Strong understanding of networking, security, and IAM policies in GCP.
- Knowledge of Git, version control workflows, and branching strategies.
Good to Have:
- Certification: Google Professional DevOps Engineer or Google Associate Cloud Engineer.
- Experience with serverless computing (Cloud Functions, Cloud Run).
- Exposure to multi-cloud environments (AWS, Azure).
- Knowledge of database management (Cloud SQL, Firestore, BigQuery).
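As one small, hedged example of the Python scripting listed above, the sketch below uploads a build artifact to Google Cloud Storage using the google-cloud-storage client; the bucket and object names are hypothetical and the snippet is illustrative rather than part of the role's actual tooling.

```python
# Minimal sketch: pushing a build artifact to Cloud Storage from a CI step.
from google.cloud import storage

client = storage.Client()                           # uses Application Default Credentials
bucket = client.bucket("example-artifacts-bucket")  # hypothetical bucket name
blob = bucket.blob("builds/app-1.0.0.tar.gz")       # hypothetical object path

blob.upload_from_filename("dist/app-1.0.0.tar.gz")
print(f"uploaded to gs://{bucket.name}/{blob.name}")
```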
Posted 3 weeks ago
3.0 - 6.0 years
12 - 22 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Work from Office
In the role of a DevOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure and CI/CD pipelines necessary to support our Generative AI projects. Furthermore, you will have the opportunity to critically assess and influence the engineering design, architecture, and technology stack across multiple products, extending beyond your immediate focus. - Design, deploy, and manage scalable, reliable, and secure Azure cloud infrastructure to support Generative AI workloads. - Implement monitoring, logging, and alerting solutions to ensure the health and performance of AI applications. - Optimize cloud resource usage and costs while ensuring high performance and availability. - Work closely with Data Scientists and Machine Learning Engineers to understand their requirements and provide the necessary infrastructure and tools. - Automate repetitive tasks, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, and Azure Resource Manager (ARM). - Utilize APM (Application Performance Monitoring) to identify and resolve performance bottlenecks. - Maintain comprehensive documentation for infrastructure, processes, and workflows. Must Have Skills: - Extensive knowledge of Azure services: Kubernetes, Azure App Service, Azure API Management (APIM), Application Gateway, AAD, GitHub Actions, Istio, and Datadog. - Proficiency in containerization and orchestration tools, and in CI/CD platforms such as Jenkins, GitLab CI/CD, and Azure DevOps. - Knowledge of API management platforms like APIM for API governance, security, and lifecycle management. - Expertise in monitoring and observability tools like Datadog, Loki, Grafana, and Prometheus for comprehensive monitoring, logging, and alerting solutions. - Good scripting skills (Python, Bash, PowerShell). - Experience with infrastructure as code (Terraform, ARM Templates). - Experience in optimizing cloud resource usage and costs using insights from Azure Cost Management and Azure Monitor metrics.
Posted 3 weeks ago
10.0 - 15.0 years
50 - 100 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Target Operating Model (TOM) Design
- 10–15 years in digital transformation or operational strategy
- 5+ years designing or managing command/operations center models
- Leading transformations involving tools such as Splunk, ServiceNow, BMC, Grafana
Posted 3 weeks ago
5.0 - 8.0 years
7 - 11 Lacs
Pune
Work from Office
SRE requirements:
- Preferably 3+ years of experience working as a Site Reliability Engineer (SRE).
- Practical experience with monitoring tools such as Grafana, Azure Monitor, Log Analytics, and network monitoring and alerting tools (e.g. Big Panda).
- Experience with automation tooling such as Azure OpenAI, Amelia Automation, ServiceNow Orchestration, Power Apps / Power Platform, Python, and PowerShell.
- Experience in monitoring infrastructure, analysing dashboards, and investigating issues/incidents affecting the health, stability, and performance of the products and services we support.
- Knowledge of how to identify and resolve issues with systems, services, and applications.
- Able to proactively drive continuous improvement opportunities, including performance, cost, process, and stability optimisations, by working closely with development and infrastructure teams.
- Good foundational understanding of Agile methodologies, AI/ML for automating operational initiatives, and ITIL / Change Management processes.
- Knowledge of core Azure cloud computing concepts (AZ-900 certification as a minimum requirement, with AZ-104 certification preferred).
- Knowledge of Azure Chaos Studio for chaos engineering.
- Fluent in English, both written and spoken; good report writing and documentation skills, and a strong verbal communicator, able to present concepts, ideas, and recommendations in a clearly structured and logical fashion.
- Self-motivated and goal-driven, able to prioritise, with excellent time management; a quick learner with the ability to grasp modern technologies.
- Collaborative nature; willing to share ideas and debate solutions to achieve consensus on the best way to drive the service forward for the betterment of the organisation.
- Able to work in a high-pressure and time-critical environment, with attention to detail and a keen eye for due diligence.
- Experience working in a financial or regulated sector, aware of best practices.
Do:
- Provide adequate support in architecture planning, migration, and installation for new projects in own tower (platform/database/middleware/backup).
- Lead the structural/architectural design of a platform/middleware/database/backup etc. according to various system requirements to ensure a highly scalable and extensible solution.
- Conduct technology capacity planning by reviewing current and future requirements.
- Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable.
- Strategize and implement disaster recovery plans, and create and implement backup and recovery plans.
- Manage the day-to-day operations of the tower by troubleshooting issues, conducting root cause analysis (RCA), and developing fixes to avoid similar issues.
- Plan for and manage upgrades, migration, maintenance, backup, installation, and configuration functions for own tower.
- Review the technical performance of own tower and deploy ways to improve efficiency, fine-tune performance, and reduce performance challenges.
- Develop the shift roster for the team to ensure no disruption in the tower.
- Create and update SOPs, Data Responsibility Matrices, operations manuals, daily test plans, data architecture guidance, etc.
- Provide weekly status reports to the client leadership team and internal stakeholders on database activities with respect to progress, updates, status, and next steps.
- Leverage technology to develop a Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness.
Team Management:
- Resourcing: forecast talent requirements as per current and future business needs; hire adequate and right resources for the team; train direct reportees to make right recruitment and selection decisions.
- Talent Management: ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool of HiPos and ensure their career progression within the organization; promote diversity in leadership positions.
- Performance Management: set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports; ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs, both at their level and the levels below.
- Employee Satisfaction and Engagement: lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team; proactively challenge the team with larger and enriching projects/initiatives for the organization or team; exercise employee recognition and appreciation.
Mandatory Skills: M365 Exchange Online. Experience: 5-8 Years.
Posted 3 weeks ago
8.0 - 13.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Project description: We're seeking a strong and creative Software Engineer eager to solve challenging problems of scale and work on cutting-edge technologies. In this project, you will have the opportunity to write code that will impact thousands of users. You'll apply your critical thinking and technical skills to develop cutting-edge software, and you'll have the opportunity to interact with teams across disciplines. At Luxoft, our culture is one that thrives on solving difficult problems, focusing on product engineering based on hypothesis testing to empower people to come up with ideas. In this new adventure, you will have the opportunity to collaborate with a world-class team in the field of insurance by building a holistic solution and interacting with multidisciplinary teams.
Responsibilities: As a Lead OpenTelemetry Developer, you will be responsible for developing and maintaining OpenTelemetry-based solutions. You will work on instrumentation, data collection, and observability tools to ensure seamless integration and monitoring of applications. This role involves writing documentation and promoting best practices around OpenTelemetry.
Skills Must have:
- Senior candidates with 8+ years of experience.
- Instrumentation: expertise in at least one programming language supported by OpenTelemetry and broad knowledge of other languages (e.g., Python, Java, Go, PowerShell, .NET).
- Passion for observability: strong interest in observability and experience in writing documentation and blog posts to share knowledge.
- Experience in Java instrumentation techniques (e.g. bytecode manipulation, JVM internals, Java agents).
Secondary Skills:
- Telemetry: familiarity with tools and technologies such as Prometheus, Grafana, and other observability platforms (e.g. Dynatrace, AppDynamics (Splunk), Amazon CloudWatch, Azure Monitor, Honeycomb).
Nice to have: -
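Since the role centres on OpenTelemetry instrumentation, here is a minimal sketch of manual tracing in Python, one of the languages the posting lists; the span and attribute names are illustrative, and a real deployment would typically export to an OTLP collector rather than the console.

```python
# Minimal sketch: manual OpenTelemetry tracing with the Python SDK.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.instrumentation")

def handle_request(order_id: str) -> None:
    # Each unit of work becomes a span; attributes carry searchable context.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would run here ...

handle_request("demo-123")
```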
Posted 3 weeks ago
5.0 - 8.0 years
4 - 8 Lacs
Coimbatore
Work from Office
Role Purpose: The purpose of this role is to support delivery through the development and deployment of tools. Extensive working knowledge of Splunk administration and its various components (indexer, forwarder, search head, deployment server) as a Splunk system administrator. Setting up Splunk forwarding for new application tiers introduced into the environment. Identifying bad searches/dashboards and partnering with their creators to improve performance. Troubleshooting Splunk performance issues and opening support cases with Splunk. Monitoring the Splunk infrastructure for capacity planning and optimization. Experience with observability tools such as Grafana and Prometheus, and with the tenets of observability (monitoring, logging, and/or tracing), is a plus. Experience with a programming language (Java/GoLang/Python) is a plus. Experience working in a Linux environment with Unix scripting. Experience with CI/CD pipeline management with GitHub and Ansible is a plus. Installing, configuring, and managing the Datadog tool. Creating alerts, dashboards, and other metrics in Datadog. Mandatory Skills: Splunk AIOps. Experience: 5-8 Years.
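As a hedged illustration of scripting around the Splunk administration work above, the sketch below sends a test event to a Splunk HTTP Event Collector (HEC) endpoint from Python; the host, token, and index are placeholders, and HEC must already be enabled on the Splunk side.

```python
# Minimal sketch: posting a JSON event to Splunk HEC as an ingestion connectivity check.
import requests

HEC_URL = "https://splunk.example.local:8088/services/collector/event"  # hypothetical host
HEC_TOKEN = "REPLACE_WITH_HEC_TOKEN"                                     # hypothetical token

payload = {
    "event": {"message": "hec connectivity check", "status": "ok"},
    "sourcetype": "_json",
    "index": "main",  # hypothetical index
}

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
    verify=False,  # only acceptable for lab endpoints with self-signed certificates
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```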
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
B2 - 4.5 - 7 yrs, PAN India, CBR - 110K
Ansible (AAP) Tower: design, build, upgrades, administration, maintenance, and production support. Ansible certification preferred.
Primary Skills - Linux, Python, Ansible, DevOps tooling, Terraform, Puppet:
- Strong Linux administration skills; certification preferred.
- Proficient in shell scripting, Python, and Ansible automation.
- Experience with monitoring tools (Grafana, Prometheus, etc.).
- Experience using REST APIs to integrate 3rd-party tools and provide seamless end-to-end automation.
- Develop and maintain Ansible playbooks for configuration management and automation.
- Implement and manage CI/CD pipelines to automate the building, scanning, testing, and deployment of applications, ensuring rapid and reliable delivery of software releases in adherence to DevSecOps principles.
- Manage source code repositories and version control using GitHub, including branch management, code reviews, and merge strategies.
- Experience in using Jenkins, Git, shell scripts, Python, and Ansible for automation and build tasks.
- Robust understanding of security technologies (SSL/TLS, authentication and authorisation frameworks, directory services, violations and policies, ACLs).
- Experience with RDBMS (PostgreSQL, MariaDB, MySQL, Oracle, or similar) and NoSQL databases (HBase, MongoDB, CassandraDB, Redis).
- Familiarity with test automation tools: JUnit, Selenium, Cucumber.
- Good technical design, problem-solving, and debugging skills.
- Agile proficient, with knowledge of or experience in other agile methodologies; ideally certified.
Additional Skills:
- Knowledge of Hadoop, Airflow, Kafka, and dataflow technologies is an added advantage.
- Contributions to Apache open source, or a public GitHub repository with a sizeable big data operations and application code base.
- Analytical approach to problem resolution; enthusiastic about Big Data technologies and a proactive learner.
2. Design and execute software development and reporting:
- Ensure the environment is ready for the execution process by designing test plans, developing test cases/scenarios/usage cases, and executing these cases.
- Develop technical specifications and plans and resolve complex technical design issues.
- Participate in and conduct design activities with the development team relating to testing of the automation processes for both functional and non-functional requirements.
- Implement, track, and report key metrics to assure full coverage of functional and non-functional requirements through automation.
- Eliminate errors by owning the testing and validation of code.
- Track problems, resolutions, and bug fixes throughout the project and create a comprehensive database of defects and successful mitigation techniques.
- Provide resolutions to problems by taking the initiative to use all available resources for research.
- Design and implement automated testing tools when possible, and update tools as needed to ensure efficiency and accuracy.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing these cases.
- Develop programs that run efficiently and adhere to Wipro standards by using similar logic from existing applications, discussing best practices with team members, referencing textbooks and training manuals, documenting the code, and using accepted design patterns.
3. Ensure smooth flow of communication with the customer and internal stakeholders:
- Work with Agile delivery teams to understand product vision and product backlogs; develop robust, scalable, and high-quality test automation for functional, regression, and performance testing.
- Assist in creating acceptance criteria for user stories and generate a test automation backlog.
- Collaborate with the development team to create/improve continuous deployment practices by developing strategies, formalizing processes, and providing tools.
- Work closely with business Subject Matter Experts to understand requirements for automation, then design, build, and deploy the application using automation tools.
- Ensure long-term maintainability of the system by documenting projects according to Wipro guidelines.
- Ensure quality of communication by being clear and effective with test personnel, users, developers, and clients to facilitate quick resolution of problems and accurate documentation of successes.
- Provide assistance to testers and support personnel as needed to determine system problems.
- Ability to perform backend/database programming for key projects.
- Stay up to date on industry standards and incorporate them appropriately.
Mandatory Skills: Ansible Tower. Experience: 3-5 Years.
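As a hedged sketch of the REST-API-driven automation this JD mentions, the snippet below launches an Ansible Automation Platform / AWX job template over its v2 API from Python; the controller URL, token, and template ID are hypothetical placeholders.

```python
# Minimal sketch: launching an AAP/AWX job template via its REST API.
import requests

CONTROLLER = "https://aap.example.local"  # hypothetical controller URL
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"        # hypothetical access token
TEMPLATE_ID = 42                          # hypothetical job template ID

resp = requests.post(
    f"{CONTROLLER}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"extra_vars": {"target_env": "staging"}},  # hypothetical extra vars
    timeout=30,
)
resp.raise_for_status()
print("launched job id:", resp.json()["job"])
```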
Posted 3 weeks ago
4.0 - 8.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Project description We've been engaged by a large Australian financial institution to provide resources to manage the production support activities along with their existing team in Sydney & India. Responsibilities Carry out enhancements to maintenance/housekeeping scripts as required and monitor the DB growth periodically. Handles cloud Environment preparation, refresh, rebuild, upkeep, maintenance, and upgrade activities. Ensure cloud cost optimisation. Troubleshooting of Murex environment-specific issues including Infrastructure related issues and update pipelines for a permanent fix. Handling EOD execution and troubleshooting of issues related to it. Participate in analysis, solutioning, and deployment of solution for production issues during EoD. Participate in the release activity and coordinate with QA/Release teams. Participate in AWS stack deployment, AWS AMI patching, and stack configuration to ensure optimal performance and cost-efficiency. Address requests like warehouse rebuild, maintenance, Perform Health/sanity checks, create XVA engine, environment restores & backup in AWS as per project need. Perform Weekend maintenance and perform health checks in the production environment during the weekend. Support working in shifts (max end time will be 12.30 AM IST) and available for weekend & on-call support. Have to work out of client location on a need basis. Flexible to work in a Hybrid model. Skills Must have 4 to 8 Years of experience in Murex Production Support Murex End of Day support Troubleshooting batch-related issues, including date moves and processing adjustments Murex Env Management & Troubleshooting Experienced in SQL Unix shell scripting, Monitoring tools, Web development Experienced in the Release and CI/CD process Linux/Unix server and Oracle RDS knowledge Working experience with automation/job scheduling tools such as Autosys, GitHub Actions Working experience with monitoring tools like Grafana, Splunk, Obstack, PagerDuty Good communication and organization skills working within a DevOps team supporting a wider IT delivery team Nice to have PL/SQL, Scripting languages (Python) Advanced troubleshooting experience with Shell scripting and Python Experience with CICD tools like Git, flows, Ansible, and AWS including CDK Exposure to AWS Cloud environment Willing to learn and obtain AWS certification
Posted 3 weeks ago
5.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Summary: We are seeking strong expertise in microservices architecture and IBM Operational Decision Manager (ODM). The ideal candidate will be responsible for designing, developing, and maintaining scalable backend services and business rule solutions, while ensuring high performance, security, and integration with enterprise systems. Key Responsibilities: Design and develop microservices using the Spring Boot framework. Write clean, efficient, and well-tested code following best engineering practices. Collaborate with cross-functional teams to define, design, and ship new features. Remediate Common Vulnerabilities and Exposures (CVEs) and manage security compliance. Design, develop, and maintain business rules and decision services using IBM ODM. Optimize rule execution performance and ensure scalability of the ODM application. Implement and maintain CI/CD pipelines for ODM and Java-based projects. Integrate ODM solutions with various enterprise systems and applications. Participate in code reviews, troubleshooting, and performance tuning. Required Skills: BRMS: IBM ODM. Front End: React.js/Node.js, HTML5/CSS3. Back End: Java, Spring Boot, REST, JPA. Middleware: RabbitMQ, Kafka. Testing: JUnit, JMockit. DevOps: Jenkins, GitHub, Docker, SonarQube, Fortify. Database: MS SQL, PostgreSQL. Logging: Splunk, Grafana. Preferred Qualifications: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Experience working in Agile/Scrum environments. Familiarity with secure coding practices and vulnerability management. Mandatory Skills: IBM BPM - IBM Lombardi. Experience: 5-8 Years.
Posted 3 weeks ago
5.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Project description: The Institutional Banking Data Platform (IDP) is a state-of-the-art cloud platform engineered to streamline the data ingestion, transformation, and data distribution workflows that underpin Regulatory Reporting, Market Risk, Credit Risk, Quants, and Trader Surveillance. In your role as Software Engineer, you will be responsible for ensuring the stability of the platform, performing maintenance and support activities, and driving innovative process improvements that add significant business value. Responsibilities: Problem solving: advanced analytical and problem-solving skills to analyse complex information for key insights and present it as meaningful information to senior management. Communication: excellent verbal and written communication skills with the ability to lead discussions with varied stakeholders across levels. Risk mindset: you are expected to proactively identify and understand, openly discuss, and act on current and future risks. Skills Must have: Bachelor's degree in computer science, engineering, or a related field/experience. 5+ years of proven experience as a Software Engineer or in a similar role, with a strong track record of successfully maintaining and supporting complex applications. Strong hands-on experience with Ab Initio GDE, including Express>It, Control Centre, Continuous>flow. Should have handled and worked with XML, JSON, and Web APIs. Strong hands-on experience in SQL. Hands-on experience in a shell scripting language. Experience with batch and streaming-based integrations. Nice to have: Knowledge of CI/CD tools such as TeamCity, Artifactory, Octopus, Jenkins, SonarQube, etc. Knowledge of AWS services including EC2, S3, CloudFormation, CloudWatch, RDS, and others. Knowledge of Snowflake and Apache Kafka is highly desirable. Experience with configuration management and infrastructure-as-code tools such as Ansible, Packer, and Terraform. Experience with monitoring and observability tools like Prometheus/Grafana.
Posted 3 weeks ago
8.0 - 13.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Project description: We are seeking a highly skilled and motivated DevOps Engineer with 8+ years of experience to join our engineering team. You will work in a collaborative environment, automating and streamlining processes related to infrastructure, development, and deployment. As a DevOps Specialist, you will help implement and manage CI/CD pipelines, configure on-prem Windows OS infrastructure, and ensure the reliability and scalability of our systems. The system runs on Windows with Microsoft SQL.
Responsibilities:
- CI/CD Pipeline Management: design from scratch, implement, and manage automated build, test, and deployment pipelines to ensure smooth code integration and delivery.
- Infrastructure as Code (IaC): develop and maintain infrastructure using tools for automated provisioning and management.
- System Monitoring & Maintenance: set up monitoring systems for production and staging environments, analyze system performance, and provide solutions to increase efficiency. Deploy and manage configuration using fit-for-purpose tools and scripts with version control, CI, etc.
- Collaboration: work closely with software developers, QA teams, and IT staff to define, develop, and improve DevOps processes and solutions.
- Automation & Scripting: create and maintain custom scripts to automate manual processes for deployment, scaling, and monitoring.
- Security: implement security practices and ensure compliance with industry standards and regulations related to cloud infrastructure.
- Troubleshooting & Issue Resolution: diagnose and resolve issues related to system performance, deployments, and infrastructure.
- Drive DevOps thought leadership and delivery experience with the offshore client delivery team. Implement DevOps best practices based on developed patterns.
Skills Must have:
- Total 9 to 12 years of experience as a DevOps Engineer.
- 3+ years of experience in AWS.
- Excellent knowledge of DevOps toolchains like GitHub Actions / GitHub Copilot.
- Self-starter, capable of driving solutions from 0 to 1 and able to deliver projects from scratch.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Working understanding of platform security constructs.
- Good exposure to monitoring tools/dashboards like Grafana, Obstack, or similar monitoring solutions.
- Experience working with Jira and Agile SDLC practices.
- Expert knowledge of CI/CD.
- Excellent written and verbal communication skills, strong collaboration and teamwork skills.
- Proficient in scripting languages like Python and PowerShell, with database knowledge of MS SQL.
- Experience with Windows or IIS, including installation, configuration, and maintenance.
- Strong troubleshooting skills, with the ability to think critically, work under pressure, and resolve complex issues.
- Excellent communication skills with the ability to work cross-functionally with development, operations, and IT teams.
- Security best practices: knowledge of security protocols, network security, and compliance standards.
- Adaptability to new learning and strong attention to detail, with a proactive approach to identifying issues before they arise.
Nice to have:
- Cloud certifications: AWS Certified DevOps Engineer, Google Cloud Professional DevOps Engineer, or equivalent.
- IaC pipelines and best practices.
- Snyk, sysdiag knowledge.
- Experience with Windows OS, SRE, and monitoring on Prometheus.
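As a small, hedged example of combining the Python scripting and MS SQL knowledge this posting lists, the sketch below runs a database health check against SQL Server with pyodbc; the server name and driver string are assumptions, not details from the role.

```python
# Minimal sketch: checking SQL Server database states via pyodbc.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.example.local;"  # hypothetical host
    "DATABASE=master;"
    "Trusted_Connection=yes;"      # Windows integrated authentication
)

conn = pyodbc.connect(conn_str, timeout=5)
cursor = conn.cursor()
# sys.databases exposes per-database state (ONLINE, RECOVERING, SUSPECT, ...).
cursor.execute("SELECT name, state_desc FROM sys.databases")
for name, state in cursor.fetchall():
    print(f"{name}: {state}")
conn.close()
```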
Posted 3 weeks ago
10.0 - 15.0 years
16 - 20 Lacs
Bengaluru
Work from Office
Project description: We are looking for a seasoned Performance Test Lead to join our dynamic team. Your role will involve working closely with a group of talented software developers to create new APIs and ensure the smooth functioning of existing ones within the Azure environment.
Responsibilities:
- Understand the non-functional requirements (NFRs) from NFR documents and from meetings with business and platform owners.
- Understand the business and the infrastructure involved in the project.
- Understand the critical business scenarios from developers and the business.
- Prepare the Performance Test Strategy and Test Plan.
- Communicate with the business/development team manager regularly through daily/weekly reports.
- Develop the test scripts and workload modelling.
- Execute sanity tests, load tests, soak tests, and stress tests (as required by the project).
- Organise meetings with all the relevant teams (developers/infra, etc.) to monitor core applications during test execution.
- Execute the tests and analyse the test results.
- Prepare the test summary report.
Skills Must have:
- 10+ years of experience in performance engineering.
- Expert in Micro Focus LoadRunner and Apache JMeter, along with programming/scripting experience in C/C++, Java, Perl, Python, and SQL.
- Proven performance testing experience across multiple platform architectures and technologies, such as microservices and REST APIs, is advantageous, as is exposure to projects moving workloads to cloud environments, including AWS or Azure.
- Exposure to open-source data visualisation tools.
- Experience working with APM tools like AppDynamics.
Nice to have:
- Core Banking, Jira, Agile, Grafana.
- Banking domain experience.
Posted 3 weeks ago
10.0 - 15.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Network Systems Engineering Technical Leader - Routing, Switching, Nexus, VPC, VDC, VLAN, VXLAN, BGP - Design and large-scale network deployment.
Meet the Team: We are the Data Center Network Services team within Cisco IT that supports network services for Cisco Engineering and business functions worldwide. Our mission is simple: build the network of the future that is adaptable and agile on Cisco's networking solutions. Cisco IT networks are deployed, monitored, and managed with a DevOps approach to support rapid application changes. We invest in ground-breaking technologies that enable us to deliver services in a fast and reliable manner. The team culture is collaborative and fun, where thinking creatively and tinkering with new ideas are encouraged.
Your Impact: You will design, develop, test, and deploy DC network capabilities within the Data Center Network. You are engaging and comfortable collaborating with fellow engineers across multiple disciplines as well as internal clients. You will create innovative, high-quality capabilities enabling our clients to have the best possible experience.
Minimum Requirements:
- Bachelor of Engineering or Technology with a minimum of 10 years of experience in designing, deploying, operating, and handling scalable DC network infrastructure (using Nexus OS).
- Experience in technologies like Routing, Switching, Nexus, VPC, VDC, VLAN, VXLAN, BGP.
- Experience handling incident, problem, and organisational change management.
- Familiarity with DevOps principles; comfortable with Agile practices.
Preferred Qualifications:
- CCNP or CCIE/DE.
- Experience with SONiC NOS, including basic configuration (both CLI and config_db.json), network troubleshooting with SONiC, and QoS monitoring and troubleshooting, particularly for RoCEv2.
- Experience with BGP routing.
- Desirable to have experience with L3 fabrics.
- Desirable to have familiarity with Nvidia and Linux networking.
- Desirable to have experience with Python, Prometheus, Splunk, and Grafana.
- Desirable to have experience with Cisco Firepower firewalls (FTD/FMC).
Nice to have Qualifications:
- Experience with Nexus Dashboard Fabric Controller for building and troubleshooting networks.
- Experience with VXLAN-based networks and troubleshooting.
Posted 3 weeks ago
6.0 - 11.0 years
17 - 22 Lacs
Pune
Work from Office
Hi, Please find the JD below.
Job Title: Azure Kubernetes Architect and Administrator (L3 Capacity, Managed Services)
Key Responsibilities:
• Azure Kubernetes Service (AKS): Architect, manage, and optimize Kubernetes clusters on Azure, ensuring scalability, security, and high availability.
• Azure Infrastructure and Platform Services:
o IaaS: Design and implement robust Azure-based infrastructure for critical BFSI applications.
o PaaS: Optimize the use of Azure PaaS services, including App Services, Azure SQL Database, and Service Fabric.
• Security & Compliance: Ensure adherence to BFSI industry standards by implementing advanced security measures (e.g., Azure Security Center, role-based access control, encryption protocols).
• Cost Optimization: Analyze and optimize Azure resource usage to minimize costs while maintaining performance and compliance standards.
• Automation: Develop CI/CD pipelines and automate workflows using tools like Terraform, Helm, and Azure DevOps.
• Process Improvements: Continuously identify areas for operational enhancements in line with BFSI-specific needs.
• Collaboration: Partner with cross-functional teams to support deployment, monitoring, troubleshooting, and the lifecycle management of applications.
Required Skills:
• Expertise in Azure Kubernetes Service (AKS), Azure IaaS and PaaS, and container orchestration.
• Strong knowledge of cloud security principles and tools such as Azure Security Center and Azure Key Vault.
• Proficiency in scripting languages like Python, Bash, or PowerShell.
• Familiarity with cost management tools such as Azure Cost Management + Billing.
• Experience in monitoring with Prometheus and Grafana.
• Understanding of BFSI compliance regulations and standards.
• Process improvement experience using frameworks like Lean, Six Sigma, or similar methodologies.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Certifications like Azure Solutions Architect, Certified Kubernetes Administrator (CKA), or Certified Azure DevOps Engineer are advantageous.
• Minimum 5 years of hands-on experience in Azure and Kubernetes environments within BFSI or similar industries.
In summary: expertise in AKS, Azure IaaS, PaaS, and security tools like Azure Security Center; proficiency in scripting (Python, Bash, PowerShell); strong understanding of BFSI compliance standards; experience with monitoring tools such as Prometheus, Grafana, New Relic, Azure Log Analytics, and ADF; skilled in cost management using Azure Cost Management tools; knowledge of ServiceNow ITSM, Freshworks ITSM, change management, team leadership, and process improvement frameworks like Lean or Six Sigma.
Posted 3 weeks ago