
1121 Monitoring Tools Jobs - Page 44

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

30 - 45 Lacs

hyderabad

Work from Office

Provide technical support and incident management for ServiceNow FSM, custom apps, and third-party platforms. Troubleshoot issues, ensure system stability, and collaborate with teams to deliver efficient, reliable operations.

Posted Date not available

Apply

5.0 - 10.0 years

2 - 6 Lacs

kolkata

Hybrid

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.
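The pipeline responsibilities above centre on producing and consuming messages with explicit serialization and key-based partitioning. As a rough illustration of that kind of work, the sketch below sends JSON-serialized events to a topic using the kafka-python client; the broker address, topic name, and event fields are placeholders for illustration, not details from this posting.

```python
# Minimal Kafka producer sketch (kafka-python); broker, topic, and payload are illustrative.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",              # placeholder broker address
    key_serializer=lambda k: k.encode("utf-8"),      # keys drive partition assignment
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON message serialization
    acks="all",                                      # wait for in-sync replicas (durability)
    retries=3,
)

event = {"order_id": "A-1001", "amount": 249.50}
# Messages with the same key land on the same partition, preserving per-key ordering.
producer.send("orders", key=event["order_id"], value=event)
producer.flush()  # block until buffered records are acknowledged
```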

Posted Date not available

Apply

1.0 - 3.0 years

3 - 6 Lacs

pune

Work from Office

Candidates are expected to provide round-the-clock (shift-based) operational support for incidents (troubleshooting to deliver solutions or workarounds) and service requests (performing the actions needed to fulfil them). Minimum qualification: tertiary education in a technology discipline; 1 year of working experience in a similar setting is preferred. Language: English. Mandatory: a reasonable amount of Cloud, Windows, or network knowledge/experience, and some troubleshooting flair. Plus: relevant knowledge of or experience with Jenkins, Bitbucket, or Python is a bonus.

Posted Date not available

Apply

5.0 - 10.0 years

2 - 6 Lacs

mumbai

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted Date not available

Apply

5.0 - 10.0 years

2 - 6 Lacs

hyderabad

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted Date not available

Apply

5.0 - 10.0 years

2 - 6 Lacs

gurugram

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted Date not available

Apply

5.0 - 10.0 years

2 - 6 Lacs

bengaluru

Work from Office

We are seeking a highly skilled Kafka Integration Specialist to join our team. The ideal candidate will have extensive experience in designing, developing, and integrating Apache Kafka solutions to support real-time data streaming and distributed systems. Key Responsibilities : - Design, implement, and maintain Kafka-based data pipelines. - Develop integration solutions using Kafka Connect, Kafka Streams, and other related technologies. - Manage Kafka clusters, ensuring high availability, scalability, and performance. - Collaborate with cross-functional teams to understand integration requirements and deliver robust solutions. - Implement best practices for data streaming, including message serialization, partitioning, and replication. - Monitor and troubleshoot Kafka performance, latency, and security issues. - Ensure data integrity and implement failover strategies for critical data pipelines. Required Skills : - Strong experience in Apache Kafka (Kafka Streams, Kafka Connect). - Proficiency in programming languages like Java, Python, or Scala. - Experience with distributed systems and data streaming concepts. - Familiarity with Zookeeper, Confluent Kafka, and Kafka Broker configurations. - Expertise in creating and managing topics, partitions, and consumer groups. - Hands-on experience with integration tools such as REST APIs, MQ, or ESB. - Knowledge of cloud platforms like AWS, Azure, or GCP for Kafka deployment. Nice to Have : - Experience with monitoring tools like Prometheus, Grafana, or Datadog. - Exposure to DevOps practices, CI/CD pipelines, and infrastructure automation. - Knowledge of data serialization formats like Avro, Protobuf, or JSON. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - 4+ years of hands-on experience in Kafka integration projects.

Posted Date not available

Apply

2.0 - 5.0 years

4 - 8 Lacs

chennai

Hybrid

Roles & Responsibility: Continuously analyse monitoring data to identify areas for infrastructure optimization and implement best practices that enhance system reliability and efficiency across diverse environments. Collaborate with cross-functional teams to ensure monitoring configurations align with evolving business needs and technology landscapes. Observing servers, network devices, apps and other essential systems to make sure they are running as expected. Establishing and managing documentation for network equipment, servers, monitoring software, configurations, processes, and troubleshooting procedures. Maintain high system availability via monitoring and proactive issue resolution. Demonstrable understanding of critical server and Infrastructure technologies, as well as the capacity to incorporate this knowledge into Problem, Incident and Change request creation. Utilize monitoring tools to track performance and availability of applications and determine trends. Ability to coach others in learning this technology. Manage the all-important, incident management communications process. Translate operational requirements into monitoring actions. Performing routine maintenance support tasks such as patching, updates, and backups. Coordinate the response of several on-call organizations to address detected anomalies in the enterprise. Make a significant contribution to the incident response procedures that regulate the team, constantly seeking to reduce our response time and time to resolution metrics by valuable minutes. Owns and drives continual improvement of the ITSM Management and its processes as well as its documentation Share your exceptional, hard-earned knowledge of operations excellence with the rest of your peers. Working with other teams, vendors, and third-party providers to resolve network-outage problems Mandatory Skills: Bachelors degree in computer science, or a related technical field Be curious and have a deep desire to learn and possess the self-discipline to do so Knowing your audience (peer organizations, other staff, management, etc.), be able to communicate exceptionally well in both oral and written forms. 4+ year experience with network and systems operations – either in a NOC capacity, or other enterprise-class support capacity. Experience with troubleshooting Windows and Linux based servers and services (Windows Server, CentOS, RHEL, SUSE, Ubuntu, etc.) Have experience working in a team setting; success is a team activity, and we're committed to winning it. Prior experience with enterprise-class monitoring and management systems (Nagios, Splunk, SolarWinds, Cisco, Interlink etc.). Experience with any ticketing software such as JIRA and Service Now to develop and maintain Incident & Change Management. Hands-on experience with at least one programming or scripting language. Able to work scheduled shift rotations (rotation will include weekends and holidays) Even Better: Bachelor’s Degree with CCNA, Windows/Linux Administrator Certifications. 4+ years of industry experience as a Technical Operations Centre/Network Operations Centre Analyst. Prior experience with enterprise-class monitoring and management systems (Prometheus, Nagios, Solar winds, Splunk etc.) Experience working with business intelligence and reporting tools. Experience on monthly patching activity of Windows and Linux servers. Must have prior experience working in a war room setting with teams on high-priority tickets.

Posted Date not available

Apply

7.0 - 12.0 years

6 - 16 Lacs

hyderabad

Remote

Key Responsibilities: Design, implement, and maintain GCP resources, including, but not limited to, virtual machines, virtual networks, storage, and GCP data services. Monitor and optimize GCP infrastructure for performance, cost efficiency, and security. Support deployment efforts for the delivery of project-based assignments including proof-of-concept, analysis, design/architecture, deployment, and support. Ensure that GCP resources and services are compliant with industry standards and company policies. Implement and maintain security best practices for GCP, including identity and access management, firewall rules, and encryption. Collaborate with Terraform, ServiceNow and Automation teams in developing automation scripts and tools to streamline GCP resource provisioning and management. Coordinate with Infrastructure Engineering team to prepare standards for cloud operations processes. Coordinate with Infrastructure and Cloud Architecture teams to design cloud solutions. Analyze vulnerabilities reported by Cyber tools. Plan and track remediation actions on the vulnerabilities based on priorities. Provide L3 support for GCP-related incidents and issues, working with the team to resolve complex problems. Hands on experience with Unix/Linux environments. Perform Linux administration activities, such as installation, configuration of tools or applications, disk management, volume management etc. Troubleshoot and fix VM boot issues such as FS corruption, Kernel Panic. Support Linux Operating System on cloud. Work with application support team to troubleshoot any issues. Participate in on-call rotations to address critical issues outside of regular working hours. Participate in internal & external audits as and when required. Participate in Project standup meetings related to Migration, DR Configuration etc. Act as a point of escalation for complex GCP-related incidents, participating in incident resolution and post-incident analysis. Create and update documentation for GCP infrastructure and best practices. Drive FinOps initiatives to save money on cloud infrastructure including IaaS and PaaS resources. Provide technical guidance and mentorship to junior team members, sharing knowledge and best practices. Qualifications: Bachelors degree in computer science, Information Technology, or a related field (or equivalent work experience) GCP Professional Cloud Certification is a plus Proven experience in designing and managing GCP-based solutions. Excellent problem-solving and troubleshooting skills. Strong communication and teamwork abilities. Required Skills Experience with GCP monitoring and management tools. Knowledge of cloud cost management and optimization tools. Expertise in troubleshooting & fixing Linux OS/Kernel issues in a Cloud environment. Preferred Skills: Knowledge of other cloud platforms (e.g., AWS, Azure) is a plus
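Among the Linux administration duties listed above (disk and volume management, troubleshooting VM issues), a routine task is watching filesystem utilisation before it causes an outage. The following is a small standard-library sketch; the mount points and the 85% threshold are chosen purely for illustration.

```python
# Illustrative filesystem-usage check for Linux administration on cloud VMs.
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]   # placeholder mount points
THRESHOLD = 85.0                        # alert when usage exceeds this percentage (example value)

for mount in MOUNT_POINTS:
    usage = shutil.disk_usage(mount)             # named tuple of (total, used, free) in bytes
    percent_used = usage.used / usage.total * 100
    status = "WARN" if percent_used >= THRESHOLD else "OK"
    print(f"{status} {mount}: {percent_used:.1f}% used "
          f"({usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB)")
```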

Posted Date not available

Apply

7.0 - 12.0 years

18 - 25 Lacs

mumbai

Work from Office

Hi all, please find the opening below. Position: Application/Production Support (Linux, Oracle, Autosys, ITIL, ITRS). Experience: 7 to 20 years. Location: Mumbai (BKC). Mandatory domain knowledge: experience in Capital Markets, Trading, or BFSI sectors is highly desirable. CTC: up to 25 LPA (depends on last drawn salary). Mode: Work from Office (5 days a week). Shift timing: 7:30 AM-5:30 PM. Interview process: 1st round virtual interview, 2nd round face-to-face. Notice period: immediate to serving 15 days max. Job Description (MAPs Team) Role: Markets Application Support, Production Services, Equities Cash Technology. Responsibilities: Responsible for ensuring the smooth running of production Equities business-facing support. Provide and manage day-to-day application support for front-office Equities trading systems, managing client and market connectivity, answering user queries, monitoring applications, releasing changes to the production environment, and performing post-release checks. Interact closely with regional business users and other groups in Technology to ensure issues are addressed and communicated in a timely manner. Work closely with developers and business analysts to plan system rollouts and application updates, provide feedback on production system performance, and investigate production problems and identify solutions. Manage system outages in production by interfacing between business, developers, infrastructure teams, and management. Main tasks include stakeholder communication, executing remedial actions to resolve the outage, providing information to the business to alleviate knock-on impact, and working on post-mortems and follow-up actions for root cause identification, recurrence prevention, and any improvement in problem detection and resolution. Ensure that standard operating procedures for incident, problem, monitoring, service request, and change management are followed. Candidates should be prepared to work outside normal business hours on a rota basis, such as weekends or early-morning shifts, according to business needs. Proactively identify and resolve potential production problems in all trading systems and escalate them to relevant parties if necessary. Suggest continuous improvements to existing support processes, both locally and globally. Provide performance and capacity monitoring and planning for the platform to ensure applications are running in a healthy state with enough capacity to handle volatile market volumes. Skills Desired/Required: Knowledge of production technology support in an investment banking environment. Good system knowledge of Unix/Linux, messaging middleware (MQ, EMS, RV), market data, Oracle, and networks. Familiarity with UNIX shell scripts, Perl, and Python. Knowledge of Equities markets. Strong analytical, problem-solving, and troubleshooting skills. Troubleshooting experience in direct market access and algorithmic trading. Understanding of Equities algos would be advantageous. Proactive, shows initiative, diligent, hardworking, and willing to learn. Good team player, able to work independently. Good verbal and written communication skills are necessary. Able to work under pressure in a time-sensitive trading environment. Experience with ITRS, Java VM, Windows, Ansible, AMPS, Autosys, and Tableau would be an advantage. Interested candidates can apply at shivangi.verma@nusummit.com. Regards, Shivangi Verma

Posted Date not available

Apply

8.0 - 13.0 years

25 - 40 Lacs

bengaluru

Work from Office

Roles and Responsibilities: SME in at least one functional module of the system or service. Design and quality ownership of the service owned as the SME. Say/do ratio of > 95% in execution. Code reviews and quality gatekeeper for the SME area. Assist and mentor junior members. Responsible for enforcing code reviews and test coverage. Ensure no regressions in the modules owned. Serve on interview panels and help grow the organization. Key Responsibilities Redis Administration: Manage and maintain Redis/RMQ clusters (standalone, sentinel, and Redis Enterprise setups). Perform upgrades, patching, and capacity planning. Configure persistence (RDB, AOF) and high availability. Monitor Redis performance and tune configuration parameters. Handle backup/recovery and disaster recovery planning. Troubleshoot and resolve production issues. L3 Support: Act as an escalation point for complex incidents and root cause analysis. Automate common DBA tasks using scripting (Python, Bash, etc.). Develop and maintain infrastructure as code (e.g., Terraform, Ansible). Collaborate with SRE and DevOps to integrate with CI/CD pipelines. Participate in on-call rotations and incident response. Must Have Skills: 8+ years (with 2+ years in L3 support for Redis/RMQ). 6+ years in database/infrastructure management roles. 3+ years of hands-on experience with Redis. 3+ years of hands-on experience with RMQ. Proficiency in Linux systems administration. Strong scripting knowledge (Bash, Python, or similar). Experience with monitoring tools (Prometheus, Grafana, ELK, etc.). Knowledge of container orchestration (Docker, Kubernetes). Understanding of networking, firewalls, DNS, and load balancing. Preferred Qualifications: Experience with Redis Enterprise and Confluent Kafka. Certifications in Redis/Kafka/Linux. Familiarity with cloud platforms (AWS, GCP, Azure). Experience in PCI/HIPAA-compliant environments. Soft Skills: Excellent problem-solving and analytical skills. Strong communication and documentation skills. Ability to work independently and under pressure. Collaborative mindset with cross-functional teams.
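For the Redis administration and L3 support duties described above (monitoring performance, persistence, and high availability), a common first step during an incident is pulling a quick health snapshot from INFO. The sketch below uses the redis-py client; the connection details and the fields printed are illustrative assumptions, not specifics from this role.

```python
# Illustrative Redis health snapshot via redis-py: connectivity, memory, replication role.
import redis

r = redis.Redis(host="localhost", port=6379, socket_timeout=3)  # placeholder connection details

r.ping()  # raises an exception if the node is unreachable

memory = r.info("memory")            # INFO MEMORY section as a dict
replication = r.info("replication")  # INFO REPLICATION section as a dict

print("used_memory_human:", memory.get("used_memory_human"))
print("maxmemory_policy:", memory.get("maxmemory_policy"))
print("role:", replication.get("role"))                      # 'master' or 'slave'
print("connected_slaves:", replication.get("connected_slaves", 0))
```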

Posted Date not available

Apply

5.0 - 10.0 years

16 - 25 Lacs

pune

Hybrid

You have: 5+ years of experience in support/engineering roles. Strong Unix/Linux skills, able to administer systems and troubleshoot effectively. Good understanding of SQL and PL/SQL, with the ability to follow the code logic for troubleshooting. Ability to read logs and interpret exceptions and error messages. Familiarity with monitoring and logging tools, e.g. Prometheus, Grafana, ELK stack, Splunk, AppDynamics. Strong understanding of networking, security practices, and infrastructure management. Excellent problem-solving skills and ability to work in a fast-paced environment. Familiarity with Agile ways of working as part of multi-disciplinary teams; participate in agile ceremonies and collaborate with engineers, product managers, designers, and others. Good to have: Proficiency in scripting languages (e.g. Python, Bash) and configuration management tools (e.g. Ansible, Chef, Puppet). Experience supporting complex production environments, with a proven track record of resolving critical issues and improving system stability. Working knowledge of client-side web technologies (React, JavaScript). Experience with messaging frameworks (like Tibco, Kafka). Experience with web servers running Tomcat, Apache. Exposure to Azure cloud services (like Azure AKS, CI/CD). Knowledge of the Syndicate Loans domain.
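The role above emphasises reading logs and interpreting exceptions and error messages. As a rough sketch of that kind of triage, the snippet below scans a log file for error-level lines and exception names and counts the most frequent ones; the file path and patterns are assumptions for illustration only.

```python
# Illustrative log triage: count ERROR lines and Java-style exception names in a log file.
import re
from collections import Counter

LOG_PATH = "/var/log/app/application.log"   # placeholder path
EXCEPTION_RE = re.compile(r"\b([A-Za-z_][\w.]*(?:Exception|Error))\b")

error_lines = 0
exceptions = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as fh:
    for line in fh:
        if " ERROR " in line:
            error_lines += 1
        exceptions.update(EXCEPTION_RE.findall(line))

print(f"ERROR-level lines: {error_lines}")
for name, count in exceptions.most_common(5):
    print(f"{count:6d}  {name}")
```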

Posted Date not available

Apply

4.0 - 8.0 years

12 - 15 Lacs

gurugram

Hybrid

Proficiency with the web stack and web services applications. Experience in troubleshooting, with the analytical skills to determine the root cause of issues. Working understanding of relational and NoSQL database concepts. Experience in Linux and Kibana. Required candidate profile: Comfortable with a 24x7x365 support role. Exceptional verbal and written communication. Docker containerization and virtualization. Basic networking knowledge. Experience with application monitoring tools.

Posted Date not available

Apply

5.0 - 10.0 years

12 - 21 Lacs

pune

Work from Office

Position summary: Ensure that the team is sufficiently and adequately staffed in all shifts with rightly skilled resources to support 24x7 operations. Has a secondary role as Technical Lead to drive the team to perform L1.5 tasks such as patching, SOP-based troubleshooting, automation via scripts, etc. Identify areas for process and efficiency improvements within Prod OPS and optimize services. Lead, participate in, and contribute to service improvements. Work with the SDM, Operations Managers, and Infra L2 support teams to ensure E2E Prod OPS and monitoring operations are managed within SLAs and expectations. Key responsibilities: 24x7 Prod OPS: Ensure that the 24x7 Prod OPS team is working effectively. Ensure that Standard Operating Procedures are in place for all Prod OPS tasks and that escalation to support teams is timely and effective. Ensure that monitoring is in place for all client infrastructure and maintain the inventory. Ensure that critical alerts are actioned promptly before business impact. Ensure timely escalation and track issues until resolution. Team management responsibilities: Ensure the team is rightly staffed to support operations. Ensure the team is competent in the use of the customer and NCS tools and trained on the processes. Document and update processes and guides. Identify areas for process and efficiency improvements. Qualifications: IT degree. Experience: Minimum 5 years' relevant experience in Windows support and automation of simple tasks. Minimum 2 years' work experience as a 24x7 operations support team lead. Hands-on experience in infrastructure operations support. Extensive knowledge and experience in 24x7 E2E monitoring of infrastructure. Must be willing to work on call after office hours. Knowledge required: Knowledge of Windows and monitoring tools (e.g. LogicMonitor). Knowledge of ITSM tools (e.g. ServiceNow). Good to have: ITIL Intermediate certificate. Technical certifications. Soft skills: Ability to lead, influence, and coordinate resources to achieve results. Autonomous and self-motivated. Good communication (verbal and written) skills. Demonstrates initiative and a commitment to continuous improvement. Ability to perform under pressure.

Posted Date not available

Apply

0.0 - 3.0 years

4 - 6 Lacs

mumbai

Work from Office

Grow Fearlessly Who are we? Trust isn't a given, it needs to be built. And in a world where fraud is evolving faster than ever, trust must be safeguarded at every step. At IDfy, we make trust scalable. As an Integrated Identity Platform, we help businesses verify identities, detect fraud, and stay compliant, ensuring every interaction starts with confidence. Our clients include HDFC Bank, Zomato, Amazon, PhonePe, Paytm, HUL and many others. With more than 13+ years of experience and 2 million verifications per day, we are pioneers in this industry. We do this through three interconnected platforms: Onboarding Platform: Our IDfy360 and Video Solutions make KYC and identity verification seamless, turning compliance into a frictionless experience. Fraud & Risk Management Platform: We stay ahead with CrimeCheck, RiskAI, and our Transaction Intelligence Platform, identifying synthetic identities, financial risks, and bad actors before they cause damage. Privacy & Data Governance Platform: With PRIVY, businesses can navigate evolving data protection laws with ease, ensuring security and transparency at every step. From opening a bank account to landing a job, from securing a loan to making a payment, IDfy is there, ensuring that trust is built, fraud is eliminated, and businesses can operate with confidence. We'll Get Along If You... Have 0-2 years of experience in a product support or technical support role. Excellent verbal and written communication skills. Strong problem-solving skills, with the ability to troubleshoot and find workable solutions. Ability to prioritize tickets based on urgency and business impact, with attention to detail. Self-motivated and eager to learn and adapt to new technologies. Basic understanding of REST APIs and their functionality. Willingness to work in rotational shifts, even on weekends. Good-to-Have: Knowledge of SQL, with the ability to write basic queries. Hands-on experience with ticketing systems like Freshdesk, Jira, Zendesk, or ServiceNow. Ability to write simple scripts in languages like Python to automate repetitive tasks or improve troubleshooting efficiency. Experience in working closely with customers to resolve issues and build strong relationships through effective communication. Basic understanding of cloud platforms such as AWS, Azure, or Google Cloud, particularly for products hosted on cloud infrastructure. Familiarity with any one of the monitoring and logging tools like Kibana, Splunk, Nagios, or Grafana for proactive system monitoring and health checks. What You'll Be Doing (a.k.a. Your Playground): Review test strategies and see that all the various kinds of testing, such as unit, functional, performance, stress, and acceptance, are covered. Define and execute manual and automation testing strategy. Ensure all development tasks meet quality criteria through test planning, test execution, quality assurance, and issue tracking. Keep raising the bar and standards of all the quality processes with every project. Identify best practices and tools in the industry and adopt them from time to time. Get your big break at IDfy!

Posted Date not available

Apply

2.0 - 5.0 years

0 - 3 Lacs

noida

Work from Office

About the Role: We are seeking a skilled and proactive Site Reliability Engineer II (SRE II) to join our growing infrastructure team. As an SRE II, you will play a critical role in ensuring the reliability, scalability, and performance of our systems. You'll work independently and collaboratively to design, implement, and maintain robust infrastructure solutions, while driving improvements in monitoring, alerting, and deployment strategies. Key Responsibilities: Design, implement, and maintain monitoring tools, metrics, traces, logs, alerts, and notification systems. Independently respond to and configure observability tools to ensure system health and performance. Deeply understand system architecture including frontend, backend, data pipelines, and asynchronous processing components. Work with infrastructure solutions across on-premises, cloud services, Kubernetes (k8s), and serverless architectures. Collaborate with development teams using tech stacks such as Java, React, Python, and PostgreSQL. Implement and optimize redundancy, autoscaling, failover, and load balancing strategies. Lead efforts in testing environment setup, load testing, and cost optimization. Develop and refine deployment strategies to ensure smooth and reliable releases. Advocate for and implement best practices in reliability engineering and DevOps. Required Qualifications: Experience: 2–4 years in Site Reliability Engineering or related roles. Strong understanding of observability principles and tools (e.g., Prometheus, Grafana, ELK, Datadog). Proficiency in cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes). Hands-on experience with CI/CD pipelines and deployment automation. Solid grasp of infrastructure concepts including failover mechanisms, autoscaling, and load balancing. Familiarity with modern software development practices and tech stacks (Java, React, Python, PostgreSQL). Ability to work independently and make informed decisions regarding tooling and infrastructure strategies. Preferred Qualifications: Experience with cost optimization in cloud environments. Knowledge of serverless architecture and microservices. Exposure to performance testing and chaos engineering. Strong scripting skills (e.g., Bash, Python) for automation.
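Since the SRE responsibilities above revolve around metrics, alerts, and observability tooling (Prometheus, Grafana, and similar), here is a minimal sketch of instrumenting a service with the Python prometheus_client library and exposing the metrics for scraping. The metric names, labels, and port are illustrative assumptions, not taken from this posting.

```python
# Illustrative service instrumentation with prometheus_client: a request counter and a latency histogram.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    """Simulated request handler that records latency and outcome."""
    with LATENCY.time():                       # observe elapsed time automatically
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)   # Prometheus can now scrape http://localhost:8000/metrics
    while True:
        handle_request()
```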

Posted Date not available

Apply

3.0 - 6.0 years

4 - 5 Lacs

mumbai suburban

Hybrid

XPLN GmbH is a Germany-based data analytics company offering competitive intelligence to leading e-commerce players. We are looking for a Product Support Specialist who brings both technical expertise and a proactive approach to improving systems and workflows. As a freelance Product Support Specialist, you will work with global teams to ensure data accuracy, investigate technical issues, support product teams, and contribute to process optimization. Key Responsibilities Monitor and validate crawled product data from marketplaces like Amazon and eBay. Detect, analyze, and document technical issues using tools like Jira or YouTrack. Coordinate with developers to ensure timely resolution of reported issues. Execute technical support tasks such as account configuration, crawler adjustments, etc. Act as a bridge between Product, Development, and Customer Success teams. Provide internal stakeholders and clients with timely updates on issue progress. Collaborate with freelancers and internal teams under the guidance of the Product Lead. Required Skills Product Support / Technical Support experience QA / Manual Testing / Data Validation Working knowledge of Jira or similar ticketing systems Familiarity with JSON and API concepts (basic level) Experience in E-commerce or Marketplace platforms is a plus Excellent English communication (written and verbal) Strong analytical and troubleshooting abilities Desired Candidate Profile 3 -6 years of relevant experience in Technical Support, Product Support, or QA Prior exposure to data-centric or e-commerce projects Ability to work independently in a hybrid environment Technically sound, with a background in IT, Data Science, or similar fields preferred Role Details Location: Remote/ Hybrid Work Timing: Flexible, aligned with European working hours Contract Type: Freelance / Contractual Start Date: Immediate or ASAP Perks and Benefits Work with a fast-growing European tech company Exposure to international teams and projects Dynamic and collaborative work culture

Posted Date not available

Apply

4.0 - 6.0 years

8 - 15 Lacs

bengaluru

Work from Office

Job Role: DevOps Engineer. Location: Bangalore. Experience: 4-6 years. Notice: Immediate. Skillsets Required: Proficiency in cloud platforms (AWS, Azure, or Google Cloud). Experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Strong knowledge of containerization technologies (Docker, Kubernetes). Familiarity with CI/CD tools (uDeploy, GitHub Actions, Jenkins, GitLab CI, CircleCI). Scripting experience with Bash, Shell, PowerShell, Python, or other scripting languages. Understanding of networking fundamentals and troubleshooting. Monitoring and logging tools (Splunk, Dynatrace, Prometheus, Grafana, ELK stack). Experience with version control systems (Git, GitHub, Bitbucket). Familiarity with configuration management tools (Chef, Puppet, SaltStack). Strong problem-solving skills and ability to troubleshoot complex issues. Ability to work in a fast-paced, collaborative environment. Strong communication skills to work effectively with both technical and non-technical teams.

Posted Date not available

Apply

4.0 - 9.0 years

15 - 27 Lacs

mumbai, bengaluru

Hybrid

Role Overview: You'll lead efforts to instrument and monitor our production environment with deep visibility and proactive issue detection. This includes tracking Core Web Vitals, feature KPIs, funnel conversions, API responsiveness, and broader traffic shifts. Your work will empower the team to measure daily change with precision and respond to any anomalies before customers are impacted. Key Responsibilities: Architect and maintain robust monitoring frameworks using LogRocket, Datadog, AppDynamics, LaunchDarkly, BrowserStack, and more. Define and track performance indicators such as Core Web Vitals, feature-specific KPIs, and system throughput metrics. Quickly identify, analyze, and escalate production issues with full operational context. Build automated alerting and escalation systems to streamline support responses. Recommend and implement new observability tools to enhance coverage and reduce blind spots. Partner with engineering and support teams to develop best-in-class incident response playbooks. Qualifications: 3+ years of experience in Site Reliability Engineering, DevOps, or Infrastructure roles. Hands-on expertise with modern observability platforms and cloud ecosystems. Strong troubleshooting and root cause analysis skills across distributed systems. Passion for clean instrumentation, operational excellence, and building resilient platforms. Bonus: Experience with high-traffic consumer-facing platforms or working with Kubernetes/Docker setups.

Posted Date not available

Apply

3.0 - 8.0 years

4 - 6 Lacs

noida, ghaziabad, delhi / ncr

Work from Office

Job Title: System Administrator Location: Ghaziabad Company: Gravity Bath Pvt Ltd Experience: 3-8 Years Employment Type: Full-time Job Summary: Gravity Bath Pvt Ltd is seeking a skilled System Administrator to manage and maintain our IT infrastructure, ensuring optimal performance, security, and reliability. The ideal candidate will be responsible for installing, upgrading, and troubleshooting hardware, software, and networks while ensuring system security and data backup. Key Responsibilities: Install, configure, and maintain servers, networks, and computer systems. Monitor system performance and troubleshoot issues to ensure smooth IT operations. Manage user accounts, permissions, and access controls. Implement security measures to protect company data and systems from threats. Maintain backup and disaster recovery plans. Update and patch software and operating systems regularly. Support end-users by resolving hardware and software issues. Document system configurations, procedures, and troubleshooting steps. Collaborate with IT vendors and service providers as needed. Requirements: Diploma/Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a System Administrator or similar role. Excellent problem-solving and communication skills. Ability to work independently and handle multiple tasks efficiently.

Posted Date not available

Apply

3.0 - 8.0 years

5 - 10 Lacs

bengaluru

Work from Office

About the team: Stripe will succeed at our mission of increasing the GDP of the internet only if we continue to prove ourselves worthy of our users' trust. The Secure Cloud Expansion team is a newly formed security team tasked with building the guardrails to extend Stripe's secure platform, allowing engineers to leverage new technologies that best meet their needs. The team is an entry point for enabling new technologies for product and platform teams, prioritising based on business needs, as well as building the tooling and controls that further enable self-serve, guaranteeing that a high bar is maintained across all Stripe surfaces with security built into the foundations. The team will own their work end-to-end, collaborating closely with users and stakeholders to deliver state-of-the-art solutions, increase Stripe's security posture, and empower the next generation of Stripe products. Responsibilities: Design and build solutions that will advance Stripe's infrastructure security beyond the state of the art and expand our cloud footprint across clouds and services. Design and implement guardrails and controls that support security invariants and enforce our security principles while providing a surprisingly great user experience for commonly used and newer cloud technologies. Build CI tooling for platform-related configuration. Ensure all cloud infrastructure is defined in code and strict change management is in place. Who you are: We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement. Minimum requirements: Empathy, strong communication skills, and a deep respect for the power of collaboration. A learning mindset, regardless of level or experience. 3+ years of software engineering experience in a high-stakes production environment. Experience writing high-quality code in a major programming language and a constructive attitude to help others raise the bar. Ability to drive clear next steps when encountering ambiguous spaces without clear lines of ownership. Ability to think creatively and holistically about reducing risk in complex environments. Deep experience with infrastructure on one or more of AWS, Azure, or GCP. Preferred qualifications: Experience conducting threat modeling of software or infrastructure in cloud-native environments. Prior usage of security monitoring tools (e.g., CSPM, CNAPP). Experience in a multi-cloud or complex cloud environment. Team: Security. Job type: Full time.

Posted Date not available

Apply

4.0 - 6.0 years

6 - 8 Lacs

gurugram

Work from Office

COMPANY OVERVIEW KKR is a leading global investment firm that offers alternative asset management as well as capital markets and insurance solutions. KKR aims to generate attractive investment returns by following a patient and disciplined investment approach, employing world-class people, and supporting growth in its portfolio companies and communities. KKR sponsors investment funds that invest in private equity, credit and real assets and has strategic partners that manage hedge funds. KKR's insurance subsidiaries offer retirement, life and reinsurance products under the management of Global Atlantic Financial Group. References to KKR's investments may include the activities of its sponsored funds and insurance subsidiaries. KKR's Gurugram office provides best-in-class services and solutions to our internal stakeholders and clients, drives organization-wide process efficiency and transformation, and reflects KKR's global culture and values of teamwork and innovation. The office contains multifunctional business capabilities and will be integral in furthering the growth and transformation of KKR. TEAM OVERVIEW KKR's Code of Ethics team sits within the Compliance function. The team is responsible for the administration of all aspects of KKR's Surveillance and Monitoring Program, including Communication Surveillance. The team broadly sits under Code of Ethics as the practice area monitoring other aspects of conflict of interest around the Firm's Personal Investment Policy and other policies and procedures designed to mitigate conflicts of interest that could arise between the Firm and its employees. POSITION SUMMARY The role will support the global compliance team responsible for administering the Code of Ethics. This individual will closely partner with members of the Code team in various regions to disposition employee requests and otherwise assist employees with respect to the administration of KKR's Code of Ethics. The individual will undertake a variety of regular and ad hoc Code-related tasks. ROLES & RESPONSIBILITIES Real-time and retrospective surveillance of electronic communications, including emails, instant messages, and voice recordings, using advanced surveillance tools and systems. Identify and analyze suspicious patterns or behaviors indicative of market abuse, insider trading, conflicts of interest, or other regulatory violations. Investigate alerts generated by surveillance systems, conducting thorough reviews and analysis of relevant communications and trading activity to determine the nature and severity of potential misconduct. Collaborate closely with Compliance, Legal, and other stakeholders to escalate and report findings, facilitate investigations, and implement remedial actions as necessary. Stay abreast of regulatory developments and industry best practices related to communication surveillance and market abuse detection, providing insights and recommendations for enhancing surveillance capabilities and processes. Contribute to the development and enhancement of surveillance policies, procedures, and training programs to promote a culture of compliance and integrity within the organization. QUALIFICATIONS 4-6 years of relevant compliance experience, preferably in surveillance and monitoring. Familiarity with financial markets and financial instruments, including some familiarity with securities trading, strongly preferred.
Strong understanding of relevant regulations and regulatory requirements, including but not limited to SEC, FINRA, and MiFID II, will be an added advantage. Excellent analytical skills with the ability to interpret and analyze large volumes of data and identify anomalies or patterns indicative of potential misconduct. Detail-oriented with a strong commitment to accuracy and quality in all aspects of work. Ability to work independently, prioritize tasks effectively, and manage multiple projects simultaneously in a fast-paced environment. Familiarity with Code of Ethics software such as Behavox and ComplySci and other surveillance and monitoring tools is strongly preferred. Proficient with Microsoft Excel, PowerPoint and Word. Demonstrates the highest levels of integrity. Displays a teamwork orientation and is highly collaborative. Builds strong relationships with local and global colleagues. Good communication skills with a focus on efficiency and responsiveness to employee and team requests. #LI-ONSITE

Posted Date not available

Apply

4.0 - 9.0 years

6 - 11 Lacs

pune

Work from Office

So, what's the role all about? Within Actimize, we are seeking a proactive and skilled DevSecOps Engineer to join the SOC team and lead some of the security efforts for our cloud-native SaaS platform hosted on AWS. This role is ideal for someone passionate about cloud security, automation, and threat detection, with a strong foundation in networking and DevOps practices. How will you make an impact? NICE Actimize is the largest and broadest provider of financial crime, risk and compliance solutions for regional and global financial institutions, and has been consistently ranked as number one in the space. At NICE Actimize, we recognize that all employees' contributions are integral to our company's growth and success. To find and acquire the best and brightest talent around the globe, we offer a challenging work environment, competitive compensation and benefits, and rewarding career opportunities. Come share, grow and learn with us: you'll be challenged, you'll have fun and you'll be part of a fast-growing, highly respected organization. This new SaaS platform will enable our customers (some of the biggest financial institutions around the world) to create solutions on the platform to fight financial crime. Key Responsibilities: Security Architecture & Implementation: Design and implement security controls across AWS infrastructure and CI/CD pipelines. Ensure compliance with industry standards (e.g., ISO 27001, SOC 2, GDPR). Threat Detection & Response: Proactively monitor, detect, and respond to security threats using modern alerting and SIEM tools. Develop and maintain automated threat intelligence and anomaly detection systems. Incident Response: Lead and coordinate incident response efforts, including investigation, containment, and remediation. Maintain and continuously improve incident response playbooks and runbooks. Conduct post-incident reviews and root cause analyses to strengthen security posture. Automation & Infrastructure as Code: Build and maintain automated security checks and remediation workflows using tools like Terraform, CloudFormation, and AWS Config. Integrate security into CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Networking & Cloud Security: Manage and secure VPCs, subnets, security groups, and firewalls. Implement secure API gateways, load balancers, and IAM policies. Security Awareness & Collaboration: Work closely with engineering teams to embed security best practices into development workflows. Conduct regular security reviews, audits, and penetration tests. Required Qualifications: 4+ years of experience in DevSecOps, Cloud Security, or related roles. Strong hands-on experience with AWS services (EC2, S3, IAM, Lambda, CloudTrail, GuardDuty, etc.). Proficiency in networking concepts (TCP/IP, DNS, VPN, firewalls). Experience with automation tools (Terraform, Ansible, Python, Bash). Familiarity with security monitoring tools (e.g., Datadog, Splunk, AWS Security Hub). Knowledge of DevOps practices and CI/CD pipelines. Excellent problem-solving and communication skills. Be able to attend on-call. Preferred Qualifications: AWS Security Specialty or other relevant certifications. Experience with container security (e.g., Docker, Kubernetes). Knowledge of zero-trust architecture and secure software development lifecycle (SSDLC). What's in it for you? Join an ever-growing, market-disrupting, global company where teams comprised of the best of the best work in a fast-paced, collaborative, and creative environment!
As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
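The AWS security responsibilities above include securing VPCs, security groups, and IAM. A very common guardrail check is flagging security groups that allow inbound traffic from anywhere; the boto3 sketch below shows one way such a check might look, with the region and output format as assumptions rather than anything prescribed by this role.

```python
# Illustrative guardrail check with boto3: flag security group rules open to 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region; credentials come from the environment

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        open_ranges = [r for r in rule.get("IpRanges", []) if r.get("CidrIp") == "0.0.0.0/0"]
        if open_ranges:
            port = rule.get("FromPort", "all")
            print(f"WARNING: {sg['GroupId']} ({sg.get('GroupName', '?')}) "
                  f"allows inbound from 0.0.0.0/0 on port {port}")
```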

Posted Date not available

Apply

8.0 - 12.0 years

25 - 30 Lacs

kolkata, mumbai, new delhi

Work from Office

About Us: At SentinelOne, we're redefining cybersecurity by pushing the limits of what's possible, leveraging AI-powered, data-driven innovation to stay ahead of tomorrow's threats. From building industry-leading products to cultivating an exceptional company culture, our core values guide everything we do. We're looking for passionate individuals who thrive in collaborative environments and are eager to drive impact. If you're excited about solving complex challenges in bold, innovative ways, we'd love to connect with you. What are we looking for? As a member of the SaaS Performance team you will play a pivotal role in ensuring our cutting-edge cybersecurity solutions deliver unmatched speed and reliability at scale. Imagine optimizing systems that protect millions of users worldwide, working with industry-leading cloud technologies, and solving performance challenges in real time. Your contributions will directly impact the seamless experience of our customers and the resilience of their security infrastructure. At SentinelOne, you'll collaborate with some of the brightest minds in the field, embrace innovative problem-solving, and see your work make a tangible difference in safeguarding the digital world. If you're passionate about high-performance systems and crave a mission-driven role, this is your chance to redefine what's possible in SaaS engineering. What will you do? Develop and maintain a platform for improving performance across multiple products. Define, design and develop tools for performance analysis and testing. Analyze the design and architecture of complex products. Promote scalability and reliability across a wide range of SaaS products and services. Efficiently work with large amounts of data, analyzing and extracting key performance and scalability insights. Be an expert in performance and scalability, integrating industry best practices with development team practices. What skills and knowledge should you bring? Several years of experience in designing services and developing features using Java. Passion for software performance and a history of improving it. Proven knowledge in: designing and architecting large and scalable cloud-based applications, developing on public cloud infrastructure (AWS, GCP etc.), and containerization & orchestration (Docker, Helm & Kubernetes). Experience with performance testing (K6, Gatling), profiling and monitoring tools. Strong familiarity with agile development methodologies. Why Us? You will be joining a cutting-edge company, where you will tackle extraordinary challenges and work with the very best in the industry. Flexible working hours and hybrid/remote work model. Flexible Time Off. Flexible Paid Sick Days. Global gender-neutral Parental Leave (16 weeks, beyond the leave provided by the local laws). Generous employee stock plan in the form of RSUs (restricted stock units). On top of RSUs, you can benefit from our attractive ESPP (employee stock purchase plan). Gym membership/sports gear by Cultfit. Wellness Coach app, with 3,000+ on-demand sessions, daily interactive classes, audiobooks, and unlimited private coaching. Private medical insurance plan for you and your family. Life insurance covered by S1 (for employees). Telemedical app consultation (Practo). Global Employee Assistance Program (confidential counseling related to both personal and work-life matters). High-end MacBook or Windows laptop. Home-office-setup allowance (one time) and maintenance allowance. Internet allowance.
Provident Fund and Gratuity (as per govt. clause). NPS contribution (employee contribution). Half-yearly bonus program depending on individual and company performance. Above-standard referral bonus as per policy. Udemy Business platform for hard/soft skills training, and support for your further educational activities/trainings. Sodexo food coupons.

Posted Date not available

Apply

7.0 - 10.0 years

9 - 12 Lacs

pune

Work from Office

Job description Role Overview: We are seeking an experienced Lead GenAI Developer to drive the design, development, and deployment of cutting-edge Generative AI solutions. This role requires strong expertise in LLMs, prompt engineering, and AI solution architecture, along with proven leadership to mentor teams and deliver scalable AI-driven applications. Key Responsibilities: Lead the end-to-end design, development, and integration of Generative AI solutions. Drive prompt engineering, RAG pipelines, LLM fine-tuning, and deployment workflows. Mentor and guide junior developers, ensuring adherence to best engineering practices. Collaborate with architects, product managers, and stakeholders to shape AI solution roadmaps. Own performance benchmarking, scalability, and reusability across AI solutions. Stay abreast of the latest GenAI advancements, frameworks, and tools to drive innovation. Requirements: Proven expertise in developing GenAI/LLM-based applications. Strong hands-on experience with Python, LangChain, LangGraph, prompt tuning, and vector databases. Proficiency in MLOps practices, API integrations, and observability/monitoring tools. Demonstrated ability to lead teams and successfully deliver complex AI projects. Strong problem-solving, communication, and leadership skills.
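The role above centres on RAG pipelines, prompt engineering, and vector databases. Leaving specific frameworks such as LangChain aside, the core retrieval step can be sketched in a few lines: embed documents and a query, rank by cosine similarity, and paste the top hits into a prompt. The embed() function below is a stand-in assumption; a real pipeline would call an embedding model and a vector store.

```python
# Conceptual RAG retrieval sketch: rank documents by cosine similarity and build an augmented prompt.
# embed() is a placeholder; a real system would use an embedding model and a vector database.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: pseudo-random vector derived from the text (illustrative only)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed(query)
    scores = [float(np.dot(q, embed(d)) / (np.linalg.norm(q) * np.linalg.norm(embed(d)))) for d in docs]
    order = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in order]

docs = [
    "Invoices are archived after 90 days.",
    "Refunds require manager approval.",
    "VPN access resets quarterly.",
]
context = "\n".join(top_k("How long are invoices kept?", docs))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: How long are invoices kept?"
print(prompt)
```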

Posted Date not available

Apply