
1087 Monitoring Tools Jobs - Page 11

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

3.0 - 5.0 years

7 - 11 Lacs

Coimbatore

Work from Office

We are seeking an experienced Site Reliability Engineer (SRE) who will play a critical role in ensuring the reliability, performance, and scalability of our payment systems. The ideal candidate will possess deep expertise in DevOps automation, enterprise monitoring, and cloud platforms, along with a strong background in Card Payment systems. This role requires hands-on technical skills, a passion for problem-solving, and the ability to collaborate across teams in a fast-paced, dynamic environment. Key Responsibilities : Reliability & Performance : - Ensure the reliability, availability, and performance of critical payment platforms and services. - Drive root cause analysis (RCA) and implement long-term solutions to prevent recurrence of incidents. - Manage capacity planning, scalability, and performance tuning across cloud and on-prem environments. - Lead and participate in the on-call rotation, providing timely support and issue resolution. DevOps Automation & CI/CD : - Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub, and other DevOps tools. - Automate infrastructure deployment, configuration, and monitoring, following Infrastructure as Code (IaC) principles. - Enhance automation for routine operational tasks, incident response, and self-healing capabilities. Monitoring & Observability : - Implement and manage enterprise monitoring solutions including Splunk, Dynatrace, Prometheus, and Grafana. - Build real-time dashboards, alerts, and reporting to proactively identify system anomalies. - Continuously improve observability, logging, and tracing across all environments. Cloud Platforms & Infrastructure : - Work with AWS, Azure, and PCF (Pivotal Cloud Foundry) environments, managing cloud-native services and infrastructure. - Design and optimize cloud architecture for reliability and cost-efficiency. - Collaborate with cloud security and networking teams to ensure secure and compliant infrastructure. Payment Systems Expertise : - Apply your understanding of Card Payment systems to ensure platform reliability and compliance. - Troubleshoot payment-related issues, ensuring minimal impact on transaction flows and customer experience. - Collaborate with product and development teams to ensure alignment with business objectives.
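The posting above centres on proactive monitoring and self-healing for payment services. As a rough illustration of that kind of probe (not taken from the posting), here is a minimal Python sketch using the Prometheus client library; the health-check URL and metric names are assumptions.

```python
# Minimal health-probe exporter sketch (illustrative; endpoint and metric
# names are assumptions, not taken from the posting).
import time
import requests
from prometheus_client import Gauge, start_http_server

PAYMENT_HEALTH_URL = "http://payments.internal/health"  # hypothetical endpoint

payment_api_up = Gauge("payment_api_up", "1 if the payment API health check passes, else 0")
payment_api_latency = Gauge("payment_api_latency_seconds", "Latency of the last health probe")

def probe() -> None:
    start = time.monotonic()
    try:
        resp = requests.get(PAYMENT_HEALTH_URL, timeout=5)
        payment_api_up.set(1 if resp.status_code == 200 else 0)
    except requests.RequestException:
        payment_api_up.set(0)
    finally:
        payment_api_latency.set(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(8000)   # expose /metrics for Prometheus to scrape
    while True:
        probe()
        time.sleep(30)        # probe every 30 seconds
```

A Grafana dashboard or alert rule could then fire on `payment_api_up == 0`, which is the dashboard-and-alerting loop the role describes.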

Posted 2 weeks ago

Apply

10.0 - 12.0 years

11 - 15 Lacs

Bengaluru

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. - Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : of hands-on experience as an AWS Engineer or similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. 
- Proven ability to implement and enforce Security & Compliance measures, including IAM roles boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection). - Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.
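To make the resilience patterns named above (DLQs, bounded retries, serverless IaC) concrete, here is a minimal AWS CDK sketch in Python. It is not the employer's actual stack; construct names, runtime, and the asset path are assumptions.

```python
# Sketch of a CDK stack wiring a Lambda consumer to an SQS queue with a
# dead-letter queue and bounded retries. Names and paths are placeholders.
from aws_cdk import Stack, Duration, aws_lambda as _lambda, aws_sqs as sqs
from aws_cdk.aws_lambda_event_sources import SqsEventSource
from constructs import Construct

class OrdersStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        dlq = sqs.Queue(self, "OrdersDlq", retention_period=Duration.days(14))

        queue = sqs.Queue(
            self, "OrdersQueue",
            visibility_timeout=Duration.seconds(60),
            # After 3 failed receives, messages land in the DLQ for inspection.
            dead_letter_queue=sqs.DeadLetterQueue(max_receive_count=3, queue=dlq),
        )

        handler = _lambda.Function(
            self, "OrdersHandler",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical asset directory
            timeout=Duration.seconds(30),
        )
        handler.add_event_source(SqsEventSource(queue, batch_size=10))
```

The DLQ-plus-`max_receive_count` pairing is one common way to cap retries while keeping failed messages around for root-cause analysis.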

Posted 2 weeks ago

Apply

9.0 - 12.0 years

3 - 5 Lacs

Hyderabad, India

Hybrid

Job Purpose: The Performance Tester is responsible for evaluating the performance and scalability of software applications and systems. They work closely with development teams to identify and rectify performance bottlenecks, ensuring that applications can handle anticipated user loads and perform optimally under various conditions. The role works closely with Test Managers, Test Leads, Dev Leads, Developers, Business and Systems Analysts, test analysts and engineers, and business stakeholders.

Key Activities / Outputs
• Conduct performance testing activities, including load testing, stress testing, scalability testing, and endurance testing, to assess the system's performance and identify bottlenecks.
• Collaborate with project managers, developers, and quality assurance teams to create performance testing strategies and test plans that align with project goals and requirements.
• Coordinate with the Infra team to configure and maintain test environments that mimic the production environment as closely as possible, ensuring accurate test results.
• Execute performance tests using automated testing tools and scripts, monitor system resources during tests, and collect performance metrics.
• Analyse test results, identify performance issues, and document findings. Create detailed test reports and provide recommendations for performance improvements.
• Collaborate with developers to optimize code and system configurations to address performance bottlenecks, such as database queries, code inefficiencies, or network latency.
• Assess the system's scalability by simulating user growth and evaluating how it performs as the user base increases.
• Implement monitoring tools and performance profiling to continuously monitor system performance in production and identify potential issues proactively.
• Maintain documentation related to performance testing processes, test scenarios, configurations, and results.
• Collaborate with cross-functional teams, including developers, system architects, and infrastructure teams, to ensure performance requirements are met.
• Stay updated with industry trends, emerging technologies, and best practices in performance testing, and share knowledge with the team.

Technical Skills or Knowledge: Proficiency in performance testing tools such as Gatling, JMeter, LoadRunner, or similar; strong understanding of performance testing methodologies and best practices; knowledge of scripting and programming languages (e.g., Java, Python, or JavaScript); experience with performance monitoring and profiling tools; familiarity with database performance tuning and SQL queries; excellent analytical and problem-solving skills; strong communication and teamwork skills; attention to detail and a commitment to delivering high-quality results.

Preferred Technical Skills (advantageous): Experience with cloud-based performance testing tools and services; knowledge of DevOps practices and CI/CD pipelines; experience with containerization and orchestration technologies (e.g., Docker, Kubernetes); understanding of network protocols and architecture.

This position is a hybrid role based in Hyderabad and requires you to be in the office on Tuesday, Wednesday, and Thursday.
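The posting names Gatling, JMeter, and LoadRunner; purely to illustrate the same load-testing idea in Python, here is a minimal Locust sketch. The host, endpoints, and request mix are placeholders, not part of the role.

```python
# Minimal Locust load-test sketch. Endpoints and payloads are hypothetical.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)   # think time between requests, in seconds

    @task(3)
    def browse_catalog(self):
        self.client.get("/api/products")                                 # hypothetical endpoint

    @task(1)
    def checkout(self):
        self.client.post("/api/checkout", json={"cart_id": "demo"})      # hypothetical endpoint

# Example run (adjust numbers to the load model under test):
#   locust -f loadtest.py --host https://staging.example.com --users 200 --spawn-rate 20 --run-time 15m
```

The weighted `@task` decorators model the traffic mix, which is the same concept as thread groups in JMeter or scenarios in Gatling.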

Posted 2 weeks ago

Apply

4.0 - 5.0 years

4 - 6 Lacs

Bengaluru

Work from Office

Role & responsibilities
- Install, configure, and manage operating systems, software applications, and hardware
- Administer and support Office 365, SharePoint Online, Azure, Microsoft Endpoint Protection, and Microsoft Cloud Compliance environments
- Implement and manage mobile device management (MDM) solutions for iOS and Android, enforcing app-based and profile-based restrictions to secure company devices and data
- Configure and manage physical IT equipment, including servers, workstations, laptops, printers, and other peripherals
- Set up, relocate, and maintain hardware and IT equipment as needed
- Monitor and maintain the IP PBX system, ensuring reliable and high-quality telecommunications services
- Perform regular system maintenance, updates, and backups
- Implement and manage security measures, including firewall configurations, antivirus software, and access controls
- Create, modify, and delete user accounts and permissions across Office 365 and Azure
- Set up and manage VPNs, file sharing, and network devices, including IP address management
- Provide on-site support for IT equipment, including troubleshooting printers and managing equipment inventory
- Maintain documentation for processes, configurations, and procedures
- Develop and enforce IT policies, procedures, and best practices
- Assist in planning and implementing infrastructure projects
- Provide technical support to internal staff and ensure timely resolution of issues

Preferred candidate profile
- Relevant certifications such as Microsoft Certified: Azure Administrator, Microsoft 365 Certified: Modern Desktop Administrator, Microsoft Certified: Security, Compliance, and Identity Fundamentals, CompTIA Network+, or CompTIA Security+
- Experience with monitoring tools and log management solutions
- Knowledge of compliance frameworks such as ISO 27001, HIPAA, or GDPR
- Prior experience in managing IT equipment inventory and handling physical moves of hardware

Posted 2 weeks ago

Apply

4.0 - 7.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Title: Kubernetes Modernization Engineer
Experience: 5-7 years
Location: Hyderabad/Bangalore

Key Responsibilities:
- Lead Automation: Design and implement an automation framework to migrate workloads to Kubernetes platforms such as AWS EKS, Azure AKS, Google GKE, Oracle OKE, and OpenShift.
- Develop Cloud-Native Automation Tools: Build automation tools using Go (Golang) for workload discovery, planning, and transformation into Kubernetes artifacts.
- Migrate Kubernetes Across Cloud Providers: Plan and execute seamless migrations of Kubernetes workloads from one cloud provider to another (AWS to Azure, GCP to OCI, etc.) with minimal disruption.
- Leverage Open-Source Technologies: Utilize Helm, Kustomize, ArgoCD, and other popular open-source frameworks to streamline cloud-native adoption.
- CI/CD & DevOps Integration: Architect and implement CI/CD pipelines using Jenkins (including Jenkinsfile generation) and cloud-native tools like AWS CodePipeline, Azure DevOps, and GCP Cloud Build to support diverse Kubernetes deployments.
- Security & Compliance: Define and enforce security best practices, implement zero-trust principles, and proactively address vulnerabilities in automation workflows.
- Technical Leadership & Mentorship: Lead and mentor a team of developers, fostering expertise in Golang development, Kubernetes, and DevOps best practices.
- Stakeholder Collaboration: Work closely with engineering, security, and cloud teams to align modernization and migration efforts with business goals and project timelines.
- Performance & Scalability: Ensure high performance, scalability, and security across automation frameworks and multi-cloud Kubernetes deployments.
- Continuous Innovation: Stay ahead of industry trends, integrating emerging tools and methodologies to enhance automation and Kubernetes portability.

Qualifications & Experience:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Experience: 8+ years in software development, DevOps, or cloud engineering, with 3+ years in a leadership role.
- Programming Expertise: Strong proficiency in Go (Golang) and Python for building automation frameworks and tools.
- Kubernetes & Containers: Deep knowledge of Kubernetes (K8s), OpenShift, Docker, and container orchestration.
- Cloud & DevOps: Hands-on experience with AWS, Azure, GCP, OCI, self-managed Kubernetes, OpenShift, and DevOps practices.
- CI/CD & Infrastructure-as-Code: Strong background in CI/CD tools (Jenkins, Git, AWS CodePipeline, Azure DevOps, GCP Cloud Build) and Infrastructure-as-Code (IaC) with Terraform, Helm, or similar tools.
- Kubernetes Migration Experience: Proven track record in migrating Kubernetes workloads between cloud providers, addressing networking, security, and data consistency challenges.
- Security & Observability: Expertise in cloud-native security best practices, vulnerability remediation, and observability solutions.
- Leadership & Communication: Proven ability to lead teams, manage projects, and collaborate with stakeholders across multiple domains.

Preferred Skills & Certifications:
- Experience in self-managed Kubernetes provisioning (e.g., kubeadm, Kubespray) and OpenShift customization (e.g., Operators).
- Industry certifications: CKA, CKAD, or cloud-specific credentials (e.g., AWS Certified DevOps Engineer).
- Exposure to multi-cloud and hybrid cloud migration projects.
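The role centres on workload discovery as the first step of a migration. The posting asks for Go tooling; as a language-agnostic illustration of the discovery idea only, here is a small Python sketch using the official Kubernetes client to inventory Deployments and their images.

```python
# Workload-discovery sketch: list Deployments and their images across
# namespaces as input to migration planning. Illustrative only.
from kubernetes import client, config

def discover_deployments():
    config.load_kube_config()              # or config.load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    inventory = []
    for dep in apps.list_deployment_for_all_namespaces().items:
        inventory.append({
            "namespace": dep.metadata.namespace,
            "name": dep.metadata.name,
            "replicas": dep.spec.replicas,
            "images": [c.image for c in dep.spec.template.spec.containers],
        })
    return inventory

if __name__ == "__main__":
    for item in discover_deployments():
        print(item)
```

An inventory like this feeds the "planning and transformation into Kubernetes artifacts" step, for example by flagging images hosted in a source cloud's registry that must be re-pushed before a cross-provider move.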

Posted 2 weeks ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Mumbai

Work from Office

Dynatrace Specialist (Banking Domain)

Role Summary: We are looking for a skilled Dynatrace Specialist with strong experience in Application Performance Monitoring (APM), Dynatrace SaaS implementation, and cloud observability. The ideal candidate will have a solid background in banking domain environments, migration from legacy monitoring tools, and a strong understanding of DevOps, CI/CD, and Agile delivery practices.

Key Responsibilities:
- Implement and manage Dynatrace SaaS for application performance monitoring
- Migrate legacy monitoring solutions to next-gen observability solutions
- Implement logging services with Dynatrace and Grail Datalake
- Diagnose and optimize application, middleware, and infrastructure performance
- Monitor and report on business metrics, customer experience, and digital product optimization
- Work with agile software engineering teams to integrate observability into CI/CD and DevOps pipelines
- Configure and manage event management processes in alignment with ITIL
- Develop and maintain automation scripts (Ansible, Shell, Bash, Perl, PowerShell) for monitoring requirements
- Collaborate with stakeholders to design monitoring solutions for complex applications and architectures

Mandatory Skills & Experience:
- Bachelor's degree in IT, Computer Science, or a related field
- 5+ years of experience in Application Performance Monitoring using enterprise-standard tools
- Proven Dynatrace SaaS implementation experience
- Experience migrating from legacy monitoring solutions to modern observability platforms
- Cloud observability experience
- Logging services implementation with Dynatrace & Grail Datalake
- 3+ years in Agile software engineering practices
- 3+ years in CI/CD, automation, and DevOps
- Strong knowledge of application architecture, OSI layers, and software design methodologies
- Proven performance tuning expertise across application, middleware, and infrastructure components
- Familiarity with ADO, SharePoint, Confluence, and MS Office tools
- Event Management and ITIL Foundations (certification preferred)
- Scripting experience: Ansible, Shell, Bash, Perl, PowerShell

Preferred Skills:
- Advanced Excel, Power BI, and reporting/analytics tools
- Banking domain experience in digital product monitoring and optimization
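Much of this role is scripting against the monitoring platform itself. As a hedged sketch only: the snippet below pulls a host metric from a Dynatrace tenant over its REST interface. The endpoint path and metric selector follow Dynatrace's Metrics v2 API as I understand it; treat them, the tenant URL, and the token scope as assumptions to verify against your tenant's documentation.

```python
# Sketch of querying a metric from a Dynatrace tenant over HTTP.
# Endpoint, metric selector, and tenant URL are assumptions.
import os
import requests

TENANT = os.environ.get("DT_TENANT", "https://abc12345.live.dynatrace.com")  # placeholder tenant
TOKEN = os.environ["DT_API_TOKEN"]  # expects a token with metrics read permission

def query_cpu_usage():
    resp = requests.get(
        f"{TENANT}/api/v2/metrics/query",
        headers={"Authorization": f"Api-Token {TOKEN}"},
        params={"metricSelector": "builtin:host.cpu.usage", "from": "now-2h"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(query_cpu_usage())
```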

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 10 Lacs

Ghaziabad

Remote

Job Title: Database Administrator (DBA), Mid Level
Location: Remote
Experience Required: 3-5 years
Job Type: Full-Time

Job Overview: We are seeking a Mid-Level Database Administrator (DBA) to manage and maintain our organization's database systems. The ideal candidate will have hands-on experience with Oracle, SQL Server, and/or MySQL/PostgreSQL, and will be responsible for ensuring the performance, availability, security, and integrity of databases used by mission-critical applications.

Key Responsibilities:
- Install, configure, and maintain database systems (Oracle, SQL Server, MySQL, or PostgreSQL)
- Monitor database performance and proactively address issues related to speed and efficiency
- Implement and maintain database security (user roles, permissions, backup & recovery)
- Perform database tuning and optimization (queries, indexes, etc.)
- Conduct regular backups and restore testing
- Create and maintain documentation related to database structure, processes, and policies
- Collaborate with development and infrastructure teams for deployments and performance tuning
- Troubleshoot database-related issues and ensure minimal downtime
- Plan and implement database upgrades and migrations as needed
- Ensure data integrity, availability, and compliance with internal and external standards

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3–5 years of hands-on experience in database administration
- Strong experience with SQL Server and/or Oracle, including backup, restore, replication, and clustering
- Proficiency in SQL and scripting for automation (T-SQL, PL/SQL, Shell/Bash, PowerShell)
- Experience with monitoring tools like SQL Profiler, Oracle Enterprise Manager, Nagios, SolarWinds, or similar
- Good understanding of database security and compliance requirements
- Experience in handling high-availability and disaster recovery (HA/DR) solutions
- Familiarity with cloud database services (AWS RDS, Azure SQL, Google Cloud SQL) is a plus

Preferred Skills:
- Certifications like Microsoft Certified: Azure Database Administrator Associate, Oracle DBA Certified Professional, or similar
- Experience in CI/CD and DevOps environments
- Exposure to NoSQL databases (MongoDB, Cassandra) is a plus
- Familiarity with data warehousing concepts and tools

Soft Skills:
- Strong problem-solving and analytical skills
- Excellent communication and teamwork
- Ability to work independently and manage time effectively
- High attention to detail and accuracy
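For the proactive-monitoring side of a DBA role, a small scripted health check is a common building block. A minimal sketch follows, shown for PostgreSQL via psycopg2 with placeholder connection details; the same idea carries over to Oracle or SQL Server with their respective drivers.

```python
# Minimal database health-check sketch (connection string is a placeholder).
import psycopg2

DSN = "host=db.internal dbname=appdb user=monitor password=secret"  # placeholder DSN

def check_database():
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1;")                                 # basic connectivity check
            cur.execute("SELECT count(*) FROM pg_stat_activity;")    # current session count
            sessions = cur.fetchone()[0]
    return {"reachable": True, "sessions": sessions}

if __name__ == "__main__":
    print(check_database())
```

Wired into cron or a monitoring agent, a check like this surfaces connectivity or connection-pool exhaustion problems before users report them.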

Posted 2 weeks ago

Apply

7.0 - 8.0 years

15 - 19 Lacs

Mumbai

Remote

As we expand our operations, we are looking for a skilled Azure DevOps Architect to join our remote team in India. Role Responsibilities : - Design and implement scalable Azure DevOps solutions. - Develop Continuous Integration and Continuous Deployment (CI/CD) pipelines. - Automate infrastructure provisioning using Infrastructure as Code (IaC) practices. - Collaborate with software development teams to enhance product delivery. - Monitor system performance and optimize resource utilization. - Ensure application security and compliance with industry standards. - Lead DevOps transformations and best practices implementation. - Provide technical guidance and support to cross-functional teams. - Identify and resolve technical issues and bottlenecks. - Document and maintain architecture designs and deployment procedures. - Stay updated with the latest technologies and advancements in Azure. - Facilitate training sessions for team members on DevOps tools. - Engage with stakeholders to gather requirements and feedback. - Participate in planning and estimation activities for projects. - Contribute to a culture of continuous improvement and innovation. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - Minimum of 7 years of experience in DevOps engineering. - Proven experience with Azure DevOps tools and services. - Strong knowledge of CI/CD tools such as Azure Pipelines, Jenkins, or GitLab CI. - Experience with Infrastructure as Code tools such as Terraform or ARM Templates. - Hands-on experience with containerization technologies like Docker and Kubernetes. - Solid understanding of cloud architecture and deployment strategies. - Proficiency in scripting languages such as PowerShell, Bash, or Python. - Familiarity with Agile methodologies and practices. - Experience with monitoring tools like Azure Monitor or Grafana. - Excellent communication and collaboration skills. - Strong analytical and problem-solving abilities. - Ability to work independently in a remote team environment. - Certifications in Azure (e.g., Azure Solutions Architect Expert) are a plus. - A background in software development is advantageous.

Posted 2 weeks ago

Apply

10.0 - 12.0 years

7 - 12 Lacs

Gurugram

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. - Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : of hands-on experience as an AWS Engineer or similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. 
- Proven ability to implement and enforce Security & Compliance measures, including IAM roles boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection). - Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Bengaluru

Hybrid

This is Rajlaxmi from the HR department of ISoftStone Inc. We are looking for a TechOps Engineer with 3+ years of experience. Please find the JD below; if interested, please send your CV to "rajlaxmi.chowdhury@isoftstone.com".

Location: Bangalore/Remote
Relevant Experience: 3+ years

Overview: We are seeking a highly motivated and skilled TechOps Engineer to join our team. The ideal candidate will be responsible for ensuring the smooth operation and performance of GTP services, providing technical support, troubleshooting issues, and implementing solutions to optimize efficiency. This is an opportunity to work in a dynamic and innovative environment. We foster a collaborative and inclusive culture that values creativity, initiative, and continuous learning. If you are a self-motivated professional with a passion for technology and a drive for excellence, we invite you to apply and be an integral part of our team. Career progression opportunities exist for suitably skilled and motivated individuals in the wider GTP function.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- ITIL v3 or v4 Foundation certification is preferred.
- Excellent communication skills and the ability to articulate technical issues and requirements.
- Excellent problem-solving and troubleshooting skills.

Preferred Skills:
- Comprehensive understanding of ITIL processes and best practices.
- Comprehensive understanding of monitoring systems such as Dynatrace, Sentry, Grafana, Prometheus, Azure Monitor, GCP Operations Suite, etc.
- Proficiency in cloud technologies (e.g., AWS, Azure, GCP).
- Understanding of operating Couchbase, MongoDB, and PostgreSQL is preferred.
- Understanding of backup and disaster recovery concepts and tools to ensure the availability and recoverability of production systems in the event of a disaster.
- Certification in relevant technologies (e.g., Microsoft Azure, GCP) is a plus.
- Familiarity with DevOps practices such as CI/CD workflows, experience with GitHub Actions, and proficiency with infrastructure automation tools.
- Knowledge of the software development lifecycle.
- Knowledge of containerization and orchestration tools such as Kubernetes.
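Since the role leans on monitoring systems such as Prometheus and Grafana, here is a small illustrative sketch of the kind of check a TechOps engineer might script on top of that stack: querying Prometheus's HTTP API for targets that are down. The Prometheus URL is a placeholder.

```python
# Sketch: list scrape targets currently reporting down via the Prometheus HTTP API.
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # placeholder

def down_targets():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},      # instant vector of targets that failed their last scrape
        timeout=10,
    )
    resp.raise_for_status()
    return [r["metric"] for r in resp.json()["data"]["result"]]

if __name__ == "__main__":
    for target in down_targets():
        print("DOWN:", target.get("job"), target.get("instance"))
```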

Posted 2 weeks ago

Apply

2.0 - 7.0 years

10 - 14 Lacs

Coimbatore

Work from Office

Job Summary : We are seeking a skilled Erlang Developer to join our backend engineering team. The ideal candidate will have a strong background in Erlang, with working experience in Elixir and RabbitMQ. You will play a key role in designing, building, and maintaining scalable, fault-tolerant systems used in high-availability environments. Key Responsibilities : - Design, develop, test, and maintain scalable Erlang-based backend applications. - Collaborate with cross-functional teams to understand requirements and deliver efficient solutions. - Integrate messaging systems such as RabbitMQ to ensure smooth communication between services. - Write reusable, testable, and efficient code in Erlang and Elixir. - Monitor system performance and troubleshoot issues in production. - Ensure high availability and responsiveness of services. - Participate in code reviews and contribute to best practices in functional programming. Required Skills : - Proficiency in Erlang with hands-on development experience. - Working knowledge of Elixir and the Phoenix framework. - Strong experience with RabbitMQ and messaging systems. - Good understanding of distributed systems and concurrency. - Experience with version control systems like Git. - Familiarity with CI/CD pipelines and containerization (Docker is a plus). Preferred Qualifications : - Experience working in telecom, fintech, or real-time systems. - Knowledge of OTP (Open Telecom Platform) and BEAM VM internals. - Familiarity with monitoring tools like Prometheus, Grafana, etc.
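The posting is Erlang/Elixir focused; purely to illustrate the RabbitMQ messaging pattern it describes, here is a Python sketch using pika. Queue name, broker host, and payload are placeholders.

```python
# RabbitMQ publish/consume sketch (broker, queue, and payload are placeholders).
import pika

params = pika.ConnectionParameters(host="localhost")   # placeholder broker
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="payments", durable=True)  # survive broker restarts

# Publish a persistent message.
channel.basic_publish(
    exchange="",
    routing_key="payments",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),   # mark message persistent
)

# Consume with explicit acknowledgements so unprocessed messages are redelivered.
def handle(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="payments", on_message_callback=handle)
channel.start_consuming()
```

Durable queues plus explicit acks are what give the "smooth communication between services" the posting asks for when a consumer crashes mid-message.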

Posted 2 weeks ago

Apply

4.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Key Responsibilities : - Lead Automation : Design and implement an automation framework to migrate workloads to Kubernetes platforms such as AWS EKS, Azure AKS, Google GKE, Oracle OKE, and OpenShift. - Develop Cloud-Native Automation Tools : Build automation tools using Go (Golang) for workload discovery, planning, and transformation into Kubernetes artifacts. - Migrate Kubernetes Across Cloud Providers : Plan and execute seamless migrations of Kubernetes workloads from one cloud provider to another (AWS - Azure, GCP - OCI, etc.) with minimal disruption. - Leverage Open-Source Technologies : Utilize Helm, Kustomize, ArgoCD, and other popular open-source frameworks to streamline cloud-native adoption. - CI/CD & DevOps Integration : Architect and implement CI/CD pipelines using Jenkins (including Jenkinsfile generation) and cloud-native tools like AWS CodePipeline, Azure DevOps, and GCP Cloud Build to support diverse Kubernetes deployments. - Security & Compliance : Define and enforce security best practices, implement zero-trust principles, and proactively address vulnerabilities in automation workflows. - Technical Leadership & Mentorship : Lead and mentor a team of developers, fostering expertise in Golang development, Kubernetes, and DevOps best practices. - Stakeholder Collaboration : Work closely with engineering, security, and cloud teams to align modernization and migration efforts with business goals and project timelines. - Performance & Scalability : Ensure high performance, scalability, and security across automation frameworks and multi-cloud Kubernetes deployments. - Continuous Innovation : Stay ahead of industry trends, integrating emerging tools and methodologies to enhance automation and Kubernetes portability. Qualifications & Experience : - Education : Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience). - Experience : 8+ years in software development, DevOps, or cloud engineering, with 3+ years in a leadership role. - Programming Expertise : Strong proficiency in Go (Golang), Python for building automation frameworks and tools. - Kubernetes & Containers : Deep knowledge of Kubernetes (K8s), OpenShift, Docker, and container orchestration. - Cloud & DevOps : Hands-on experience with AWS, Azure, GCP, OCI, self-managed Kubernetes, OpenShift, and DevOps practices. - CI/CD & Infrastructure-as-Code : Strong background in CI/CD tools (Jenkins, Git, AWS CodePipeline, Azure DevOps, GCP Cloud Build) and Infrastructure-as-Code (IaC) with Terraform, Helm, or similar tools. - Kubernetes Migration Experience : Proven track record in migrating Kubernetes workloads between cloud providers, addressing networking, security, and data consistency challenges. - Security & Observability : Expertise in cloud-native security best practices, vulnerability remediation, and observability solutions. - Leadership & Communication : Proven ability to lead teams, manage projects, and collaborate with stakeholders across multiple domains. Preferred Skills & Certifications : - Experience in self-managed Kubernetes provisioning (e.g., kubeadm, Kubespray) and OpenShift customization (e.g., Operators). - Industry Certifications - CKA, CKAD, or cloud-specific credentials (e.g., AWS Certified DevOps Engineer). - Exposure to multi-cloud and hybrid cloud migration projects

Posted 2 weeks ago

Apply

5.0 - 9.0 years

18 - 27 Lacs

Noida

Hybrid

Position Summary: The DevOps Cloud Infrastructure Level 3 Engineer plays a critical role in designing, implementing, maintaining, and supporting robust cloud infrastructure solutions within a fast-paced, innovative enterprise environment. This advanced position is tailored for an experienced professional who possesses in-depth knowledge of cloud technologies, automation, infrastructure as code (IaC), continuous integration and continuous deployment (CI/CD) pipelines, and possesses foundational networking expertise. The Level 3 designation signifies senior-level responsibilities, including ownership of complex technical issues, mentoring junior staff, and shaping the cloud infrastructure strategy for the organisation. Key Responsibilities: Cloud Infrastructure Design and Implementation: Architect and deploy scalable, highly available, and fault-tolerant cloud environments using industry-leading platforms such as AWS, Azure, or Google Cloud Platform (GCP). Automation and Configuration Management: Develop, maintain, and improve infrastructure automation using tools such as Terraform, CloudFormation, Ansible, or similar. Ensure systems can be deployed, configured, monitored, and managed via code. CI/CD Pipeline Development and Maintenance: Build and optimise CI/CD pipelines using tools like Jenkins, GitLab CI, Azure DevOps, or similar, to ensure rapid and reliable deployment of software and infrastructure changes. Monitoring, Logging, and Observability : Implement and manage monitoring, alerting, and logging solutions (e.g., Prometheus, Grafana, ELK/EFK Stack, CloudWatch, Stackdriver). Ensure high visibility into system operations and the ability to proactively respond to incidents. Incident Response and Troubleshooting: Lead the investigation and resolution of complex infrastructure and application issues. Serve as an escalation point for Level 2 engineers. Perform root cause analysis and drive continuous improvement initiatives. Security and Compliance: Implement and enforce cloud security best practices, including identity and access management (IAM), encryption, vulnerability management, and network segmentation. Contribute to compliance efforts (e.g., ISO 27001, SOC 2, GDPR) as required. Cost Optimisation: Monitor cloud usage and spending; recommend and implement strategies to manage and reduce costs without compromising performance or security. Networking: Design and manage cloud networking components such as VPCs, subnets, security groups, firewalls, and VPNs. Apply basic networking principles such as IP addressing, routing, DNS, TCP/IP, and load balancing to ensure secure and efficient connectivity. Documentation: Create and maintain technical documentation for infrastructure design, processes, and troubleshooting guides. Ensure knowledge is shared across the team. Mentorship and Collaboration: Mentor junior DevOps engineers and collaborate closely with developers, security teams, and other stakeholders to deliver high-quality solutions. Participate in code reviews, architectural discussions, and cross-functional meetings. Continuous Improvement : Research and recommend new technologies, tools, and practices that improve reliability, performance, and developer productivity. Contribute to a culture of innovation and learning. Qualifications and Experience: Education: Bachelor's degree in computer science, Information Technology, Engineering, or equivalent professional experience. 
Experience: Minimum 5+ years of relevant experience in DevOps, Cloud Engineering, or System Administration roles, with demonstrable expertise in cloud infrastructure at scale. Certifications (Preferred): Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, Google Professional Cloud Architect, or equivalent. Networking Knowledge: Understanding of basic networking concepts, including but not limited to IP addressing, subnetting, routing, firewalls, DNS, and VPNs. Cloud Platform Proficiency: Advanced experience with at least one major cloud provider (AWS, Azure, GCP), including services such as compute, storage, networking, security, and database. Automation Tools: Proficient in infrastructure automation and configuration management tools (e.g., Terraform, Ansible, Puppet, CloudFormation). CI/CD Tools: Working knowledge of pipeline tools such as Jenkins, GitLab CI, Azure DevOps, or similar. Monitoring & Logging: Experience implementing and managing observability platforms (e.g., Prometheus, Grafana, ELK/EFK, Cloud-native solutions). Scripting: Proficiency in scripting languages such as Python, Bash, PowerShell, or similar for automation and orchestration. Operating Systems: Advanced knowledge of Linux/Unix and Windows server environments. Security Best Practices: Experience with IAM, encryption, vulnerability management, and incident response in cloud environments. Typical Projects and Tasks Migrating legacy on-premises solutions to the cloud Designing and deploying serverless architectures Implementing automated backup and disaster recovery solutions Building monitoring dashboards and proactive alerting systems Developing and enforcing security policies across cloud accounts Integrating third-party tools and APIs into cloud workflows Participating in proof-of-concept evaluations for new cloud technologies
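One concrete slice of the "monitoring dashboards and proactive alerting systems" work listed above is codifying an alert rather than clicking it together in a console. A minimal boto3 sketch follows; the instance ID, threshold, and SNS topic ARN are placeholders.

```python
# Sketch: create a CloudWatch alarm on EC2 CPU with boto3. Values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                       # 5-minute datapoints
    EvaluationPeriods=3,              # must breach for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder SNS topic
    TreatMissingData="breaching",     # missing data is treated as a problem
)
```

In practice a team at this level would wrap the same definition in Terraform or CloudFormation so alarms are reviewed and versioned like any other code.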

Posted 2 weeks ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Pune

Work from Office

Location: Pune (5 days work from office)
Working Days: Any 5 days of the week
Rotational Shifts: 10 AM - 7 PM and 2 PM - 11 PM

Key Responsibilities
- Provide first-level and second-level IT support for LOS/LMS applications.
- Troubleshoot and resolve incidents related to loan processing, integrations, and workflows.
- Support key modules such as KYC, PAN validation, NPA tracking, Credit Card flow, and Voucher management.
- Monitor system performance and ensure timely resolution of user queries and technical issues.
- Coordinate with vendors, business, and internal IT teams for issue resolution and enhancements.
- Document incidents and resolutions, and prepare knowledge base articles for recurring issues.
- Ensure compliance with ITIL processes (Incident, Change, and Problem management).
- Assist business teams in understanding loan lifecycle processes (disbursement, repayment, closure, NPA handling).

Required Skills & Experience
- 2-6 years of experience in IT Application Support, preferably in the BFSI domain.
- Hands-on experience supporting Loan Management Systems (LMS) or Loan Origination Systems (LOS).
- Knowledge of integrations with PAN/KYC validation, credit bureaus, payment gateways, and other banking systems.
- Familiarity with SQL queries for issue analysis and reporting.
- Strong analytical, troubleshooting, and communication skills.
- Exposure to ITIL processes (Incident/Change/Problem management).
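"SQL queries for issue analysis" in this kind of support role often means triage queries for stuck records. The sketch below illustrates the idea; the table and column names are hypothetical and would need to match the actual LOS/LMS schema, and sqlite3 is used only to keep the snippet self-contained.

```python
# Triage sketch: loan applications stuck in a pending state for over a day.
# Table/column names are hypothetical; adapt to the real LOS/LMS schema.
import sqlite3

QUERY = """
SELECT application_id, status, updated_at
FROM loan_applications                       -- hypothetical table
WHERE status = 'DISBURSEMENT_PENDING'
  AND updated_at < datetime('now', '-1 day')
ORDER BY updated_at;
"""

def stuck_applications(db_path: str = "lms.db"):
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()

if __name__ == "__main__":
    for row in stuck_applications():
        print(row)
```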

Posted 2 weeks ago

Apply

6.0 years

40 - 42 Lacs

Bengaluru

Work from Office

About Aurigo
Aurigo is revolutionizing how the world plans, builds, and manages infrastructure projects with Masterworks, our industry-leading enterprise SaaS platform. Trusted by over 300 customers managing $300 billion in capital programs, Masterworks is setting new standards for project delivery and asset management. Recognized as one of the Top 25 AI Companies of 2024 and a Great Place to Work for three consecutive years, we are leveraging artificial intelligence to create a smarter, more connected future for customers in transportation, water and utilities, healthcare, higher education, and government, with over 40,000 projects across North America. At Aurigo, we don’t just develop software—we shape the future. If you’re excited to join a fast-growing company and collaborate with some of the brightest minds in the industry to solve real-world challenges, let’s connect.

Description: The SRE team provides hosting, operations, database, security, and scaling support for Aurigo’s flagship products hosted on AWS cloud infrastructure. The SRE role at Aurigo is to enable the business to deliver, operate, maintain, and scale our flagship products. To do so, Aurigo needs to design, implement, and maintain highly available and responsive cloud infrastructure, and requires a dynamic Site Reliability Engineer with both application management and infrastructure administration skills. The engineer should be capable of delivering a highly available and reliable application environment and will be responsible for a variety of technical, operational, and consultative activities, including system administration and release engineering tasks for our flagship products. It is critically important to the company that the applications and database systems offer the highest levels of reliability and performance. We are committed to providing 99.99% uptime.

Requirements:
- 4 or more years of hands-on experience with AWS, Kubernetes, Ansible, and scripting (mandatory).
- Must have experience with CI/CD tools such as Jenkins, Azure DevOps (ADO), or similar.
- CKA (Certified Kubernetes Administrator) is good to have.
- Minimum 3 years of hands-on experience with Kubernetes and Ansible.
- Hands-on experience with Ansible for automation of infrastructure provisioning and configuration management.
- Hands-on experience with Linux administration.
- In-depth understanding of cloud networking concepts such as VPC peering, VPN connectivity, EKS networking, load balancers, and web application security.
- Well versed in general, release-specific, and security monitoring tools such as New Relic, Sumo Logic, OpenSearch/Kibana, CrowdStrike, etc.; able to identify gaps and drive process improvements.
- Able to conduct regular disaster recovery and business continuity tests for various application components on AWS and make improvements.
- Able to coordinate with cross-functional groups such as development and management on various tasks, vulnerability management, etc.
- Must hold at least one AWS associate-level certification (Solutions Architect, SysOps, or Developer); AWS Solutions Architect / DevOps Engineer Professional or Specialty certifications preferred.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

12 - 22 Lacs

Gurugram

Hybrid

What You Will Be Doing Enlighten, Enable and Empower a fast-growing set of multi-disciplinary teams, across multiple applications and locations. Tackle complex development, automation and business process problems. Champion Cvent standards and best practices. Ensure the scalability, performance, and resilience of Cvent products and processes. Work with product development teams, Cloud Automation and other SRE teams to ensure a holistic understanding of observability gaps and their effective and efficient identification and resolution. Identify recurring problems and anti-patterns in development, operational and security processes and help respective team to build observability for those. Develop build, test and deployment automation that seamlessly targets multiple on-premises and AWS regions. Give back by working on and contributing to Open-Source projects. What You Need for this Position Must have skills: Excellent communication skills and track record working in distributed teams A passion for and track record in making things better for your peers. Experience managing AWS services / operational knowledge of managing applications in AWS ideally via automation. Fluent in at least one scripting languages like Typescript, Javascript, Python, Ruby and Bash. Experience with SDLC methodologies (preferably Agile). Experience with Observability (Logging, Metrics, Tracing) and SLI/SLO Working with APM, monitoring, and logging tool (Datadog, New Relic, Splunk) Good understanding of containerization concepts - docker, ECS, EKS, Kubernetes. Self-motivation and the ability to work under minimal supervision Troubleshooting and responding to incidents, set a standard for others to prevent the issues in future. Good to have skills: Experience with Infrastructure as Code (IaC) tools such as CloudFormation, CDK (preferred) and Terraform. Experience managing 3 tier application stacks. Understanding of basic networking concepts. Experience on Server configuration through Chef, Puppet, Ansible or equivalent Working experience with NoSQL databases such as MongoDB, Couchbase, Pos
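The posting asks for experience with SLI/SLO work. For context, here is a tiny worked sketch of an error-budget calculation; the 99.9% target and the request counts are made-up illustrative numbers, not Cvent figures.

```python
# Worked SLO error-budget sketch with illustrative numbers.
SLO_TARGET = 0.999           # 99.9% availability objective

total_requests = 12_500_000  # illustrative monthly volume
failed_requests = 9_800      # illustrative failures

availability = 1 - failed_requests / total_requests
budget_total = (1 - SLO_TARGET) * total_requests   # errors allowed this month (12,500 here)
budget_used = failed_requests / budget_total

print(f"availability: {availability:.4%}")          # ~99.9216%
print(f"error budget consumed: {budget_used:.1%}")  # ~78.4%
```

Tracking budget consumption like this is what turns observability data (logging, metrics, tracing) into a concrete decision signal: when most of the budget is gone, reliability work takes priority over new releases.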

Posted 2 weeks ago

Apply

7.0 - 8.0 years

19 - 22 Lacs

Gurugram

Remote

Role Responsibilities : - Design and implement scalable Azure DevOps solutions. - Develop Continuous Integration and Continuous Deployment (CI/CD) pipelines. - Automate infrastructure provisioning using Infrastructure as Code (IaC) practices. - Collaborate with software development teams to enhance product delivery. - Monitor system performance and optimize resource utilization. - Ensure application security and compliance with industry standards. - Lead DevOps transformations and best practices implementation. - Provide technical guidance and support to cross-functional teams. - Identify and resolve technical issues and bottlenecks. - Document and maintain architecture designs and deployment procedures. - Stay updated with the latest technologies and advancements in Azure. - Facilitate training sessions for team members on DevOps tools. - Engage with stakeholders to gather requirements and feedback. - Participate in planning and estimation activities for projects. - Contribute to a culture of continuous improvement and innovation. Qualifications : - Bachelor's degree in Computer Science, Information Technology, or related field. - Minimum of 7 years of experience in DevOps engineering. - Proven experience with Azure DevOps tools and services. - Strong knowledge of CI/CD tools such as Azure Pipelines, Jenkins, or GitLab CI. - Experience with Infrastructure as Code tools such as Terraform or ARM Templates. - Hands-on experience with containerization technologies like Docker and Kubernetes. - Solid understanding of cloud architecture and deployment strategies. - Proficiency in scripting languages such as PowerShell, Bash, or Python. - Familiarity with Agile methodologies and practices. - Experience with monitoring tools like Azure Monitor or Grafana. - Excellent communication and collaboration skills. - Strong analytical and problem-solving abilities. - Ability to work independently in a remote team environment. - Certifications in Azure (e.g., Azure Solutions Architect Expert) are a plus. - A background in software development is advantageous.

Posted 2 weeks ago

Apply

10.0 - 12.0 years

11 - 15 Lacs

Mumbai

Work from Office

About the Role : We are seeking an experienced and highly skilled Senior AWS Engineer with over 10 years of professional experience to join our dynamic and growing team. This is a fully remote position, requiring strong expertise in serverless architectures, AWS services, and infrastructure as code. You will play a pivotal role in designing, implementing, and maintaining robust, scalable, and secure cloud solutions. Key Responsibilities : - Design & Implementation : Lead the design and implementation of highly scalable, resilient, and cost-effective cloud-native applications leveraging a wide array of AWS services, with a strong focus on serverless architecture and event-driven design. - AWS Services Expertise : Architect and develop solutions using core AWS services including AWS Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, Amazon Pinpoint, and Cognito. - Infrastructure as Code (IaC) : Develop, maintain, and optimize infrastructure using AWS CDK (Cloud Development Kit) to ensure consistent, repeatable, and version-controlled deployments. Drive the adoption and implementation of CodePipeline for automated CI/CD. - Serverless & Event-Driven Design : Champion serverless patterns and event-driven architectures to build highly efficient and decoupled systems. - Cloud Monitoring & Observability : Implement comprehensive monitoring and observability solutions using CloudWatch Logs, X-Ray, and custom metrics to proactively identify and resolve issues, ensuring optimal application performance and health. - Security & Compliance : Enforce stringent security best practices, including the establishment of robust IAM roles and boundaries, PHI/PII tagging, secure configurations with Cognito and KMS, and adherence to HIPAA standards. Implement isolation patterns and fine-grained access control mechanisms. - Cost Optimization : Proactively identify and implement strategies for AWS cost optimization, including S3 lifecycle policies, leveraging serverless tiers, and strategic service selection (e.g., evaluating Amazon Pinpoint vs. SES based on cost-effectiveness). - Scalability & Resilience : Design and implement highly scalable and resilient systems incorporating features like auto-scaling, Dead-Letter Queues (DLQs), retry/backoff mechanisms, and circuit breakers to ensure high availability and fault tolerance. - CI/CD Pipeline : Contribute to the design and evolution of CI/CD pipelines, ensuring automated, efficient, and reliable software delivery. - Documentation & Workflow Design : Create clear, concise, and comprehensive technical documentation for architectures, workflows, and operational procedures. - Cross-Functional Collaboration : Collaborate effectively with cross-functional teams, including developers, QA, and product managers, to deliver high-quality solutions. - AWS Best Practices : Advocate for and ensure adherence to AWS best practices across all development and operational activities. Required Skills & Experience : of hands-on experience as an AWS Engineer or similar role. - Deep expertise in AWS Services : Lambda, API Gateway, S3, DynamoDB, Step Functions, SQS, AppSync, CloudWatch Logs, X-Ray, EventBridge, Amazon Pinpoint, Cognito, KMS. - Proficiency in Infrastructure as Code (IaC) with AWS CDK; experience with CodePipeline is a significant plus. - Extensive experience with Serverless Architecture & Event-Driven Design. - Strong understanding of Cloud Monitoring & Observability tools : CloudWatch Logs, X-Ray, Custom Metrics. 
- Proven ability to implement and enforce Security & Compliance measures, including IAM roles boundaries, PHI/PII tagging, Cognito, KMS, HIPAA standards, Isolation Pattern, and Access Control. - Demonstrated experience with Cost Optimization techniques (S3 lifecycle policies, serverless tiers, service selection). - Expertise in designing and implementing Scalability & Resilience patterns (auto-scaling, DLQs, retry/backoff, circuit breakers). - Familiarity with CI/CD Pipeline Concepts. - Excellent Documentation & Workflow Design skills. - Exceptional Cross-Functional Collaboration abilities. - Commitment to implementing AWS Best Practices.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

20 - 22 Lacs

Chennai

Work from Office

Job Summary: We are seeking an experienced Tableau Administrator with a minimum of 10 years in Tableau Server administration and hands-on experience in upgrading Tableau environments across major versions. The primary responsibility of this role is to lead and execute the upgrade of our current Tableau Server to the latest stable version, ensuring minimal disruption to reports and users.

Role & responsibilities
- Lead the end-to-end Tableau Server upgrade process from the current version to the latest stable release.
- Perform pre-upgrade assessments, compatibility checks, and risk evaluations.
- Plan and execute full backups, validation testing, and rollback procedures.
- Collaborate with BI teams and stakeholders to ensure report and dashboard compatibility post-upgrade.
- Troubleshoot any issues that arise during or after the upgrade process.
- Maintain documentation of the upgrade process, configurations, and system changes.
- Work with Tableau Support if escalation is required during the upgrade.
- Optimize server performance and validate environment stability post-upgrade.

Preferred candidate profile
• 10+ years of professional experience as a Tableau Administrator.
• Proven hands-on experience in upgrading Tableau Server (e.g., from 2021.x to 2023.x or later).
• Strong knowledge of Tableau architecture, services, and configurations (TSM).
• Experience with backup/restore processes, SSL, authentication (LDAP/SAML), and security best practices.
• Proficiency in scripting (PowerShell, Bash, or Python) to automate tasks.
• Familiarity with monitoring tools (TabMon, LogShark) and performance tuning.
• Excellent documentation and communication skills.
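For the backup-and-rollback planning piece, the kind of scripting the posting mentions might look like the sketch below, which drives the TSM command line from Python. The tsm subcommands reflect Tableau's documented CLI as best recalled here; treat them as assumptions and verify against the docs for your server version.

```python
# Sketch of automating a pre-upgrade TSM backup. Command names should be
# verified against the target Tableau Server version's documentation.
import subprocess
from datetime import datetime

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop the script if any step fails

def pre_upgrade_backup():
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    run(["tsm", "version"])                                        # record current version
    run(["tsm", "status", "-v"])                                   # confirm services are healthy
    run(["tsm", "maintenance", "backup", "-f", f"pre-upgrade-{stamp}"])
    run(["tsm", "settings", "export", "-f", f"settings-{stamp}.json"])

if __name__ == "__main__":
    pre_upgrade_backup()
```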

Posted 2 weeks ago

Apply

9.0 - 14.0 years

30 - 37 Lacs

Hyderabad

Work from Office

Typically, 8-16 years of professional experience and a Bachelor of Arts/Science or equivalent degree in computer science or related area of study; without a degree, three additional years of relevant professional experience (11+ years in total). Knowledge and Skills: Mandate Skills - Monitoring Tools, ITOM, AIOps, Customer Facing, Observability Has sufficient depth and breadth of technical knowledge to design and scope multiple deliverables across a number of technologies. Has demonstrated innovation and communication of new deliverables and offerings. Has led team in the delivery of multiple deliverables across multiple technologies. Ability to develop solutions that enhance the availability, performance, maintainability and agility of a particular customer's enterprise. Has contributed to the design and application of new tools. Ability to re-use existing experience to develop new solutions to take to market. Possesses an understanding, at a detailed level, of architectural dependencies of technologies in use in the customer's IT environment. Frequently uses product and application knowledge along with internals or architectural knowledge to develop solutions. A recognized expert in one or more technologies within own technical community and also at regional level. Holds a vendor or industry certification in at least one discipline area. Able to communicate with internal and external senior management confidently and demonstrate the professionalism of the job family. Ability to work in a multi- technology environment with the ability to diagnose complex technical problems to their root cause. In addition to troubleshooting skills and consulting skills, has ability to summarise prognosis and impact at practice lead level. Ability to adapt a consulting style appropriate to the situation and can identify up-sell opportunities. Be able to demonstrate a broad understanding of market dynamics, an industry area, commercial issues, and technical concerns whilst maintaining depth in core focus area. Ability to present within own area of expertise as part of a customer sales presentation, putting forward domain-specific information within the context of the company sales campaign. Has demonstrated ability to lead others in the gathering of requirements, designs, plans and estimates. Able to produce complete proposals for smaller engagements within own area of expertise. Demonstrates broad knowledge in other technical areas in order to properly manage complex integration efforts. Demonstrates application of technical expertise in successful engagements involving multiple disciplines. Able to independently complete solution implementation or application design deliverables.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Principal BizOps Engineer at Mastercard, you will be part of the Business Operations (Biz Ops) team, specifically as a Business Operations Site Reliability Engineer (SRE). Your primary responsibility will be to ensure the production readiness of Mastercard products. This involves maintaining the stability and health of the platform, supporting developers in building resilient products, and enforcing operational standards. Your role will also include engaging early in the development lifecycle to be proactive in managing production and change activities, all while maximizing customer experience and ensuring compliance and risk mitigation.

You will serve as the main contact for overseeing the overall health, performance, scalability, resilience, and capacity of applications. This entails supporting services before launch, collaborating with development teams to establish monitoring strategies, and automating alerts to escalate issues proactively. You will also be involved in incident response, post-mortems, and problem-solving to optimize recovery time and enhance reliability.

In addition, you will work on automating data-driven alerts, improving the CI/CD pipeline, analyzing ITSM activities, and strategizing and designing efficient solutions for various aspects such as security, resilience, networking, and deployments. Your role will require a systematic problem-solving approach, strong communication skills, and the ability to collaborate with cross-functional teams to ensure system behavior aligns with expectations.

The ideal candidate for this role will have a BS degree in Computer Science or a related field, coding or scripting experience, and a curiosity for new technologies and automation. You should possess knowledge of algorithms, data structures, and large-scale distributed systems. Additionally, experience with industry-standard tools, monitoring solutions, and cloud platforms like Azure, GCP, or AWS is advantageous. Preferred qualifications include coding experience in languages such as C++, Java, Python, or Go, familiarity with CI/CD tools, and expertise in network concepts, operating systems, and security implementations. You should also demonstrate a willingness to learn, adapt to challenging opportunities, and prioritize long-term system health while balancing quick fixes.

As a member of the Mastercard team, you are expected to adhere to security policies, maintain the confidentiality and integrity of information, report any security breaches, and participate in mandatory security trainings. Your role is crucial in ensuring the security and success of Mastercard's operations and products.
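For illustration only, the toy check below shows the general shape of the data-driven, automated alerting this role describes; the metric source, the 2% threshold, and the escalation path are invented for the example and are not Mastercard tooling.

```python
"""Illustrative sketch only: a data-driven alert check of the kind the
posting describes. The metric source, 2% threshold, and escalation hook
are hypothetical placeholders."""
import logging

ERROR_RATE_THRESHOLD = 0.02  # assumed SLO: alert above 2% failed requests

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")


def fetch_error_rate(window_minutes: int = 5) -> float:
    """Placeholder for a query against a real observability backend
    (e.g. Splunk or Prometheus); here it just returns a canned value."""
    return 0.031  # pretend 3.1% of requests failed in the window


def check_and_escalate() -> None:
    rate = fetch_error_rate()
    if rate > ERROR_RATE_THRESHOLD:
        # A real setup would page the on-call rotation; here we just log.
        logging.warning("Error rate %.1f%% exceeds %.1f%% threshold - escalating",
                        rate * 100, ERROR_RATE_THRESHOLD * 100)
    else:
        logging.info("Error rate %.1f%% within SLO", rate * 100)


if __name__ == "__main__":
    check_and_escalate()
```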

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

ahmedabad, gujarat

On-site

We are looking for a skilled and customer-focused Cloud Engineer to assist in the deployment and management of enterprise software for our customers. In this new role, you will be responsible for enhancing our ability to deliver top-notch, scalable, and dependable cloud infrastructure, primarily in AWS, with some exposure to Azure.

Your main responsibilities will include provisioning and maintaining cloud infrastructure, assisting with software deployments and troubleshooting, setting up monitoring and alerting systems, collaborating with DevOps on deployment automation, contributing to internal documentation, and participating in an on-call rotation for customer support and operational incidents.

To be successful in this role, you should have experience in cloud engineering or DevOps, hands-on expertise with AWS, and some knowledge of Azure environments. Proficiency in Terraform, Kubernetes, Docker, Bash scripting, and Helm is required, along with a solid understanding of networking fundamentals and monitoring/logging tools. Strong troubleshooting skills, experience with customer-facing deployments, good communication skills, and the ability to work across teams are also essential.

Nice-to-have qualifications include exposure to OpenShift, familiarity with cloud security and compliance practices, and experience in mentoring or leading technical initiatives.

Joining our team will give you the opportunity to be part of a growing, customer-focused infrastructure team where you can contribute to smooth deployments and a strong cloud presence. If you enjoy hands-on infrastructure work, collaboration, and engaging with customers, we would love to hear from you.
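As a hedged sketch of the "setting up monitoring and alerting systems" duty above, the snippet below creates a CloudWatch CPU alarm with boto3; the region, instance ID, SNS topic ARN, and thresholds are placeholders rather than anything specified in the posting.

```python
"""Minimal sketch (assumptions throughout): create a CloudWatch CPU alarm
for an EC2 instance. Region, instance ID, topic ARN, and thresholds are
placeholders; valid AWS credentials are required to run this."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",                      # hypothetical alarm name
    AlarmDescription="CPU above 80% for 10 minutes",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                                         # 5-minute datapoints
    EvaluationPeriods=2,                                # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # placeholder topic
)
print("Alarm created")
```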

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

Join a dynamic team shaping the tech backbone of our operations, where your expertise fuels seamless system functionality and innovation.

As a member of our team, your primary responsibilities will include analyzing and troubleshooting production application flows to ensure end-to-end application or infrastructure service delivery supporting the business operations of the firm. You will play a key role in improving operational stability and availability through your participation in problem management. Monitoring production environments for anomalies and addressing issues utilizing standard observability tools will be crucial to your success in this role. Additionally, you will assist in the escalation and communication of issues and solutions to business and technology stakeholders. Identifying trends and helping to manage incidents, problems, and changes in support of full-stack technology systems, applications, or infrastructure will also be part of your daily tasks.

To excel in this role, you should possess a minimum of 2 years of experience or equivalent expertise in troubleshooting, resolving, and maintaining information technology services. Prior experience in a customer- or client-facing role will be advantageous. Proficiency with AWS Snowflake, AWS Splunk, and Oracle Database is essential, including SQL experience writing and modifying complex queries. Strong communication, organizational, and time-management skills are highly valued. Knowledge of applications or infrastructure in a large-scale technology environment, whether on premises or in the public cloud, will be beneficial, as will exposure to observability and monitoring tools and techniques and familiarity with processes in scope of the Information Technology Infrastructure Library (ITIL) framework.

Preferred qualifications for this role include knowledge of one or more general-purpose programming languages or automation scripting, experience with help desk ticketing systems, and the ability to influence and lead technical conversations with other resolver groups as directed. Experience with Large Language Models (LLM) and Agentic AI would also be considered advantageous.
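Purely to illustrate the "monitoring production environments for anomalies" responsibility (a real setup would use the Splunk/observability tooling named above), here is a minimal, self-contained sketch; the log path, ERROR pattern, and 5% threshold are assumptions.

```python
"""Illustrative only: a tiny anomaly check over an application log.
The log path, error pattern, and 5%-per-window threshold are assumptions,
not details from the posting."""
import re
from pathlib import Path

LOG_PATH = Path("app.log")                # placeholder log file
ERROR_PATTERN = re.compile(r"\bERROR\b")  # assumed log-level marker
ALERT_RATIO = 0.05                        # flag windows where >5% of lines are errors


def scan(lines: list[str], window: int = 1000) -> list[int]:
    """Return indices of fixed-size windows whose error ratio breaches the threshold."""
    flagged = []
    for start in range(0, len(lines), window):
        chunk = lines[start:start + window]
        errors = sum(1 for line in chunk if ERROR_PATTERN.search(line))
        if chunk and errors / len(chunk) > ALERT_RATIO:
            flagged.append(start // window)
    return flagged


if __name__ == "__main__":
    text = LOG_PATH.read_text().splitlines() if LOG_PATH.exists() else []
    windows = scan(text)
    print(f"{len(windows)} window(s) breached the error-rate threshold: {windows}")
```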

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

The Applications Development Senior Programmer Analyst position requires you to actively participate in establishing and implementing new or updated application systems and programs in collaboration with the Technology team. Your main goal will be to contribute to the analysis and programming activities of application systems.

Your responsibilities will include conducting tasks related to feasibility studies, estimating time and cost, IT planning, risk technology, applications development, and implementing new or revised application systems to meet specific business needs. You will be monitoring and controlling all phases of the development process, providing user and operational support on applications, and analyzing complex problems to provide evaluative judgment. As an Applications Development Senior Programmer Analyst, you will recommend and develop security measures, consult with users/clients on issues, recommend advanced programming solutions, and ensure essential procedures are followed. You will also serve as an advisor to new or lower-level analysts, operate with a limited level of direct supervision, and act as a subject matter expert to senior stakeholders and team members.

In terms of technical proficiency, you should have strong experience in systems design and development of software applications, particularly in Java, Spring Framework, Spring Boot, Kafka, MQ, microservices, Oracle, Mongo, OpenShift, REST, Maven, Git, JUnit, TDD, Agile, and CI/CD pipelines. Proficiency in Python and knowledge of GenAI tools is considered a plus. You are expected to be hands-on with technologies, contribute to design and implementation, and ensure good quality in all aspects of development. Stakeholder management and effective communication with Engineering, QA, and Product/Business teams throughout the SDLC lifecycle are essential.

To qualify for this role, you should have 8 to 12+ years of experience as a Software Engineer/Developer using Java 17 or higher, Spring, Spring Boot, microservices, design patterns, and Kubernetes. Additionally, you should have experience in software engineering best practices, data structures, object-oriented principles, cloud-native development, container orchestration tools, CI/CD pipelines, troubleshooting, agile software delivery, SQL databases, MongoDB, Oracle, event-driven design, security, and observability and monitoring tools.

Your role responsibilities will involve designing, implementing, and deploying software components, leading deliveries of high quality, reviewing design and code, focusing on operational excellence, and making improvements to development and testing processes. Leadership qualities such as organization skills, attention to detail, the ability to multi-task, and excellent communication skills are key requirements for this position. A Bachelor's degree or equivalent experience is required for this role.

This job description offers a comprehensive overview of the responsibilities and qualifications expected for the Applications Development Senior Programmer Analyst position.

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

maharashtra

On-site

You are responsible for managing the day-to-day administration of SAN and storage environments, including collaborating with customers' storage support teams, participating in problem-solving and troubleshooting, supporting the design and operation of storage infrastructure, performing systems-level maintenance, tracking storage usage trends, and capacity planning for future growth. You will also be required to maintain storage documentation, procedures, and reporting, provide Storage Area Network configuration and administration, and allocate storage to server infrastructure.

Additionally, you will manage infrastructure events, including capacity management, monitoring storage subsystem and SAN switch capacity, performing capacity analysis, analyzing performance data and trends, and tracking existing storage usage trends for capacity planning. You will also be responsible for preliminary problem diagnosis and documentation, coordinating with Hitachi Global Support for incident resolution, investigating and diagnosing raised problems for root cause analysis, performing necessary storage infrastructure maintenance, and managing the implementation of additional storage infrastructure and services.

Furthermore, you will mentor and train junior admins, oversee the design, installation, configuration, and de-provisioning of storage solutions, manage performance for SAN and storage, provide capacity tuning and remediation recommendations, and ensure IT service continuity management, including providing disaster recovery solutions and developing and maintaining disaster recovery documentation, data movement, replication, and business resumption testing for storage.

You should have at least 7 years of experience handling the activities mentioned in the scope, experience with Hitachi storage and Brocade switch infrastructure, exposure to operations and management activities on Hitachi storage systems, Unified Compute Platforms, and various host operating systems including basic cluster management, and strong verbal and written communication skills; Hitachi technical certification is desirable. Good communication skills and the ability to manage shifts single-handedly are essential for this role. Immediate starters are preferred as the project is already live.
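As a small worked example of the capacity planning and usage-trend tracking described above (the pool size and quarterly usage figures are invented for the illustration, not taken from the posting), a simple linear projection of when a storage pool fills might look like this:

```python
"""Illustrative capacity-trend sketch: project when a storage pool fills,
assuming roughly linear growth. The 500 TB pool size and sample usage
history are made up for the example."""
from datetime import date, timedelta

POOL_CAPACITY_TB = 500.0  # hypothetical pool size

# (date, used TB) samples - placeholders for data pulled from the arrays/switches
usage_history = [
    (date(2024, 1, 1), 310.0),
    (date(2024, 4, 1), 335.0),
    (date(2024, 7, 1), 362.0),
    (date(2024, 10, 1), 391.0),
]


def project_full_date(history, capacity_tb):
    """Fit average daily growth between first and last sample and extrapolate."""
    (d0, u0), (d1, u1) = history[0], history[-1]
    daily_growth = (u1 - u0) / (d1 - d0).days
    if daily_growth <= 0:
        return None  # no growth: pool never fills at the current trend
    days_left = (capacity_tb - u1) / daily_growth
    return d1 + timedelta(days=round(days_left))


if __name__ == "__main__":
    when = project_full_date(usage_history, POOL_CAPACITY_TB)
    print(f"At the current trend the pool reaches capacity around {when}")
```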

Posted 2 weeks ago

Apply