8.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
8 to 10+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Cross-functional collaboration: automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules (see the backup sketch after this posting). Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad / Zoom for candidates from other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm.
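A minimal sketch of the backup-automation item above, using boto3. The instance name "app-db", the bucket "app-backups", and the 30-day expiration window are illustrative placeholders, not values from the posting; credentials are assumed to come from the standard AWS environment.

```python
"""Sketch: timestamped RDS snapshots plus an S3 lifecycle expiration rule."""
import datetime

import boto3

rds = boto3.client("rds")
s3 = boto3.client("s3")


def snapshot_rds(instance_id: str) -> str:
    """Create a timestamped manual snapshot of an RDS instance."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{instance_id}-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return snapshot_id


def expire_old_backups(bucket: str, days: int = 30) -> None:
    """Attach an S3 lifecycle rule that expires objects under backups/."""
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [{
                "ID": f"expire-backups-after-{days}d",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Expiration": {"Days": days},
            }]
        },
    )


if __name__ == "__main__":
    print("created snapshot:", snapshot_rds("app-db"))  # hypothetical instance
    expire_old_backups("app-backups")                   # hypothetical bucket
```

In practice a script like this would run on a schedule (cron, EventBridge) rather than by hand, with AWS Backup handling retention for anything beyond simple lifecycle rules.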
Posted 1 month ago
5.0 - 8.0 years
13 - 20 Lacs
Mumbai, Navi Mumbai
Work from Office
Work Type: Work from office (sometimes extended working hours).
Education: A Bachelor's degree in Computer Science, Information Technology, or related fields is preferred, or equivalent work experience.
Position Overview: We are seeking a skilled Redis + Oracle Database Administrator (DBA) to join our team. The ideal candidate will have a strong background in managing, optimizing, and supporting Redis environments, with a focus on high availability, scalability, and performance tuning. As a Redis DBA, you will be responsible for deploying, configuring, maintaining, and monitoring Redis clusters while ensuring high availability, security, and the best possible performance.
Key Responsibilities:
Redis Database Management (3-5 years); Oracle Database Management (3+ years).
Set up, configure, and manage Redis clusters (both single-node and sharded clusters).
Perform regular backups of Redis databases and ensure proper disaster recovery planning.
Ensure high availability and implement Redis replication (Master-Slave, Sentinel, and Cluster setups).
Configure Redis persistence mechanisms such as RDB snapshots and AOF (Append-Only File).
Perform upgrades and patch management for Redis versions.
Conduct regular Redis database maintenance, including key expiration management, memory management, and flushing of stale data.
Configure and manage Redis Sentinel for automatic failover and monitoring.
Manage Redis sharding and clustering for distributed setups to support large-scale applications.
Further responsibility areas: Monitoring & Performance Tuning; High Availability and Scalability; Security & Access Control; Disaster Recovery and Backup; Capacity Planning and Resource Management; Collaboration with DevOps/Engineering Teams; Documentation & Reporting.
Requirements:
Minimum of 3-5 years of experience working as a Redis DBA or in a similar role managing high-performance, distributed databases.
Experience with Redis clustering, replication, and failover techniques (Redis Sentinel, Redis Cluster).
Expertise in Redis configuration and operations, including but not limited to Redis persistence mechanisms (RDB, AOF), memory management, and eviction policies.
Familiarity with Redis monitoring tools (e.g., Prometheus, Grafana, Redis-CLI, ELK stack).
Strong understanding of Redis data types (strings, hashes, lists, sets, sorted sets, bitmaps, geospatial indexes).
Experience in setting up Redis on Linux environments and configuring it for performance and reliability.
Working knowledge of networking (e.g., Redis over SSL, firewalls, clustering, and multi-region replication).
Tools and Technologies:
Familiarity with Redis Sentinel for high availability and automatic failover.
Experience with Redis Cluster for horizontal scaling and sharding.
Familiarity with monitoring tools like Prometheus, Grafana, New Relic, or other APM tools.
Exposure to containerization (Docker) and orchestration tools like Kubernetes for deploying Redis clusters in cloud and on-prem environments.
Proficiency in scripting (Bash, Python, etc.) to automate Redis tasks like backups, restores, and monitoring (see the sketch after this posting).
Collaborate with DevOps, application developers, and infrastructure teams to ensure that Redis is integrated effectively into the overall system architecture.
Provide guidance on designing efficient data structures and choosing the appropriate Redis commands for application needs (e.g., strings, lists, sets, hashes, etc.).
Participate in code reviews to ensure Redis best practices are followed.
Preferred Skills: Experience with cloud-based Redis deployments (AWS ElastiCache, Azure Redis Cache, Google Cloud Memorystore). Experience in data security practices, including encryption, SSL, and access control management in Redis. Knowledge of other NoSQL databases (e.g., Cassandra, MongoDB, Couchbase) to understand the broader NoSQL ecosystem.
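An illustrative sketch of the routine DBA checks described above, using the redis-py client. The host name is a placeholder, and the specific checks (BGSAVE trigger, replication role, AOF status) are examples of the kind of automation the posting asks for, not a prescribed procedure.

```python
"""Sketch: routine Redis health checks and a background RDB snapshot."""
import redis

r = redis.Redis(host="redis.example.internal", port=6379)  # placeholder host


def trigger_backup() -> None:
    """Kick off an RDB snapshot in the background (BGSAVE)."""
    before = r.lastsave()
    r.bgsave()
    print("previous RDB save was at:", before)


def check_replication() -> None:
    """Report role and replica count from INFO replication."""
    info = r.info("replication")
    print("role:", info["role"])
    if info["role"] == "master":
        print("connected replicas:", info["connected_slaves"])


def check_persistence() -> None:
    """Verify AOF is enabled and the last BGSAVE succeeded."""
    info = r.info("persistence")
    print("AOF enabled:", bool(info["aof_enabled"]))
    print("last bgsave status:", info["rdb_last_bgsave_status"])


if __name__ == "__main__":
    trigger_backup()
    check_replication()
    check_persistence()
```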
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Hybrid
Key Responsibilities:
1. Cloud Infrastructure Management:
- Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
- Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset onboarding, and other GCP services.
2. Kubernetes and Containerization:
- Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
- Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies (see the sketch after this posting).
3. CI/CD Pipelines:
- Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
- Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
- Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
- Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
- Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
- Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
- Monitor and optimize GCP resource usage to ensure cost efficiency.
- Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications: Must hold a Google Cloud Professional DevOps Engineer certification or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise: Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools: Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build. Experience with containerization tools like Docker.
4. Kubernetes Expertise: In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets. Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting: Strong scripting skills in Python, Bash, or Go. Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging: Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking: Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills: Strong problem-solving and troubleshooting skills. Excellent communication and collaboration abilities. Ability to work in an agile, fast-paced environment.
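A small sketch of the pod-scaling responsibility above, using the official kubernetes Python client. The deployment name "web" and namespace "demo" are hypothetical, and authentication assumes a local kubeconfig (e.g., the context produced by gcloud for a GKE cluster).

```python
"""Sketch: scale a GKE deployment and list pods via the Kubernetes API."""
from kubernetes import client, config

config.load_kube_config()  # picks up the current context from ~/.kube/config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the scale subresource of a deployment."""
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}}
    )


def list_pods(namespace: str) -> None:
    """Print pod names and phases in a namespace."""
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        print(pod.metadata.name, pod.status.phase)


if __name__ == "__main__":
    scale_deployment("web", "demo", replicas=3)  # hypothetical names
    list_pods("demo")
```

For steady-state scaling a HorizontalPodAutoscaler is the usual choice; an imperative script like this is more typical for runbooks and incident response.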
Posted 1 month ago
5.0 - 9.0 years
20 - 25 Lacs
Chennai
Work from Office
Skills Required: Python, Java, C/C++, Ruby, and JavaScript; J2EE, NoSQL/SQL datastores, Spring Boot; GCP/AWS/Azure and Docker/K8s; RESTful APIs and microservices platforms. Experience with any APM and other monitoring tools.
Experience: 5+ years. CTC: 28 LPA. Location: Chennai.
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job type: Contract to hire.
Qualifications:
- Red Hat Certified / L2 Linux Administrator experience of 5 to 10 years.
- L1 Red Hat OpenShift Administrator experience of 2 to 5 years.
- Hands-on experience in VMware virtualization, Windows, Linux, networking, NSX-T, vRA, and OpenShift.
- Experience in administering Red Hat OpenShift and virtualization technologies.
- Strong understanding of Kubernetes and container orchestration.
- Proficiency in virtualization platforms such as Red Hat Virtualization (RHV), VMware, or KVM.
- Experience with automation/configuration management using Ansible, Puppet, Chef, or similar.
- Familiarity with CI/CD tools and version control systems like Git.
- Knowledge of Linux/Unix administration and scripting (e.g., Bash, Python).
- Experience with monitoring solutions like Prometheus, Grafana, or Nagios.
- Strong problem-solving skills and the ability to work under pressure.
Interested candidates, please share an updated profile with the below details:
1. Updated CV
2. Current CTC
3. Expected CTC
4. Current location
5. Preferred location
6. Total experience
7. Relevant experience
8. Interest in a contract role
Posted 1 month ago
8.0 - 10.0 years
15 - 30 Lacs
Pune
Work from Office
Role Overview
We are looking for experienced DevOps Engineers (8+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.
Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk (see the sketch after this posting).
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.
Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.
Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience in cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.
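A hedged sketch of the monitoring/alerting item above: exposing custom application metrics for Prometheus to scrape, via the official prometheus_client library. The metric names, the simulated workload, and port 8000 are illustrative choices, not requirements from the posting.

```python
"""Sketch: expose a request counter and latency histogram for Prometheus."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency")


@LATENCY.time()
def handle_request() -> None:
    """Stand-in for real work; the decorator records its duration."""
    time.sleep(random.uniform(0.01, 0.2))
    REQUESTS.inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```

Alert rules (e.g., on high p99 latency) would then live in Prometheus or Alertmanager configuration, not in the application code.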
Posted 1 month ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
As a Cloud Infrastructure/Site Reliability Engineer, you will be operating at the intersection of development and operations. Your role will involve engaging in and enhancing the lifecycle of cloud services - from design through deployment, operation, and refinement. You will be responsible for maintaining these services by measuring and monitoring their availability, latency, and overall system health. You will play a crucial role in sustainably scaling systems through automation and driving changes that improve reliability and velocity. As part of your responsibilities, you will administer cloud-based environments that support our SaaS/IaaS offerings, which are implemented on a microservices, container-based architecture (Kubernetes). In addition, you will oversee a portfolio of customer-centric cloud services (SaaS), ensuring their overall availability, performance, and security. You will work closely with both NetApp and cloud service provider teams. Due to the critical nature of the services we support, this position involves participation in a rotation-based on-call schedule as part of our global team. This role offers the opportunity to work in a dynamic, global environment, ensuring the smooth operation of vital cloud services. To be successful in this role, you should be a motivated self-starter and self-learner, possess strong problem-solving skills, and be someone who embraces challenges.
Job Requirements
- Incident Response and Troubleshooting: Address and perform root cause analysis (RCA) of complex live production incidents and cross-platform issues involving OS, networking, and databases in cloud-based SaaS environments. Implement SRE best practices for effective resolution.
- Monitoring, Analysis, and Infrastructure Maintenance: Continuously monitor, analyze, and measure system health, availability, and latency using tools like Prometheus, ElasticSearch, Grafana, and SolarWinds (see the error-budget sketch after this posting). Develop strategies to enhance system and application performance, availability, and reliability. In addition, maintain and monitor the deployment and orchestration of servers, Docker containers, databases, and general backend infrastructure.
- Document system knowledge as you acquire it, create runbooks, and ensure critical system information is readily accessible.
- Security Management: Stay updated with security protocols and proactively identify, diagnose, and resolve complex security issues.
- Automation and Efficiency: Identify tasks and areas where automation can be applied to achieve time efficiencies and risk reduction. Develop software for deployment automation, packaging, and monitoring visibility.
- Issue Tracking and Resolution: Use Atlassian Jira to track and resolve issues based on their priority.
- Team Collaboration and Influence: Work in tandem with other Cloud Infrastructure Engineers and developers to ensure maximum performance, reliability, and automation of our deployments and infrastructure. Additionally, consult and influence developers on new feature development and software architecture to ensure scalability.
- Debugging, Troubleshooting, and Advanced Support: Undertake debugging and troubleshooting of service bottlenecks throughout the entire software stack.
- Directly influence the decisions and outcomes related to solution implementation: measure and monitor availability, latency, and overall system health.
- Proficiency in Linux/Unix OS.
- Demonstrated experience in scripting and infrastructure automation using tools such as Ansible, Python, and Go.
- Deep working knowledge of containers, Kubernetes, and serverless computing implementations.
- DevOps development methodologies.
- Familiarity with distributed systems design patterns using tools such as Kubernetes.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Education
- A minimum of 5-8 years of experience is required.
- A Bachelor of Science degree in Computer Science, a master's degree, or equivalent experience is required.
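Back-of-the-envelope arithmetic behind the availability and latency work described above. The 99.9% SLO, the 30-day window, and the request counts are example numbers for illustration, not figures from the posting.

```python
"""Sketch: error-budget arithmetic for an availability SLO."""


def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) for a given SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60


def budget_remaining(slo: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget left, given request counts."""
    allowed_failures = (1.0 - slo) * total_requests
    return 1.0 - failed / allowed_failures


if __name__ == "__main__":
    # 99.9% over 30 days allows 43.2 minutes of downtime.
    print(f"downtime budget: {error_budget_minutes(0.999):.1f} min")
    # 4,200 failures out of 10M requests consumes 42% of the budget.
    print(f"budget remaining: {budget_remaining(0.999, 10_000_000, 4_200):.1%}")
```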
Posted 1 month ago
8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
Key Qualifications:
- A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor.
- Strong software engineering foundation with deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 5+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 5+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 5+ years of hands-on experience in security tools automation, SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP (see the sketch after this list).
- 5+ years of hands-on experience using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- 2+ years of hands-on experience with cloud-native services like Data Lakes, CDN, API Gateways, Managed PaaS, Security, etc. on multiple cloud providers like AWS, Azure, and GCP is preferred.
- Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices, and of database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.
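A hedged sketch of the SAST automation mentioned above: failing a CI step when a SonarQube quality gate is not passed, via SonarQube's project_status web API. The server URL, project key, and SONAR_TOKEN environment variable are assumptions for illustration.

```python
"""Sketch: gate a pipeline on a SonarQube quality-gate result."""
import os
import sys

import requests

SONAR_URL = "https://sonarqube.example.com"  # placeholder server


def quality_gate_passed(project_key: str) -> bool:
    """Query the project_status endpoint; a SonarQube token authenticates
    as the username with an empty password."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        auth=(os.environ["SONAR_TOKEN"], ""),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"] == "OK"


if __name__ == "__main__":
    if not quality_gate_passed("my-service"):  # hypothetical project key
        sys.exit("quality gate failed -- blocking the pipeline")
```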
Posted 1 month ago
2.0 - 3.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens' CTIO division.
What you'll do:
- Implement secure, resilient, and cost-efficient architecture for our cloud-native platform service.
- Build and maintain a cloud-native platform infrastructure following the "infrastructure as code" principle.
- Maintain and optimize the application layer of a multi-DC environment.
- Deliver solutions, architectures, and automation for Sapiens applications.
- Conduct research to bring innovative solutions to a complex environment to improve processes and the tech stack.
- Application and infrastructure logging and monitoring solutions.
Must-have skills:
- 2 to 3 years of experience as a DevOps Engineer.
- Windows / Linux - 2+ years of experience administering Linux servers.
- Kubernetes - hands-on experience in developing, deploying, tuning, and debugging applications on Kubernetes.
- Experience with designing and implementing CI/CD pipelines and automation solutions like GitHub / ArgoCD; Azure DevOps is a plus.
- Cloud - hands-on experience working on public cloud (Azure, AWS).
- Code - vast scripting experience in PowerShell, Python, and Bash.
- Applications - vast experience working with Java web applications.
- IaC - at least 1 year of experience with at least one automation tool - Ansible/Terraform (see the sketch after this list).
- Security knowledge of web security aspects such as WAF, certificates, OS hardening, security policies, and VPNs - an advantage.
- Monitoring - good understanding of the monitoring stack: ELK/Grafana/Prometheus/DataDog/Azure Monitoring.
- Experience with a live production environment.
- Accountability, ownership, and independence.
- Great verbal and written communication skills.
Good-to-have skills:
- Experience with Packer / Chocolatey.
- Knowledge of Azure Blueprints.
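One way to drive the Ansible automation mentioned above from Python is the ansible-runner library. This is a sketch under assumptions: the scratch directory, playbook name, and inventory path are placeholders, and ansible-runner expects the playbook to be resolvable inside the private data directory (typically under its project/ folder) unless an absolute path is given.

```python
"""Sketch: run an Ansible playbook from Python with ansible-runner."""
import ansible_runner


def run_playbook(playbook: str, inventory: str) -> bool:
    """Execute a playbook and report success/failure."""
    result = ansible_runner.run(
        private_data_dir="/tmp/runner",  # runner scratch/layout directory
        playbook=playbook,
        inventory=inventory,
    )
    print("status:", result.status, "rc:", result.rc)
    return result.rc == 0


if __name__ == "__main__":
    ok = run_playbook("site.yml", "inventories/staging/hosts")  # placeholders
    raise SystemExit(0 if ok else 1)
```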
Posted 1 month ago
7.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
7 to 10+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Cross-functional collaboration: automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad / Zoom for candidates from other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm.
Posted 1 month ago
10.0 - 16.0 years
30 - 35 Lacs
Hyderabad, Pune
Hybrid
Experience level: Min 10+ years. Notice period: Immediate to 30 days. Primary skills: AWS DevOps.
Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in a DevOps or similar role.
- Proficiency with cloud platforms such as AWS.
- Strong experience with infrastructure-as-code tools (Terraform, Ansible, Chef, Puppet).
- Expertise in setting up and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, or CircleCI.
- Experience with containerization and orchestration technologies (Docker, Kubernetes).
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Familiarity with version control systems, particularly Git.
- Experience with monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or Splunk.
- Ability to troubleshoot and resolve complex infrastructure and application issues.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced, dynamic environment and manage multiple tasks simultaneously.
- Familiarity with integrated development environments (IDEs) like Visual Studio, IntelliJ IDEA, or Eclipse.
- Basic knowledge of software development tools and practices, including continuous integration and continuous deployment (CI/CD).
- Understanding of database management and SQL.
- Some experience with relational and/or NoSQL databases.
- Some experience with microservices architecture and RESTful API design.
Preferred:
- AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or Google Professional DevOps Engineer.
- Relevant certifications in Kubernetes, Docker, or other related technologies.
- Terraform certification.
Posted 1 month ago
8.0 - 12.0 years
18 - 25 Lacs
Chennai
Work from Office
Looking for a Site Reliability Engineer (SRE) to manage CI/CD, automate DevOps workflows, and ensure system reliability. Must have experience in Azure, Linux, APMs, SonarQube, and automation. Work closely with the dev team to drive uptime and improvements.
Required Candidate Profile:
- 8+ years of experience in Azure DevOps.
- Experience in APM tools: Datadog/Grafana/Prometheus/Dynatrace.
- Experience in CI/CD with automated build and test systems.
- Implementing dev, test, and DevOps tools in Cloud/On-Prem.
Posted 1 month ago
2.0 - 5.0 years
4 - 9 Lacs
Chennai
Work from Office
Firm Overview
Datazoic is a cutting-edge FinTech company revolutionizing the traditional data landscape of Wall Street. Our flagship product, Prism, combines CRM, Business Intelligence, and AI into a single predictive analytics platform tailored for investment banks and asset management firms. Datazoic is a market leader in this space, and our customers include some of the top financial institutions on Wall Street. We are looking for passionate and versatile DevOps Engineers with a strong foundation in automation, infrastructure, and monitoring. You will be part of a 150+ strong engineering team, contributing to our mission of building and maintaining a powerful, actionable data analytics FinTech platform for real-time decision-making.
Role & Responsibilities
- Design and build robust CI/CD pipelines using tools such as Jenkins, Ansible, Shell scripts, and Python (see the sketch after this posting).
- Set up and manage observability tools including Zabbix, Loki, Fluentd, Fluent Bit, Prometheus, and Grafana.
- Contribute to building resilient distributed systems with high availability and business continuity in mind.
- Collaborate with architecture teams to implement infrastructure and system design best practices.
- Apply security best practices to ensure data is protected in transit, at rest, and with least-privilege access.
- Assist engineers with debugging infrastructure and deployment issues across development and staging environments.
- Document deployment architectures, scripts, and operational processes clearly and accurately.
- Stay current with emerging tools and technologies and demonstrate a willingness to learn independently.
Qualifications
- 2-5 years of relevant experience in DevOps or Infrastructure Engineering.
- Hands-on experience with CI/CD tools such as Ansible, Jenkins, Python, and Shell scripting.
- Strong working knowledge of monitoring/logging tools like Zabbix, Loki, Fluentd, Fluent Bit, Prometheus, and Grafana.
- Experience with cloud platforms, preferably Azure or AWS.
- Exposure to Azure SQL and familiarity with cloud-native monitoring tools is a plus.
- Familiarity with containers and Kubernetes is an advantage.
- Understanding of FinOps principles is an added advantage.
- Strong verbal and written communication skills.
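A sketch of driving Jenkins from Python with the python-jenkins library, in the spirit of the CI/CD responsibilities above. The server URL, credentials, job name, and the ENV parameter are all placeholders; build_job with parameters assumes the target job is parameterized.

```python
"""Sketch: trigger a Jenkins job and inspect its last build number."""
import jenkins

server = jenkins.Jenkins(
    "https://jenkins.example.com",   # placeholder URL
    username="automation",           # placeholder service account
    password="api-token",            # a Jenkins API token, not a password
)


def trigger_build(job: str) -> None:
    """Queue a parameterized build and show the last build number."""
    server.build_job(job, parameters={"ENV": "staging"})
    info = server.get_job_info(job)
    print(f"{job}: last build #{info['lastBuild']['number']}")


if __name__ == "__main__":
    trigger_build("prism-backend-deploy")  # hypothetical job name
```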
Posted 1 month ago
6.0 - 9.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
6 to 9+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Cross-functional collaboration: automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage. Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad / Zoom for candidates from other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Summary: As a Lead DevOps Engineer, you will be leading projects targeted at supporting production and development environments, creating new and improving existing tools and processes, and automating deployment and monitoring procedures. You will lead the continuous integration and deployment effort, administering source control systems, and deploying and maintaining production infrastructure and applications.
Main Responsibilities:
- Technical leadership of infrastructure projects.
- Driving automation of performance testing environments.
- Leading containerization and IaC projects.
- Leading automation of engineering and operations processes.
- Defining and implementing HA and DR strategies.
- Design and optimization of CI/CD pipelines.
- Runbook automation.
- On-call support of production systems.
Requirements:
- 7+ years of experience in SRE, DevOps, or TechOps.
- 5+ years of experience leading technical projects or teams.
- 3+ years of tools development or automation.
- 3+ years of containerization and orchestration experience.
- Proficiency in shell scripting, as well as Python or Go.
- Ability to define project requirements and milestones.
- Experience leading cross-functional projects and teams.
- Solid experience in managing AWS production environments.
- Monitoring and observability expertise: OTEL, Prometheus, Grafana tools (see the sketch after this list).
- Experience with at least two of the following: Puppet, Salt, Ansible, Terraform.
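A minimal OpenTelemetry tracing setup in Python, illustrating the OTEL expertise listed above. The console exporter stands in for a real collector endpoint, and the span and attribute names are invented for the example.

```python
"""Sketch: wrap a runbook step in an OpenTelemetry span."""
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
    ConsoleSpanExporter,
)

# Configure the SDK: spans are batched and printed to stdout here; a real
# deployment would export to an OTLP collector instead.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("runbook.automation")


def remediate(host: str) -> None:
    """Record a remediation step as a span so it shows up in traces."""
    with tracer.start_as_current_span("restart-service") as span:
        span.set_attribute("net.host.name", host)
        # ... the actual remediation step would run here ...


if __name__ == "__main__":
    remediate("web-01.example.internal")  # placeholder host
```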
Posted 1 month ago
3.0 - 8.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Location: Bangalore - Carina. Job requisition ID: R-045305.
The IT AI Application Platform team is seeking a Senior Site Reliability Engineer (SRE) to develop, scale, and operate our AI Application Platform based on Red Hat technologies, including OpenShift AI (RHOAI) and Red Hat Enterprise Linux AI (RHEL AI). As an SRE you will contribute to running core AI services at scale by enabling customer self-service, making our monitoring system more sustainable, and eliminating toil through automation. On the IT AI Application Platform team, you will have the opportunity to influence the complex challenges of scale which are unique to Red Hat IT managed AI platform services, while using your skills in coding, operations, and large-scale distributed system design. We develop, deploy, and maintain Red Hat's next-generation AI application deployment environment for custom applications and services across a range of hybrid cloud infrastructures. We are a global team operating on-premise and in the public cloud, using the latest technologies from Red Hat and beyond. Red Hat relies on teamwork and openness for its success. We are a global team and strive to cultivate a transparent environment that makes room for different voices. We learn from our failures in a blameless environment to support the continuous improvement of the team. At Red Hat, your individual contributions have more visibility than at most large companies, and visibility means career opportunities and growth.
What you will do
The day-to-day responsibilities of an SRE involve working with live systems and coding automation. As an SRE you will be expected to:
- Build and manage our large-scale infrastructure and platform services, including public cloud, private cloud, and datacenter-based.
- Automate cloud infrastructure through use of technologies (e.g., auto scaling, load balancing), scripting (Bash, Python, and Golang), and monitoring and alerting solutions (e.g., Splunk, Splunk IM, Prometheus, Grafana, Catchpoint) - see the sketch after this posting.
- Design, develop, and become expert in AI capabilities leveraging emerging industry standards.
- Participate in the design and development of software like Kubernetes operators, webhooks, and CLI tools.
- Implement and maintain intelligent infrastructure and application monitoring designed to enable application engineering teams.
- Ensure the production environment is operating in accordance with established procedures and best practices.
- Provide escalation support for high-severity and critical platform-impacting events.
- Provide feedback around bugs and feature improvements to the various Red Hat product engineering teams.
- Contribute software tests and participate in peer review to increase the quality of our codebase.
- Help develop peers' capabilities through knowledge sharing, mentoring, and collaboration.
- Participate in a regular on-call schedule, supporting the operational needs of our tenants.
- Practice sustainable incident response and blameless postmortems.
- Work within a small agile team to develop and improve SRE methodologies, support your peers, plan, and self-improve.
What you will bring
A bachelor's degree in Computer Science or a related technical field involving software or systems engineering is required. However, hands-on experience that demonstrates your ability and interest in Site Reliability Engineering is valuable to us and may be considered in lieu of degree requirements. You must have some experience programming in at least one of these languages: Python, Golang, Java, C, C++, or another object-oriented language. You must have experience working with public clouds such as AWS, GCP, or Azure. You must also have the ability to collaboratively troubleshoot and solve problems in a team setting. As an SRE you will be most successful if you have some experience troubleshooting an as-a-service offering (SaaS, PaaS, etc.) and some experience working with complex distributed systems. We like to see a demonstrated ability to debug, optimize code, and automate routine tasks. We are Red Hat, so you need a basic understanding of Unix/Linux operating systems.
Desired skills
- 3+ years of experience using cloud providers and technologies (Google, Azure, Amazon, OpenStack, etc.)
- 1+ years of experience administering a Kubernetes-based production environment
- 2+ years of experience with enterprise systems monitoring
- 2+ years of experience with enterprise configuration management software like Ansible by Red Hat, Puppet, or Chef
- 2+ years of experience programming with at least one object-oriented language; Golang, Java, or Python are preferred
- 2+ years of experience delivering a hosted service
- Demonstrated ability to quickly and accurately troubleshoot system issues
- Solid understanding of standard TCP/IP networking and common protocols like DNS and HTTP
- Demonstrated comfort with collaboration, open communication, and reaching across functional boundaries
- Passion for understanding users' needs and delivering outstanding user experiences
- Independent problem-solving and self-direction
- Works well alone and as part of a global team
- Experience working with Agile development methodologies
About Red Hat
Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
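A toil-reduction sketch in the spirit of the monitoring automation above: watching pod events across namespaces and flagging CrashLoopBackOff, using the official kubernetes Python client. Cluster access assumes a local kubeconfig; the alert output is a stand-in for whatever paging or ticketing integration a real team would use.

```python
"""Sketch: stream pod updates and report CrashLoopBackOff containers."""
from kubernetes import client, config, watch


def watch_for_crashloops(timeout: int = 300) -> None:
    """Watch pods cluster-wide and print any stuck in CrashLoopBackOff."""
    config.load_kube_config()
    core = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(core.list_pod_for_all_namespaces,
                          timeout_seconds=timeout):
        pod = event["object"]
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if waiting and waiting.reason == "CrashLoopBackOff":
                print(f"ALERT {pod.metadata.namespace}/{pod.metadata.name}: "
                      f"container {status.name} is in CrashLoopBackOff")


if __name__ == "__main__":
    watch_for_crashloops()
```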
Posted 1 month ago
6.0 - 10.0 years
11 - 15 Lacs
Mumbai, Bengaluru
Work from Office
Location: New Delhi / Mumbai / Bangalore - Carina. Job requisition ID: R-047126.
About the job: Red Hat's Services team is seeking an experienced and highly skilled support engineer or systems administrator, with overall 6-10 years of experience, to join us as a Technical Account Manager for our Telco customers covering Red Hat OpenStack and Red Hat OpenShift Container Platform. In this role, you'll provide personalized, proactive technology engagement and guidance, and cultivate high-value relationships with clients as you seek to understand and meet their needs with the complete Red Hat portfolio of products. As a Technical Account Manager, you will provide a level of premium advisory-based support that builds, maintains, and grows long-lasting customer loyalty by tailoring support for each of our customers' environments, facilitating collaboration with their other vendors, and advocating on the customer's behalf. At the same time, you'll work closely with our Engineering, R&D, Product Management, Global Support, Sales & Services teams to debug, test, and resolve issues.
What will you do:
- Perform technical reviews and share knowledge to proactively identify and prevent issues.
- Understand your customers' technical infrastructures, hardware, processes, and offerings.
- Perform initial or secondary investigations and respond to online and phone support requests.
- Deliver key portfolio updates and assist customers with upgrades.
- Manage customer cases and maintain clear and concise case documentation.
- Create customer engagement plans and keep documentation on customer environments updated.
- Ensure a high level of customer satisfaction with each qualified engagement through the complete adoption life cycle of our offerings.
- Engage with Red Hat's field teams and customers to ensure a positive platform and cloud technology experience and a successful outcome resulting in long-term enterprise success.
- Communicate how specific Red Hat platform and cloud solutions and our cloud roadmap align to customer use cases.
- Capture Red Hat product capabilities and identify gaps as related to customer use cases through a closed-loop process for each step of the engagement life cycle.
- Engage with Red Hat's product engineering teams to help develop solution patterns based on customer engagements and personal experience that guide platform adoption.
- Establish and maintain parity with Red Hat's platform and cloud technologies strategy.
- Contribute internally to the Red Hat team, share knowledge and best practices with team members, contribute to internal projects and initiatives, and serve as a subject matter expert (SME) and mentor for specific technical or process areas.
- Travel to visit customers, partners, conferences, and other events as needed.
What will you bring:
- Bachelor's degree in science or a technical field; engineering or computer science preferred.
- Competent reading and writing skills in English.
- Ability to effectively manage and grow existing enterprise customers by delivering proactive, relationship-based, best-in-class support.
- Upstream involvement in open source projects is a plus.
RHOSP:
- Proven and strong system administration and troubleshooting experience with Red Hat OpenStack.
- Experience in a support, development, engineering, or quality assurance organization.
RHOCP:
- Experience in Red Hat OpenShift Container Platform (RHOCP), Kubernetes, or Docker cluster management.
- Understanding of RHOCP architecture and the different types of RHOCP installations.
- Experience in RHOCP troubleshooting and data/log collection.
- Strong working knowledge of Prometheus and Grafana open source monitoring solutions will be considered a plus.
- Administration experience with Red Hat OpenShift v4.x will be considered a plus.
About Red Hat
Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Posted 1 month ago
7.0 - 10.0 years
22 - 27 Lacs
Gurugram
Work from Office
As the Technical Policy entry point for project teams, you coordinate the study activities for the solution to implement. In this respect, you:
- Promote the Technical Policy and make the project team adopt it.
- Depending on the skills of the project team, design, or coach the project team on, the software architecture and technical architecture.
- Coordinate and set up a collaborative framework with the project team and the experts in each architecture design field - security, network, cloud offers, DevOps, databases, monitoring, big data, Kubernetes, APIs, etc. - in order to define a global solution compliant with the enterprise Technical Policy.
- Coordinate and define, with the experts in charge, the services in build and run phases - OLA, SLA, support chain, cloud hosting entity, deployment process in DevOps mode, etc. - resulting in a clear and consistent assessment of the roles and commitments in the project. You are accountable for the consistency of the complete solution.
- Establish the hosting budget of the implementation solution.
- Build a roadmap with the project team up to the service stabilization.
- Coach project teams during the build phase by providing contact names and processes.
- Make sure that the implemented solution is the one initially defined.
As a technical / software architect:
- Stay informed about the evolutions in the possible solutions as per the innovations in the domain or the enterprise policy, with the permanent concern to optimize practices and reduce costs.
- Propose studies on enablers, solutions, and architecture templates, so as to adapt the Technical Policy to new challenges.
- Animate a community with the representatives of IT portfolios, or technical architects within these portfolios, in order to share and explain the IT Technical Policy.
Profile: With a master's degree in computer science, you have significant experience as a technical architect and project manager, enabling you to design the technical target and coach projects in hosting apps in a cloud context. You also have software architecture basics allowing you to coach projects in designing apps in a cloud-native model. You wish to stay close to technical topics, as well as develop your leadership in an Agile / DevOps context.
IT skills:
- Technical architectures: OpenStack clouds, hyperscaler clouds (Azure), Kubernetes, Service-Oriented Architecture, databases; network and security architecture skills (load balancing, firewalls, reverse proxies, DNS, certificates); Prometheus, Grafana.
- Software architectures: cloud-native applications based on microservices, Domain-Driven Design.
- Application architecture understanding: APIs, microservices, middleware, messaging systems.
Tools / Methods:
- Management of transversal projects
- ITIL
- DevOps - CI/CD chain: GitLab CI
- Agility - JIRA, Confluence
Professional skills:
- Leadership
- Meeting management
- Capacity for analysis and synthesis
- Capacity to challenge
- Curiosity, hunger for IT techniques, capacity to learn
- Very good English level, spoken and written
- Negotiation capacity
- Creativity, proposal-oriented
- Taste for teamwork and transversal work
- Result-oriented
- Rigour
- Capacity to document
Posted 1 month ago
3.0 - 5.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary
The NetApp Keystone team is responsible for cutting-edge technologies that enable NetApp's pay-as-you-go offering. Keystone helps customers manage data on-prem or in the cloud, with invoices charged in a subscription manner. As an engineer in NetApp's Keystone organization, you will be executing our most challenging and complex projects. You will be responsible for decomposing complex product requirements into simple solutions, understanding system interdependencies and limitations, and engineering best practices.
Job Requirements
- Strong knowledge of the Python programming language: its paradigms, constructs, and idioms.
- Bachelor's/Master's degree in computer science, information technology, or engineering.
- Knowledge of various Python frameworks and tools.
- 2+ years of experience working with the Python programming language.
- Strong written and communication skills with proven fluency in English.
- Proficiency in writing code for both backend and frontend.
- Familiarity with database technologies such as NoSQL, Prometheus, and data lakes.
- Hands-on experience with code versioning tools like Git.
- Passionate about learning new tools, languages, philosophies, and workflows.
- Experience working with generated code and code generation techniques.
- Knowledge of software development methodologies - SCRUM/AGILE/LEAN.
- Knowledge of software deployment - Docker/Kubernetes.
- Knowledge of software team tools - Git/JIRA/CI-CD.
Education
A minimum of 2 to 4 years of experience is required, with a B.Tech or M.Tech background.
Posted 1 month ago
9.0 - 10.0 years
5 - 7 Lacs
Noida, Bengaluru
Work from Office
Requirements:
- 5+ years of experience in DevOps or Cloud Engineering.
- Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, Networking, etc.).
- Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep.
- Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with Kubernetes (EKS, AKS) and containerization (Docker).
- Knowledge of monitoring and logging tools like Prometheus, Grafana, ELK Stack, CloudWatch, and Azure Monitor.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of security best practices in cloud environments.
- Experience with version control systems like Git.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.
Posted 1 month ago
5.0 - 9.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Location: Bangalore, India. Posted 30+ days ago. Job requisition ID: 30553. FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity A DevOps role at FICO is an opportunity to work with cutting-edge cloud technologies in a team focused on the delivery of secure cloud solutions and products to enterprise customers. - VP, DevOps Engineering What You'll Contribute Design, implement, and maintain Kubernetes clusters in AWS environments. Develop and manage CI/CD pipelines using Tekton, ArgoCD, Flux, or similar tools. Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications. Collaborate with development teams to optimize application deployments and performance on Kubernetes. Automate infrastructure provisioning and configuration management using AWS services and tools. Ensure security and compliance in the cloud infrastructure. What We're Seeking Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS). Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, ECR. Experience with Tekton for building CI/CD pipelines. Strong understanding of observability tools like Prometheus, Grafana, or similar. Scripting and automation skills (e.g., Bash, GitHub workflows). Knowledge of cloud platforms and container orchestration. Experience with infrastructure as code tools (Terraform, CloudFormation). Knowledge of Helm. Understanding of security best practices in cloud and Kubernetes environments. Proven experience in delivering microservices and Kubernetes-based systems. Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems. Lending: 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success depends on really talented people, just like you, who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career.
Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at FICO. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer, and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy.
Posted 1 month ago
5.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules (see the sketch after this listing). Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Experience configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm (Monday 26th May to Friday 30th May)
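To make the backup-automation bullet concrete, here is a minimal, hedged Python (boto3) sketch that creates a timestamped manual RDS snapshot; the instance identifier is a placeholder, and scheduling (cron, EventBridge, or similar) is deliberately left out:

```python
import boto3
from datetime import datetime, timezone

def snapshot_rds(db_instance_id: str) -> str:
    """Create a timestamped manual snapshot of an RDS instance."""
    rds = boto3.client("rds")
    snapshot_id = f"{db_instance_id}-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
    )
    return snapshot_id

if __name__ == "__main__":
    # "app-db" is a hypothetical instance identifier.
    print(snapshot_rds("app-db"))
```

In practice, AWS Backup plans or RDS automated backups cover the routine cases; a script like this is typically reserved for ad hoc pre-deployment snapshots.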
Posted 1 month ago
15.0 - 20.0 years
10 - 14 Lacs
Gurugram
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills: SAP Hybris Commerce. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that project goals are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with business objectives, ensuring that the solutions provided are effective and efficient. Your role will require you to stay updated with industry trends and best practices to continuously improve application performance and user experience. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for the immediate team and across multiple teams. - Facilitate training and knowledge-sharing sessions to enhance team capabilities. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: - Strong understanding of e-commerce platforms and their architecture. - Experience with integration of third-party services and APIs. - Familiarity with agile methodologies and project management tools. - Ability to troubleshoot and resolve technical issues efficiently. - Performance engineering fundamentals: in-depth knowledge of latency, throughput, concurrency, scalability, and resource utilization; performance metrics such as CPU usage, memory consumption, disk I/O, and network latency; understanding of bottlenecks in multi-tiered architectures; JVM tuning (GC optimization, thread pools); database tuning (indexing, query optimization, DB connection pools). - Monitoring & observability: knowledge of Dynatrace, New Relic, Prometheus, Grafana. - Resource tuning: pods, autoscaling, memory/CPU optimization, load balancing, cluster configuration. - Knowledge of Akamai caching and APG caching. - Good to have: SAP Commerce Cloud (CCV2) experience. Additional Information: - The candidate should have a minimum of 7.5 years of experience in SAP Hybris Commerce. - This position is based at our Gurugram office. - A 15 years full time education is required. Qualification: 15 years full time education
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
About the Role: This role is responsible for managing and maintaining complex, distributed big data ecosystems. It ensures the reliability, scalability, and security of large-scale production infrastructure. Key responsibilities include automating processes, optimizing workflows, troubleshooting production issues, and driving system improvements across multiple business verticals. Roles and Responsibilities: Manage, maintain, and support incremental changes to Linux/Unix environments. Lead on-call rotations and incident responses, conducting root cause analysis and driving postmortem processes. Design and implement automation systems for managing big data infrastructure, including provisioning, scaling, upgrading, and patching clusters. Troubleshoot and resolve complex production issues while identifying root causes and implementing mitigating strategies. Design and review scalable and reliable system architectures. Collaborate with teams to optimize overall system performance. Enforce security standards across systems and infrastructure. Set technical direction, drive standardization, and operate independently. Ensure availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning. Resolve, analyze, and respond to system outages and disruptions, and implement measures to prevent similar incidents from recurring. Develop tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience (a minimal monitoring sketch follows this listing). Monitor and optimize system performance and resource usage, identify and address bottlenecks, and implement best practices for performance tuning. Collaborate with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle. Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities. Develop and enforce SRE best practices and principles. Align across functional teams on priorities and deliverables. Drive automation to enhance operational efficiency. Skills Required: Over 6 years of experience managing and maintaining distributed big data ecosystems. Strong expertise in Linux, including IP networking, iptables, and IPsec. Proficiency in scripting/programming with languages like Perl, Golang, or Python. Hands-on experience with the Hadoop stack (HDFS, HBase, Airflow, YARN, Ranger, Kafka, Pinot). Familiarity with open-source configuration management and deployment tools such as Puppet, Salt, Chef, or Ansible. Solid understanding of networking, open-source technologies, and related tools. Excellent communication and collaboration skills. DevOps tools: SaltStack, Ansible, Docker, Git. SRE logging and monitoring tools: ELK Stack, Grafana, Prometheus, OpenTSDB, OpenTelemetry. Good to Have: Experience managing infrastructure on public cloud platforms (AWS, Azure, GCP). Experience in designing and reviewing system architectures for scalability and reliability. Experience with observability tools to visualize and alert on system performance.
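Since the posting pairs script automation with Prometheus-based monitoring, here is a minimal sketch of a custom exporter built on the prometheus_client library; the metric name, port, and 15-second refresh interval are illustrative choices, not anything specified by the role:

```python
import shutil
import time

from prometheus_client import Gauge, start_http_server

# Illustrative gauge; the metric name and label are arbitrary choices.
DISK_USAGE = Gauge("node_disk_usage_percent", "Disk usage percentage", ["mount"])

def collect(mount: str = "/") -> None:
    """Refresh the gauge from the local filesystem."""
    total, used, _free = shutil.disk_usage(mount)
    DISK_USAGE.labels(mount=mount).set(100.0 * used / total)

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    while True:
        collect()
        time.sleep(15)
```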
Posted 1 month ago
6091 Jobs | Paris,France