Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4 - 7 years
25 - 32 Lacs
Bengaluru
Hybrid
Meet Our Team: Cloud Observability Engineering collaborates with all engineering teams at Pega, advocates for observability solutions, and establishes standards and processes. The team is responsible for designing, developing, and maintaining observability solutions for Pega Cloud.

Picture Yourself at Pega: You will be part of a highly innovative company changing how companies work and optimize their outcomes. You will join a highly collaborative team that is well versed and skilled in its space, acting as an adaptive, flexible change agent who drives data-driven decisioning while seeing your hard work turn into business outcomes.

What You'll Do at Pega:
- Design and develop observability solutions for applications hosted on Pega Cloud.
- Own microservices monitoring for Pega Cloud services.
- Enable observability capabilities (metric collection, logging, and APM/tracing) for applications hosted in Pega Cloud.
- Apply OpenTelemetry standards for distributed tracing and open metric collection.
- Define KPIs, SLIs, SLOs, and SLAs for microservices.
- Manage and maintain Pega monitoring tools and technologies.
- Correlate Pega infrastructure and application metrics, error codes, and logs; provide end-to-end monitoring from the customer request through the load balancer and backend services (Tomcat, Java application, database, ping, network, etc.); and run synthetic monitoring (end-to-end simulation of customer requests).
- Work with Engineering (Pega product and Cloud engineering) toward the common goals of availability, performance, and application monitoring, enabling correlation between infrastructure and applications/services/technologies.
- Work on automation and proactively build Pega solutions with the team to enhance current processes.
- Provide feedback on bugs and customer-experience enhancements.
- Operate, administer, and automate solutions on the Pega Cloud stack.
- Own problem discovery, analysis, and resolution using world-class Pega Cloud infrastructure and management technologies.

Who You Are:
- Passionate about cloud technologies and well versed in observability tools such as Grafana Cloud, Prometheus, and New Relic, along with AWS core concepts and services.
- Knowledgeable about the infrastructure-as-code delivery model.
- Hands-on with DevOps and continuous integration/continuous delivery (CI/CD) tools such as Jenkins, managing the release of deliverables into production environments.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork skills.
- 4+ years of relevant experience.

What You've Accomplished:
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 4+ years of experience in cloud technologies (AWS).
- Experience with infrastructure-as-code tools and the associated delivery model.
- Knowledge of at least one observability tool such as Grafana or Prometheus.
- Exposure to at least one programming or scripting language.
- Exposure to Kubernetes.
- Excellent communication and teamwork skills.

Desired Qualifications
- Hands-on experience with DevOps and CI/CD tools such as Jenkins.
- Any cloud certification (AWS, GCP, or Azure).
- Experience with agile methodologies.
- Strong troubleshooting and root-cause analysis (RCA) skills using monitoring data and logs.

Pega Offers You:
- Gartner Analyst acclaimed technology leadership across our categories of products.
- Continuous learning and development opportunities.
- An innovative, inclusive, agile, flexible, and fun work environment.
- A competitive global benefits program, inclusive of pay plus bonus incentive and employee equity in the company.
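The "define KPIs, SLIs, SLOs, and SLAs" responsibility above ultimately reduces to ratio arithmetic over request counts. A minimal sketch of an availability SLI and error-budget calculation; the function names and numbers are hypothetical illustrations, not from the posting:

```python
def availability_sli(total_requests: int, failed_requests: int) -> float:
    """Availability SLI: fraction of requests served successfully."""
    if total_requests == 0:
        return 1.0  # no traffic, nothing violated
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(sli: float, slo: float) -> float:
    """Fraction of the error budget still unspent.

    The budget is (1 - slo); what has been burned is (1 - sli).
    """
    budget = 1.0 - slo
    burned = 1.0 - sli
    return max(0.0, 1.0 - burned / budget)

# Example: 1,000,000 requests with 400 failures, against a 99.9% SLO.
sli = availability_sli(1_000_000, 400)          # 0.9996
remaining = error_budget_remaining(sli, 0.999)  # ~0.6: about 60% of budget left
```

The same shape works for latency SLIs (requests under a threshold divided by total requests); only the numerator changes.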
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
We are looking for an experienced Senior DevOps Engineer to join our innovative and fast-paced team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD pipelines, and automation. This role offers the opportunity to work with cutting-edge tools and technologies such as AWS, Docker, Kubernetes, Terraform, and Jenkins, while driving the operational efficiency of our development processes in a collaborative environment.

Key Responsibilities:
- Infrastructure as Code: Design, implement, and manage scalable, secure infrastructure using tools like Terraform, Ansible, and CloudFormation.
- Cloud Management: Deploy and manage applications on AWS, leveraging cloud-native services for performance, cost efficiency, and reliability.
- CI/CD Pipelines: Develop, maintain, and optimize CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to ensure smooth software delivery.
- Containerization & Orchestration: Build and manage containerized environments using Docker and orchestrate deployments with Kubernetes.
- Monitoring & Logging: Implement monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack) to ensure system reliability and quick troubleshooting.
- Automation: Develop scripts and tools to automate routine operational tasks, focusing on efficiency and scalability.
- Security & Compliance: Ensure infrastructure and applications meet security best practices and compliance standards.
- Collaboration: Work closely with development teams to align infrastructure and deployment strategies with business needs.
- Incident Management: Troubleshoot production issues, participate in on-call rotations, and ensure high availability of systems.
- Documentation: Maintain clear and comprehensive documentation for infrastructure, processes, and configurations.

Required Qualifications:
- Extensive experience in DevOps or Site Reliability Engineering (SRE) roles.
- Strong expertise with AWS or other major cloud platforms.
- Proficiency in building and managing CI/CD pipelines.
- Hands-on experience with Docker and Kubernetes.
- In-depth knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible.
- Familiarity with monitoring tools such as Prometheus, Grafana, or New Relic.
- Strong scripting skills in Python, Bash, or similar languages.
- Understanding of network protocols, security best practices, and system architecture.
- Experience in scaling infrastructure to support high-traffic, mission-critical applications.

Preferred Skills:
- Knowledge of multi-cloud environments and hybrid cloud setups.
- Experience with service mesh technologies (e.g., Istio, Consul).
- Familiarity with database management in cloud environments.
- Strong problem-solving skills and a proactive mindset.
- Ability to mentor junior team members and lead by example.
- Experience working in Agile/Scrum environments.

Skills: AWS, CI/CD pipelines, Jenkins, Git, ELK, Docker, Kubernetes, Terraform
Posted 3 months ago
2 - 7 years
10 - 15 Lacs
Bengaluru
Work from Office
Proven professional experience in a DevOps role, with expertise in Azure DevOps, Jenkins, Python, Shell/Bash scripting, and SQL; ELK; CI/CD; Ansible, Puppet, or Chef; and Azure Monitor, Prometheus, or the ELK stack.
Posted 3 months ago
2 - 4 years
6 - 12 Lacs
Bengaluru
Work from Office
Looking for an immediate joiner (remote opportunity). Role Requirements:
- API Integration: Experience integrating RESTful and GraphQL APIs using Node.js.
- Monitoring & Logging: Familiarity with tools like Prometheus, Grafana, or New Relic for system monitoring and performance tracking.
- API Validation: Proficiency in using Postman for API testing and validation.
- Incident Management: Strong troubleshooting skills to diagnose and resolve production issues efficiently.
Posted 3 months ago
5 - 7 years
7 - 17 Lacs
Mumbai
Work from Office
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience managing and administering the ELK stack (Elasticsearch, Logstash, Kibana).
- Strong knowledge of Elasticsearch architecture, clustering, and query optimization.
- Experience with Logstash, Beats, and other data ingestion tools.
- Proficiency in Kibana dashboard development and visualizing large volumes of data.
- In-depth experience with data storage and indexing strategies within Elasticsearch.
- Familiarity with Elasticsearch security, including role-based access control (RBAC), SSL/TLS encryption, and integration with third-party security tools.
- Strong scripting skills in languages like Bash, Python, or Groovy for automation tasks.
- Understanding of Linux/Unix system administration (e.g., CentOS, Ubuntu).
- Experience with containerization and orchestration tools like Docker and Kubernetes is a plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and deploying ELK on cloud infrastructure is an advantage.
- Experience with monitoring and alerting tools integrated with ELK (e.g., Prometheus, Grafana).
- Ability to troubleshoot and resolve complex issues in a production environment.
- Excellent communication skills, with the ability to document and explain complex technical concepts clearly.
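One common Elasticsearch query-optimization point implied by the role above is preferring filter context over query context for log searches, since filter clauses skip relevance scoring and are cacheable. A hedged sketch that builds such a query body; the field names (`status`, `@timestamp`) are hypothetical, not from the posting:

```python
import json

def last_hour_errors_query(status_field="status", time_field="@timestamp"):
    """Build an Elasticsearch bool query using filter context.

    Filter clauses do not compute scores and can be cached by the
    cluster, which is usually what repetitive log queries want.
    """
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {status_field: "error"}},
                    {"range": {time_field: {"gte": "now-1h"}}},
                ]
            }
        }
    }

# The body would be sent to an index's _search endpoint.
print(json.dumps(last_hour_errors_query(), indent=2))
```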
Posted 3 months ago
9 - 14 years
35 - 50 Lacs
Hyderabad
Hybrid
- C#/.NET development
- Leading Platform Engineering or Site Reliability Engineering teams
- SaaS platform
- Azure, AWS, or GCP
- Terraform
- Kubernetes
- DataDog, Prometheus, Grafana
Posted 3 months ago
5 - 10 years
6 - 9 Lacs
Hyderabad
Remote
About the Role: As a Senior Software Developer at Aark Connect, you'll engage in contract-based projects, collaborating closely with our teams to deliver high-quality solutions tailored to client-specific requirements. You'll have the chance to showcase your expertise, adapt rapidly to new challenges, and contribute directly to impactful projects.

Key Responsibilities:
- Engage actively in diverse software development projects, aligning closely with evolving client expectations.
- Quickly assimilate project requirements and deliver high-quality code efficiently.
- Collaborate seamlessly with both offshore and onshore teams, ensuring clear communication and timely deliveries.
- Participate proactively during project onboarding and requirement analysis phases.

Expectations:
- Be readily available to commence projects with a two-week notice period, allowing adequate preparation to understand the scope and requirements thoroughly.
- Communicate clearly and maintain responsiveness throughout active project periods.
- Effectively manage project deliverables alongside your current professional commitments, if applicable.

Qualifications:
- Minimum 5 years of professional software development experience.
- Solid expertise in modern development frameworks (Java, .NET, JavaScript, Angular, React, etc.).
- Excellent communication and collaboration skills.
- Ability to thrive in flexible, contract-based work environments.

Work Environment:
- Remote or hybrid flexibility available (preferred base: Hyderabad).
- Night shift working hours (7:00 PM - 3:30 AM IST).

Why Join Aark Connect?
- Flexible, project-driven assignments enabling continuous learning.
- Exposure to global projects and innovative technology environments.
- Advance notification and preparation period for smooth project onboarding.
- Opportunity to become part of a supportive, collaborative offshore team.
Posted 3 months ago
5 - 10 years
22 - 35 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
We are looking for 10 Software Engineer IIs (Data) to play a key role in delivering software products within a high-impact engineering team. You will collaborate with Product Managers, User Experience Designers, Architects, and other team members to modernize and build products aligned with the product team's vision and strategy. These products will leverage multi-cloud platforms, human-centered design, Agile, and DevOps principles to deliver industry-leading solutions at high velocity and exceptional quality. You'll be working alongside talented software engineering professionals, leading by example, mentoring others, and thriving in a fast-paced environment by embracing inclusive behaviors, demonstrating attention to detail, and navigating ambiguity.

Team Overview
The Data and AI enablement teams focus on accelerating end-to-end data and AI adoption across advanced analytics, product teams, and sales and operations functions. The broader team is responsible for data strategy, availability, and adoption via self-service tools, including an AI model registry, activation of models for the business, multichannel testing, and customer relationship measurement, all in alignment with Responsible AI principles. The Sales Hub team supports seamless execution and distribution of sales transactions. It plays a pivotal role in delivering accurate data promptly, enabling better inventory management, financial planning, sales performance improvement, customer engagement, risk mitigation, and competitive advantage.
Key Responsibilities
- Contribute to the delivery of complex solutions by breaking down big problems into smaller pieces.
- Actively participate in team planning activities.
- Ensure quality and integrity of the SDLC and identify opportunities to improve team practices through recommended tools and methods.
- Triage complex issues independently.
- Stay informed of the technology landscape and help plan delivery of broad business needs across multiple applications.
- Set a consistent example of agile development practices and coach peers on cross-functional collaboration.
- Mentor junior engineers and help new hires ramp up.
- Contribute to and enhance internal libraries and tools.
- Understand the business domain supported by your applications.
- Proactively communicate status and issues to leadership.
- Identify risks and challenges in your work and team deliverables.
- Collaborate across teams to solve customer-centric problems creatively.
- Show commitment to critical delivery deadlines.

Basic Qualifications
- 3+ years of relevant professional experience with a bachelor's degree, or equivalent.
- At least 1 year in cloud engineering and architecture with AWS services (e.g., EC2, Lambda, S3, RDS, API Gateway, etc.).
- 2+ years of experience in microservices architecture using Java, Kafka, and NoSQL databases.
- 2+ years of Java and Spring Boot development with a strong foundation in software engineering principles.
- Experience building and deploying microservices.
- Familiarity with CI/CD pipelines and tools such as Jenkins, GitLab CI, or AWS CodePipeline.
- Ability to understand business problems and apply engineering design principles to solve for scalability, performance, and security.
- Proven experience working directly with engineers, product managers, and stakeholders.
- Strong communication and collaboration skills.

Preferred Qualifications
- Experience in omni-channel retail or sales environments.
- Familiarity with Docker containerization and automated testing practices.
- Passion for keeping up with new technologies and industry trends.
- Proactive learning mindset and knowledge-sharing attitude.
- Experience with monitoring/logging tools like Prometheus, Grafana, ELK stack, or AWS CloudWatch.
- Understanding of security best practices in cloud environments.

Technical Skills (Tools, Technologies, Frameworks, Platforms)
- Programming Languages & Frameworks: Java, Spring Boot
- Cloud Platforms & Services (AWS-focused): EC2, Lambda, S3, RDS, API Gateway, CodePipeline, CloudWatch
- Microservices Architecture: microservices design and development, service-to-service communication, API design and implementation
- Databases: NoSQL databases
- Data Streaming / Messaging Systems: Kafka
- DevOps & CI/CD Tools: Jenkins, GitLab CI, CI/CD pipelines (general understanding and implementation)
- Containerization & Virtualization: Docker
- Monitoring and Logging Tools: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), AWS CloudWatch

Applied Technical Skills (Practices, Design Principles, Methodologies, etc.)
- Software Engineering Principles: modular and scalable architecture; object-oriented programming (OOP); code quality, maintainability, and reusability best practices.
- Agile Development Methodologies: Scrum/Kanban practices; agile ceremonies (planning, standups, retrospectives).
- DevOps Practices: continuous integration / continuous deployment; an infrastructure-as-code mindset.
- System Design and Architecture: breaking down complex systems into smaller, manageable components; designing solutions that scale and perform under load.
- Cloud Engineering & Architecture Principles: multi-cloud awareness; designing for resilience, fault tolerance, and cost efficiency.
- AI/ML Enablement (supporting systems, not core model development): AI model registry integration; model activation pipelines; Responsible AI principles (adoption, governance).
- Data Engineering & Analytics Support: data availability and strategy design; self-service data tooling enablement; multichannel testing frameworks; customer relationship measurement systems.
- Collaboration & Communication: working with cross-functional teams (Product, UX, Architecture); problem-solving across cross-team dependencies; proactive communication and status reporting.
- Quality Engineering: automated testing practices; ensuring high-quality software delivery through unit, integration, and regression testing.
- Security Best Practices (cloud-focused): secure deployment patterns; identity and access management in cloud setups.
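Designing for resilience in service-to-service communication, as the skills above call for, often starts with retries and exponential backoff around a downstream call. A minimal sketch; the `flaky` callable and its failure behavior are hypothetical stand-ins for a real service client, not anything from the posting:

```python
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.1, sleep=time.sleep):
    """Invoke `call()` with exponential backoff between failed attempts.

    The delay doubles each retry: base_delay, 2*base_delay, 4*base_delay, ...
    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Usage: a flaky downstream call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda _: None)  # "ok" after 3 tries
```

Production versions usually add jitter to the delay and retry only on errors known to be transient, so that permanent failures surface quickly.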
Posted 3 months ago
3 - 8 years
1 - 5 Lacs
Mumbai, Bengaluru, Kolkata
Hybrid
Experience: 3 to 10 years.

Job Description
Mandatory skill sets:
1. Strong understanding of CI/CD processes and hands-on experience with Jenkins, Azure DevOps, or GitHub Actions
2. Strong hands-on experience with Docker and Kubernetes
3. Experience managing deployments in a Kubernetes environment
4. Strong experience with Terraform, CloudFormation, or ARM templates
5. Strong experience in any development and/or scripting language
6. A valid certification in any of: CKA, CKAD, DevOps, or any cloud architect associate or professional

Good-to-have skill sets:
1. Experience with SAST and SCA tools like Checkmarx, Veracode, Fortify, or Black Duck
2. Experience with the Elastic or Prometheus+Grafana stack
3. Experience with service mesh technology
Posted 3 months ago
10 - 15 years
37 - 45 Lacs
Bengaluru
Work from Office
Bachelor's degree in Computer Science/Information Technology or a related technical field, or equivalent technology experience.
- 10+ years of experience in software development.
- 8+ years of experience in DevOps.
- Experience with the following cloud-native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes.
- Working knowledge of containers (i.e., Docker and Kubernetes), ideally with experience transitioning an organization through its adoption.
- Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full-stack integration.
- Strong working knowledge of enterprise platforms, tools, and principles, including web services, load balancers, shell scripting, authentication, IT security, and performance tuning.
- Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery.
- Experience working with a variety of vendor APIs, including cloud, physical, and logical infrastructure devices.
- Strong working knowledge of cloud offerings and cloud DevOps services (EC2, ECS, IAM, Lambda, cloud services, AWS CodeBuild, CodeDeploy, CodePipeline, etc., or Azure DevOps, API Management, PaaS).
- Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
- Manage and maintain standards for DevOps tools used by the team.
Posted 3 months ago
5 - 10 years
25 - 35 Lacs
Bengaluru
Hybrid
We're always looking for talented and creative engineers to join our team. The Event & Streaming Group offers a relaxed but fast-paced environment where creative, collaborative, and talented people are rewarded. We are very active and passionate about keeping up with and introducing cutting-edge open-source software (OSS) technology. Our data engineering and event management solutions are used by various services across Rakuten, Inc. and continue to grow, following the needs of systems supporting a data-driven strategy. Users' requirements and needs are changing continuously, and our solutions are evolving fast to keep up with and support them.

Role: We are in search of a talented engineer to work with members in India and Japan. In the Event & Streaming Group, which collects and engineers tremendous volumes of data using data engineering solutions, you will play a core role in administering, monitoring, and resolving problems on the current data engineering platform, as well as in R&D on cutting-edge data engineering technology.

Responsibilities:
- Administration and maintenance of the data pipeline system that transfers and wrangles terabytes of data from various services using ELK, Apache Kafka, and Apache NiFi.
- Collaboration with the SRE teams in Japan and India.
- Implementing automated operations systems.
- L1/L2 incident response.

Requirements:
- Excellent hands-on experience with Linux (more than 3 years).
- Experience administering and maintaining at least one of Apache Kafka, ELK, or a NiFi cluster in production (more than 1 year).
- Experience with Apache Pulsar or Confluent Kafka (more than 1 year).
- Hands-on experience with Apache Pulsar or Confluent Kafka on Kubernetes (more than 1 year).
- Hands-on experience with a deployment system like Chef, Ansible, etc.
- Hands-on experience with a metrics collection system like Prometheus, Graphite, etc.
- Experience with one of Java (or Scala), Python, or shell scripting (more than 1 year).
- Experience administering and maintaining a client-server backend system in production (more than 1 year).
- Self-organized and gritty about continuous improvement of the platform.
- A self-starter and good collaborator with strong communication skills.

Preferred Knowledge, Skills, and Abilities:
- Hands-on experience with HDP (HDFS, Hive/Hive LLAP, MapReduce, Spark on YARN) or CDP.
- Hands-on experience with or strong knowledge of Docker and Kubernetes.
- Hands-on experience with or strong knowledge of GCP, AWS, or Azure.
- Fluent or business-level Japanese.

Looking for immediate joiners, or candidates who can join within 30 days.

Rakuten is committed to cultivating and preserving a culture of inclusion and connectedness. We are able to grow and learn better together with a diverse team and inclusive workforce. The collective sum of the individual differences, life experiences, knowledge, innovation, self-expression, and talent that our employees invest in their work represents not only part of our culture, but our reputation and Rakuten's achievement as well. In recruiting for our team, we welcome the unique contributions that you can bring in terms of education, opinions, culture, ethnicity, race, sex, gender identity and expression, nation of origin, age, languages spoken, veteran status, color, religion, disability, sexual orientation, and beliefs.
Posted 3 months ago
6 - 10 years
20 - 35 Lacs
Pune
Hybrid
Senior SRE - SaaS

Our SRE role spans software, systems, and operations engineering. If your passion is building stable, scalable systems for a growing set of innovative products, as well as helping to reduce the friction of deploying these products for our engineering team, Pattern is the place for you. Come help us build a best-in-class platform for amazing growth. This position requires working during US business hours (MST time zone).

Key Responsibilities

Infrastructure and Automation
- Design, build, and manage scalable and reliable infrastructure in AWS (Postgres, Redis, Docker, queues, Kinesis Streams, S3, etc.).
- Develop Python or shell scripts for automation, reducing operational toil.
- Implement and maintain CI/CD pipelines for efficient build and deployment processes using GitHub Actions.

Monitoring and Incident Response
- Establish robust monitoring and alerting systems using observability methods, logs, and APM tools.
- Participate in on-call rotations to respond to incidents, troubleshoot problems, and ensure system reliability.
- Perform root cause analysis on production issues and implement preventative measures to mitigate future incidents.

Cloud Administration
- Manage AWS resources, including Lambda functions, SQS, SNS, IAM, RDS, etc.
- Perform Snowflake administration and set up backup policies for various databases.

Reliability Engineering
- Define Service Level Indicators (SLIs) and measure Service Level Objectives (SLOs) to maintain high system reliability.
- Utilise Infrastructure as Code (IaC) tools like Terraform for managing and provisioning infrastructure.

Collaboration and Empowerment
- Collaborate with development teams to design scalable and reliable systems.
- Empower development teams to deliver value quickly and accurately.
- Document system architectures, procedures, runbooks, and best practices.
- Assist developers in creating automation scripts and workflows to streamline operational tasks and deployments.
Innovative Infrastructure Solutions
- Spearhead the exploration of innovative infrastructure solutions and technologies aligned with industry best practices.
- Embrace a research-based approach to continuously improve system reliability, scalability, and performance.
- Encourage a culture of experimentation to test and implement novel ideas for system optimization.

Required Qualifications:
- Bachelor's degree in a technical field, or relevant work experience.
- 6+ years of experience in engineering, development, and DevOps/SRE fields.
- 3+ years of experience deploying and managing systems using Amazon Web Services.
- 3+ years of experience on a Software as a Service (SaaS) application.
- Proven "doer" attitude, with the ability to self-start, take a project to completion, and demonstrate project ownership.
- Familiarity with container orchestration tools like Kubernetes, Fargate, etc.
- Familiarity with Infrastructure as Code tooling like Terraform, CloudFormation, Ansible, and Puppet.
- Experience with CI/CD automated deployments using tools like GitHub Actions, Jenkins, or CircleCI.
- Experience with observability tools like Datadog, New Relic, Dynatrace, Grafana, Prometheus, etc.
- Experience with Linux server management, bash scripting, SSH keys, SSL/TLS certificates, MFA, cron, and log files.
- Deep understanding of AWS networking (VPCs, subnets, security groups, route tables, internet gateways, NAT gateways, NACLs), IAM policies, DNS, Route53, and domain management.
- Strong problem-solving and troubleshooting skills.
- Attention to detail: thoroughness in accomplishing tasks, ensuring accuracy and quality in all aspects of work.
- Excellent communication and collaboration abilities.
- Desire to help take Pattern to the next level through exploration and innovation.

Preferred Qualifications:
- Experience deploying applications on ECS and Fargate with ELB/ALB and Auto Scaling Groups.
- Experience deploying serverless applications with Lambda, API Gateway, Cognito, and CloudFront.
- Experience deploying applications built using JavaScript, Ruby, Go, or Python.
- Experience with Infrastructure as Code (IaC) using Terraform.
- Experience with database administration for Snowflake and Postgres.
- AWS certification would be a plus.
- A focus on adopting security best practices while building great tools.

Shift Detail: This position requires working during US shift hours, which corresponds to the night shift in India, primarily to support Mountain Time. Flexibility to work during these hours is essential to ensure seamless collaboration with US-based teams and to provide timely support for any incidents or operational tasks.
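Much of the "reducing operational toil" scripting mentioned in this role is small utilities like the following hedged sketch, which parses service log lines and flags when the error rate crosses an alerting threshold. The log format and the 5% threshold are hypothetical assumptions, not from the posting:

```python
import re

# Assumed line shape: "<timestamp> <service> [LEVEL] message"
LOG_LINE = re.compile(r'^\S+ \S+ \[(?P<level>[A-Z]+)\] (?P<message>.*)$')

def error_rate(log_lines):
    """Fraction of parseable log lines at ERROR level."""
    total = errors = 0
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip unparseable lines rather than failing
        total += 1
        if m.group("level") == "ERROR":
            errors += 1
    return errors / total if total else 0.0

def should_alert(log_lines, threshold=0.05):
    """True when the ERROR rate exceeds the threshold (default 5%)."""
    return error_rate(log_lines) > threshold

logs = [
    "2024-01-01T00:00:00Z svc-api [INFO] request served",
    "2024-01-01T00:00:01Z svc-api [ERROR] upstream timeout",
    "2024-01-01T00:00:02Z svc-api [INFO] request served",
    "2024-01-01T00:00:03Z svc-api [INFO] request served",
]
print(should_alert(logs))  # True: 1 of 4 lines is ERROR (25% > 5%)
```

In practice the same check would run against a log stream or APM export rather than an in-memory list, with the threshold derived from the service's SLO.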
Posted 3 months ago
6 - 10 years
20 - 35 Lacs
Pune
Hybrid
Senior SRE - SaaS

Our SRE role spans software, systems, and operations engineering. If your passion is building stable, scalable systems for a growing set of innovative products, as well as helping to reduce the friction of deploying these products for our engineering team, Pattern is the place for you. Come help us build a best-in-class platform for amazing growth.

Key Responsibilities

Infrastructure and Automation
- Design, build, and manage scalable and reliable infrastructure in AWS (Postgres, Redis, Docker, queues, Kinesis Streams, S3, etc.).
- Develop Python or shell scripts for automation, reducing operational toil.
- Implement and maintain CI/CD pipelines for efficient build and deployment processes using GitHub Actions.

Monitoring and Incident Response
- Establish robust monitoring and alerting systems using observability methods, logs, and APM tools.
- Participate in on-call rotations to respond to incidents, troubleshoot problems, and ensure system reliability.
- Perform root cause analysis on production issues and implement preventative measures to mitigate future incidents.

Cloud Administration
- Manage AWS resources, including Lambda functions, SQS, SNS, IAM, RDS, etc.
- Perform Snowflake administration and set up backup policies for various databases.

Reliability Engineering
- Define Service Level Indicators (SLIs) and measure Service Level Objectives (SLOs) to maintain high system reliability.
- Utilise Infrastructure as Code (IaC) tools like Terraform for managing and provisioning infrastructure.

Collaboration and Empowerment
- Collaborate with development teams to design scalable and reliable systems.
- Empower development teams to deliver value quickly and accurately.
- Document system architectures, procedures, runbooks, and best practices.
- Assist developers in creating automation scripts and workflows to streamline operational tasks and deployments.
Innovative Infrastructure Solutions
- Spearhead the exploration of innovative infrastructure solutions and technologies aligned with industry best practices.
- Embrace a research-based approach to continuously improve system reliability, scalability, and performance.
- Encourage a culture of experimentation to test and implement novel ideas for system optimization.

Required Qualifications:
- Bachelor's degree in a technical field, or relevant work experience.
- 6+ years of experience in engineering, development, and DevOps/SRE fields.
- 3+ years of experience deploying and managing systems using Amazon Web Services.
- 3+ years of experience on a Software as a Service (SaaS) application.
- Proven "doer" attitude, with the ability to self-start, take a project to completion, and demonstrate project ownership.
- Familiarity with container orchestration tools like Kubernetes, Fargate, etc.
- Familiarity with Infrastructure as Code tooling like Terraform, CloudFormation, Ansible, and Puppet.
- Experience with CI/CD automated deployments using tools like GitHub Actions, Jenkins, or CircleCI.
- Experience with observability tools like Datadog, New Relic, Dynatrace, Grafana, Prometheus, etc.
- Experience with Linux server management, bash scripting, SSH keys, SSL/TLS certificates, MFA, cron, and log files.
- Deep understanding of AWS networking (VPCs, subnets, security groups, route tables, internet gateways, NAT gateways, NACLs), IAM policies, DNS, Route53, and domain management.
- Strong problem-solving and troubleshooting skills.
- Attention to detail: thoroughness in accomplishing tasks, ensuring accuracy and quality in all aspects of work.
- Excellent communication and collaboration abilities.
- Desire to help take Pattern to the next level through exploration and innovation.

Preferred Qualifications:
- Experience deploying applications on ECS and Fargate with ELB/ALB and Auto Scaling Groups.
- Experience deploying serverless applications with Lambda, API Gateway, Cognito, and CloudFront.
- Experience deploying applications built using JavaScript, Ruby, Go, or Python.
- Experience with Infrastructure as Code (IaC) using Terraform.
- Experience with database administration for Snowflake and Postgres.
- AWS certification would be a plus.
- A focus on adopting security best practices while building great tools.
Posted 3 months ago
4 - 8 years
12 - 16 Lacs
Bengaluru
Work from Office
Your Role: Software developer in the cloud storage area, implementing and consuming APIs in the IBM Cloud infrastructure environment (IaaS). A motivated self-starter who loves to solve challenging problems, feels comfortable managing multiple and changing priorities, and meets deadlines in an entrepreneurial environment. Highly organized and detail-oriented, with excellent time management skills and the ability to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.

Responsibilities:
- Design and develop storage integrations to enable and support cloud platform business efforts.
- Participate in troubleshooting and fixing issues in the existing cloud storage environment.
- Produce code that is secure, scalable, and reliable, supported by unit tests, functional tests, and technical documentation.
- Participate in code reviews for your peers' development work, triage and solve live customer issues, and participate in all scrum activities.
- Monitor, measure, and improve code and data performance for the application you help to develop.
- Be available for occasional on-call shifts during weekdays and weekends.
All of this will take place in a strong team environment, which necessitates strong communication.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- 4-8 years of industry experience.
- Strong systems management experience with Linux/UNIX systems (RHEL preferred).
- Experience with Linux networking technologies and routing protocols (BGP, FRR).
- Experience with Docker and containerization technologies.
- Experience with cloud computing technologies such as AWS, VMware, and Azure.
- Experience with application deployment using CI/CD.
- Experience with monitoring tools such as Prometheus, Sysdig, Grafana, etc.
Preferred technical and professional experience Experience with Linux virtualization technologies such as KVM, Xen and QEMU Experience with Ceph, NFS, iSCSI, or object storage technologies Excellent Git skills (merges, rebase, branching, forking, submodules) Excellent with Python, Ansible, Terraform, Jenkins Microservices design and development in Kubernetes and GoLang (preferably) Experience with k8s CRDs, k8s controller programming with watcher informer model
Posted 3 months ago
3 - 6 years
8 - 18 Lacs
Noida
Remote
Role: Platform Engineer (Golang) Location: Remote Work timing: IST hours Job Description: Candidates need experience developing a platform, specifically a Kubernetes platform. Minimum Qualifications: 2+ years of experience developing scalable applications using Golang Administration experience in container orchestration platforms, preferably using Kubernetes, and demonstrated by the Certified Kubernetes Administrator (CKA) certificate Experience in observability (monitoring, logging, and tracing) for cloud-based environments, using CNCF tools such as Grafana, Prometheus, Thanos, Jaeger, or SaaS tools such as Datadog Proficiency in operating public cloud services using infrastructure-as-code tools like Crossplane, Terraform, or CloudFormation Expertise in maintaining self-service CI/CD platforms and supporting techniques like trunk-based development, GitOps-based deployments, or automated canary releases Experience with relational databases (Postgres/MySQL) Business-level communication in English Strong knowledge of fundamental AWS services with a certification as an AWS Associate or above Passion for staying up-to-date with the latest industry topics in Site Reliability Engineering (SRE), the Cloud Native Computing Foundation (CNCF), DevOps, and the AWS Well-Architected Framework Knowledge of common application architecture patterns: distributed systems, microservices, asynchronous processing, event-driven systems, and others Experience building Kubernetes controllers or operators at scale with Golang in AWS, GCP, Azure, or on-prem Experience with Kubernetes core systems and APIs Please share your resume at asingh21@fcsltd.com
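The Kubernetes controller work mentioned in this listing (the watcher/informer model) boils down to a level-triggered reconcile loop: observe desired state, compare it with actual state, and compute the actions needed to converge. A minimal sketch of that idea, with plain dicts standing in for cluster state (all names here are illustrative, not real client-go APIs):

```python
# Minimal sketch of a level-triggered reconcile loop, the core idea behind
# Kubernetes controllers. Real controllers use informers and work queues;
# here both "cluster" states are plain dicts for illustration only.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to converge actual state to desired state."""
    actions = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in actual:
            actions["create"].append(name)      # missing object: create it
        elif actual[name] != spec:
            actions["update"].append(name)      # drifted object: update it
    for name in actual:
        if name not in desired:
            actions["delete"].append(name)      # orphaned object: delete it
    return actions

# Example: one replica count drifted, one object is missing, one is orphaned.
desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
actual = {"web": {"replicas": 1}, "stale-job": {"replicas": 1}}
print(reconcile(desired, actual))
# → {'create': ['worker'], 'update': ['web'], 'delete': ['stale-job']}
```

Because the loop compares whole states rather than reacting to individual events, missed or duplicated events are harmless — the next pass converges anyway, which is why the pattern is called level-triggered.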
Posted 3 months ago
5 - 8 years
13 - 18 Lacs
Hyderabad
Work from Office
As a DevOps engineer, you will design, implement, and manage Kubernetes clusters for our telecom/networking applications. Developing and maintaining CI/CD pipelines for automated build, testing, and deployment. Monitoring and optimizing the performance and scalability of our Kubernetes infrastructure. Implementing and maintaining monitoring and alerting systems to proactively identify and resolve issues. Leading incident response and troubleshooting efforts, including root cause analysis. Automating operational tasks and processes to improve efficiency. Collaborating with development teams to integrate and deploy applications to Kubernetes. Contributing to the development and maintenance of our platform's security posture. Participating in on-call rotations to provide support for production systems. Leveraging network/telecom domain knowledge to effectively triage and resolve network-related issues. Contributing to development efforts by writing code and implementing new features (added advantage). Staying up-to-date with the latest Kubernetes and DevOps technologies and best practices. What we're looking for: We are seeking a highly motivated and experienced engineer with a strong background in Kubernetes and DevOps practices to join our team. This role will focus on building, maintaining, and scaling our network/telecom infrastructure and services in a Kubernetes/OpenShift-based environment. You will play a key role in ensuring the reliability, performance, and security of our platform, working closely with development, operations, and other engineering teams. Experience with triaging and troubleshooting complex issues is essential, as is a willingness to contribute to development efforts. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Four or more years of experience in DevOps engineering or a related role.
Proven experience with Kubernetes and containerization technologies (e.g., Docker). Experience with CI/CD tools (e.g., Jenkins, GitLab). Strong understanding of networking concepts and protocols (e.g., TCP/IP, BGP, MPLS). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Experience with cloud computing platforms (e.g., AWS, Azure, GCP) is an added advantage. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Experience in the telecom/networking domain is essential. Experience with scripting languages (e.g., Python, Bash) is highly desirable. Experience with development and coding is a significant advantage. Even better if you have one or more of the following: Experience with a high-performance, high-availability environment. Experience with network technologies like SDN/NFV. Strong analytical and debugging skills. Good communication and presentation skills. Relevant certifications.
Posted 3 months ago
6 - 8 years
8 - 10 Lacs
Pune
Work from Office
AWS Engineer | budget: 30L | Pune/Hyderabad | 6+ yrs About The Role: We are looking for an AWS Engineer to join our team and play a key role in designing, deploying, and maintaining cloud infrastructure. The ideal candidate will have a deep understanding of AWS architecture, experience with Infrastructure as Code (Terraform), proficiency in code delivery using GitHub, and the ability to support on-call operations to ensure high availability and reliability of our cloud services. Key Responsibilities AWS Architecture & Design: Design and implement scalable, secure, and cost-effective AWS solutions. Optimize cloud services to improve performance, reliability, and security. Work closely with engineering teams to architect cloud-native solutions. Code Delivery & GitHub Management: Maintain and optimize CI/CD pipelines for seamless deployment. Manage repositories, branches, and pull requests in GitHub. Ensure best practices in version control and automation. Terraform & Infrastructure as Code (IaC): Develop, maintain, and manage Terraform configurations for AWS infrastructure. Automate cloud deployments and enforce infrastructure best practices. Perform regular code reviews and collaborate on improvements. On-Call & Operational Support: Participate in an on-call rotation to troubleshoot and resolve incidents. Monitor system performance, logs, and alerts to proactively address issues. Document operational procedures and incident response strategies. Required Skills & Qualifications Strong expertise in AWS services (EC2, S3, Lambda, RDS, IAM, VPC, etc.). Experience with Terraform for managing AWS infrastructure. Proficiency in GitHub for code management and collaboration. Experience with CI/CD tools (GitHub Actions, Jenkins, or similar). Familiarity with monitoring and logging tools (CloudWatch, Prometheus, Datadog, etc.). Understanding of networking and security in AWS environments. Strong problem-solving skills and ability to work in an on-call environment.
Experience with scripting and automation using Python, Bash, or similar languages.
Posted 3 months ago
8 - 13 years
25 - 30 Lacs
Chennai
Work from Office
Overview We are looking for a highly skilled Tech Lead Engineer to spearhead our data and application migration projects. The ideal candidate will have in-depth knowledge of cloud migration strategies, especially with AWS, and hands-on experience in large-scale migration initiatives. This role requires strong leadership abilities, technical expertise, and a keen understanding of both the source and target platforms. Responsibilities Key Responsibilities: Lead end-to-end migration projects, including planning, design, testing, and implementation. Collaborate with stakeholders to define migration requirements and goals. Perform assessments of existing environments to identify the scope and complexity of migration tasks. Design and architect scalable migration strategies, ensuring minimal downtime and business continuity. Oversee the migration of on-premises applications, databases, and data warehouses to cloud infrastructure. Ensure the security, performance, and reliability of migrated workloads. Provide technical leadership and guidance to the migration team, ensuring adherence to best practices. Troubleshoot and resolve any technical challenges related to the migration process. Collaborate with cross-functional teams, including infrastructure, development, and security. Document migration procedures and lessons learned for future reference. Requirements Primary Skills: Cloud Migration Expertise (AWS): Strong experience in AWS migration services such as AWS Database Migration Service (DMS), Lambda, Step Functions, Trigger, AWS Migration Hub, AWS Application Migration Service, and AWS DataSync. In-depth knowledge of AWS services including EC2, S3, RDS, Lambda, and VPC. Experience with AWS Well-Architected Framework and implementing security best practices. Data and Application Migration: Extensive experience in data migration tools (e.g., AWS DMS, Snowball, rsync, etc.). 
Hands-on experience in migrating legacy On-Prem or Cloud applications and monolithic systems to cloud-native architectures. Automation & Scripting: Proficiency in automation frameworks and scripting languages such as Terraform, CloudFormation, Ansible, Python, or Shell scripting for infrastructure provisioning and configuration management. Secondary Skills: DevOps Tools: Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI) and containerization (e.g., Docker, Kubernetes). Networking & Security: Understanding of networking concepts such as VPN, DNS, load balancing, and firewalls. Familiarity with cloud security tools and compliance standards (e.g., IAM, KMS, encryption at rest/in transit). Project Management: Experience with project management methodologies such as Agile/Scrum. Familiarity with project tracking tools like JIRA, Trello, or Asana. Monitoring & Optimization: Experience with monitoring tools such as CloudWatch, Prometheus, or Grafana. Knowledge of performance tuning and optimization post-migration.
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Kochi
Work from Office
Job Summary: We are seeking an experienced DevOps Engineer with over 4 years of hands-on experience in managing and automating IT infrastructure and application deployment workflows. The ideal candidate will have expertise in Linux, cloud platforms, containerization, Infrastructure as Code (IaC), scripting, and monitoring tools. As a DevOps Engineer, you will collaborate with cross-functional teams to enhance our CI/CD pipelines, optimize cloud environments, and ensure the reliability and scalability of our infrastructure. Responsibilities: Design, implement, and manage CI/CD pipelines to automate deployment processes. Configure and maintain cloud environments (AWS, Azure, GCP) for scalability, performance, and cost-efficiency. Build and maintain containerized applications using Docker and Kubernetes. Implement Infrastructure as Code (IaC) practices with tools like Terraform and Ansible for consistent and reproducible infrastructure management. Monitor and troubleshoot production issues using Grafana, Prometheus, and other monitoring tools. Automate and streamline operations using scripting languages (Python, Bash). Collaborate with development and QA teams to establish best practices for build, deployment, and release processes. Ensure system security, performance, and availability through proactive monitoring and optimization. Document and maintain operational procedures, best practices, and knowledge base for DevOps tools and processes. Required Skills: Operating Systems: Proficiency in Linux systems administration and shell scripting. Cloud Platforms: Experience with one or more major cloud providers (AWS, Azure, GCP). Containerization: Strong understanding of Docker, Kubernetes for orchestrating containerized applications. Programming & Scripting Languages: Experience with Python and scripting languages (Bash). Version Control: Solid experience with Git for version control. 
Infrastructure as Code (IaC): Hands-on experience with Terraform and Ansible for managing infrastructure. Monitoring and Logging: Proficiency in using monitoring tools such as Grafana, Prometheus, or similar. CI/CD Tools: Experience with CI/CD pipelines (Jenkins, GitLab CI, etc.). Preferred Qualifications: Relevant certifications in AWS, Azure, GCP, or Kubernetes. Strong troubleshooting skills and a proactive approach to problem-solving. Experience in security best practices for cloud environments. Familiarity with Agile methodologies and DevOps culture.
Posted 3 months ago
3 - 5 years
12 - 22 Lacs
Chennai, Pune
Work from Office
Role & responsibilities Cloud SRE Engineer Key Responsibilities: Design, implement, and maintain highly available and scalable infrastructure on Azure and GCP. Develop and deploy comprehensive observability, monitoring, and incident response systems. Automate infrastructure management, scaling, and deployment processes using Infrastructure-as-Code (IaC) tools like Terraform and ARM. Collaborate with development teams to design resilient deployment architectures and ensure production readiness. Implement proactive performance monitoring and capacity planning strategies. Develop automated recovery and self-healing mechanisms for cloud infrastructure. Establish and enforce best practices for SRE and cloud infrastructure management. Ensure compliance, security, and governance standards across cloud environments. Required Skills: Expertise in observability tools like Prometheus, Grafana, and Datadog . Knowledge of cloud services on Azure and GCP. Hands-on experience with CI/CD tools and deployment automation. Solid understanding of cloud networking, security, and resource management. Strong scripting skills in Python, Bash, or PowerShell. Excellent troubleshooting, problem-solving, and communication skills. Preferred Qualifications: SRE certifications or relevant cloud certifications. Experience with multi-tenant deployments and high-scale environments. Familiarity with hybrid cloud and complex deployment scenarios. Cloud certifications for Azure and GCP
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Chennai
Work from Office
What you'll be doing... As a DevOps engineer, you will design, implement, and manage Kubernetes clusters for our telecom/networking applications. Developing and maintaining CI/CD pipelines for automated build, testing, and deployment. Monitoring and optimizing the performance and scalability of our Kubernetes infrastructure. Implementing and maintaining monitoring and alerting systems to proactively identify and resolve issues. Leading incident response and troubleshooting efforts, including root cause analysis. Automating operational tasks and processes to improve efficiency. Collaborating with development teams to integrate and deploy applications to Kubernetes. Contributing to the development and maintenance of our platform's security posture. Participating in on-call rotations to provide support for production systems. Leveraging network/telecom domain knowledge to effectively triage and resolve network-related issues. Contributing to development efforts by writing code and implementing new features (added advantage). Staying up-to-date with the latest Kubernetes and DevOps technologies and best practices. What we're looking for: We are seeking a highly motivated and experienced engineer with a strong background in Kubernetes and DevOps practices to join our team. This role will focus on building, maintaining, and scaling our network/telecom infrastructure and services in a Kubernetes/OpenShift-based environment. You will play a key role in ensuring the reliability, performance, and security of our platform, working closely with development, operations, and other engineering teams. Experience with triaging and troubleshooting complex issues is essential, as is a willingness to contribute to development efforts. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Four or more years of experience in DevOps engineering or a related role.
Proven experience with Kubernetes and containerization technologies (e.g., Docker). Experience with CI/CD tools (e.g., Jenkins, GitLab). Strong understanding of networking concepts and protocols (e.g., TCP/IP, BGP, MPLS). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Experience with cloud computing platforms (e.g., AWS, Azure, GCP) is an added advantage. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Experience in the telecom/networking domain is essential. Experience with scripting languages (e.g., Python, Bash) is highly desirable. Experience with development and coding is a significant advantage. Even better if you have one or more of the following: Experience with a high-performance, high-availability environment. Experience with network technologies like SDN/NFV. Strong analytical and debugging skills. Good communication and presentation skills. Relevant certifications.
Posted 3 months ago
3 - 7 years
8 - 18 Lacs
Bengaluru
Work from Office
Job description We are looking for talented DevOps Engineers to join our growing team working on innovative projects in enterprise-scale AI applications and solutions. You will be responsible for building and setting up new development tools and infrastructure utilizing knowledge in continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and integration with OIDC/SAML identity providers. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats. Your typical day will involve working on various aspects of DevOps, including continuous integration, deployment, and security, to ensure smooth and efficient software development processes. Technical Qualifications Well versed in Python (programming language), YAML, JSON, shell scripting, GitHub Actions, Docker containerization, Kubernetes cluster administration, and deployment using Helm charts. Experience with infrastructure as code tools such as Terraform or CloudFormation. Experience with configuration management tools such as Ansible or Puppet. Ability to write scripts and automate tasks using scripting languages such as Bash or Python. Collaborate with development teams to design and implement CI/CD pipelines. Automate the deployment, scaling, and management of applications and infrastructure. Implement and maintain monitoring and alerting systems to ensure high availability and performance. Troubleshoot and resolve issues related to CI/CD pipelines, infrastructure, and applications. Proficiency with monitoring, log analytics, and integrating with tools like the Grafana and Prometheus stack for a comprehensive monitoring and alerting framework. Experience with cloud platforms (AWS, GCP, or Azure). Good understanding of Basic Auth, two-way SSL, OAuth2, or JWT token-based security. Experience with Bitbucket, GitHub, or other version control systems.
Ability to develop and extend CI/CD pipelines (Jenkins, Docker). Generic Qualifications Ensure the security and compliance of the infrastructure and applications. Comfortable working with cross-functional teams (product managers, client teams, and software developers) to deliver quality solutions. Ability to perform independently. Interest in staying current with technology and mentoring others. Ability to perform upgrades/patches to existing systems as needed. Others: Willing to relocate or commute to North Bangalore for work.
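The JWT token-based security named in the qualifications above is easy to demystify: a JWT is just `base64url(header).base64url(payload).base64url(HMAC)`. A stdlib-only HS256 sketch, purely illustrative (in production use a vetted library such as PyJWT, and validate claims like `exp` as well):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec (RFC 7519) requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Create an HS256-signed JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "deploy-bot"}, b"secret")
print(verify_jwt(token, b"secret"))        # → True
print(verify_jwt(token, b"wrong-secret"))  # → False
```

Note the `hmac.compare_digest` call: a plain `==` comparison can leak timing information, which is exactly the kind of detail these roles expect candidates to know.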
Posted 3 months ago
10 - 16 years
10 - 20 Lacs
Mumbai Suburbs, Mumbai, Delhi
Work from Office
Education: B.E./B.Tech/MCA in Computer Science Experience: 3 to 6 years of experience in Kubernetes/GKE/AKS/OpenShift administration Mandatory Skills (Docker and Kubernetes) Should have a good understanding of the components of various types of Kubernetes clusters (Community/AKS/GKE/OpenShift) Should have provisioning experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift) Should have upgrade and monitoring experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift) Should have good experience in container security Should have good experience with container storage Should have good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins) Should have good experience/knowledge of cloud platforms, preferably Azure/Google/OpenStack Should have good experience with container runtimes like Docker/containerd Should have a basic understanding of application life cycle management on container platforms Should have a good understanding of container registries Should have a good understanding of Helm and Helm charts Should have a good understanding of container monitoring tools like Prometheus, Grafana, and ELK Should have good experience with the Linux operating system Should have a basic understanding of enterprise networks and container networks Should be able to handle Severity#2 and Severity#3 incidents Good communication skills Should have the capability to provide support Should have analytical and problem-solving capabilities and the ability to work with teams Should have experience with a 24x7 operations support framework Should have knowledge of the ITIL process Preferred Skills/Knowledge Container Platforms - Docker, Kubernetes, GKE, AKS or OpenShift Automation Platforms - Shell scripts, Ansible, Jenkins Cloud Platforms - GCP/Azure/OpenStack Operating Systems - Linux/CentOS/Ubuntu Container Storage and Backup Desired Skills 1. Certified Kubernetes Administrator OR 2. Certified Red Hat OpenShift Administrator 3.
Certification of administration of any Cloud Platform will be an added advantage Soft Skills 1. Must have good troubleshooting skills 2. Must be ready to learn new technologies and acquire new skills 3. Must be a Team Player 4. Should be good in Spoken and Written English
Posted 3 months ago
2 - 7 years
4 - 9 Lacs
Bengaluru
Work from Office
About the role As a DevOps Engineer, you will implement technologies and processes and develop tooling to aid and supplement our off-the-shelf tools. Because we operate within a dynamic Agile development process, DevOps Engineers collaborate with multi-functional teams consisting of: Product Engineering, Product Owners, Developers, and QA Resources. This role requires fundamental knowledge of DevOps principles and tools. You will contribute to our team's success by applying your knowledge and skills and collaborating with senior team members throughout the organization. Responsibilities: Collaborate effectively with Product and Development teams to understand needs, evaluate alternative business solutions, and prioritize duties Assist in creating and reviewing technical requirements derived from the Product team's stories Question and clarify requirements presented by the Product team during development Ensure code reviews are regularly conducted and contribute to testing as needed Support teams with bug fixes for released functionality Develop tooling to complement product and development team capabilities Implement and maintain continuous integration and continuous delivery (CI/CD) processes, technologies, and tools Troubleshoot deployment incidents as required Develop and maintain accurate technical documentation Collaborate across multiple engineering teams to enhance development practices Promote DevOps best practices to both technical and non-technical teams Share knowledge about Lean/Agile practices, encouraging collaboration and communication Assist the DevOps team in implementing tooling and processes for optimal efficiency Serve as a trusted Technical Advisor during customer meetings Mentor and coach less experienced DevOps team members and Delivery teams in tooling, technology, and practices Present ideas for system improvements and think creatively Qualifications & Skills Bachelor's degree in Computer Science, Engineering, or a related field, plus a minimum of 2 years
experience; or an advanced degree; or equivalent work experience Desire to work in a fast-paced environment with the ability to coach less experienced team members and self-manage Knowledge of the Software Development Life Cycle (SDLC) Excellent communication and stakeholder management skills Experience applying sound DevOps principles Previous exposure to cloud-based CI/CD solutions Familiarity with Agile/Scrum development Knowledge of API-driven, extensible, loosely coupled systems Understanding of automation, security, stability, and scalability in private and public cloud environments Outstanding problem-solving and troubleshooting abilities Self-starter and quick learner Preferred Tools knowledge: Build systems, code management CI/CD GitHub, Azure DevOps, Jenkins Automation technologies Helm, Ansible, Terraform, Chef, Puppet Cloud infrastructure platforms AWS, Azure (multitenancy knowledge) Containers Docker, Kubernetes Programming languages YAML, Java, JavaScript, Node, Go, Ruby, C# Scripting languages Bash, Python, PowerShell, Ruby Security SonarQube, Black Duck, Snyk Operational monitoring Datadog, ELK stack, Prometheus, Grafana, AppDynamics, New Relic, Splunk Operating systems Linux, Windows Application security and vault technologies HashiCorp Vault, AWS Certificate Manager, Azure Key Vault Automated testing Selenium, Cucumber Virtualization technologies vSphere/VMware, Hyper-V gRPC/API/REST Flux, Garden, Kafka, Eventstore, MinIO
Posted 3 months ago
10 - 14 years
12 - 16 Lacs
Bengaluru
Work from Office
Strong knowledge of Linux-based system administration. Sound knowledge of Kubernetes (K3s & K8s) environments. Knowledge of network-level configuration for a cloud-deployed distributed application environment, containerized and orchestrated using Kubernetes. Knowledge of configuring cloud infrastructure system architecture. Experience in HA setup, cluster setup, and DC-DR setup. Knowledge of Kafka, Ansible, and shell scripting. Knowledge of Prometheus and Grafana monitoring setup. Experience in VMware setup. Windows-based system administration exposure is a plus. Experience with Git. In addition, knowledge of PostgreSQL will be an added advantage.
Posted 3 months ago
Prometheus is a popular monitoring and alerting tool used in the field of DevOps and software development. In India, the demand for professionals with expertise in Prometheus is on the rise. Job seekers looking to build a career in this field have a promising outlook in the Indian job market.
Cities such as Bengaluru, Hyderabad, Pune, and Chennai are known for their vibrant tech industries and have a high demand for professionals skilled in Prometheus.
The salary range for Prometheus professionals in India varies based on experience levels. Entry-level positions can expect to earn around ₹5-8 lakhs per annum, whereas experienced professionals can earn up to ₹15-20 lakhs per annum.
A typical career path in Prometheus may include roles such as: - Junior Prometheus Engineer - Prometheus Developer - Senior Prometheus Engineer - Prometheus Architect - Prometheus Consultant
As professionals gain experience and expertise, they can progress to higher roles with increased responsibilities.
In addition to Prometheus, professionals in this field are often expected to have knowledge and experience in: - Kubernetes - Docker - Grafana - Time series databases - Linux system administration
Having a strong foundation in these related skills can enhance job prospects in the Prometheus domain.
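A concrete feel for how Prometheus handles counters is a common interview topic: its `rate()` function computes the per-second increase of a counter over a time window and compensates for counter resets (a value dropping back toward zero after a restart). A simplified stdlib-only Python approximation of that idea (the real implementation also extrapolates to the window boundaries, which is omitted here):

```python
def counter_rate(samples):
    """Approximate Prometheus-style rate(): per-second increase of a counter
    over (timestamp_seconds, value) samples, compensating for resets.

    Simplified sketch: no extrapolation to the query-window boundaries.
    """
    if len(samples) < 2:
        return 0.0
    increase = 0.0
    for (_, prev), (_, curr) in zip(samples, samples[1:]):
        # A drop means the counter was reset; count the new value from zero.
        increase += curr - prev if curr >= prev else curr
    elapsed = samples[-1][0] - samples[0][0]
    return increase / elapsed

# 60 seconds of scrapes; the counter resets between t=30 and t=45.
samples = [(0, 100), (15, 130), (30, 160), (45, 10), (60, 40)]
print(counter_rate(samples))  # (30 + 30 + 10 + 30) / 60 ≈ 1.67 per second
```

The reset handling is the key detail: a naive `(last - first) / elapsed` would yield a negative rate across the restart, while the per-step accumulation recovers the true increase.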
As you explore opportunities in the Prometheus job market in India, remember to continuously upgrade your skills and stay updated with the latest trends in monitoring and alerting technologies. With dedication and preparation, you can confidently apply for roles in this dynamic field. Good luck!