
1633 Grafana Jobs - Page 47

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field; 7 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools; working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces; setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2 - 4 pm.
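As a minimal sketch of the Infrastructure-as-Code approach this posting asks for, a Terraform configuration might look like the following; the region, CIDR ranges, AMI ID, and resource names are placeholders for illustration, not values from the posting:

```hcl
# Hypothetical sketch: a VPC, a subnet, and one EC2 instance managed
# declaratively, so the same infrastructure can be recreated on demand.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_vpc" "app" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app_a" {
  vpc_id     = aws_vpc.app.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.app_a.id

  tags = {
    Name = "devops-demo"
  }
}
```

Running `terraform plan` against such a file previews changes before `terraform apply` makes them, which is the repeatability benefit the posting alludes to.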

Posted 1 month ago

Apply

8.0 - 13.0 years

20 - 30 Lacs

Bangalore Rural, Bengaluru

Work from Office

Immediate Hiring: Java + Observability Engineer (Apache Storm)
Location: Bengaluru | Architect Level | Immediate Joiners Only

We are looking for a skilled and experienced Java + Observability Engineer with expertise in Apache Storm to join our team in Bengaluru. This is an exciting opportunity for professionals passionate about modern observability stacks and distributed systems.

Key Skills Required:
- Java (version 8/11 or higher)
- Observability tools: Prometheus, Grafana, OpenTelemetry, ELK, Jaeger, Zipkin, New Relic
- Containerization: Docker, Kubernetes
- CI/CD pipelines
- Experience designing and building scalable systems as an architect
- Hands-on experience with Apache Storm

Note: This role is open to immediate joiners only. If you're ready to take on a challenging architect-level role and make an impact, send your resume to sushil@saisservices.com

Posted 1 month ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Bengaluru

Hybrid

Role & responsibilities: Java (8/11 or higher); observability tools such as Prometheus, Grafana, OpenTelemetry, ELK, Jaeger, Zipkin, and New Relic; Docker/Kubernetes; CI/CD pipelines; architect-level experience. Preferred candidate profile:

Posted 1 month ago

Apply

5.0 - 10.0 years

13 - 15 Lacs

Pune, Chennai, Bengaluru

Work from Office

A Grafana specialist to lead the creation of robust dashboards for comprehensive end-to-end monitoring, with a strong background in production support monitoring and a keen understanding of the metrics that matter to both technology teams and management. Required candidate profile (5+ years): build Grafana dashboards for monitoring; use Prometheus and exporters for real-time data; integrate multi-source data and alerts; create Unix/Python scripts for log automation; manage Jira/ServiceNow dashboards.
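The "Unix/Python scripts for log automation" item above can be sketched with a small stdlib-only script; the log line format, service names, and the idea of counting ERROR lines per service are illustrative assumptions, not the employer's actual tooling:

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> <LEVEL> <service>: <message>".
LINE_RE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<service>[\w-]+):")

def error_counts(lines):
    """Count ERROR lines per service, e.g. to feed an alert or a dashboard."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2024-05-01T10:00:00Z INFO payments: started",
    "2024-05-01T10:00:05Z ERROR payments: timeout calling gateway",
    "2024-05-01T10:00:09Z ERROR auth: token expired",
    "2024-05-01T10:00:11Z ERROR payments: retry failed",
]
print(error_counts(sample))  # Counter({'payments': 2, 'auth': 1})
```

In practice such a script would tail real log files on a schedule and push the counts to Prometheus or a ticketing dashboard, but the parsing-and-aggregation core is the same.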

Posted 1 month ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Hyderabad

Work from Office

Job Description: Arcadis development teams within our Intelligence division deliver complex solutions and push the limits of technology. Our talented systems professionals do more than just write code and debug: they make a significant impact on the design and development of state-of-the-art projects. We are looking for a DevOps Engineer to join our growing and dynamic product team.

Responsibilities:
- Ensuring availability, performance, security, and scalability of production systems.
- Troubleshooting system issues causing downtime or performance degradation, with expertise in Agile software development methodologies.
- Implementing CI/CD pipelines, automating configuration management, and using Ansible playbooks.
- Enforcing DevOps practices in collaboration with software developers.
- Enhancing development and release processes through automation.
- Automating alerts for system availability and performance monitoring.
- Collaborating on defining security requirements and conducting tests to identify weaknesses.
- Building, securing, and maintaining on-premises and cloud infrastructures.
- Prototyping solutions, evaluating new tools, and engaging in incident handling and root cause analysis.
- Leading the automation effort and maintaining servers to the latest security standards.
- Understanding source code security vulnerabilities and maintaining infrastructure code bases using Puppet.
- Supporting and improving Docker-based development practices.
- Contributing to a maturing DevOps culture, showcasing a methodical approach to problem-solving, and following agile practices.

Qualifications:
- 2+ years of hands-on DevOps experience on Linux-based systems.
- Proficiency in cloud technologies (AWS, Azure, GCP), CI/CD tools (Jenkins, Ansible, GitHub, Docker, etc.), and Linux user administration.
- Expertise in OpenShift, along with managing infrastructure as code using Ansible and Docker.
- Experience in setup and management of DB2 databases; experience with Maximo Application Suite is desirable.
- Experience setting up application and infrastructure monitoring tools such as Prometheus, Grafana, cAdvisor, Node Exporter, and Sentry.
- Experience working with log analysis and monitoring tools in a distributed application scenario, with independent analysis of problems and implementation of solutions.
- Experience with Change Management and Release Management in Agile methodology; DNS management is desirable.
- Experience with routine security scanning for malicious software and suspicious network activity, along with protocol analysis to identify and remedy network performance issues.
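The Ansible configuration-management work mentioned above might look like this minimal playbook; the host group and package/service names are assumptions chosen to match the Prometheus/Node Exporter stack the posting lists:

```yaml
# Hypothetical playbook: install and enable the Prometheus node exporter
# on every host in the "monitored" inventory group.
- name: Configure monitoring agents
  hosts: monitored
  become: true
  tasks:
    - name: Install node exporter package
      ansible.builtin.apt:
        name: prometheus-node-exporter
        state: present

    - name: Ensure exporter service is running and enabled at boot
      ansible.builtin.service:
        name: prometheus-node-exporter
        state: started
        enabled: true
```

Because the tasks are idempotent, rerunning the playbook converges hosts to the same state rather than repeating the installation.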

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Summary: We are looking for a skilled and proactive DevOps Engineer with 2+ years of experience in managing and automating cloud infrastructure, ensuring deployment security, and supporting CI/CD pipelines. The ideal candidate is proficient in tools like Docker, Kubernetes, and Terraform, and has hands-on experience with observability stacks such as Prometheus and Grafana. You will work closely with engineering teams to maintain uptime for media services, support ML model pipelines, and drive full-cycle Dev & Ops best practices.

Key Responsibilities:
- Design, deploy, and manage containerized applications using Docker and Kubernetes.
- Automate infrastructure provisioning and management using Terraform on AWS or GCP.
- Implement and maintain CI/CD pipelines with tools like Jenkins, ArgoCD, or GitHub Actions.
- Set up and manage monitoring, logging, and alerting systems using Prometheus, Grafana, and related tools.
- Ensure high availability and uptime for critical services, including media processing pipelines and APIs.
- Collaborate with development and ML teams to support model deployment workflows and infrastructure needs.
- Drive secure deployment practices, access control, and environment isolation.
- Troubleshoot production issues and participate in on-call rotations where required.
- Contribute to documentation and DevOps process optimization for better agility and resilience.

Qualifications:
- 2+ years of experience in DevOps, SRE, or cloud infrastructure roles.
- Hands-on experience with Docker, Kubernetes, and Terraform.
- Solid knowledge of CI/CD tooling (e.g., Jenkins, ArgoCD, GitHub Actions).
- Experience with observability tools such as Prometheus and Grafana.
- Familiarity with AWS or GCP infrastructure, including networking, compute, and IAM.
- Strong understanding of deployment security, versioning, and full lifecycle support.

Preferred Qualifications:
- Experience supporting media pipelines or AI/ML model deployment infrastructure.
- Understanding of DevSecOps practices and container security tools (e.g., Trivy, Aqua).
- Scripting skills (Bash, Python) for automation and tooling.
- Experience managing incident response and performance optimization.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
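The Prometheus alerting work described above might be expressed as a rule like the following; the group name, threshold, and label values are illustrative assumptions, though `up` is Prometheus's standard per-target health metric:

```yaml
# Hypothetical alerting rule: fire when a scrape target has been
# unreachable for more than five minutes.
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
```

Alertmanager would then route a firing `InstanceDown` alert to the on-call rotation, while Grafana visualizes the same `up` series on a dashboard.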

Posted 1 month ago

Apply

1.0 - 4.0 years

1 - 4 Lacs

Hyderabad

Work from Office

Working knowledge of Jenkins, Terraform, Puppet, and JIRA. Basic knowledge of AWS or Azure. Linux administration and shell scripting. Moderate knowledge of Grafana and the ELK stack. Good communication skills required.

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Pune

Remote

What You'll Do: We are looking for an experienced Machine Learning Engineer with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities as a Senior Technical Lead will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will report to the Senior Manager, Software Engineering.

What Your Responsibilities Will Be: You have a blend of technical skills in AI & Machine Learning, especially with LLMs, and a deep-seated understanding of software development practices; you'll work with a team to ensure our systems are scalable, performant, and accurate. We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include:
- Build on top of the foundational framework for supporting Large Language Model applications at Avalara.
- Work with LLMs such as GPT, Claude, Llama, and other Bedrock models.
- Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place.
- Inspire creativity by researching and applying the latest technologies and methodologies in machine learning and software development.
- Write, review, and maintain high-quality code that meets industry standards, contributing to the project's success.
- Lead code review sessions, ensuring good code quality and documentation.
- Mentor junior engineers, encouraging a culture of collaboration.
- Develop and debug software, preferably in Python, though familiarity with additional programming languages is valued and encouraged.

What You'll Need to Be Successful:
- 8+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems.
- Bachelor's degree with computer science exposure.
- Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices, keeping up with technological innovations in AI & ML (esp. GenAI).
- Expertise in design patterns, data structures, and distributed systems, and experience with cloud technologies.
- Good analytical, design, and debugging skills.

Technologies you will work with: Python, LLMs, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, Grafana. This is a remote role.

Posted 1 month ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Bengaluru

Work from Office

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems: the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

The Platform and Product team is shaping one of the key growth vector areas for ZS, with engagements comprising clients from industries like quick-service restaurants, technology, food & beverage, hospitality, travel, insurance, and consumer packaged goods across the North America, Europe, and South East Asia regions. The Platform and Product India team currently has a presence across the New Delhi, Pune, and Bengaluru offices and is continuously expanding at a great pace. The team works with colleagues across clients and geographies to create and deliver real-world pragmatic solutions leveraging AI SaaS products & platforms, Generative AI applications, and other advanced analytics solutions at scale.

What You'll Do:
- Work with cloud technologies: AWS, Azure, or GCP.
- Create container images and maintain container registries.
- Create, update, and maintain production-grade applications on Kubernetes clusters and cloud.
- Follow a GitOps approach to maintain deployments.
- Create YAML scripts and Helm charts for Kubernetes deployments as required.
- Take part in cloud design and architecture decisions and support lead architects in building cloud-agnostic applications.
- Create and maintain Infrastructure-as-Code templates to automate cloud infrastructure deployment.
- Create and manage CI/CD pipelines to automate containerized deployments to cloud and Kubernetes.
- Maintain Git repositories, establish a proper branching strategy, and manage release processes.
- Support and maintain source code management and build tools.
- Monitor applications on cloud and Kubernetes using tools like ELK, Grafana, Prometheus, etc.
- Automate day-to-day activities using scripting.
- Work closely with the development team to implement new build processes and strategies to meet new product requirements.
- Perform troubleshooting, problem solving, root cause analysis, and documentation related to build, release, and deployments.
- Ensure that systems are secure and compliant with industry standards.

What You'll Bring:
- A master's or bachelor's degree in computer science or a related field from a top university.
- 2-4+ years of hands-on experience in DevOps.
- Hands-on experience designing and deploying applications to cloud (AWS / Azure / GCP).
- Expertise in deploying and maintaining applications on Kubernetes.
- Technical expertise in release automation engineering, CI/CD, or related roles.
- Hands-on experience writing Terraform templates as IaC, Helm charts, and Kubernetes manifests.
- A strong hold on Linux commands and script automation.
- Technical understanding of development tools, source control, and continuous integration build systems, e.g., Azure DevOps, Jenkins, GitLab, TeamCity, etc.
- Knowledge of deploying LLM models and toolchains.
- Configuration management of various environments.
- Experience working in agile teams with short release cycles.
- Good-to-have programming experience in Python / Go.
- The characteristics of a forward thinker and self-starter who thrives on new challenges and adapts quickly to new knowledge.

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
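The Kubernetes deployment work this posting describes can be sketched with a minimal manifest of the kind typically templated in a Helm chart; the image reference, replica count, port, and probe path are placeholders, not details from the posting:

```yaml
# Hypothetical Deployment: three replicas of a containerized service,
# with a readiness probe so rollouts only route traffic to healthy pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
        - name: demo-api
          image: registry.example.com/demo-api:1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

In a GitOps workflow like the one the posting mentions, this manifest would live in Git and a controller would reconcile the cluster to match it, rather than anyone applying it by hand.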

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Job Title: Site Reliability Engineer
Department: Engineering / Infrastructure
Reports To: SRE Manager / DevOps Lead
Location: Bangalore, India

Role Summary: The Site Reliability Engineer (SRE) will be responsible for ensuring the availability, performance, and scalability of critical systems. This role involves managing CI/CD pipelines, monitoring production environments, automating operations, and driving platform reliability improvements in collaboration with development and infrastructure teams.

Key Responsibilities:
- Manage alerts and monitoring of critical production systems.
- Operate and enhance CI/CD pipelines and improve deployment and rollback strategies.
- Work with central platform teams on reliability initiatives.
- Automate testing, regression, and build tooling for operational efficiency.
- Execute NFR testing on production systems.
- Plan and implement Debian version migrations with minimal disruption.

Required Qualifications & Skills:
- CI/CD and packaging tools: hands-on experience with Jenkins, Docker, and JFrog for packaging and deployment.
- Operating system expertise: experience with Debian OS migration and upgrade processes.
- Monitoring systems: knowledge of Grafana, Nagios, and other observability tools.
- Configuration management: proficiency with Ansible, Puppet, or Chef.
- Version control: working knowledge of Git and related version control systems.
- Kubernetes: deep understanding of Kubernetes architecture, deployment pipelines, and debugging; ability to deploy components with detailed insight into configuration parameters and system requirements, monitoring and alerting needs, performance tuning, and designing for high availability and fault tolerance.
- Networking: understanding of TCP/IP, UDP, multicast, and broadcast; experience with tcpdump and Wireshark for network diagnostics.
- Linux & databases: strong skills in Linux tools and scripting; familiarity with MySQL and NoSQL database systems.

Soft Skills:
- Strong problem-solving and analytical skills
- Effective communication and collaboration with cross-functional teams
- Ownership mindset and accountability
- Adaptability to fast-paced and dynamic environments
- Detail-oriented and proactive approach

Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field
- Certifications in Kubernetes (CKA/CKAD), Linux, or DevOps practices
- Experience with cloud platforms (AWS, GCP, Azure)
- Exposure to service mesh, observability stacks, or SRE toolkits

Key Relationships:
- Internal: DevOps, Infrastructure, Software Development, QA, and Security teams
- External: tool vendors, platform service providers (if applicable)

Role Dimensions:
- Impact on uptime and reliability of business-critical services
- Ownership of CI/CD and production deployment processes
- Contributor to cross-team reliability and scalability initiatives

Success Measures (KPIs):
- System uptime and availability (SLA adherence)
- Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) incidents
- Deployment success rate and rollback frequency
- Automation coverage of operational tasks
- Completion of OS migration and infrastructure upgrade projects

Competency Framework Alignment:
- Technical mastery: infrastructure, automation, CI/CD, Kubernetes, monitoring
- Execution excellence: timely project delivery, process improvements
- Collaboration: cross-functional team engagement and support
- Resilience: problem solving under pressure and incident response
- Innovation: continuous improvement of operational reliability and performance

Posted 1 month ago

Apply

4.0 - 7.0 years

11 - 16 Lacs

Pune

Hybrid

So, what’s the role all about? As a Sr. Cloud Services Automation Engineer, you will be responsible for designing, developing, and maintaining robust end-to-end automation solutions that support our customer onboarding processes from an on-prem software solution to the Azure SaaS platform and streamline cloud operations. You will work closely with Professional Services, Cloud Operations, and Engineering teams to implement tools and frameworks that ensure seamless deployment, monitoring, and self-healing of applications running in Azure.

How will you make an impact?
- Design and develop automated workflows that orchestrate complex processes across multiple systems, databases, endpoints, and storage solutions on-prem and in the public cloud.
- Design, develop, and maintain internal tools/utilities using C#, PowerShell, Python, and Bash to automate and optimize cloud onboarding workflows.
- Create integrations with REST APIs and other services to ingest and process external/internal data.
- Query and analyze data from various sources such as SQL databases, Elasticsearch indices, and log files (structured and unstructured).
- Develop utilities to visualize, summarize, or otherwise make data actionable for Professional Services and QA engineers.
- Work closely with test, ingestion, and configuration teams to understand bottlenecks and build self-healing mechanisms for high availability and performance.
- Build automated data pipelines with data consistency and reconciliation checks, using tools like Power BI/Grafana to collect metrics from multiple endpoints and generate centralized, actionable dashboards.
- Automate resource provisioning across Azure services, including AKS, Web Apps, and storage solutions.
- Build Infrastructure-as-Code (IaC) solutions using tools like Terraform, Bicep, or ARM templates.
- Develop end-to-end workflow automation in the customer onboarding journey, spanning Day 1 to Day 2 with minimal manual intervention.

Have you got what it takes?
- Bachelor's degree in computer science, engineering, or a related field (or equivalent experience).
- Proficiency in scripting and programming languages (e.g., C#, .NET, PowerShell, Python, Bash).
- Experience working with and integrating REST APIs.
- Experience with IaC and configuration management tools (e.g., Terraform, Ansible).
- Familiarity with monitoring and logging solutions (e.g., Azure Monitor, Log Analytics, Prometheus, Grafana).
- Familiarity with modern version control systems (e.g., GitHub).
- Excellent problem-solving skills and attention to detail.
- Ability to work with development and operations teams to achieve desired results on common projects.
- Strategic thinker, capable of learning new technologies quickly.
- Good communication with peers, subordinates, and managers.

You will have an advantage if you also have:
- Experience with AKS infrastructure administration.
- Experience orchestrating automation with Azure Automation tools like Logic Apps.
- Experience working in a secure, compliance-driven environment (e.g., CJIS/PCI/SOX/ISO).
- Certifications in vendor- or industry-specific technologies.

What’s in it for you? Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7454 Reporting into: Director Role Type: Individual Contributor
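The data consistency and reconciliation checks described in this role can be sketched with a stdlib-only Python snippet; the tenant names, counts, and the idea of comparing a SQL result against an API payload are hypothetical illustrations, not details of the actual pipelines:

```python
# Hypothetical reconciliation check: compare per-tenant record counts
# from two sources (e.g. a SQL query result and a REST API response)
# and report every disagreement. Payloads here are hard-coded samples.
import json

sql_counts = {"tenant-a": 120, "tenant-b": 75}

api_payload = json.loads('{"tenant-a": 120, "tenant-b": 74, "tenant-c": 9}')

def reconcile(expected, actual):
    """Return tenants whose counts disagree or are missing on either side."""
    mismatches = {}
    for tenant in expected.keys() | actual.keys():
        if expected.get(tenant) != actual.get(tenant):
            mismatches[tenant] = (expected.get(tenant), actual.get(tenant))
    return mismatches

print(reconcile(sql_counts, api_payload))
```

A real pipeline would fetch both sides live and surface the mismatch dictionary on a Power BI or Grafana dashboard; the comparison logic itself stays this small.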

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 15 Lacs

Pune

Remote

What You'll Do We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping and delivery of the LLM platform features. You will have a blend of technical skills in the fields of AI & Machine Learning especially with LLMs and a deep-seated understanding of software development practices where you'll work with a team to ensure our systems are scalable, performant and accurate. You will be reporting to Senior Manager, AI/ML. What Your Responsibilities Will Be We are looking for engineers who can think quick and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model Applications at Avalara Experience with LLMs - like GPT, Claude, LLama and other Bedrock models Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD) along with appropriate functional and unit testing in place. Inspire creativity by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Proficiency in developing and debugging software with a preference for Python, though familiarity with additional programming languages is valued and encouraged. 
What You'll Need to be Successful Bachelor's/Master's degree in computer science with 5+ years of industry experience in software development, along with experience building Machine Learning models and deploying them in production environments. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Work with technological innovations in AI & ML(esp. GenAI) Experience with design patterns and data structures. Good analytical, design and debugging skills. Technologies you will work with: Python, LLMs, MLFlow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, Grafana

Posted 1 month ago

Apply

4.0 - 7.0 years

11 - 16 Lacs

Pune

Hybrid

So, what’s the role all about? In this position we are looking for a strong DevOps Engineer to work with Professional Services teams, Solution Architects, and Engineering teams, managing on-prem to Azure cloud onboarding, cloud infrastructure, and DevOps solutions. The Engineer will work with the US and Pune Cloud Services and Operations teams as well as other support teams across the globe. We are seeking a talented DevOps Engineer with strong PowerShell scripting skills to join our team. As a DevOps Engineer, you will be responsible for developing and implementing cloud automation workflows, enhancing our cloud monitoring and self-healing capabilities, and managing our infrastructure to ensure its reliability, scalability, and security. We encourage innovative ideas, flexible work methods, knowledge collaboration, and good vibes!

How will you make an impact?
- Define, build, and manage automated cloud workflows that enhance the overall customer experience in the Azure SaaS environment, saving time, cost, and resources.
- Automate pre- and post-host/tenant upgrade checklists and processes in the Azure SaaS environment.
- Implement and manage the continuous integration and delivery pipeline to automate software delivery processes.
- Collaborate with software developers to ensure that new features and applications are deployed in a reliable and scalable manner.
- Automate the DevOps pipeline and provisioning of environments.
- Manage and maintain our cloud infrastructure, including provisioning, configuration, and monitoring of servers and services.
- Provide technical guidance and support to other members of the team.
- Manage Docker containers and Kubernetes clusters to support our microservices architecture and containerized applications.
- Implement and manage networking, storage, security, and monitoring solutions for Docker and Kubernetes environments.
- Integrate service management, monitoring, logging, and reporting tools like ServiceNow, Grafana, Splunk, Power BI, etc.

Have you got what it takes?
- 4-7 years of experience as a DevOps engineer, preferably with Azure.
- Strong understanding of Kubernetes & Docker, Ansible, Terraform, and Azure SaaS infrastructure.
- Strong understanding of DevOps tools such as AKS, Azure DevOps, GitHub, GitHub Actions, and logging mechanisms.
- Working knowledge of all Azure services and compliance regimes like CJIS/PCI/SOC, etc.
- Exposure to enterprise software architectures, infrastructures, and integration with Azure (or any other cloud solution).
- Experience with application monitoring metrics.
- Hands-on experience with PowerShell, Bash, Python, etc.
- Good knowledge of Linux and Windows servers.
- Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices.
- Consistently demonstrates clear and concise written and verbal communication.
- Passionately enthusiastic about DevOps & cloud technologies.
- Ability to work independently, multi-task, and take ownership of various parts of a project or initiative.
- Azure certifications in DevOps and architecture are good to have.

What’s in it for you? Join an ever-growing, market-disrupting global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7452 Reporting into: Director Role Type: Individual Contributor
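The upgrade-checklist automation this role describes boils down to running a set of checks and gating on failures. A minimal Python sketch (the `Check` helper and the check names are hypothetical; real checks would query Azure resources, likely from PowerShell as the listing emphasizes):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Check:
    name: str
    run: Callable[[], bool]  # returns True when the check passes

def run_checklist(checks: List[Check]) -> Tuple[bool, List[str]]:
    """Run every check; return (all_passed, names of failed checks)."""
    failures = [c.name for c in checks if not c.run()]
    return (not failures, failures)

# Hypothetical pre-upgrade checks; real ones would call Azure APIs.
ok, failed = run_checklist([
    Check("tenant reachable", lambda: True),
    Check("backup completed", lambda: True),
    Check("no active incidents", lambda: False),
])
```

The same gate can run post-upgrade with a different check list, and the failure names feed directly into a report or alert.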

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 6 Lacs

Pune

Work from Office

What You'll Do: CI/CD Pipeline Management: Design, implement, and maintain robust CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps, CircleCI) to automate the build, test, and deployment processes across various environments (Dev, QA, Staging, Production). Infrastructure as Code (IaC): Develop and manage infrastructure using IaC tools (e.g., Terraform, Ansible, CloudFormation, Puppet, Chef) to ensure consistency, repeatability, and scalability of our cloud and on-premise environments. Cloud Platform Management: Administer, monitor, and optimize resources on cloud platforms (e.g., AWS, Azure, GCP), including compute, storage, networking, and security services. Containerization & Orchestration: Implement and manage containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for efficient application deployment, scaling, and management. Monitoring & Alerting: Set up and maintain comprehensive monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK Stack, Nagios, Splunk, Datadog) to proactively identify and resolve performance bottlenecks and issues. Scripting & Automation: Write and maintain scripts (e.g., Python, Bash, PowerShell, Go, Ruby) to automate repetitive tasks, improve operational efficiency, and integrate various tools. Version Control: Manage source code repositories (e.g., Git, GitHub, GitLab, Bitbucket) and implement branching strategies to facilitate collaborative development and version control. Security & Compliance (DevSecOps): Integrate security best practices into the CI/CD pipeline and infrastructure, ensuring compliance with relevant security policies and industry standards. Troubleshooting & Support: Provide Level 2 support, perform root cause analysis for production incidents, and collaborate with development teams to implement timely fixes and preventive measures. 
Collaboration: Work closely with software developers, QA engineers, and other stakeholders to understand their needs, provide technical guidance, and foster a collaborative and efficient development lifecycle. Documentation: Create and maintain detailed documentation for infrastructure, processes, and tools.
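Pipeline design like the above ultimately reduces to ordering stages by their dependencies. A hedged Python sketch (stage names are illustrative; real CI systems such as Jenkins or GitLab CI derive this ordering from the pipeline definition):

```python
from collections import deque

def stage_order(deps):
    """Topologically sort pipeline stages (Kahn's algorithm).

    deps maps each stage to the stages it depends on; raises on cycles.
    """
    indegree = {s: len(requires) for s, requires in deps.items()}
    dependents = {s: [] for s in deps}
    for stage, requires in deps.items():
        for r in requires:
            dependents[r].append(stage)
    ready = deque(sorted(s for s, d in indegree.items() if d == 0))
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for nxt in sorted(dependents[s]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle in pipeline definition")
    return order

# Hypothetical stage graph: build and lint in parallel, then test, then deploy.
order = stage_order({
    "build": [],
    "lint": [],
    "test": ["build", "lint"],
    "deploy": ["test"],
})
```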

Posted 1 month ago

Apply

4.0 - 7.0 years

9 - 12 Lacs

Pune

Hybrid

So, what’s the role all about? At NiCE, you will be a Senior Software professional specializing in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce. How will you make an impact? Working knowledge of unit testing. Working knowledge of user stories or use cases. Working knowledge of design patterns or equivalent experience. Working knowledge of object-oriented software design. Team player. Have you got what it takes? Bachelor’s degree in computer science, Business Information Systems or a related field, or equivalent work experience, is required. 4+ years (SE) of experience in software development. Well-established technical problem-solving skills. Experience in Java, Spring Boot, and microservices. Experience with Kafka, Kinesis, KDA, Apache Flink. Experience with Kubernetes operators, Grafana, Prometheus. Experience with AWS technology including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc. You will have an advantage if you also have: Experience with Snowflake or any DWH solution. Excellent communication, problem-solving, and decision-making skills. Experience with databases. Experience in CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments. Strong experience in SQL. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor
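The keyed stream processing this role involves (Kafka, Kinesis, Flink) can be illustrated with a tiny in-memory tumbling-window count. This is purely a sketch; a real job would run on Flink or Kinesis Data Analytics over live streams, not over Python lists:

```python
from collections import defaultdict

def keyed_window_counts(events, window_ms):
    """Count events per (key, tumbling window), as a keyed stream job would.

    events: iterable of (timestamp_ms, key) pairs.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window boundary
        counts[(key, window_start)] += 1
    return dict(counts)

# Hypothetical click events: two for key "a" in the first second, one later.
counts = keyed_window_counts(
    [(1000, "a"), (1500, "a"), (2500, "a"), (1200, "b")],
    window_ms=1000,
)
```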

Posted 1 month ago

Apply

8.0 - 13.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Key Job Responsibilities and Duties: The core premise of the Booking SRE role lies in treating the operational and reliability problems of software systems as a software engineering problem. We code our way out of operational problems, addressing availability, scalability, latency, and efficiency challenges within the vast infrastructure here at Booking. We expect our SRE engineers to be software engineers who optimize systems rather than system operators. You will impact millions of people all over the globe with your creative solutions. You will work in one of the biggest e-commerce companies in the world. You will solve exciting problems at scale by writing and deploying code across tens of thousands of servers, ensuring an everything-as-code mindset for yourself and your team. You will have the opportunity to collaborate with many of the world's leading SREs. You will be free to launch your own ideas and solutions within our sophisticated production environment. Here are some of the tools and technologies we use to achieve this: Python, Go, Puppet, Kubernetes, Elasticsearch, Prometheus, HAProxy, Cassandra, Kafka, etc. What you'll be doing: Design, develop, and implement software that improves the stability, scalability, availability, and latency of Booking.com products; Take ownership of one or more services and have the freedom to do what is best for our business and customers; Solve problems occurring with our highly available production systems and build solutions and automation to prevent them from happening again; Build effective monitoring to supervise the health of your system, and jump in to handle outages; Build and run capacity tests to manage the growth of your systems; Plan for reliability by designing systems to work across our multinational data centers; Develop tools to assist the product development teams with successfully deploying thousands of change sets every day; Be an advocate of engineering standard processes; Share the on-call rotation and be an escalation contact for incidents; Contribute to Booking.com's growth through interviewing, onboarding, or other recruitment efforts. What you'll bring: 8+ years of hands-on experience in software and site reliability engineering within the technology sector, coupled with expertise in building, operating, and maintaining sophisticated and scalable systems. Solid experience in at least one programming language; we use Java, Python, Go, Ruby, and Perl. Experience with Infrastructure as Code technologies; Knowledge of cloud computing fundamentals; Solid foundation in Linux administration and troubleshooting; Understanding of service level agreements and objectives; Additional experience in OpenStack, Kubernetes, networking, security, or storage is desirable; Monitoring/observability technologies like Prometheus, Graphite, Grafana, Kibana, and Elasticsearch are a plus; Good interpersonal skills; Proficient command of the English language, both written and spoken.
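The service level agreements and objectives mentioned above translate directly into error budgets: the downtime an availability target permits over a period. A small Python sketch (the 30-day window is an assumed example):

```python
def error_budget_minutes(slo, days=30):
    """Allowed downtime in minutes for an availability SLO over a period.

    slo: availability target as a fraction, e.g. 0.999 for "three nines".
    """
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

# Three nines over a 30-day window leaves about 43 minutes of budget.
budget = error_budget_minutes(0.999)
```

Teams then track burn rate against this budget to decide when to freeze releases and focus on reliability work.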

Posted 1 month ago

Apply

7.0 - 12.0 years

14 - 24 Lacs

Chennai

Work from Office

Job Description Bachelor's degree in computer science, computer engineering, or related technologies. Seven years of experience in systems engineering within the networking industry. Expertise in Linux deployment, scripting, and configuration. Expertise in TCP/IP communications stacks and optimizations. Experience with ELK (Elasticsearch, Logstash, Kibana), Grafana, data streaming (e.g., Kafka), and software visualization. Experience in analyzing and debugging code defects in the production environment. Proficiency in version control systems such as Git. Ability to design comprehensive test scenarios for systems usability, execute tests, and prepare detailed reports on effectiveness and defects for production teams. Full-cycle systems engineering experience covering requirements capture, architecture, design, development, and system testing. Demonstrated ability to work independently and collaboratively within cross-functional teams. Proficient in installing, configuring, debugging, and interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time. Proven track record of directly interfacing with customers to address concerns and resolve issues effectively. Strong problem-solving skills, capable of driving resolutions autonomously without senior engineer support. Experience in configuring MySQL and PostgreSQL, including setup of replication, troubleshooting, and performance improvement. Proficiency in networking concepts such as network architecture, protocols (TCP/IP, UDP), routing, and VLANs, essential for deploying new system servers effectively. Proficiency in shell/Bash scripting on Linux systems. Proficient in utilizing, modifying, troubleshooting, and updating Python scripts and tools to refine code. Excellent written and verbal communication skills, including the ability to document processes, procedures, and system configurations effectively. Ability to handle stress and maintain quality: this includes resilience to effectively manage stress and pressure, as well as a demonstrated ability to make informed decisions, particularly in high-pressure situations. This role requires being on-call 24/7 to address service-affecting issues in production, and working during the business hours of Chicago, aligning with local time for effective coordination and responsiveness to business operations and stakeholders in the region.
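On the networking side (subnet planning before deploying new system servers), Python's standard `ipaddress` module covers the basic arithmetic. A sketch with an example CIDR:

```python
import ipaddress

def plan_subnet(cidr):
    """Summarize a subnet before deploying new system servers into it."""
    net = ipaddress.ip_network(cidr)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        "usable_hosts": net.num_addresses - 2,  # minus network + broadcast
    }

# Example: a /26 carved out for a small server VLAN.
info = plan_subnet("10.20.30.0/26")
```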

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Educational Requirements MCA, MTech, Master of Business Administration, Bachelor of Engineering, BCA, BTech. Service Line: Cloud & Infrastructure Services. Responsibilities: As a Tools SME for SolarWinds/Splunk/Dynatrace/DevOps tooling, you will work on the design, setup, and configuration of observability platforms with correlation, anomaly detection, visualization and dashboards, AIOps, DevOps, and tool integration. Collaborate with DevOps architects, development teams, and operations teams to understand their tool requirements and identify opportunities for optimizing the DevOps toolchain. Evaluate and recommend new tools and technologies that can enhance our DevOps capabilities, considering factors like cost, integration, and local support. Lead the implementation, configuration, and integration of various DevOps tools, including CI/CD platforms (e.g., Jenkins, GitLab CI, Azure DevOps), infrastructure-as-code (IaC) tools (e.g., Terraform, Ansible), containerization and orchestration tools (e.g., Docker, Kubernetes), monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack), and testing frameworks. Establish standards and best practices for the usage and management of the DevOps toolset. Ensure the availability, performance, and stability of the DevOps toolchain. Perform regular maintenance tasks, including upgrades, patching, and backups of the DevOps tools. Provide technical support and troubleshooting assistance to development and operations teams regarding the usage of the DevOps tools. Monitor the health and performance of the toolset and implement proactive measures to prevent issues. Design and implement integrations between different tools in the DevOps pipeline to create seamless and automated workflows. Develop automation scripts and utilities to streamline tool provisioning, configuration, and management within the environment. Work with development teams to integrate testing and security tools into the CI/CD pipeline.
Additional Responsibilities: Besides the professional qualifications of the candidates, we also place great importance on their personality profile. This includes: High analytical skills. A high degree of initiative and flexibility. High customer orientation. High quality awareness. Excellent verbal and written communication skills. Technical and Professional Requirements: At least 6+ years of experience with SolarWinds, Splunk, Dynatrace, or a DevOps toolset. Proven experience with several key DevOps tools, including CI/CD platforms (e.g., Jenkins, GitLab CI, Azure DevOps), IaC tools (e.g., Terraform, Ansible), containerization (Docker, Kubernetes), and monitoring tools (e.g., Prometheus, Grafana, ELK stack). Good knowledge of the Linux environment. Good working knowledge of YAML and Python. Good working knowledge of event correlation and observability. Good communication skills. Good analytical and problem-solving skills. Preferred Skills: Technology->Infra_ToolAdministration-Others->Solarwinds; Technology->Infra_ToolAdministration-Others->Splunk Admin; Technology->DevOps->DevOps Architecture Consultancy; Technology->Dynatrace->Digital Performance Management Tool
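The event correlation this Tools SME role calls for can be sketched as a simple time-window grouping — a toy version of what SolarWinds, Splunk, or Dynatrace do at scale (the alert tuples and the 5-minute window are illustrative):

```python
def correlate(alerts, window_s=300):
    """Fold raw alerts into incidents: repeats of the same (host, check)
    within window_s seconds of the previous one join the open incident."""
    incidents = []
    open_idx = {}  # (host, check) -> index of the most recent incident
    for ts, host, check in sorted(alerts):
        key = (host, check)
        if key in open_idx and ts - incidents[open_idx[key]]["last"] <= window_s:
            incidents[open_idx[key]]["count"] += 1
            incidents[open_idx[key]]["last"] = ts
        else:
            incidents.append({"host": host, "check": check,
                              "first": ts, "last": ts, "count": 1})
            open_idx[key] = len(incidents) - 1
    return incidents

# Hypothetical alert stream: (epoch seconds, host, check).
incidents = correlate([(0, "web1", "cpu"), (60, "web1", "cpu"),
                       (30, "db1", "disk"), (1000, "web1", "cpu")])
```

The two `web1`/`cpu` alerts 60 seconds apart merge into one incident; the third, outside the window, opens a new one.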

Posted 1 month ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role: As an ELK (Elasticsearch, Logstash & Kibana) Data Engineer, you will be responsible for developing, implementing, and maintaining ELK stack-based solutions for Kyndryl’s clients, delivering efficient and effective data & log ingestion, processing, indexing, and visualization for monitoring, troubleshooting, and analysis purposes. Responsibilities: Design, implement, and maintain scalable data pipelines using the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats for monitoring and analytics. Develop data processing workflows to handle real-time and batch data ingestion, transformation, and visualization. Apply techniques like grok patterns, regular expressions, and plugins to handle complex log formats and structures. Configure and optimize Elasticsearch clusters for efficient indexing, searching, and performance tuning. Collaborate with business users to understand their data integration & visualization needs and translate them into technical solutions. Create dynamic and interactive dashboards in Kibana for data visualization and insights that help detect the root cause of issues. Leverage open-source tools such as Beats and Python to integrate and process data from multiple sources. Collaborate with cross-functional teams to implement ITSM solutions integrating ELK with tools like ServiceNow and other ITSM platforms. Perform anomaly detection using Elastic ML and create alerts using the Watcher functionality. Extract data via APIs using Python. Build and deploy solutions in containerized environments using Kubernetes.
Monitor Elasticsearch clusters for health, performance, and resource utilization. Automate routine tasks and data workflows using scripting languages such as Python or shell scripting. Provide technical expertise in troubleshooting, debugging, and resolving complex data and system issues. Create and maintain technical documentation, including system diagrams, deployment procedures, and troubleshooting guides. If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential. Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are: You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Experience: Minimum of 5 years of experience in the ELK Stack and Python programming. Graduate/postgraduate in computer science, computer engineering, or equivalent, with a minimum of 10 years of experience in the IT industry. ELK Stack: Deep expertise in Elasticsearch, Logstash, Kibana, and Beats. Programming: Proficiency in Python for scripting and automation. ITSM Platforms: Hands-on experience with ServiceNow or similar ITSM tools. Containerization: Experience with Kubernetes and containerized applications.
Operating Systems: Strong working knowledge of Windows, Linux, and AIX environments. Open-Source Tools: Familiarity with various open-source data integration and monitoring tools. Knowledge of network protocols, log management, and system performance optimization. Experience in integrating ELK solutions with enterprise IT environments. Strong analytical and problem-solving skills with attention to detail. Knowledge of MySQL or NoSQL databases will be an added advantage. Fluent in English (written and spoken). Preferred Technical and Professional Experience: An “Elastic Certified Analyst” or “Elastic Certified Engineer” certification is preferable. Familiarity with additional monitoring tools like Prometheus, Grafana, or Splunk. Knowledge of cloud platforms (AWS, Azure, or GCP). Experience with DevOps methodologies and tools. Being You: Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey.
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
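The grok patterns mentioned in the Kyndryl listing above compile down to named-capture regular expressions; a Python sketch of the same idea (the log format is hypothetical):

```python
import re

# A grok-like named-group pattern for a hypothetical app log format:
# 2024-05-01T12:00:00Z ERROR payment-svc Timeout calling gateway
LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<message>.*)"
)

def parse_line(line):
    """Return the structured fields for one log line, or None if unmatched."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

doc = parse_line("2024-05-01T12:00:00Z ERROR payment-svc Timeout calling gateway")
```

In Logstash the equivalent pattern would live in a grok filter, and the resulting fields would be indexed into Elasticsearch for Kibana dashboards.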

Posted 1 month ago

Apply

2.0 - 7.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Job Posting: Back-End Developer. Your Role: As a Back-End Developer, you'll collaborate with the development team to build and maintain scalable, secure, and high-performing back-end systems for our SaaS products. You will play a key role in designing and implementing microservices architectures, integrating databases, and ensuring seamless operation of cloud-based applications. Responsibilities: - Design, develop, and maintain robust and scalable back-end solutions using modern frameworks and tools. - Create, manage, and optimize microservices architectures, ensuring efficient communication between services. - Develop and integrate RESTful APIs to support front-end and third-party systems. - Design and implement database schemas and optimize performance for SQL and NoSQL databases. - Support deployment processes by aligning back-end development with CI/CD pipeline requirements. - Implement security best practices, including authentication, authorization, and data protection. - Collaborate with front-end developers to ensure seamless integration of back-end services. - Monitor and enhance application performance, scalability, and reliability. - Keep up-to-date with emerging technologies and industry trends to improve back-end practices. Your Qualifications - Must-Have Skills: - Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. - Proven experience as a Back-End Developer with expertise in modern frameworks such as Node.js, Express.js, or Django. - Expertise in .NET frameworks, including development in C++ and C# for high-performance databases. - Strong proficiency in building and consuming RESTful APIs. - Expertise in database design and management with both SQL (e.g., PostgreSQL, MS SQL Server) and NoSQL (e.g., MongoDB, Cassandra) databases. - Hands-on experience with microservices architecture and containerization tools like Docker and Kubernetes.
- Strong understanding of cloud platforms like Microsoft Azure, AWS, or Google Cloud for deployment, monitoring, and management. - Proficiency in implementing security best practices (e.g., OAuth, JWT, encryption techniques). - Experience with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or Azure DevOps. - Familiarity with Agile methodologies and participation in sprint planning and reviews. Good-to-Have Skills: - Experience with time-series databases like TimescaleDB or InfluxDB. - Experience with monitoring solutions like Datadog or Splunk. - Experience with real-time data processing frameworks like Kafka or RabbitMQ. - Familiarity with serverless architecture and tools like Azure Functions or AWS Lambda. - Expertise in Java backend services and microservices. - Hands-on experience with visualization tools like Grafana or Kibana for monitoring and dashboards. - Knowledge of API management platforms like Kong or Apigee. - Experience with integrating AI/ML models into back-end systems. - Familiarity with MLOps pipelines and managing AI/ML workloads. - Understanding of iPaaS (Integration Platform as a Service) and related technologies. Key Competencies & Attributes: - Strong problem-solving and analytical skills. - Exceptional organizational skills with the ability to manage multiple priorities. - Adaptability to evolving technologies and industry trends. - Excellent collaboration and communication skills to work effectively in cross-functional teams. - Ability to thrive in self-organizing teams with a focus on transparency and trust.
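On the JWT point above: a token is just three base64url segments, the last an HMAC over the first two. An illustrative stdlib-only sketch — in production you would use a vetted library such as PyJWT rather than rolling your own:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt_hs256(payload: dict, secret: bytes) -> str:
    """Build a JWT-shaped token signed with HMAC-SHA256 (illustrative only)."""
    header = {"alg": "HS256", "typ": "JWT"}
    segments = [
        b64url(json.dumps(header, separators=(",", ":")).encode()),
        b64url(json.dumps(payload, separators=(",", ":")).encode()),
    ]
    signing_input = ".".join(segments).encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return ".".join(segments + [b64url(sig)])

# Hypothetical subject claim and dev secret.
token = sign_jwt_hs256({"sub": "user-1"}, b"dev-secret")
```

Verification reverses the process: recompute the HMAC over the first two segments and compare with `hmac.compare_digest`.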

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm Greetings from SP Staffing!! Role: Azure DevOps. Experience Required: 5 to 8 yrs. Work Location: Hyderabad/Pune/Bangalore. Required Skills: Azure DevOps, Terraform, Bash/PowerShell/Python, Prometheus/Grafana. Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

5.0 - 7.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 5 to 7+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage.
Use AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting up development, testing, release, update, and support processes for DevOps operation. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm.
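The S3 lifecycle rules mentioned above are plain JSON documents; a sketch that builds one (field names follow the S3 lifecycle configuration API; the prefix and day counts are example values):

```python
def lifecycle_rule(prefix, glacier_after_days, expire_after_days):
    """Build one S3 lifecycle rule, shaped for
    put_bucket_lifecycle_configuration (values here are examples)."""
    return {
        "ID": f"archive-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }

# Example: move backups to Glacier after 30 days, delete after a year.
rule = lifecycle_rule("backups/", 30, 365)
```

In practice the rule would be passed inside `{"Rules": [rule]}` to boto3's `put_bucket_lifecycle_configuration`, or expressed in Terraform as an `aws_s3_bucket_lifecycle_configuration` resource.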

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, Silicon Slopes Hall of Fame - Software Company - 2022, and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. Our product isn't just desirable; it's industry essential. At Entrata, we passionately refine living experiences and uphold collective excellence. Job Summary: Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply! Key Responsibilities: Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments. Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security.
Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP. Monitor and enhance system observability using Prometheus, Grafana, and ELK Stack to ensure proactive issue detection and resolution. Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows. Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner. Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives. Optimize AWS cloud resource utilization and cost while maintaining high availability and performance. Establish and maintain disaster recovery and high-availability strategies to ensure system resilience. Improve incident response and on-call processes by following SRE principles and automating issue resolution. Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations. Stay up-to-date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies. Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure. Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle. Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams. Minimum Qualifications 3+ years of experience as a DevOps Engineer or similar role. Strong proficiency in Kubernetes, Docker, and AWS. Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD). Solid scripting and automation skills with Bash, Python, PHP, or Ansible. 
Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack. Understanding of DevSecOps principles, security best practices, and vulnerability management. Strong problem-solving skills and ability to troubleshoot cloud infrastructure and deployment issues effectively. Preferred Qualifications: Experience with GitOps methodologies using ArgoCD or Flux. Familiarity with SRE principles and managing incident response for high-availability applications. Knowledge of serverless architectures and AWS cost optimization strategies. Hands-on experience with compliance and governance automation for cloud security. Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation. Strong communication skills and ability to mentor junior engineers on DevOps best practices. If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we encourage you to apply!
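The incident-response automation and SRE practices in this listing typically lean on retry-with-backoff. A sketch of the full-jitter variant (the base, cap, and attempt count are example parameters):

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=5, seed=None):
    """Full-jitter exponential backoff: before retry i, sleep a random
    time drawn from [0, min(cap, base * 2**i)] seconds."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

# Precompute the delays a retry loop would sleep between attempts.
delays = backoff_delays(seed=42)
```

Jitter spreads out retries from many clients so a recovering service is not hit by a synchronized thundering herd.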

Posted 1 month ago


6.0 - 11.0 years

11 - 16 Lacs

Pune

Work from Office

Project description
We are looking for a seasoned Performance Test Engineer to join our dynamic team. Your role will involve working closely with a group of talented software developers to create new APIs and ensure the smooth functioning of existing ones within the Azure environment.

Responsibilities
Understand the non-functional requirements (NFRs) from NFR documents and from meetings with business and platform owners.
Understand the business and the infrastructure involved in the project.
Understand the critical business scenarios from developers and the business.
Prepare the Performance Test Strategy and Test Plan.
Communicate regularly with the business/development team manager through daily/weekly reports.
Develop the test scripts and workload modelling.
Execute sanity, load, soak, and stress tests as required by the project.
Organise meetings with all the relevant teams (developers, infrastructure, etc.) to monitor core applications during test execution.
Execute the tests and analyse the test results.
Prepare the test summary report.

Skills
Must have
6+ years of experience in performance engineering.
Expert in Micro Focus LoadRunner and Apache JMeter, along with programming/scripting experience in C/C++, Java, Perl, Python, and SQL.
Proven performance testing experience across multiple platform architectures and technologies such as microservices and REST APIs; exposure to projects moving workloads to cloud environments (AWS or Azure) is advantageous.
Exposure to open-source data visualisation tools.
Experience working with APM tools such as AppDynamics.
Nice to have
Core Banking, Jira, Agile, Grafana
Banking domain experience

Other Languages
English: C1 Advanced
Seniority
Senior
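The workload modelling called out in the responsibilities above typically starts from Little's Law: to sustain a target throughput X with response time R and think time Z, you need roughly N = X × (R + Z) concurrent virtual users, and each vuser's pacing interval is N / X. A minimal Python sketch, using hypothetical target figures for illustration:

```python
# Workload model via Little's Law: N = X * (R + Z)
# X: target throughput (tx/sec), R: response time (s), Z: think time (s).
# The example figures below are hypothetical.
import math

def required_vusers(throughput_tps: float, resp_time_s: float, think_time_s: float) -> int:
    """Concurrent virtual users needed to sustain the target throughput."""
    return math.ceil(throughput_tps * (resp_time_s + think_time_s))

def pacing_per_vuser(throughput_tps: float, vusers: int) -> float:
    """Seconds between iteration starts for each vuser (pacing interval)."""
    return vusers / throughput_tps

# Example: 50 tx/sec target, 0.8 s response time, 3 s think time.
n = required_vusers(50, 0.8, 3.0)
print(n, pacing_per_vuser(50, n))  # 190 3.8
```

Numbers like these feed directly into scenario configuration in LoadRunner or JMeter (vuser counts, pacing, and timers), before any script is ever run at scale.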

Posted 1 month ago
