
209 Elk Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 10.0 years

15 - 30 Lacs

Pune

Work from Office


Role Overview
We are looking for experienced DevOps Engineers (8+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking issues, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace.
- Strong communication skills to work with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like ArgoCD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.
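The skills above center on Python scripting for release and deployment automation. As an illustrative sketch (the helper names are not from any listing), a small utility that picks the latest release from a set of semantic-version tags is typical of the automation such roles expect:

```python
def parse_semver(tag: str) -> tuple[int, int, int]:
    """Split a 'MAJOR.MINOR.PATCH' tag into a tuple that compares numerically."""
    major, minor, patch = tag.split(".")
    return (int(major), int(minor), int(patch))


def latest_release(tags: list[str]) -> str:
    """Return the highest release tag by semantic-version order, not string order."""
    return max(tags, key=parse_semver)
```

Plain string comparison would rank "1.9.0" above "1.10.0"; parsing the components to integers avoids that common deployment-automation bug.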

Posted 1 week ago


5.0 - 8.0 years

20 - 32 Lacs

Pune, Gurugram

Hybrid


About the role:
The Site Reliability Engineer is one of the critical roles in the technology team, responsible for application performance, availability, reliability, and system uptime. The candidate will provide consultation and strategic recommendations by quickly assessing and remediating complex platform availability issues. The Site Reliability Engineer Lead will dive head-first into creating or applying innovative solutions and techniques that advance the reliability of digital products.

Key responsibilities:
• Installation/deployment of new releases and environments for applications.
• Build and maintain highly scalable, large-scale deployments globally.
• Co-create and maintain architecture for 100% uptime, e.g. by creating alternate connectivity.
• Practice sustainable incident response/management and blameless post-mortems.
• Monitor and maintain production environment stability.
• Own entire platforms (production environments): deploy, automate, maintain, and manage production systems to ensure their availability, performance, scalability, and security.
• Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation, and refinement.
• Support services before they go live through activities such as system design consulting, developing software platforms and frameworks, capacity planning, and launch reviews.
• Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.
• Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity.
• Collaborate with Agile teams in defining technical requirements and best practices for containerized and cloud-native applications.
• Represent production support and site reliability in stand-ups, planning sessions, code reviews, and architecture reviews.
• Help evolve our configuration management (CM) efforts and our move to containers.
• Help the operations head select an enthusiastic and technically knowledgeable team, and guide existing team members.

Preferred candidate profile:
• Good know-how of applications, middleware, databases (Postgres, MongoDB, MySQL, etc.), infrastructure, and operating systems.
• Good understanding of Docker and Kubernetes.
• Understanding of CI/CD and DevOps tools like Jenkins, Ansible, shell scripting, etc.
• Monitoring and logging: experience with monitoring and logging tools (e.g. Nagios, AppDynamics, ELK, Prometheus).
• Good experience with distributed systems: RabbitMQ, Kafka, Redis, etc.
• Experience working with Linux, WebLogic/Tomcat, JBoss, and middleware technology.
• Should have worked on high-traffic, highly scalable systems in the past.
• Knowledge of the fundamental aspects of release automation (packaging, dependencies, promotion, deployment, compliance).
• Experience with project management tools such as JIRA, and insight into quality analysis as well.

Posted 1 week ago


8.0 - 10.0 years

16 - 20 Lacs

Chennai

Work from Office


Designation: Team Lead - DevOps
Number of Positions: 1 (Chennai)
Educational Qualification: Graduate degree (Indian or foreign equivalent) from an accredited institution required.
Experience and Age Criteria: 8-10 years of experience in Information Technology; age between 30 and 40.

Job Profile:
- Should possess good knowledge of software configuration management systems.
- Should have expertise in implementing CI/CD pipelines and related tools for cloud and on-prem infrastructure, facilitating the development process and operations.
- Should be aware of the latest technologies and industry trends, with the ability to inspire, guide, and manage small teams.
- Logical thinking and problem-solving skills, along with an ability to collaborate, are a must.
- Should have good knowledge of the SDLC and agile methodologies.
- Re-defining architecture by analysing the current system and following new and best practices.
- Automation of manual processes.
- Expertise in shell scripting, Python, etc.
- Working knowledge of open-source software: Tomcat, NodeJS, HTTPD, Nginx, etc.

Primary Skills: GitLab, Jenkins, ELK, Artifactory tools, AWS, Docker, Kubernetes, and other DevOps tools and enablers like SonarQube, Fortify, etc.
Secondary Skills: Shell scripting, Python, SQL, Ansible, Terraform.

Competency:
- Code management system: SVN / GitHub
- Build: Maven / Ant / Gradle
- Cloud: AWS / GCP / Azure
- Repository: Artifactory / Nexus
- Code quality: SonarQube / JUnit / PMD
- Continuous integration tools: Jenkins
- Continuous deployment: Ansible / Chef
- Containerization: Docker / Kubernetes / OpenShift / Pivotal
- Scripting: shell scripting / PowerShell / Python
- Operating system: Unix, Linux (Red Hat Enterprise)
- Logging and monitoring: Splunk / ELK / APM & diagnostic tools / cloud monitoring solutions
- Test automation: Selenium / Cucumber / Appium

Last date for application: 5th July 2024
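Implementing a CI/CD pipeline, as this profile requires, is at heart an exercise in ordering dependent stages. A minimal Python sketch (the stage names are illustrative, not from the posting) using the standard library's graphlib shows the idea:

```python
from graphlib import TopologicalSorter


def pipeline_order(stages: dict[str, set[str]]) -> list[str]:
    """Resolve a run order in which every stage follows its dependencies."""
    return list(TopologicalSorter(stages).static_order())


# Hypothetical pipeline: deploy only after tests and a SonarQube-style scan pass.
stages = {
    "build": set(),
    "test": {"build"},
    "scan": {"build"},
    "deploy": {"test", "scan"},
}
```

A Jenkinsfile's stage graph or GitLab CI's `needs:` keyword encodes the same dependency relation; the topological sort is what any such tool computes before running anything.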

Posted 1 week ago


3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office


Job Profile Summary & Description:
A Platform Operations Engineer is responsible for supporting multiple applications, consisting of different technologies, in an enterprise hosted environment. The individual provides escalation support to multiple platforms and their services, performs monthly/quarterly/yearly upgrades of the applications in the environment, and works within teams to create solutions to identified issues. They are also responsible for communication to the end users.

Shift Timing: Monthly rotational support role, 24/7.

Roles / Responsibilities:
- Fully functional and self-directed.
- Resolve issues, manage workload, and balance priorities through frequent interruptions while meeting specific, time-sensitive deadlines.
- Analyze client/team requests to solve short- and long-term technical issues.
- Engineer solutions that meet the company's SLAs and client expectations.
- Monitor and help tune applications in the environment through project initiatives, enhancements, and integration.
- Perform upgrades of the applications in the hosted environment.
- Provide formal mentorship: own high-complexity assignments, own one or more moderate-complexity assignments, and provide oversight/review for low-complexity assignments.
- Regularly lead self and others, and/or be established as a product SME or specialist.
- See the whole picture and adjust work accordingly; mentor others with less experience.
- Work with the Senior Platform Operations Engineer to create and maintain documentation for all production environments, and review it regularly.
- Engage with senior engineers and the team to document standard operating procedures and design changes, and review them prior to installation/implementation.

Required Qualifications:
- Typically requires a minimum of 1-3 years of related experience in a professional role with a bachelor's degree in computer science or a related field; or 3 years and a master's degree; or a PhD without experience; or equivalent work experience.
- Basic knowledge of OS, database, and networking concepts.
- Excellent problem-solving and communication abilities.
- Strong knowledge of Linux administration and Linux server administration.
- Thorough understanding of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Extensive understanding of Linux, including RedHat, CentOS, and Rocky Linux.
- Strong understanding of web servers, application servers, DNS, and mail servers.
- Should be good at at least one scripting language (shell/Python).
- Knowledge of configuration management tools like Puppet or Ansible is a plus.
- Industry certifications for the application(s) being supported are a plus.
- Experience with configuring and managing zones.
- Experience providing day-to-day administration and monitoring of servers, including: keeping Linux servers operational; file and system security management, log analysis, and statistical report generation; analysing security scans and assisting with vulnerability remediation.
- Good analytical ability (basic knowledge of ELK and Tableau).
- Should be ready to work in rotational shifts.
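Log analysis and statistical report generation, listed above, often start as a few lines of scripting before graduating to ELK. A hedged Python sketch (the severity keywords and sample lines are illustrative, not from any real log format):

```python
import re
from collections import Counter


def count_log_levels(lines: list[str]) -> dict[str, int]:
    """Tally severity keywords, one per line, for a quick statistical report."""
    pattern = re.compile(r"\b(ERROR|WARN|INFO)\b")
    counts: Counter[str] = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return dict(counts)
```

The same tally, at scale, is what a Kibana visualization over an indexed `level` field produces.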

Posted 1 week ago


4.0 - 9.0 years

15 - 25 Lacs

Hyderabad

Remote


Remote position. ELK developer with hands-on experience in delivering search solutions. Should have excellent skills in Elasticsearch and Java, and experience with on-premises Elasticsearch, Enterprise Search, and Kibana installation and configuration.
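Search delivery of the kind this role describes revolves around the Elasticsearch Query DSL. A minimal Python sketch (the helper and field names are illustrative; the `match` query and `size` option are standard Query DSL) builds a request body with no client dependency:

```python
def build_search_body(field: str, text: str, size: int = 10) -> dict:
    """Assemble an Elasticsearch Query DSL body for a basic full-text match."""
    return {
        "query": {"match": {field: {"query": text}}},
        "size": size,  # cap on hits returned, mirroring the DSL's 'size' option
    }
```

In practice a client library would POST this body to an index's `_search` endpoint; building it as a plain dict keeps it easy to unit-test.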

Posted 1 week ago


3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office


A Platform Operations Engineer is responsible for supporting multiple applications, consisting of different technologies, in an enterprise hosted environment. The individual provides escalation support to multiple platforms and their services, performs monthly/quarterly/yearly upgrades of the applications in the environment, and works within teams to create solutions to identified issues. They are also responsible for communication to the end users.

Job Location: Hyderabad
Shift Timing: Rotational shift (24/7)
Required Certification: RedHat certifications (RHCSA / RHCE)

Duties & Responsibilities:
- Fully functional and self-directed.
- Resolve issues, manage workload, and balance priorities through frequent interruptions while meeting specific, time-sensitive deadlines.
- Analyze client/team requests to solve short- and long-term technical issues.
- Engineer solutions that meet the company's SLAs and client expectations.
- Monitor and help tune applications in the environment through project initiatives, enhancements, and integration.
- Perform upgrades of the applications in the hosted environment.
- Provide formal mentorship: own high-complexity assignments, own one or more moderate-complexity assignments, and provide oversight/review for low-complexity assignments.
- Regularly lead self and others, and/or be established as a product SME or specialist.
- See the whole picture and adjust work accordingly; mentor others with less experience.
- Work with the Senior Platform Operations Engineer to create and maintain documentation for all production environments, and review it regularly.
- Engage with senior engineers and the team to document standard operating procedures and design changes, and review them prior to installation/implementation.

Required Qualifications:
- Typically requires a minimum of 3-5 years of related experience in a professional role with a bachelor's degree in computer science or a related field; or 3 years and a master's degree; or a PhD without experience; or equivalent work experience.
- Basic knowledge of OS, database, and networking concepts.
- Excellent problem-solving and communication abilities.
- Strong knowledge of Linux administration and Linux server administration.
- Thorough understanding of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Extensive understanding of Linux, including RedHat, CentOS, and Rocky Linux.
- Strong understanding of web servers, application servers, DNS, and mail servers.
- Should be good at at least one scripting language (shell/Python).
- Knowledge of configuration management tools like Puppet or Ansible is a plus.
- Industry certifications for the application(s) being supported are a plus.
- Experience with configuring and managing zones.
- Experience providing day-to-day administration and monitoring of servers, including: keeping Linux servers operational; file and system security management, log analysis, and statistical report generation; analysing security scans and assisting with vulnerability remediation.
- Good analytical ability (basic knowledge of ELK and Tableau).
- Should be ready to work in rotational shifts.

Posted 1 week ago


9.0 - 10.0 years

5 - 7 Lacs

Noida, Bengaluru

Work from Office


Requirements:
- 5+ years of experience in DevOps or Cloud Engineering.
- Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, networking, etc.).
- Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep.
- Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with Kubernetes (EKS, AKS) and containerization (Docker).
- Knowledge of monitoring and logging tools like Prometheus, Grafana, ELK Stack, CloudWatch, and Azure Monitor.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of security best practices in cloud environments.
- Experience with version control systems like Git.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.

Posted 2 weeks ago


8.0 - 9.0 years

5 - 7 Lacs

Noida, Bengaluru

Work from Office


Requirements:
- 5+ years of experience in DevOps or Cloud Engineering.
- Expertise in AWS (EC2, S3, RDS, Lambda, IAM, VPC, Route 53, etc.) and Azure (VMs, AKS, App Services, Azure Functions, networking, etc.).
- Strong experience with Infrastructure as Code (IaC) using Terraform, CloudFormation, or Bicep.
- Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with Kubernetes (EKS, AKS) and containerization (Docker).
- Knowledge of monitoring and logging tools like Prometheus, Grafana, ELK Stack, CloudWatch, and Azure Monitor.
- Familiarity with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of security best practices in cloud environments.
- Experience with version control systems like Git.
- Excellent problem-solving skills and the ability to work in a fast-paced environment.

Posted 2 weeks ago


8.0 - 12.0 years

35 - 50 Lacs

Bengaluru

Work from Office


Job Summary
We are seeking a highly skilled Principal Infra Developer with 8 to 12 years of experience to join our team. The ideal candidate will have expertise in Splunk administration, SRE, Grafana, ELK, and Dynatrace AppMon. This hybrid role requires a proactive individual who can contribute to our infrastructure development projects and ensure the reliability and performance of our systems. The position does not require travel and operates during day shifts.

Responsibilities (Systems Engineer - Splunk or Elasticsearch Admin)
- Build, deploy, and manage the enterprise Lucene-based DB systems (Splunk, Elastic) to ensure that the legacy physical/virtual systems and container infrastructure for business-critical services are rigorously and effectively served with high-quality, highly available logging services.
- Support periodic observability and infrastructure monitoring tool releases and upgrades, environment creation, and performance tuning of large-scale Prometheus systems.
- Serve as DevOps SRE for the internal observability systems in Visa's various data centers across the globe, including in cloud environments.
- Lead the evaluation, selection, design, deployment, and advancement of the portfolio of tools used to provide infrastructure and service monitoring.
- Ensure the tools used can provide critical visibility on modern architectures leveraging technologies such as cloud and containers.
- Maintain, upgrade, and troubleshoot issues with Splunk clusters.
- Monitor and audit configurations, and participate in the change management process to ensure that unauthorized changes do not occur.
- Manage patching and updates of Splunk hosts and/or Splunk application software.
- Design, develop, recommend, and implement Splunk dashboards and alerts in support of the incident response team.
- Ensure the monitoring team increases its use of automation and adopts a DevOps/SRE mentality.

Qualifications
- 6+ years of experience with enterprise system logging and monitoring tools, with a desired 5+ years in relevant critical infrastructure with enterprise Splunk and Elasticsearch.
- 5+ years of working experience as a Splunk administrator, covering cluster building, data ingestion management, user role management, and search configuration and optimization.
- Strong knowledge of open-source logging and monitoring tools.
- Experience with container logging and monitoring solutions.
- Experience with Linux operating system management and administration.
- Familiarity with LAN/WAN technologies and a clear understanding of basic network concepts and services.
- Strong understanding of multi-tier application architectures and application runtime environments.
- Monitoring the health and performance of the Splunk environment and troubleshooting any issues that arise.
- Has worked in a 24/7 on-call environment.
- Knowledge of Python and other scripting languages, and of infrastructure automation technologies such as Ansible, is desired.
- Splunk Admin certification is a plus.

Posted 2 weeks ago


6.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office


Job Summary
We are seeking an experienced Infra Dev Specialist with 6 to 10 years of experience to join our team. The ideal candidate will have expertise in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk. This role involves working in a hybrid model with day shifts. The candidate will play a crucial role in ensuring the reliability and performance of our infrastructure, contributing to the overall success of our projects and a positive impact on society.

Responsibilities
- Lead the design, implementation, and maintenance of infrastructure solutions to ensure high availability and performance.
- Oversee the monitoring and alerting systems using tools like Grafana, ELK, Dynatrace AppMon, and Splunk.
- Provide expertise in Site Reliability Engineering (SRE) to enhance system reliability and scalability.
- Collaborate with cross-functional teams to identify and resolve infrastructure issues promptly.
- Develop and maintain automation scripts to streamline infrastructure management tasks.
- Implement best practices for infrastructure security and compliance.
- Conduct regular performance tuning and optimization of infrastructure components.
- Monitor system health and performance, and proactively address potential issues.
- Create and maintain detailed documentation of infrastructure configurations and procedures.
- Participate in on-call rotations to provide 24/7 support for critical infrastructure components.
- Drive continuous improvement initiatives to enhance infrastructure reliability and efficiency.
- Mentor and guide junior team members in best practices and technical skills.
- Contribute to the overall success of the company by ensuring the reliability and performance of our infrastructure.

Qualifications
- Strong expertise in SRE principles and practices.
- Extensive experience with monitoring and alerting tools such as Grafana, ELK, Dynatrace AppMon, and Splunk.
- Proficiency in scripting languages for automation purposes.
- Strong problem-solving skills and the ability to work under pressure.
- Excellent communication and collaboration skills.
- A solid understanding of infrastructure security and compliance requirements.
- A proactive approach to identifying and addressing potential issues.
- A relevant certification in SRE or related fields.
- A strong commitment to continuous learning and improvement.
- The ability to mentor and guide junior team members.
- A proven track record of successfully managing and optimizing infrastructure components.
- A strong commitment to contributing to the overall success of the company.
- A passion for ensuring the reliability and performance of infrastructure solutions.

Certifications Required
- Certified SRE Practitioner
- Grafana Certified
- ELK Stack Certification
- Dynatrace Certified Associate
- Splunk Core Certified User

Posted 2 weeks ago


3.0 - 6.0 years

5 - 15 Lacs

Chennai

Work from Office


We are looking for a Network Automation Engineer with a strong foundation in Python, DevOps, and GUI automation to transform our network operations. This role is central to automating key functions like service provisioning, monitoring, and fault resolution using advanced frameworks and tools. You will work closely with Network, SRE, and DevOps teams to build robust automation solutions that enhance efficiency and reliability across the organization.

Key Responsibilities

Network Automation & Scripting
- Develop and maintain Python scripts for network provisioning, configuration, and monitoring.
- Automate workflows using APIs, CLI, Netconf, REST, and Ansible.

DevOps & CI/CD Integration
- Design CI/CD pipelines using Jenkins, GitLab, or Ansible AWX.
- Manage containerized applications with Docker and Kubernetes.
- Use Terraform and other Infrastructure-as-Code (IaC) tools.

GUI Automation & RPA
- Automate GUI-based tasks with tools like Selenium, PyAutoGUI, and AutoIt.
- Develop RPA scripts for repetitive manual processes.

Monitoring & Observability
- Implement observability tools: Prometheus, Grafana, ELK stack.
- Automate alerts and incident responses using Python scripting.

Collaboration & Documentation
- Work closely with cross-functional teams.
- Create and maintain technical documentation, playbooks, and standard operating procedures.

Required Skills & Experience

Programming & Scripting
- Python (advanced scripting, APIs, multithreading, libraries)
- Bash, PowerShell, YAML

Network & Systems Knowledge
- Protocols: TCP/IP, SDH, VoIP, SIP, routing & switching
- Experience with routers, switches, and firewalls
- Familiarity with BSS/OSS, NMS/EMS systems, 5G networks, and virtualized platforms (vBlock, CNIS)

DevOps Tools & Platforms
- Ansible, Terraform, Docker, Kubernetes
- CI/CD: Jenkins, GitLab, Git
- APIs: REST, SNMP, Netconf, gRPC

GUI & RPA Automation
- Tools: Selenium, PyAutoGUI, Pywinauto, AutoIt
- Integration with APIs, data sources, and enterprise tools

Monitoring & Logging
- Prometheus, Grafana, ELK, Splunk, OpenTelemetry

Preferred Qualifications
- Experience with cloud networking (AWS, GCP, Azure).
- Knowledge of AI/ML-based network automation.
- Exposure to orchestration tools (ONAP, Cisco NSO, OpenStack).

What We Offer
- Technically challenging projects in a dynamic environment.
- Opportunity to work on cutting-edge network infrastructure.
- Competitive compensation and benefits.
- A culture of innovation, collaboration, and continuous learning.
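Python-driven provisioning, the first responsibility above, frequently reduces to rendering per-device configuration from structured data. A sketch under stated assumptions (the hostname, interface names, and vendor-neutral syntax are all illustrative, not from the posting):

```python
def render_interface_config(hostname: str, interfaces: dict[str, str]) -> str:
    """Render a flat, vendor-neutral interface configuration for one device."""
    lines = [f"hostname {hostname}"]
    for name, address in interfaces.items():
        lines.append(f"interface {name}")
        lines.append(f" ip address {address}")  # CIDR form; real gear may differ
        lines.append(" no shutdown")
    return "\n".join(lines)
```

In practice the rendered text would be pushed over Netconf or the CLI by a tool such as Ansible rather than printed; keeping rendering as a pure function makes it easy to diff and test before anything touches a device.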

Posted 2 weeks ago


6.0 - 10.0 years

27 - 42 Lacs

Bengaluru

Work from Office


SRE - CI/CD pipelines; Docker, Kubernetes; Prometheus, Grafana, Splunk, and ELK; Linux and Windows.

Job Summary
We are seeking an experienced Infra Dev Specialist with 6 to 10 years of experience to join our team. The ideal candidate will have a strong background in Jenkins, Jenkins X, Azure DevOps, AWS DevOps, Jenkins/CloudBees, CircleCI, and Bamboo. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will play a crucial role in developing and maintaining our infrastructure automation processes.

Responsibilities
- Architect, implement, and manage infrastructure to support Kubernetes clusters in all environments.
- Experience with CI/CD tools like Jenkins, Ansible, Chef, or similar.
- Experience with automation tools and scripting using shell, PowerShell, or similar scripting technology.
- Experience with Windows, the .NET 8 framework, and build tools.
- Provide support for all application environments as well as the continuous integration build environment.
- Develop and implement a container monitoring strategy.
- Act as the Docker technical SME for our continuous integration team.
- Participate in requirements-gathering sessions.
- Evaluate and/or recommend purchases of network hardware, software, and peripheral equipment.
- Coordinate and conduct project architecture and infrastructure review meetings.
- Develop and implement a container scaling strategy.

Posted 2 weeks ago


10.0 - 14.0 years

35 - 50 Lacs

Chennai

Work from Office


Job Summary
We are seeking a highly skilled Principal Infra Developer with 10 to 14 years of experience to join our team. The ideal candidate will have expertise in SRE, Grafana, EKS, and JBoss, and in managing teams with client interaction. Experience in Property & Casualty Insurance is a plus. This hybrid role involves rotational shifts and does not require travel.

Responsibilities
- Strong experience in AWS EKS; good knowledge of creating Kubernetes cluster pods, namespaces, replicas, daemon sets, and replication controllers, and of setting up kubectl.
- Working knowledge of AWS EKS, EC2, IAM, and MSK.
- Good working knowledge of Docker and GitHub, setting up pipelines, and troubleshooting related issues.
- Working knowledge of monitoring tools such as AppDynamics, ELK, Grafana, and Nagios.
- Working knowledge of Rancher, Vault, and ArgoCD.
- Good knowledge of networking concepts.
- Strong troubleshooting skills for triaging and fixing application issues on Kubernetes clusters.
- Hands-on experience installing, configuring, and maintaining JBoss EAP 6.x/7.x in various environments, in both domain-based and standalone setups.
- Strong experience configuring and administering connection pools for JDBC connections and JMS queues in JBoss EAP.
- Strong experience deploying applications (JAR, WAR, EAR) and maintaining load balancing, high availability, and failover functionality in a clustered environment through the command line in JBoss EAP.
- Extensive experience troubleshooting JBoss server issues using thread dumps and heap dumps.
- Good experience with SSL certificate creation for JBoss 5.x/6.x/7.x.
- Experience providing technical assistance for performance tuning and troubleshooting of Java applications.
- Good to have: deployment procedures for J2EE applications and code to JBoss Application Server.
- Good knowledge of the installation, maintenance, and integration of web servers like Apache Web Server, OHS, and Nginx.
- Good knowledge of scripting and automation using Ansible, Bash, and Terraform.

Posted 2 weeks ago


7.0 - 9.0 years

27 - 42 Lacs

Pune

Work from Office


Primary & Mandatory Skill: Kubernetes administration and Helm charts.
Certification (Mandatory): CKA (Certified Kubernetes Administrator) or CKAD (Certified Kubernetes Application Developer).
Level: SA/M
Client Round: Yes
Location Constraint: None (PAN India)
Shift Timing: General shift

JD:
- Should have a very good understanding of the components of various types of Kubernetes clusters (community/AKS/GKE/OpenShift).
- Should have provisioning experience with various types of Kubernetes clusters (community/AKS/GKE/OpenShift).
- Should have upgrade and monitoring experience with various types of Kubernetes clusters (community/AKS/GKE/OpenShift).
- Should have good experience sizing Kubernetes clusters.
- Should have very good experience with container security and container storage.
- Should have hands-on development experience in Go, JavaScript, or Java.
- Should have very good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins).
- Should have good experience with/knowledge of cloud platforms, preferably Azure, Google, or OpenStack.
- Should have a good understanding of application lifecycle management on a container platform.
- Should have a very good understanding of container registries.
- Should have a very good understanding of Helm and Helm charts.
- Should have a very good understanding of container monitoring tools like Prometheus, Grafana, and ELK.
- Should have very good experience with the Linux operating system.
- Should have a basic understanding of enterprise networks and container networks.
- Should be able to handle Severity#1 and Severity#2 incidents; very good communication skills.
- Should have analytical and problem-solving capabilities and the ability to work with teams.
- Good to have: knowledge of the ITIL process.
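Cluster administration at this level often includes enforcing manifest hygiene before anything reaches kubectl. A hedged Python sketch of such a policy check (the required fields chosen here, `kind` and per-container `resources`, are an illustrative in-house policy, not a Kubernetes rule):

```python
def lint_pod_manifest(manifest: dict) -> list[str]:
    """Return policy violations found in a parsed pod manifest (empty = clean)."""
    errors = []
    if manifest.get("kind") != "Pod":
        errors.append("kind must be 'Pod'")
    containers = manifest.get("spec", {}).get("containers", [])
    if not containers:
        errors.append("spec.containers must not be empty")
    for container in containers:
        if "resources" not in container:  # illustrative sizing policy
            errors.append(f"container '{container.get('name', '?')}' lacks resources")
    return errors
```

A real deployment would run checks like this in a CI stage or an admission webhook, after parsing the YAML manifest into a dict.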

Posted 2 weeks ago


6.0 - 10.0 years

27 - 42 Lacs

Noida

Work from Office


Terraform, Jenkins, Artifactory, ELK stack for monitoring, GCP cloud awareness, Kubernetes, Anthos, GCP administration.

Job Summary
Join our dynamic team as an Infra Dev Specialist, where you will leverage your expertise in Artifactory, Anthos, the ELK Stack, Kubernetes, Jenkins, GCP, Ansible, Terraform, and DevOps to drive innovation in the Consumer Lending domain. With a hybrid work model and rotational shifts, you will play a crucial role in optimizing infrastructure and enhancing system performance, contributing to our mission of delivering exceptional financial solutions.

Responsibilities
- Implement and manage infrastructure solutions using Artifactory, Anthos, and the ELK Stack to ensure seamless integration and efficient operations.
- Collaborate with cross-functional teams to design and deploy Kubernetes clusters, enhancing the scalability and reliability of applications.
- Use Jenkins for continuous integration and continuous deployment, streamlining development workflows and reducing time-to-market.
- Optimize cloud infrastructure on GCP, ensuring cost-effective and secure solutions that align with business objectives.
- Develop and maintain automation scripts using Ansible and Terraform, improving system provisioning and configuration management.
- Drive DevOps practices to enhance collaboration between development and operations teams, fostering a culture of continuous improvement.
- Analyze and troubleshoot system issues, providing timely resolutions to minimize downtime and ensure business continuity.
- Monitor system performance and implement enhancements to improve efficiency and user experience.
- Collaborate with stakeholders in the Consumer Lending domain to understand requirements and deliver tailored solutions that meet business needs.
- Participate in rotational shifts to provide 24/7 support, ensuring the high availability and reliability of infrastructure services.
- Contribute to the development of best practices and standards for infrastructure management, promoting consistency and quality across projects.
- Engage in ongoing learning and development to stay updated with the latest technologies and industry trends.
- Support hybrid work model initiatives, balancing remote and on-site work to maximize productivity and team collaboration.

Qualifications
- Strong technical skills in Artifactory, Anthos, the ELK Stack, Kubernetes, Jenkins, GCP, Ansible, Terraform, and DevOps, essential for effective infrastructure management.
- Expertise in the Consumer Lending domain, with an understanding of industry-specific requirements and challenges.
- Proficiency in cloud solutions, particularly GCP, for designing and implementing scalable and secure infrastructure.
- Experience with automation tools like Ansible and Terraform, crucial for efficient system provisioning and configuration.
- Knowledge of DevOps practices, fostering collaboration and continuous improvement within teams.
- A minimum of 6 and a maximum of 10 years of relevant experience, ensuring a solid foundation in infrastructure development.
- Ability to adapt to rotational shifts, providing consistent support and maintaining the high availability of services.

Posted 2 weeks ago

Apply


8.0 - 12.0 years

35 - 50 Lacs

Kolkata

Work from Office


Terraform, Jenkins, Artifactory, ELK stack for monitoring, GCP cloud aware, Kubernetes, Anthos, GCP Administration Job Summary We are seeking an experienced Infra Ops Specialist with 8 to 12 years of experience to join our team. The ideal candidate will have expertise in Anthos, Kubernetes, GCP, the ELK Stack, Artifactory, Jenkins, and Terraform. This role requires domain experience in Commercial Lending. The work model is hybrid with rotational shifts. Travel is not required. Responsibilities Lead the implementation and management of Anthos and Kubernetes environments to ensure high availability and scalability. Oversee the deployment and maintenance of GCP infrastructure to support business applications. Provide expertise in the ELK Stack for monitoring, logging, and analysis of system performance. Manage Artifactory repositories to ensure efficient storage and retrieval of artifacts. Utilize Jenkins for continuous integration and continuous deployment (CI/CD) pipelines to streamline development processes. Implement and manage Terraform scripts for infrastructure as code (IaC) to automate provisioning and management of cloud resources. Collaborate with development and operations teams to ensure seamless integration and delivery of applications. Monitor system performance and troubleshoot issues to ensure optimal operation of infrastructure. Develop and maintain documentation for infrastructure configurations, processes, and procedures. Ensure compliance with security policies and best practices in all aspects of infrastructure management. Provide support for commercial lending applications, ensuring high availability and performance. Participate in rotational shifts to provide 24/7 support for critical infrastructure. Contribute to the continuous improvement of infrastructure operations to enhance efficiency and reliability. Qualifications Possess strong technical skills in Anthos, Kubernetes, GCP, the ELK Stack, Artifactory, Jenkins, and Terraform. Have a solid understanding of the commercial lending domain and its specific infrastructure requirements. Demonstrate experience in managing hybrid work environments and rotational shifts. Exhibit excellent problem-solving skills and the ability to troubleshoot complex infrastructure issues. Show proficiency in scripting and automation to streamline infrastructure management tasks. Have strong communication skills to collaborate effectively with cross-functional teams.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Pune, Gurugram, Bengaluru

Work from Office


Key Responsibilities: Conduct threat modeling, code reviews, and security assessments of applications and products. Perform vulnerability analysis and collaborate with development teams to remediate issues. Integrate security tools (SAST, DAST, SCA) into CI/CD pipelines. Develop and enforce security policies, guidelines, and standards. Conduct risk assessments for new features, technologies, and vendors. Stay updated on emerging threats, vulnerabilities, and industry best practices. Support incident response efforts and post-mortem analysis when required. Required Skills & Qualifications: 4-8 years of experience in cybersecurity or product/application security. Strong understanding of the OWASP Top 10, secure coding principles, and the SDLC. Hands-on experience with static and dynamic analysis tools (e.g., Checkmarx, Veracode, Burp Suite). Familiarity with cloud platforms (AWS, Azure, or GCP) and securing cloud-native applications. Experience with scripting (Python, Bash, etc.) for automation and tooling. Good understanding of authentication/authorization protocols (OAuth, SAML, etc.). Bachelor's degree in Computer Science, Information Security, or a related field. Nice-to-Have (Preferred): Certifications like CEH, OSCP, CISSP, or AWS Security Specialty. Experience with container and Kubernetes security. Knowledge of threat modeling frameworks (e.g., STRIDE, MITRE ATT&CK). Exposure to DevSecOps practices.
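The SAST/DAST/SCA pipeline integration named above usually ends in a severity gate: the pipeline parses the scanner's JSON export and fails the build above a threshold. A hedged Python sketch (the findings schema here is invented for illustration; real Checkmarx or Veracode exports have their own field names):

```python
def should_fail_build(findings, fail_on="high"):
    """Gate a CI pipeline on scanner findings.

    `findings` is a list of dicts with a `severity` field; the
    build fails if any finding is at or above `fail_on`.
    """
    order = ["low", "medium", "high", "critical"]
    threshold = order.index(fail_on)
    return any(order.index(f["severity"]) >= threshold for f in findings)

# Hypothetical scanner output
findings = [
    {"id": "SQLI-1", "severity": "critical"},
    {"id": "XSS-2", "severity": "low"},
]
print(should_fail_build(findings))  # -> True
```

In a real pipeline this runs as a post-scan step, with the exit code (not a return value) signalling the CI system to stop the deployment stage.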

Posted 2 weeks ago

Apply

21.0 - 31.0 years

50 - 70 Lacs

Bengaluru

Work from Office


What we’re looking for As a member of the infrastructure team at Survey Monkey, you will have a direct impact in designing, engineering and maintaining our Cloud, Messaging and Observability Platform. Solutioning with best practices, deployment processes, architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers. What you'll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices. Automating infrastructure provisioning, DevOps, and/or continuous integration/delivery. Provide Technical Leadership & Mentorship Mentor and guide senior engineers to build technical expertise and drive a culture of excellence in software development. Foster collaboration within the engineering team, ensuring the adoption of best practices in coding, testing, and deployment. Review code and provide constructive feedback to ensure code quality and adherence to architectural principles. Collaboration & Cross-Functional Leadership Collaborate with cross-functional teams (Product, Security, and other Engineering teams) to drive the roadmap and ensure alignment with business objectives. Provide technical leadership in meetings and discussions, influencing key decisions on architecture, design, and implementation. Innovation & Continuous Improvement Propose, evaluate, and integrate new tools and technologies to improve the performance, security, and scalability of the cloud platform. Drive initiatives for optimizing cloud resource usage and reducing operational costs without compromising performance. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Participate in on-call rotation. 
Support and partner with other teams on improving our observability systems to monitor site stability and performance. We’d love to hear from people with 12+ years of relevant professional experience with cloud platforms such as AWS or Heroku. Extensive experience leading design sessions and evolving well-architected environments in AWS at scale. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, or Grafana/Prometheus and ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis and visualization tools, specifically Splunk and OTel. Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling. Experience with GitOps and tools like Argo CD/Flux CD. Interest in instrumentation and optimization of Kubernetes clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium. Preferably experience with secrets management, for example HashiCorp Vault. Preferably experience in an agile environment and JIRA. SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI-Hybrid
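The "simple, unified interface" responsibility in this posting is typically a thin facade over the metrics backend, so application code never imports a vendor SDK directly. A minimal in-memory sketch (all names are assumptions; a real implementation would forward to StatsD, OTel, or Datadog rather than a dict):

```python
from collections import defaultdict

class Metrics:
    """Tiny facade over a metrics backend.

    Stores counters in memory here; swapping the storage for a
    StatsD/OTel client changes the backend without touching callers.
    """
    def __init__(self):
        self._counters = defaultdict(int)

    def _key(self, name, tags):
        # Tags are sorted so {"a": 1, "b": 2} and {"b": 2, "a": 1} match.
        return (name, tuple(sorted(tags.items())))

    def increment(self, name, value=1, **tags):
        self._counters[self._key(name, tags)] += value

    def count(self, name, **tags):
        return self._counters[self._key(name, tags)]

metrics = Metrics()
metrics.increment("http.requests", endpoint="/login")
metrics.increment("http.requests", endpoint="/login")
print(metrics.count("http.requests", endpoint="/login"))  # -> 2
```

The design point is the stable call signature: dashboards and alerts key off `name` plus tags, while the transport behind the facade can be replaced per environment.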

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai

Work from Office


Overview DevOps Engineer - OpenShift (OCP) Specialist Job Summary: FSS is seeking a highly skilled DevOps Engineer with hands-on experience in Red Hat OpenShift Container Platform (OCP) and associated tools like Argo CD, Jenkins, and Data Grid. The ideal candidate will drive automation, manage containerized environments, and ensure smooth CI/CD pipelines across hybrid infrastructure to support our financial technology solutions. Required Skills & Qualifications: Technical Skills: Strong hands-on experience with OpenShift (v4.x) administration and operations. Proficiency in CI/CD tools: Jenkins, Argo CD, GitHub Actions, GitLab CI/CD. Deep understanding of Kubernetes, Docker, and container orchestration. Experience with Red Hat Data Grid or other in-memory data grids. Skilled in IaC tools: Terraform, Ansible, CloudFormation. Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, Splunk). Proficient in scripting languages: Bash, Python, or Shell. Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration abilities across cross-functional teams. Candidates should be able to work independently, provide solutions based on customer requirements, and work with the customer's DevOps team during project implementation. Responsibilities Key Responsibilities: OpenShift Platform Engineering: Deploy, manage, and maintain applications on OpenShift Container Platform. Configure and manage Operators, Helm charts, and OpenShift GitOps (Argo CD). Manage Red Hat Data Grid deployments and integrations. Support OCP cluster upgrades, patching, and troubleshooting. CI/CD Implementation & Automation: Design, implement, and manage CI/CD pipelines using Jenkins and Argo CD. Ensure seamless code integration, testing, and deployment processes with development teams. Infrastructure as Code (IaC): Automate infrastructure provisioning with tools like Terraform and Ansible. Manage hybrid infrastructure across on-prem and public clouds (AWS, Azure, or GCP). Monitoring & Performance Optimization: Implement and manage observability stacks (Prometheus, Grafana, ELK, etc.) for OCP and underlying services. Proactively identify and resolve system performance bottlenecks. Security & Compliance: Enforce security best practices in containerized and cloud environments. Conduct vulnerability assessments and ensure compliance with industry standards. Collaboration & Support: Collaborate with developers, QA, and IT teams to optimize DevOps workflows. Provide ongoing support and incident response for production and non-production environments. Qualifications BE, B.Tech, MCA, or equivalent degree. Domain: payment gateway, bank reconciliation, card payments.
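The Argo CD / OpenShift GitOps items above revolve around declarative Application manifests. A sketch that assembles one programmatically (field names follow the `argoproj.io/v1alpha1` Application CRD; the repo URL, path, and namespaces are placeholders):

```python
import json

def argo_application(name, repo_url, path, dest_namespace,
                     revision="main", automated=True):
    """Build a minimal Argo CD Application manifest as a dict."""
    app = {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,
                "path": path,
                "targetRevision": revision,
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": dest_namespace,
            },
        },
    }
    if automated:
        # Auto-sync: prune removed resources, revert manual drift.
        app["spec"]["syncPolicy"] = {
            "automated": {"prune": True, "selfHeal": True}
        }
    return app

manifest = argo_application(
    "payments", "https://git.example.com/platform/deploy.git",
    "charts/payments", "payments-prod")
print(json.dumps(manifest, indent=2))
```

In a GitOps workflow this manifest lives in Git and Argo CD reconciles the cluster toward it; generating it from code is mainly useful for stamping out many similar Applications.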

Posted 2 weeks ago

Apply

3.0 - 10.0 years

22 - 26 Lacs

Hyderabad

Work from Office


Skillsoft is the global leader in eLearning, trusted by the world's leading organizations, including 65% of the Fortune 500. Our 100,000+ courses, videos, and books are accessed over 100 million times every month, across more than 100 countries. At Skillsoft, we believe knowledge is the fuel for innovation, and innovation is the fuel for business growth. Join us in our quest to democratize learning and help individuals unleash their edge. Are you ready to shape the future of learning through cutting-edge AI? As a Principal AI/Machine Learning Engineer at Skillsoft, you’ll dive into the heart of innovation, crafting intelligent systems that empower millions worldwide. From designing generative AI solutions to pioneering agentic workflows, you’ll collaborate with multiple teams to transform knowledge into a catalyst for growth, unleashing your edge while helping others do the same. Join us in redefining eLearning for the world’s leading organizations! Responsibilities: Hands-on AI/ML software engineering. Prompt engineering, agentic workflow development, and testing. Work with product owners to understand requirements and guide new features. Collaborate to identify new feature impacts. Evaluate new AI/ML technology advancements and socialize findings. Research, prototype, and select appropriate COTS and develop in-house AI/ML technology. Consult with external partners to review and guide development and integration of AI technology. Collaborate with teams to design and guide AI development and enhancements. Document designs and implementation to ensure consistency and alignment with standards. Create documentation including system and sequence diagrams. Create appropriate data pipelines for AI/ML training and inference. Analyze, curate, cleanse, and preprocess data. Utilize and apply generative AI to increase productivity for yourself and the organization. Periodically explore new technologies and design patterns with proofs of concept. Participate in developing best practices and improving operational processes. Present research and work to socialize and share knowledge across the organization. Contribute to patentable AI innovations. Environment, Tools & Technologies: Agile/Scrum. Operating systems: Mac, Linux. JavaScript, Node.js, Python. PyTorch, TensorFlow, Keras, OpenAI, Anthropic, and friends. LangChain, LangGraph, etc. APIs: GraphQL, REST. Docker, Kubernetes. Amazon Web Services (AWS), MS Azure. SQL: Postgres RDS. NoSQL: Cassandra, Elasticsearch (vector DB). Messaging: Kafka, RabbitMQ, SQS. Monitoring: Prometheus, ELK. GitHub, IDE (your choice). Skills & Qualifications (8+ years of experience): Experience with LLMs and fine-tuning models. Development experience including unit testing. Design and documentation experience for new APIs, data models, and service interactions. Familiarity with and ability to explain: system and API security techniques, data privacy concerns, microservices architecture, vertical vs. horizontal scaling, generative AI, NLP, DNNs, auto-encoders, etc. Attributes for Success: Proactive, independent, adaptable. Collaborative team player. Customer-service minded with an ownership mindset. Excellent analytic and communication skills. Ability and desire to coach and mentor other developers. Passionate, curious, open to new ideas, and able to research and learn new technologies.
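The "agentic workflow development" responsibility above can be pictured as a loop that alternates model calls with tool execution until the model produces a final answer. A toy sketch with the model stubbed out (`fake_llm`, the tool table, and the message shape are all invented for illustration; real code would call the OpenAI or Anthropic APIs, as listed in the tech stack):

```python
def fake_llm(messages):
    """Stand-in for a real model call.

    Requests the `add` tool on the first turn, then answers
    once a tool result is present in the conversation.
    """
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_prompt, max_turns=5):
    """Minimal agent loop: call model, run requested tools,
    feed results back, stop on a final answer."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = fake_llm(messages)
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent did not terminate")

print(run_agent("what is 2+3?"))  # -> The sum is 5
```

Frameworks like LangGraph formalize this loop as a graph with explicit state; the cap on `max_turns` is the usual guard against a model that keeps requesting tools.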

Posted 2 weeks ago

Apply

8.0 - 10.0 years

40 - 45 Lacs

Hyderabad

Work from Office


Skillsoft is the global leader in eLearning, trusted by the world's leading organizations, including 65% of the Fortune 500. Our 100,000+ courses, videos, and books are accessed over 100 million times every month, across more than 100 countries. At Skillsoft, we believe knowledge is the fuel for innovation, and innovation is the fuel for business growth. Join us in our quest to democratize learning and help individuals unleash their edge. Are you ready to shape the future of learning through cutting-edge AI? As a Principal AI/Machine Learning Engineer at Skillsoft, you’ll dive into the heart of innovation, crafting intelligent systems that empower millions worldwide. From designing generative AI solutions to pioneering agentic workflows, you’ll collaborate with multiple teams to transform knowledge into a catalyst for growth, unleashing your edge while helping others do the same. Join us in redefining eLearning for the world’s leading organizations! Responsibilities: Hands-on AI/ML software engineering. Prompt engineering, agentic workflow development, and testing. Work with product owners to understand requirements and guide new features. Collaborate to identify new feature impacts. Evaluate new AI/ML technology advancements and socialize findings. Research, prototype, and select appropriate COTS and develop in-house AI/ML technology. Consult with external partners to review and guide development and integration of AI technology. Collaborate with teams to design and guide AI development and enhancements. Document designs and implementation to ensure consistency and alignment with standards. Create documentation including system and sequence diagrams. Create appropriate data pipelines for AI/ML training and inference. Analyze, curate, cleanse, and preprocess data. Utilize and apply generative AI to increase productivity for yourself and the organization. Periodically explore new technologies and design patterns with proofs of concept. Participate in developing best practices and improving operational processes. Present research and work to socialize and share knowledge across the organization. Contribute to patentable AI innovations. Environment, Tools & Technologies: Agile/Scrum. Operating systems: Mac, Linux. JavaScript, Node.js, Python. PyTorch, TensorFlow, Keras, OpenAI, Anthropic, and friends. LangChain, LangGraph, etc. APIs: GraphQL, REST. Docker, Kubernetes. Amazon Web Services (AWS), MS Azure. SQL: Postgres RDS. NoSQL: Cassandra, Elasticsearch (vector DB). Messaging: Kafka, RabbitMQ, SQS. Monitoring: Prometheus, ELK. GitHub, IDE (your choice). Skills & Qualifications (8+ years of experience): Experience with LLMs and fine-tuning models. Development experience including unit testing. Design and documentation experience for new APIs, data models, and service interactions. Familiarity with and ability to explain: system and API security techniques, data privacy concerns, microservices architecture, vertical vs. horizontal scaling, generative AI, NLP, DNNs, auto-encoders, etc. Attributes for Success: Proactive, independent, adaptable. Collaborative team player. Customer-service minded with an ownership mindset. Excellent analytic and communication skills. Ability and desire to coach and mentor other developers. Passionate, curious, open to new ideas, and able to research and learn new technologies.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 4 years of relevant experience. Position Description: 5-8 years of experience in the implementation of Elasticsearch-based projects. Design and implement highly scalable ELK (Elasticsearch, Logstash, and Kibana) stack solutions. Strong knowledge of object-oriented Java programming, design concepts and design patterns, and securing APIs using web services. Strong knowledge of Elasticsearch, Kibana, Banana, and dashboards. Work experience with a front end such as HTML, CSS, Bootstrap, JavaScript, React, or Angular. Build and manage DevOps automation using Ansible and Python scripts for the ELK and Java services product stack. Work experience on the DB side with MySQL, PostgreSQL, Oracle, Cassandra, Elasticsearch, and MongoDB. Good knowledge of debugging applications. Good knowledge of deployment and configuration. Very good troubleshooting and analytical skills. Experience with scripting in UNIX, Linux, and Windows environments. Must have excellent communication skills.
Skills: ELK (Elasticsearch, Logstash, and Kibana); ETL; cloud computing; infrastructure architecture; React. Behavioral Competencies: Proven experience of delivering process efficiencies and improvements. Clear and fluent English (both verbal and written). Ability to build and maintain efficient working relationships with remote teams. Demonstrated ability to take ownership of and accountability for relevant products and services. Ability to plan, prioritize, and complete your own work, whilst remaining a team player. Willingness to engage with and work in other technologies. Project Specific Requirements: Expertise in data ingestion, transformation, and enrichment. Good knowledge and hands-on experience with REST APIs and web services. Good experience in Elasticsearch query optimization and scripting. Experience in integration of Elasticsearch with backend systems, services, and data pipelines. Expertise in troubleshooting. Exposure to ILM policies. Exposure to snapshot and restore of indices in Elasticsearch. Good to have knowledge of Elasticsearch clusters. Must-Have Skills: Elasticsearch, Logstash, Kibana. Good-to-Have Skills: Java, PostgreSQL. Skills: Java, Angular, PostgreSQL, Python.
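The ILM exposure asked for above boils down to a JSON policy document: roll the hot index over on size or age, then delete old indices. A sketch assembling one in Python (the sizes and ages are illustrative; in practice the document is PUT to the `_ilm/policy/<name>` endpoint):

```python
import json

def ilm_policy(hot_max_size="50gb", hot_max_age="30d", delete_after="90d"):
    """Assemble a minimal Elasticsearch ILM policy:
    rollover in the hot phase, deletion after `delete_after`."""
    return {
        "policy": {
            "phases": {
                "hot": {
                    "actions": {
                        "rollover": {
                            "max_primary_shard_size": hot_max_size,
                            "max_age": hot_max_age,
                        }
                    }
                },
                "delete": {
                    "min_age": delete_after,
                    "actions": {"delete": {}},
                },
            }
        }
    }

body = json.dumps(ilm_policy(), indent=2)
print(body)
```

The policy is then referenced from an index template (`index.lifecycle.name`), so every index created by the rollover alias inherits the same lifecycle.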

Posted 2 weeks ago

Apply


7.0 - 12.0 years

10 - 18 Lacs

Gurugram

Work from Office


Working knowledge of integrating Splunk logging infrastructure with third-party observability tools (e.g., ELK, Datadog).
Experience in identifying security and non-security logs and applying adequate filters / re-routing the logs accordingly.
Expert in understanding network architecture and identifying the components of impact.
Expert in Linux administration.
Proficient in working with Syslog.
Proficiency in scripting languages like Python, PowerShell, or Bash to automate tasks.
Expertise with OEM SIEM tools, preferably Splunk.
Experience with open-source SIEM/log storage solutions like ELK or Datadog.
Very good with documentation of HLDs, LLDs, implementation guides, and operations manuals.
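The security/non-security log routing skill above often starts from the syslog PRI header, which encodes facility and severity (RFC 5424: PRI = facility * 8 + severity). A hedged Python sketch (the choice of which facilities count as security-relevant is an assumption for illustration):

```python
def parse_pri(line):
    """Extract (facility, severity) from a syslog <PRI> header,
    or (None, None) if the line has no PRI."""
    if not line.startswith("<"):
        return None, None
    pri = int(line[1:line.index(">")])
    return pri // 8, pri % 8

# Facilities treated as security-relevant here: 4 (auth),
# 10 (authpriv), 13 (log audit) -- adjust per environment.
SECURITY_FACILITIES = {4, 10, 13}

def route(line):
    """Route a raw syslog line to the SIEM index or the
    general observability pipeline."""
    facility, _severity = parse_pri(line)
    if facility in SECURITY_FACILITIES:
        return "siem"   # e.g. a Splunk security index
    return "ops"        # e.g. the general ELK pipeline

print(route("<38>sshd[123]: Failed password for root"))  # -> siem
```

A production filter would also key on severity and message content, but splitting on facility first keeps the bulk of application noise out of the (expensive) SIEM license.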

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies