Jobs
Interviews

310 ELK Jobs - Page 4

Set up a job alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You should have a minimum of 5 years of experience in DevOps, SRE, or Infrastructure Engineering, with a strong command of Azure Cloud and Infrastructure-as-Code tools such as Terraform and CloudFormation. Proficiency in Docker and Kubernetes is essential, along with hands-on experience with CI/CD tools and scripting languages like Bash, Python, or Go. Solid knowledge of Linux, networking, and security best practices is required, as is experience with monitoring and logging tools such as ELK, Prometheus, and Grafana. Familiarity with GitOps, Helm charts, and automation is an advantage.

Your key responsibilities will involve designing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions, and automating infrastructure provisioning through tools like Terraform, Ansible, and Pulumi. You will monitor and optimize cloud environments, implement containerization and orchestration with Docker and Kubernetes (EKS/GKE/AKS), and maintain logging, monitoring, and alerting systems (ELK, Prometheus, Grafana, Datadog). The role also covers system security, availability, and performance tuning; managing secrets and credentials using tools like Vault and Secrets Manager; troubleshooting infrastructure and deployment issues; and implementing blue-green and canary deployments. Collaboration with developers to enhance system reliability and productivity is key.

Preferred skills include certification as an Azure DevOps Engineer, experience with multi-cloud environments, microservices, and event-driven systems, as well as exposure to AI/ML pipelines and data engineering workflows.
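Listings like this one ask for experience implementing blue-green and canary deployments. As an illustrative sketch (not from the posting), the core of a canary rollout is routing a small, deterministic fraction of traffic to the new version; real setups do this at the ingress or service-mesh layer, but the bucketing idea fits in a few lines of Python:

```python
import zlib

def route(request_id: str, canary_percent: int) -> str:
    """Deterministically bucket a request into 'canary' or 'stable'.

    Hashing the request id keeps each client pinned to one version,
    which ingress controllers achieve with sticky weighted routing.
    """
    bucket = zlib.crc32(request_id.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"

# A 10% canary should receive roughly 10% of distinct request ids.
hits = sum(route(f"req-{i}", 10) == "canary" for i in range(10_000))
print(hits)  # roughly 1000
```

Hash-based bucketing keeps a given client on the same version across requests, which makes canary metrics comparable; the request-id format and the 10% split here are arbitrary choices for illustration.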

Posted 2 weeks ago

Apply

6.0 - 12.0 years

0 - 16 Lacs

Noida, Pune

Work from Office

Roles and Responsibilities: Design, implement, and maintain Elasticsearch clusters for large-scale data storage and retrieval. Develop custom plugins and integrations to extend Elasticsearch functionality. Troubleshoot complex issues related to Elasticsearch indexing, querying, and data modeling. Collaborate with cross-functional teams to develop scalable solutions using the ELK (Elasticsearch, Logstash, Kibana) stack.

Job Requirements: 6-12 years of experience in designing and implementing Elasticsearch clusters. Strong understanding of Elasticsearch architecture, including nodes, indices, shards, and replicas. Proficiency in developing custom plugins and integrations for Elasticsearch. Experience working with Logstash for log processing and Kibana for visualization.
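The requirements stress understanding Elasticsearch architecture (nodes, indices, shards, replicas). The central routing rule is that a document's primary shard is `hash(routing) % number_of_primary_shards`; the sketch below illustrates it in Python, with `crc32` standing in for Elasticsearch's actual murmur3 hash:

```python
import zlib

def shard_for(routing_key: str, num_primary_shards: int) -> int:
    """Pick the primary shard for a document, Elasticsearch-style:
    shard = hash(routing) % number_of_primary_shards.
    (Elasticsearch uses murmur3; crc32 here is just a stand-in.)
    """
    return zlib.crc32(routing_key.encode()) % num_primary_shards

# Why the primary shard count is fixed at index creation:
# changing it remaps almost every document to a different shard.
before = [shard_for(f"doc-{i}", 5) for i in range(1000)]
after = [shard_for(f"doc-{i}", 6) for i in range(1000)]
moved = sum(b != a for b, a in zip(before, after))
print(moved)  # most documents would move
```

This is also why resizing an index means reindexing rather than just changing a setting, while replica counts (copies of each shard) can be changed freely at runtime.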

Posted 2 weeks ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

1+ years of experience in the skills below.

Must Have:
- Troubleshoot platform issues
- Manage configs via YAML/Helm/Kustomize
- Upgrade and maintain OpenShift clusters and Operators
- Support CI/CD pipelines and DevOps teams
- Monitoring with Prometheus/EFK/Grafana

Required Candidate Profile:
- Participate in CR/patch planning
- Automate provisioning (namespaces, RBAC, NetworkPolicies)
- Open to 24x7 rotational coverage / on-call support
- Immediate joiner is a plus
- Excellent communication
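One of the must-haves above is automating provisioning of namespaces, RBAC, and NetworkPolicies. A minimal, hypothetical sketch of what such automation generates (field names follow the Kubernetes API schema; the team name and verb list are invented for illustration, and a real pipeline would serialize these to YAML for `kubectl`/`oc apply`):

```python
import json

def namespace_manifests(team: str) -> list:
    """Render minimal Namespace + RBAC Role manifests for a team."""
    ns = {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": f"{team}-dev", "labels": {"team": team}},
    }
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "developer", "namespace": f"{team}-dev"},
        "rules": [{
            "apiGroups": ["", "apps"],
            "resources": ["pods", "deployments", "configmaps"],
            "verbs": ["get", "list", "create", "update"],
        }],
    }
    return [ns, role]

manifests = namespace_manifests("payments")
print(json.dumps(manifests[0]["metadata"], sort_keys=True))
```

Generating manifests from one template per team keeps provisioning reviewable in Git instead of hand-edited per cluster.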

Posted 2 weeks ago

Apply

6.0 - 11.0 years

9 - 15 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

6+ years of experience in the skills below.

Must Have:
- Troubleshoot platform issues
- Manage configs via YAML/Helm/Kustomize
- Upgrade and maintain OpenShift clusters and Operators
- Support CI/CD pipelines and DevOps teams
- Monitoring with Prometheus/EFK/Grafana

Required Candidate Profile:
- Participate in CR/patch planning
- Automate provisioning (namespaces, RBAC, NetworkPolicies)
- Open to 24x7 rotational coverage / on-call support
- Immediate joiner is a plus
- Excellent communication

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Noida

Work from Office

Description: Hiring a Golang Developer for the Noida location.

Job Title: Go Developer with Kubernetes Experience
Location: Noida
Experience Level: 8+ Years
Team: Engineering / Platform Team

About the Role: We are looking for a skilled Go (Golang) Developer with working knowledge of Kubernetes. The ideal candidate will be proficient in building scalable backend systems using Go and comfortable working with cloud-native technologies and Kubernetes for deployment, monitoring, and management. This is a hands-on engineering role that bridges application development and infrastructure orchestration, ideal for someone who enjoys both writing clean code and understanding how it runs in modern containerized environments. You will help ensure reliable, highly available, scalable, maintainable, and secure systems. Candidates for this role come from both systems and software development backgrounds: a development background helps in designing large-scale, highly distributed, fault-tolerant applications, while a systems background helps in ensuring uptime and reliability by monitoring deep system parameters and remediating issues at the systems level.

Skills: Golang, Kubernetes, Docker, CI/CD, cloud platforms (Azure, AWS, Google Cloud, etc.), microservices, Git, Linux, system monitoring and logging.

Responsibilities:
- Design, develop, and maintain scalable and efficient applications in Go, with a strong grasp of idiomatic Go, interfaces, channels, and goroutines.
- Develop scalable backend services (microservices, APIs) with a solid understanding of REST and distributed systems.
- Hands-on experience with public cloud (Azure, AWS, etc.).
- Build with Docker and deploy containerized applications to Kubernetes.
- Apply Kubernetes concepts: pods, services, deployments, config maps, secrets, health checks.
- Collaborate on the design and implementation of CI/CD pipelines for automated testing and deployment.
- Implement best practices for software development and infrastructure management.
- Monitor system performance and troubleshoot issues.
- Write and maintain technical documentation.
- Work with logging/monitoring tools such as Prometheus, Grafana, the ELK stack, New Relic, and Splunk.
- Keep abreast of the latest advancements in Kubernetes, Go, and cloud-native technologies.
- Good communication and teamwork skills; excellent problem-solving skills and attention to detail.
- Management and leadership experience is very helpful.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game, plus discounts at popular stores and restaurants.
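The role above stresses idiomatic Go concurrency (interfaces, channels, goroutines). As a language-neutral illustration of the same fan-out/fan-in pattern, here is a hedged sketch using Python's `queue.Queue` and threads in place of channels and goroutines (the squaring workload is a stand-in for real work):

```python
import queue
import threading

def run_pool(jobs: list, workers: int = 4) -> int:
    """Fan work out to a pool over a queue, then collect results --
    the same pattern Go expresses with goroutines and channels."""
    in_q: queue.Queue = queue.Queue()
    out_q: queue.Queue = queue.Queue()

    def worker() -> None:
        while True:
            job = in_q.get()
            if job is None:       # sentinel plays the role of close()
                break
            out_q.put(job * job)  # stand-in for real work

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for job in jobs:
        in_q.put(job)
    for _ in threads:             # one sentinel per worker
        in_q.put(None)
    for t in threads:
        t.join()
    return sum(out_q.get() for _ in jobs)

print(run_pool([1, 2, 3, 4]))  # 30
```

In Go the queue would be a typed channel and the sentinel a `close(ch)`, but the shape of the solution, bounded workers pulling from a shared work source, is identical.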

Posted 2 weeks ago

Apply

3.0 - 6.0 years

6 - 9 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

3+ years of experience in the skills below.

Must Have:
- Troubleshoot platform issues
- Manage configs via YAML/Helm/Kustomize
- Upgrade and maintain OpenShift clusters and Operators
- Support CI/CD pipelines and DevOps teams
- Monitoring with Prometheus/EFK/Grafana

Required Candidate Profile:
- Participate in CR/patch planning
- Automate provisioning (namespaces, RBAC, NetworkPolicies)
- Open to 24x7 rotational coverage / on-call support
- Immediate joiner is a plus
- Excellent communication

Posted 2 weeks ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune

Work from Office

Required Skills and Qualifications: 3+ years of backend development experience in Java (Java 8+) and Spring Boot. Strong understanding of REST APIs, JPA/Hibernate, and SQL databases (e.g., PostgreSQL, MySQL). Knowledge of software engineering principles and design patterns. Experience with testing frameworks like JUnit and Mockito. Familiarity with Docker and CI/CD tools. Good communication and team collaboration skills.

Key Responsibilities: Develop and maintain backend systems using Java (Spring Boot). Build RESTful APIs and integrate with databases and third-party services. Write unit and integration tests to ensure code quality. Participate in code reviews and collaborate with peers and senior engineers. Follow clean-code principles and best practices in microservices design. Support CI/CD deployment pipelines and container-based workflows. Continuously learn and stay updated with backend technologies.

Nice to Have: Exposure to Kubernetes and cloud platforms (AWS, GCP, etc.). Familiarity with messaging systems like Kafka or RabbitMQ. Awareness of security standards and authentication protocols (OAuth2, JWT). Interest in DevOps practices and monitoring tools (Prometheus, ELK, etc.).

Posted 2 weeks ago

Apply

5.0 - 8.0 years

18 - 20 Lacs

Noida, Madurai, Chennai

Hybrid

1. Expertise in Observability/SRE tools, platforms, and standards, including ELK Stack, Grafana, Prometheus, Loki, VictoriaMetrics, and Telegraf.
2. Familiarity with modern logging frameworks and best practices: OpenTelemetry, Kafka, etc.
3. Experience with data visualization tools like Grafana and Kibana to create informative and actionable dashboards, reports, and alerts.
4. Proficiency in scripting languages like Python, Bash, or PowerShell for automating data collection, analysis, and visualization.
5. Good to have: experience with monitoring tools SCOM and OpenSearch.
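Point 3 above is about actionable alerts. A common pitfall is firing on a single bad sample; Prometheus-style rules avoid this with a `for:` duration that requires the condition to hold for a while. A small illustrative sketch (the threshold and window are invented, not from the posting):

```python
def should_alert(samples: list, threshold: float, for_points: int) -> bool:
    """Fire only when the last `for_points` samples all breach the
    threshold -- the same idea as a Prometheus rule's `for:` clause,
    which suppresses alerts on short spikes."""
    if len(samples) < for_points:
        return False
    return all(s > threshold for s in samples[-for_points:])

cpu = [42.0, 55.0, 91.0, 93.0, 95.0]
print(should_alert(cpu, 90.0, 3))  # True: three consecutive breaches
print(should_alert(cpu, 90.0, 4))  # False: the breach is too recent
```

Tuning that hold duration is exactly the dashboards-versus-pages tradeoff: short windows page quickly but noisily, long windows stay quiet but delay detection.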

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 19 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

Kubernetes Engineer: Build bulletproof infrastructure for regulated industries. At Ajmera Infotech, we're building planet-scale software for NYSE-listed clients with a 120+ strong engineering team. Our work powers mission-critical systems in HIPAA-, FDA-, and SOC2-compliant domains where failure is not an option.

Why You'll Love It:
- Own production-grade Kubernetes deployments at real scale
- Drive TDD-first DevOps in CI/CD environments
- Work in a compliance-first org (HIPAA, FDA, SOC2) with code-first values
- Collaborate with top-tier engineers on multi-cloud deployments
- Career growth via mentorship, deep-tech projects, and leadership tracks

Key Responsibilities:
- Design, deploy, and manage resilient Kubernetes clusters (k8s/k3s)
- Automate workload orchestration using Ansible or custom scripting
- Integrate Kubernetes deeply into CI/CD pipelines
- Tune infrastructure for performance, scalability, and regulatory reliability
- Support secure multi-tenant environments and compliance needs (e.g., HIPAA/FDA)

Must-Have Skills:
- 3-8 years of hands-on experience in production Kubernetes environments
- Expert-level knowledge of containerization with Docker
- Proven experience with CI/CD integration for k8s
- Automation via Ansible, shell scripting, or similar tools
- Infrastructure performance tuning within Kubernetes clusters

Nice-to-Have Skills:
- Multi-cloud cluster management (AWS/GCP/Azure)
- Helm, ArgoCD, or Flux for deployment and GitOps
- Service mesh, ingress controllers, and pod security policies

Posted 2 weeks ago

Apply

5.0 - 8.0 years

1 Lacs

Hyderabad

Work from Office

Overview: TekWissen Group is a workforce management provider operating throughout India and several other countries worldwide. The client below is a leading technology company offering a range of IT solutions that enable businesses and organizations to transform their digital futures.

Position: Senior Software Engineer
Location: Hyderabad
Job Type: Contract
Work Type: Onsite

Key Responsibilities:
- Participate in and drive in-sprint development and test automation programs across the project life cycle, including design, development, unit testing, and functional automation.
- Create low-level technical designs.
- Strong expertise in Oracle SOA development.
- Experience with a continuous integration tool and a good understanding of the release management process.
- Keep the team and leadership updated on project status and risk factors.
- Work with the engineering manager, product owners, architects, and developers to ensure the product is delivered with quality, on time, and within budget.
- Develop and document technical specifications and designs from which composite applications can be built to satisfy business requirements, ensuring solutions are flexible and extensible.
- Experience in customer interaction.
- Good knowledge of WebLogic server configuration components (JMS, Data Sources, Connection Pools, Clustering, etc.) for all environments.

Essential Requirements:
- 5-8 years of experience. Primary: Core Java, J2EE, Hibernate, Spring Boot, Microservices, and React with UI/UX. Secondary: SQL, Kafka, ELK, Mongo.
- Expert technical knowledge and experience in Oracle SOA technologies (BPEL, Workflow, Rules, OSB); should have worked on Oracle BPM and Oracle SOA Suite BAM.
- Very good exposure to JDeveloper 10g/11g/12c, Oracle Application Server, WebLogic Server, Oracle Application Adapters, and SQL (SQL Server, Oracle 11g/12c or greater).
- Expert technical knowledge of Tomcat and WebLogic concepts (Domains, Servers, Machines, Clusters), plus web service security and governance.
- Good knowledge of web services: WSDL, WADL, XSD, XSLT, JSON, UDDI, SOAP, XPath, and XQuery.
- Good understanding of web services security (OAuth, etc.), DB Adapter, File/FTP Adapter, Canonical Data Model, XA/Non-XA transactions, Rules Engine, and Human Workflow.
- Strong in core Java, REST web services in Java, and socket programming in Java.
- Experience in design and development for application builds with OSB, SOA Suite, DB Adapters, JMS, MQ Adapters, and Oracle Business Rules.
- Ensure that all code, technical configurations, and other work products are thoroughly unit-tested prior to delivery.
- Perform code reviews and other QA steps as requested; capable of suggesting best practices and alternate solutions for the applications.
- Experience with the complete SOA development lifecycle and performance tuning for SOA applications; experience orchestrating services using OSB 12c and knowledge of ESB/OSB and Mediator.
- Enterprise design patterns, object-oriented design, JDeveloper, Service Component Architecture (SCA), and web services (SOAP, RESTful).
- Exposure to cloud technologies (Azure, VMware, etc.), containerization, Docker, and Kubernetes.
- Expertise in Splunk platform engineering and other monitoring tools such as Riverbed and AppDynamics.
- Experience administering and deploying development CI/CD tools such as Git, Jira, GitLab, Jenkins, and Ant.
- Core Java and J2EE with Spring Boot and microservices: solid experience in backend development using Java technologies and microservices architecture.
- WebLogic server and middleware configuration: in-depth knowledge of WebLogic server components (JMS, Data Sources, Clustering, etc.).
- CI/CD and DevOps tools: hands-on experience with tools like Git, Jenkins, and Jira, and understanding of release management processes.

Experience: Total 5-8 years; relevant 5+ years.

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
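Among the listed skills are WSDL/XSD/SOAP and XPath. As a small illustration (the namespace URI and element names are invented for the example), Python's stdlib `xml.etree.ElementTree` supports enough XPath to pull fields out of a SOAP-style response:

```python
import xml.etree.ElementTree as ET

SOAP = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                         xmlns:ord="http://example.com/orders">
  <soap:Body>
    <ord:OrderResponse>
      <ord:OrderId>A-1042</ord:OrderId>
      <ord:Status>SHIPPED</ord:Status>
    </ord:OrderResponse>
  </soap:Body>
</soap:Envelope>"""

# Prefixes in the XPath expressions resolve through this mapping.
ns = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "ord": "http://example.com/orders",
}

root = ET.fromstring(SOAP)
# ElementTree implements a useful subset of XPath 1.0.
status = root.find(".//ord:Status", ns).text
order_id = root.find("soap:Body/ord:OrderResponse/ord:OrderId", ns).text
print(order_id, status)  # A-1042 SHIPPED
```

In an Oracle SOA context the same expressions appear in BPEL assigns and OSB message flows; the point here is only how namespace-qualified XPath addresses nodes in an envelope.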

Posted 2 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Delhi

On-site

We are looking for a Senior Data Architect to join the team at Wingify in Delhi. As a Senior Data Architect, you will lead and mentor a team of data engineers, optimize scalable data infrastructure, drive data governance frameworks, collaborate with cross-functional teams, and ensure data security, compliance, and quality. Your role will involve optimizing data processing workflows, fostering a culture of innovation and technical excellence, and aligning technical strategy with business objectives.

To be successful in this role, you should have at least 10 years of experience in software/data engineering, with a minimum of 3 years in a leadership position. You should possess expertise in backend development using languages such as Java, PHP, Python, Node.js, GoLang, and JavaScript, along with HTML and CSS. Proficiency in SQL, Python, and Scala for data processing and analytics is essential, as is a strong understanding of cloud platforms such as AWS, GCP, or Azure and their data services. Additionally, you should have experience with big data technologies like Spark, Hadoop, Kafka, and distributed computing frameworks, as well as hands-on experience with data warehousing solutions like Snowflake, Redshift, or BigQuery. Deep knowledge of data governance, security, and compliance, along with familiarity with NoSQL databases and automation/DevOps tools, is required. Strong leadership, communication, and stakeholder management skills are crucial for this role.

Preferred qualifications include experience in machine learning infrastructure or MLOps, exposure to real-time data processing and analytics, and interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture. Prior experience in a SaaS or high-growth tech company would be advantageous.

Please note that candidates must have a minimum of 10 years of experience to be eligible for this role. Graduation from a Tier-1 college, such as an IIT, is preferred. Candidates from B2B product companies with high data traffic are encouraged to apply; those who do not meet these criteria are kindly requested not to apply.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

We are looking for an experienced DevOps Architect to spearhead the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate possesses in-depth expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across various cloud environments. This role requires strong leadership skills and the ability to mentor team members effectively.

Your responsibilities will include leading and overseeing the DevOps team to ensure the reliability of infrastructure and automated deployment processes. You will design, implement, and maintain highly available, scalable, and secure cloud infrastructure on platforms such as AWS, Azure, and GCP. Developing and optimizing CI/CD pipelines for multiple applications and environments will be a key focus, along with driving Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Monitoring, logging, and alerting solutions will fall under your purview to ensure system health and performance. Collaboration with Development, QA, and Security teams to integrate DevOps best practices throughout the SDLC is essential. You will also lead incident management and root cause analysis for production issues, ensuring robust security practices for infrastructure and pipelines. Guiding and mentoring team members to foster a culture of continuous improvement and technical excellence will be crucial, as will evaluating and recommending new tools, technologies, and processes to enhance operational efficiency.

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field; Master's degree preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA).
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Skills & Abilities:
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Conditions:
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: A light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.

Values: Our values at AOT guide how we work, collaborate, and grow as a team. Every role is expected to embody and promote values such as innovation, integrity, ownership, agility, collaboration, and empowerment.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

8 - 13 Lacs

Pune

Work from Office

Essential:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 5+ years of Java backend experience with Spring Boot (including MVC, Security, WebFlux)
- Experience with REST API design and secure development practices
- Testing experience across unit, integration, and performance levels (e.g., JUnit, Mockito, WireMock)
- Familiarity with CI/CD and Git-based workflows (e.g., GitLab)
- Solid experience with core AWS services (e.g., SQS, S3, DynamoDB)
- Understanding of distributed-systems concepts (scalability, resilience, consistency)
- Experience mentoring engineers and promoting engineering best practices
- Exposure to MuleSoft (RAML, MUnit) in a migration or integration context

Desirable:
- Experience with API gateways (e.g., Apigee)
- Exposure to containerisation and event-driven systems
- Experience with observability tools (e.g., Dynatrace, ELK)
- Prior involvement in platform migration or system modernisation initiatives

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Andhra Pradesh

On-site

The role of Technical Architect for the IoT Platform requires a highly skilled individual with over 10 years of experience, expertise in Java Spring Boot, React.js, and IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will design scalable, secure, and high-performance IoT solutions, lead full-stack teams, and collaborate with product, infrastructure, and data teams to ensure the successful implementation of IoT projects.

Architecture and design: implement scalable and secure IoT platform architecture, define and maintain architecture blueprints and technical documentation, lead technical decision-making, and ensure adherence to best practices and coding standards. You will architect microservices-based solutions using Spring Boot, integrate them with React-based front-ends, and define data flows, event-processing pipelines, and device communication protocols.

IoT domain expertise: architect solutions for real-time sensor data ingestion, processing, and storage; work closely with hardware and firmware teams on device-to-cloud communication; support multi-tenant, multi-protocol device integration; and guide the design of edge computing, telemetry, alerting, and digital-twin models.

DevOps and infrastructure: define CI/CD pipelines, manage containerization and orchestration, drive infrastructure automation, ensure platform monitoring, logging, and observability, and enable auto-scaling, load balancing, and zero-downtime deployments.

Leadership: collaborate with product managers and business stakeholders, mentor and lead a team of developers and engineers, conduct code and architecture reviews, set goals and targets, organize features and sprints, and provide coaching and professional development to team members.

Your technical skills should include backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is also required.

Other must-haves include hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team leadership skills. Experience with edge computing frameworks, AI/ML model integration, industrial protocols, digital-twin concepts, and relevant certifications in AWS/GCP, Kubernetes, or Spring would be beneficial. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Team Lead in DevOps with 6+ years of experience, you will manage, mentor, and develop a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI, and handle environment-specific configurations for staging, production, and QA.

Your responsibilities will include containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm, and implementing and maintaining Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation. Monitoring application health and infrastructure with tools such as Prometheus, Grafana, ELK, and Datadog, and ensuring systems are secure, resilient, and compliant with industry standards, will also be part of your role. Optimizing cloud costs and infrastructure performance, collaborating with development, QA, and IT support teams, and troubleshooting performance, deployment, and scaling issues across tech stacks are essential tasks.

To excel in this role, you must have at least 6 years of hands-on experience in DevOps/Cloud/System Engineering roles, including a minimum of 2 years managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential, along with experience with Git, SSH, reverse proxies (Nginx), and load balancers. Good communication skills and client-facing exposure are crucial.

Preferred certifications that are highly valued include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Experience with Magento Cloud DevOps or Odoo deployment is considered a bonus. Nice-to-have skills include experience with multi-region failover, HA clusters, and RPO/RTO-based design; familiarity with MySQL/PostgreSQL optimization; knowledge of Redis, RabbitMQ, or Celery; previous experience with GitOps, ArgoCD, Helm, or Ansible Tower; and knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Posted 2 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a DevOps Manager, you will lead our DevOps efforts across a suite of modern and legacy applications, including Odoo (Python), Magento (PHP), Node.js, and other web-based platforms. Your main duties will include managing, mentoring, and growing a team of DevOps engineers; overseeing the deployment and maintenance of various applications; designing and managing CI/CD pipelines; handling environment-specific configurations; containerizing applications; implementing and maintaining Infrastructure as Code; monitoring application health and infrastructure; ensuring system security and compliance; optimizing cloud cost and performance; collaborating with cross-functional teams; and troubleshooting technical issues.

To be successful in this role, you should have at least 8 years of hands-on experience in DevOps/Cloud/System Engineering roles, including 2+ years of experience managing or leading DevOps teams. You should have experience supporting and deploying applications like Odoo, Magento, and Node.js, along with strong scripting skills in Bash, Python, PHP CLI, or Node CLI. Additionally, you should have a deep understanding of Linux system administration, networking fundamentals, AWS/Azure/GCP infrastructure, Git, SSH, reverse proxies, and load balancers. Good communication skills and client management exposure are also essential for this position.

Preferred certifications for this role include AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Bonus skills include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, VAPT 2.0, WCAG compliance, and infrastructure security best practices.

In summary, as a DevOps Manager, you will play a crucial role in leading our DevOps efforts and ensuring the smooth deployment, maintenance, and optimization of various applications while collaborating with different teams and implementing best practices in infrastructure management and security.

Posted 2 weeks ago

Apply

2.0 - 7.0 years

2 - 7 Lacs

Bengaluru

Work from Office

Experience: Minimum 5+ years of experience in enterprise Elastic, Kibana, and Logstash (ELK stack for SIEM) administration, including designing, deploying, and managing SOC environments and deploying Microsoft Sentinel content.
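The SIEM administration described above leans on parsing raw logs into indexed fields. A sketch of what a Logstash grok pattern does, reduced to a named-group regex over an example SSH auth-log line (the log format and field names are illustrative, not from the posting):

```python
import re

# Grok-style parsing: turn a raw syslog-shaped auth line into the structured
# fields a SIEM pipeline would index (timestamp, host, event, user, source IP).
PATTERN = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ [\d:]+) (?P<host>\S+) sshd\[\d+\]: "
    r"(?P<event>Failed|Accepted) password for (?P<user>\S+) "
    r"from (?P<src_ip>[\d.]+)"
)

def parse_auth_line(line):
    m = PATTERN.search(line)
    return m.groupdict() if m else None

line = ("Jan  7 03:21:45 bastion sshd[2211]: "
        "Failed password for root from 203.0.113.9 port 51312 ssh2")
print(parse_auth_line(line))
```

In an ELK deployment the equivalent pattern lives in the Logstash pipeline config rather than application code; the extraction logic is the same.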

Posted 2 weeks ago

Apply

21.0 - 31.0 years

35 - 42 Lacs

Bengaluru

Work from Office

What we’re looking for: As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact in designing, engineering, and maintaining our Cloud, Messaging, and Observability Platform: solutioning with best practices, deployment processes, and architecture, and supporting the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies, and offering mentorship to other engineers.

What you’ll be working on:
- Architect, build, and operate AWS environments at scale with well-established industry best practices
- Automate infrastructure provisioning, DevOps, and/or continuous integration/delivery
- Support and maintain AWS services, such as EKS and Heroku
- Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems
- Support and partner with other teams on improving our observability systems to monitor site stability and performance
- Work closely with developers in supporting new features and services
- Work in a highly collaborative team environment
- Participate in on-call rotation

We’d love to hear from people with:
- 8+ years of relevant professional experience with cloud platforms such as AWS and Heroku
- Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm
- Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, or ELK (Elasticsearch/Logstash/Kibana)
- Experience with metrics and logging libraries and aggregators, and data analysis and visualization tools, specifically Splunk and OTel
- Experience instrumenting PHP, Python, Java, and Node.js applications to send metrics, traces, and logs to third-party observability tooling
- Experience with GitOps and tools like ArgoCD/FluxCD
- Interest in instrumentation and optimization of Kubernetes clusters
- Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms
- Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment
- Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka, and Debezium
- Preferably, experience with secrets management, for example HashiCorp Vault
- Preferably, experience in an agile environment and JIRA

SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI-Hybrid
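The "simple, unified interface" for metrics that this role describes can be pictured as a tiny aggregator: callers increment counters and record timings, and a flush emits one aggregated, statsd-style line per metric. This is an illustrative toy, not SurveyMonkey's actual library:

```python
from collections import defaultdict

class Metrics:
    """Toy metrics facade: aggregate in memory, emit statsd-style lines on flush."""

    def __init__(self):
        self.counters = defaultdict(int)
        self.timers = defaultdict(list)

    def incr(self, name, value=1):
        self.counters[name] += value

    def timing(self, name, ms):
        self.timers[name].append(ms)

    def flush(self):
        # Counters emit as "<name>:<count>|c", timers as mean "<name>:<ms>|ms".
        lines = [f"{k}:{v}|c" for k, v in sorted(self.counters.items())]
        lines += [f"{k}:{sum(v) / len(v):.1f}|ms"
                  for k, v in sorted(self.timers.items())]
        self.counters.clear()
        self.timers.clear()
        return lines

m = Metrics()
m.incr("http.requests"); m.incr("http.requests")
m.timing("http.latency", 120); m.timing("http.latency", 80)
print(m.flush())  # ['http.requests:2|c', 'http.latency:100.0|ms']
```

A production library would ship lines to an agent (Splunk, OTel collector) instead of returning them, but the aggregation contract is the same.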

Posted 2 weeks ago

Apply

12.0 - 16.0 years

37 - 42 Lacs

Bengaluru

Work from Office

Job Objective: As AVP/VP Architect, lead the design and development of scalable, reliable, and high-performance architecture for Zwayam.

Job Description: In this role you will:
- Hands-on Coding & Code Review: Actively participate in coding and code reviews, ensuring adherence to best practices, coding standards, and performance optimization.
- High-Level and Low-Level Design: Create comprehensive architectural documentation that guides the development team and ensures the scalability and security of the system.
- Security Best Practices: Implement security strategies, including data encryption, access control, and threat detection, ensuring the platform adheres to the highest security standards.
- Compliance Management: Oversee compliance with regulatory requirements such as GDPR, including data protection, retention policies, and audit readiness.
- Disaster Recovery & Business Continuity: Design and implement disaster recovery strategies to ensure the reliability and continuity of the system in case of failures or outages.
- Scalability & Performance Optimization: Ensure the system architecture can scale seamlessly and optimize performance as business needs grow.
- Monitoring & Alerting: Set up real-time monitoring and alerting systems to ensure proactive identification and resolution of performance bottlenecks, security threats, and system failures.
- Cross-Platform Deployment: Architect flexible, cloud-agnostic solutions and manage deployments on Azure and AWS platforms.
- Containerization & Orchestration: Use Kubernetes and Docker Swarm for container management and orchestration to achieve a high degree of automation and reliability in deployments.
- Data Management: Manage database architecture using MySQL, MongoDB, and Elasticsearch to ensure efficient storage, retrieval, and management of data.
- Message Queuing Systems: Design and manage asynchronous communication using Kafka and Redis for event-driven architecture.
- Collaboration & Leadership: Work closely with cross-functional teams including developers, product managers, and other stakeholders to deliver high-quality solutions on time.
- Mentoring & Team Leadership: Mentor, guide, and lead the engineering team, fostering technical growth and maintaining adherence to architectural and coding standards.

Required Skills:
- Experience: 12+ years of experience in software development and architecture, with at least 3 years in a leadership/architect role.
- Technical Expertise: Proficient in Java and related frameworks like Spring Boot. Experience with databases like MySQL, MongoDB, and Elasticsearch, and message queuing systems like Kafka and Redis. Proficiency with containerization (Docker, Docker Swarm) and orchestration (Kubernetes). Solid experience with cloud platforms (Azure, AWS, GCP). Experience with monitoring tools (e.g., Prometheus, Grafana, ELK stack) and alerting systems for real-time issue detection and resolution.
- Compliance & Security: Hands-on experience in implementing security best practices. Familiarity with compliance frameworks such as GDPR and DPDP.
- Architecture & Design: Proven experience in high-level and low-level architectural design.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to handle complex and ambiguous situations.
- Leadership: Proven ability to lead teams, influence stakeholders, and drive change.
- Communication: Excellent verbal and written communication skills.

Our Ideal Candidate: The ideal candidate should possess a deep understanding of the latest architectural patterns, cloud-native design, and security practices. They should be adept at translating business requirements into scalable and efficient technical solutions. A proactive, hands-on approach to problem-solving and a passion for innovation are essential. Strong leadership and mentoring skills are crucial to drive a high-performance team and foster technical excellence.
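The event-driven architecture the role describes rests on one contract: publishers write to a topic, subscribers react. A minimal in-process sketch of that contract (Kafka or Redis supplies the durable, distributed version; topic and payload names here are invented):

```python
from collections import defaultdict

class EventBus:
    """In-process publish/subscribe: topics map to lists of handlers."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Topics with no subscribers are simply a no-op.
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("candidate.applied", received.append)
bus.publish("candidate.applied", {"job_id": 42, "name": "A. Candidate"})
print(received)  # [{'job_id': 42, 'name': 'A. Candidate'}]
```

The decoupling is the point: the publisher never learns who consumes the event, which is what makes the pattern scale across services.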

Posted 2 weeks ago

Apply

1.0 - 4.0 years

4 - 7 Lacs

Pune

Work from Office

Job Summary: We are seeking a proactive and detail-oriented Site Reliability Engineer (SRE) focused on monitoring to join our observability team. The candidate will be responsible for ensuring the reliability, availability, and performance of our systems through robust monitoring, alerting, and incident response practices.

Key Responsibilities:
- Monitor the application and IT infrastructure environment
- Drive end-to-end incident response and resolution
- Design, implement, and maintain monitoring and alerting systems for infrastructure and applications
- Continuously improve observability by integrating logs, metrics, and traces into a unified monitoring platform
- Collaborate with development and operations teams to define and track SLIs, SLOs, and SLAs
- Analyze system performance and reliability data to identify trends and potential issues
- Participate in incident response, root cause analysis, and post-mortem documentation
- Automate repetitive monitoring tasks and improve alert accuracy to reduce noise

Required Skills & Qualifications:
- 2+ years of experience in application/system monitoring, SRE, or DevOps roles
- Proficiency with monitoring tools such as Prometheus, Grafana, ELK, APM, Nagios, Zabbix, Datadog, or similar
- Strong scripting skills (Python, Bash, or similar) for automation
- Experience with cloud platforms (AWS, Azure) and container orchestration (Kubernetes)
- Solid understanding of Linux/Unix systems and networking fundamentals
- Excellent problem-solving and communication skills
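Tracking SLIs against SLOs comes down to error-budget arithmetic: a 99.9% availability SLO over a window permits 0.1% of requests to fail, and burn is measured against that allowance. A sketch with illustrative numbers:

```python
def error_budget(slo, total_requests, failed_requests):
    """Return (failures still allowed this window, fraction of budget burned)."""
    budget = total_requests * (1 - slo)          # failures the SLO allows
    remaining = budget - failed_requests
    burn = failed_requests / budget if budget else float("inf")
    return round(remaining), round(burn, 2)

# 99.9% SLO, 1,000,000 requests this window, 400 failures observed:
print(error_budget(0.999, 1_000_000, 400))  # (600, 0.4) -> 40% of budget burned
```

Alerting on the burn rate rather than raw failure counts is what lets teams page only when the SLO is genuinely at risk, which directly serves the "reduce noise" responsibility above.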

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 9 Lacs

Pune

Hybrid

So, what’s the role all about? We're looking for a passionate and hands-on DevOps Engineer with 3–5 years of experience to help us scale, automate, and secure our cloud infrastructure. This role is ideal for someone who thrives in a dynamic environment, enjoys solving infrastructure challenges, and loves working with cutting-edge DevOps tools. As part of our engineering team, you’ll play a key role in managing cloud resources, enhancing CI/CD workflows, and enabling development teams to move faster and safer.

How will you make an impact?
- Cloud Infrastructure: Design, implement, and manage scalable, secure AWS-based infrastructure.
- Infrastructure as Code: Use Terraform (or CloudFormation) to provision and maintain infrastructure in a repeatable way.
- CI/CD Pipelines: Develop and enhance continuous integration and deployment pipelines using Jenkins, GitHub Actions, or Spacelift.
- Automation & Scripting: Build automation scripts in Python, Shell, or Groovy to reduce manual effort and improve system reliability.
- Monitoring & Logging: Set up monitoring tools and alerting systems to ensure uptime and performance (e.g., CloudWatch, ELK, Prometheus).
- Collaboration: Work closely with developers, QA, and other DevOps teams to support smooth and secure delivery workflows.

Have you got what it takes?
- 3–5 years of experience in a DevOps or SRE role.
- Strong hands-on knowledge of AWS (EC2, VPC, S3, RDS, ECS, Route53, etc.).
- Experience with Terraform (or CloudFormation) for infrastructure management.
- Solid understanding of Docker and containerized workflows.
- Expertise in CI/CD tools like Jenkins, GitHub Actions, and Spacelift.
- Proficiency in scripting languages (Python, Shell, Groovy).
- Experience with Git, version control workflows, and team collaboration tools (JIRA, Confluence).
- Strong problem-solving ability, attention to detail, and eagerness to learn and grow in a fast-paced team.

What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Reporting into: Tech Manager Role Type: Individual Contributor
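The "repeatable way" that infrastructure-as-code tools like Terraform achieve starts with a plan step: diff desired state against actual state and classify every resource as create, update, or delete. A stripped-down sketch of that idea, with invented resource names (real tools also handle dependencies, providers, and state locking):

```python
def plan(desired, actual):
    """Classify resources by comparing desired vs. actual attribute dicts."""
    create = sorted(set(desired) - set(actual))
    delete = sorted(set(actual) - set(desired))
    update = sorted(
        k for k in set(desired) & set(actual) if desired[k] != actual[k]
    )
    return {"create": create, "update": update, "delete": delete}

desired = {"aws_s3_bucket.logs": {"versioning": True},
           "aws_instance.web": {"type": "t3.small"}}
actual = {"aws_instance.web": {"type": "t3.micro"},
          "aws_sqs_queue.old": {"fifo": False}}
print(plan(desired, actual))
# {'create': ['aws_s3_bucket.logs'], 'update': ['aws_instance.web'],
#  'delete': ['aws_sqs_queue.old']}
```

Because the plan is computed before anything is applied, it can be reviewed in a pull request, which is what makes IaC changes auditable.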

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

NTT DATA is looking for an Azure Cloud Engineer to join their team in Bangalore, Karnataka, India. As an Azure Cloud Engineer, you will work in the banking domain as an Azure consultant.

To qualify for this position, you should have a Bachelor's/Master's degree in Computer Science or Data Science, along with 5 to 8 years of experience in software development and proficiency in data structures/algorithms. You should have 5 to 7 years of experience working with programming languages such as Python or Java, database languages like SQL, and NoSQL databases. Additionally, you should have 5 years of experience in developing large-scale platforms, distributed systems, and networks, with a good understanding of microservices architecture.

Experience building AKS applications on Azure and a strong understanding of Kubernetes for ensuring the availability and scalability of applications in Azure Kubernetes Service are required for this role. You should also have experience in deploying applications on Azure using third-party tools like Docker, Kubernetes, and Terraform. Knowledge of building and working with AKS clusters, VNETs, NSGs, Azure storage technologies, and Azure container registries is essential. Familiarity with building applications using Redis, Elasticsearch, and MongoDB is preferred, along with experience working with RabbitMQ.

In addition, you should have an end-to-end understanding of ELK, Azure Monitor, Datadog, Splunk, and the logging stack. Experience with development tools and CI/CD pipelines like GitLab CI/CD, Artifactory, CloudBees, Jenkins, Helm, and Terraform is necessary. An understanding of IAM roles on Azure and integration/configuration experience is also required. Preferably, you should have experience working on DataRobot setup or similar applications on Cloud/Azure. Functional, integration, and security testing, along with performance validation, are part of the responsibilities for this role.

NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. NTT DATA's services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is known as one of the leading providers of digital and AI infrastructure globally, and is part of the NTT Group, which invests significantly in R&D each year to support organizations and society in moving confidently and sustainably into the digital future. Visit us at us.nttdata.com
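A stack that puts Redis in front of MongoDB or SQL, as described above, usually implements the cache-aside pattern: read through the cache, fall back to the store on a miss, and honour a TTL. A sketch with an injectable clock so the expiry logic is testable (the key and loader are invented for illustration):

```python
import time

class TTLCache:
    """Cache-aside with TTL: get_or_load reads the cache, falling back to loader."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._data = {}  # key -> (value, stored_at)

    def get_or_load(self, key, loader):
        entry = self._data.get(key)
        if entry and self.clock() - entry[1] < self.ttl:
            return entry[0], "hit"
        value = loader(key)                      # e.g. a database lookup
        self._data[key] = (value, self.clock())
        return value, "miss"

now = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
load = lambda k: f"row-for-{k}"
print(cache.get_or_load("user:1", load))  # ('row-for-user:1', 'miss')
print(cache.get_or_load("user:1", load))  # ('row-for-user:1', 'hit')
now[0] = 31.0                              # TTL elapsed
print(cache.get_or_load("user:1", load))  # ('row-for-user:1', 'miss')
```

With Redis the TTL is handled server-side (`SETEX`), but the read-through/fallback flow in application code is the same.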

Posted 2 weeks ago

Apply

3.0 - 5.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a skilled and innovative Cloud Engineer to join our team. As a Cloud Engineer, you will be responsible for developing and maintaining cloud-based solutions, with a focus on coding solutions to complex problems, automation using Golang and Python, and collaboration with the Site Reliability Engineering (SRE) team for feature deployment in production. The ideal candidate should also be proficient in utilizing AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.

Responsibilities:
• Develop, test, and maintain cloud-based applications and services using Golang and Python.
• Write clean, efficient, and maintainable code to solve complex problems and improve system performance.
• Collaborate with cross-functional teams to understand requirements and design scalable and secure cloud solutions.
• Automate deployment, scaling, and monitoring of cloud-based applications and infrastructure.
• Work closely with the SRE team to ensure smooth feature deployment in production environments.
• Utilize AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.
• Troubleshoot and resolve issues related to cloud infrastructure, performance, and security.
• Stay up to date with emerging technologies and industry trends to continuously improve cloud-based solutions.
• Participate in code reviews and knowledge-sharing sessions, and contribute to the improvement of development processes.

Job Requirements:
• Strong programming skills in Golang and Python.
• Proficiency in using AI tools like Copilot to enhance productivity in automation, documentation, and unit test writing.
• Solid understanding of cloud computing concepts and services (e.g., AWS, Azure, Google Cloud).
• Experience with containerization technologies (e.g., Docker, Kubernetes) and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
• Proficiency in designing and implementing RESTful APIs and microservices architectures.
• Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI/CD).
• Knowledge of networking concepts, security best practices, and system administration.
• Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
• Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.

Preferred Skills:
• Experience with other programming languages, such as Java, C++, or Ruby.
• Knowledge of database technologies (e.g., SQL, NoSQL) and data storage solutions.
• Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack).
• Understanding of Agile/Scrum methodologies and DevOps principles.
• Certifications in cloud technologies (e.g., AWS Certified Cloud Practitioner, Google Cloud Certified - Associate Cloud Engineer) would be a plus.

If you are passionate about cloud technologies, have a strong problem-solving mindset, and enjoy working in a collaborative environment, we would love to hear from you. Join our team and contribute to building scalable, reliable, and secure cloud solutions. Please note that this job description is not exhaustive and may change based on the organization's needs.

Education: A Bachelor of Science degree in Engineering or Computer Science with 2 years of experience, or a Master's degree, or equivalent experience is typically required.
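Cloud automation code of the kind listed above leans heavily on retrying flaky API calls with exponential backoff. A minimal sketch with an injectable `sleep` so it runs instantly under test (the failing function is invented for illustration; delays double each attempt):

```python
def retry(fn, attempts=4, base_delay=1.0, sleep=lambda s: None):
    """Call fn, retrying on any exception with exponentially growing delays.

    Returns (result, delays_used). Re-raises after the final attempt.
    """
    delays = []
    for attempt in range(attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == attempts - 1:
                raise
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            delays.append(delay)
            sleep(delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient cloud API error")
    return "ok"

print(retry(flaky))  # ('ok', [1.0, 2.0])
```

Production versions usually add jitter to the delay to avoid synchronized retry storms across clients; the control flow is otherwise the same.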

Posted 2 weeks ago

Apply

3.0 - 7.0 years

15 - 25 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Salary: 15-20 LPA | Experience: 3 to 5 years | Location: Gurgaon/Pune | Notice period: Immediate to 15 days

Job Title: AWS DevOps Engineer

Job Description: We are seeking a highly skilled AWS DevOps Engineer with extensive experience in Chef and CloudWatch. The ideal candidate will have a strong background in cloud infrastructure, automation, and monitoring.

Key Responsibilities:
- Design, implement, and manage scalable and reliable cloud infrastructure on AWS.
- Automate provisioning, configuration management, and deployment using Chef.
- Monitor system performance and reliability using AWS CloudWatch and other monitoring tools.
- Develop and maintain CI/CD pipelines to ensure smooth and efficient software releases.
- Collaborate with development, QA, and operations teams to ensure high availability and reliability of applications.
- Troubleshoot and resolve infrastructure and application issues in a timely manner.
- Implement security best practices and ensure compliance with industry standards.
- Optimize infrastructure for cost and performance.
- Maintain documentation related to infrastructure, processes, and procedures.
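The CloudWatch monitoring mentioned above evaluates alarms on an "M out of N datapoints breaching" rule: the alarm fires only when at least M of the last N datapoints cross the threshold, which filters out single spikes. A sketch of that evaluation with illustrative numbers (not from the posting):

```python
def alarm_state(datapoints, threshold, n, m):
    """Fire when at least m of the last n datapoints exceed the threshold."""
    window = datapoints[-n:]
    breaching = sum(1 for v in window if v > threshold)
    return "ALARM" if breaching >= m else "OK"

cpu_percent = [41.0, 55.2, 78.9, 92.3, 95.1]
print(alarm_state(cpu_percent, threshold=90.0, n=3, m=2))  # ALARM (2 of last 3 > 90)
print(alarm_state(cpu_percent, threshold=90.0, n=3, m=3))  # OK (78.9 did not breach)
```

Requiring m > 1 is the simplest way to trade a little detection latency for far fewer false pages.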

Posted 2 weeks ago

Apply

8.0 - 12.0 years

27 - 42 Lacs

Chennai

Work from Office

Job summary: The Sr. Business Analyst will play a pivotal role in analyzing and optimizing business processes through the application of technical skills in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk. This hybrid role requires a seasoned professional with 8 to 12 years of experience to drive impactful solutions in a day-shift setting, with no travel required.

Responsibilities:
- Analyze business processes and identify areas for improvement using advanced technical skills.
- Collaborate with cross-functional teams to gather and document business requirements.
- Develop and implement monitoring solutions using Grafana and ELK to ensure system reliability.
- Utilize Dynatrace AppMon and Splunk to troubleshoot and resolve performance issues.
- Provide insights and recommendations based on data analysis to enhance business operations.
- Lead the design and execution of test plans to validate system changes.
- Ensure seamless integration of new solutions with existing systems and processes.
- Oversee the deployment of updates and enhancements in a hybrid work environment.
- Maintain comprehensive documentation of processes, configurations, and changes.
- Conduct training sessions to educate stakeholders on new tools and processes.
- Monitor system performance and proactively address potential issues.
- Collaborate with IT teams to ensure alignment with business objectives.
- Drive continuous improvement initiatives to optimize system performance and user experience.

Qualifications:
- A strong background in SRE, Grafana, ELK, Dynatrace AppMon, and Splunk.
- Excellent analytical and problem-solving skills.
- Proficiency in documenting business processes and technical specifications.
- Experience in leading cross-functional teams and projects.
- Capability in developing and executing test plans.
- Strong communication skills to interact with stakeholders.
- Adept at working in a hybrid work model and managing day-shift responsibilities.
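The latency panels that Grafana and Splunk render for roles like this rest on percentile arithmetic; a nearest-rank p95 over a sample window can be sketched as follows (the sample values are illustrative):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n), 1-indexed."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 220, 14, 13, 16, 18, 12, 900]
print(percentile(latencies_ms, 95))  # 900: one slow request dominates p95
print(percentile(latencies_ms, 50))  # 14: the median hides the outliers
```

This is why dashboards track p95/p99 alongside the median: averages and medians can look healthy while the slowest requests, the ones users complain about, degrade badly.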
Certifications Required: Certified Business Analysis Professional (CBAP); Dynatrace Associate Certification

Posted 2 weeks ago

Apply