Experience: 6.0 - 10.0 years | Salary: 0 Lacs | Location: Noida, Uttar Pradesh | On-site
We are looking for a highly skilled Lead DevOps Engineer to join our dynamic team at LUMIQ. As the Lead DevOps Engineer, you will oversee the day-to-day operations of our DevOps team, manage stakeholders effectively, and handle executive reporting. You will lead our teams in establishing and following best practices, implementing efficient DevOps practices and designs for both internal and customer projects. These projects involve building and deploying Data & AI platforms, our products, and their use cases. Your main responsibility will be to ensure that all DevOps engagements are completed on time while providing the necessary support and engagement to stakeholders throughout the process.

To be successful in this role, you should have expert knowledge of Linux system and networking internals, as well as container runtimes. You should also have significant experience with major cloud providers such as AWS, GCP, or Azure. Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, a solid understanding of container environments, dockerising applications, orchestration using Kubernetes, Helm, and service mesh deployments are essential. Experience with open-source tooling and configuration, development and deployment of CI/CD on the cloud, and server-side scripting using Bash or Python is required. Excellent communication, collaboration, and documentation skills are a must, along with strong analytical, troubleshooting, and problem-solving abilities. You should be a team player with strong collaboration, prioritization, and adaptability skills. Understanding end-to-end business requirements, proposing feasible solutions, and implementing error-free products for clients are key aspects of this role. You should also have experience in systems and architectural design, and in mentoring and coaching fellow engineers.

Qualifications include 6+ years of experience in infrastructure, DevOps, platform, or SRE engineering, along with a thorough understanding of the software development lifecycle and infrastructure engineering best practices. A Bachelor's degree in Computer Science, Technology, or equivalent experience is required, as is proven experience leading and managing a team of DevOps engineers in previous roles.

Join us at LUMIQ and be part of an innovative and collaborative work environment. Enjoy competitive salary packages, group medical policies, equal employment opportunities, maternity leave, and opportunities for upskilling and exposure to the latest technologies. We also offer 100% sponsorship for certifications.
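As a small illustration of the Kubernetes orchestration and Python scripting skills listed above, here is a minimal, hypothetical sketch (not part of the posting) that uses the official kubernetes Python client to report pods that are not healthy; the namespace and local kubeconfig are assumptions for the example.

```python
# Hypothetical example: report non-Running pods in a namespace.
# Assumes a reachable cluster and a local kubeconfig (e.g. ~/.kube/config).
from kubernetes import client, config


def report_unhealthy_pods(namespace: str = "default") -> list[str]:
    """Return names of pods whose phase is not 'Running' or 'Succeeded'."""
    config.load_kube_config()  # use the current kubeconfig context
    v1 = client.CoreV1Api()
    unhealthy = []
    for pod in v1.list_namespaced_pod(namespace=namespace).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            unhealthy.append(f"{pod.metadata.name}: {phase}")
    return unhealthy


if __name__ == "__main__":
    for line in report_unhealthy_pods("default"):
        print(line)
```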
Posted 1 week ago
Experience: 3.0 - 7.0 years | Salary: 0 Lacs | Location: Noida, Uttar Pradesh | On-site
Job Description: As a Senior DevOps Engineer at LUMIQ, you will be a crucial part of our team, responsible for designing and implementing DevOps solutions for both internal and external projects. Your primary focus will be on creating robust DevOps processes that enhance efficiency, scalability, and automation within the project landscape. Working at the intersection of automation, system reliability, and CI/CD pipelines, you will ensure secure, scalable, and efficient deployment processes. Your expertise will drive continuous improvement in operational workflows and contribute to delivering high-performance solutions for our customers.

You will need sound knowledge of Linux system and networking internals, as well as experience with container runtimes. You will be expected to work with cloud providers such as AWS, GCP, or Azure, and have hands-on experience with Infrastructure as Code (IaC) tools like Terraform. A solid understanding of running applications in container environments, dockerising applications, and orchestrating them using Kubernetes, Helm, and service mesh deployments is essential. Experience with open-source tooling, configuration management, CI/CD deployment on the cloud or Jenkins, and server-side scripting using Bash, Python, or similar languages is required. Strong analytical, troubleshooting, and problem-solving skills, along with effective communication, interpersonal, collaboration, and documentation skills, are necessary for success in this role.

Qualifications include a Bachelor's degree in Computer Science, Engineering, or a related field (a Master's degree is preferred) and at least 3 years of experience in infrastructure, DevOps, or platform roles, with a thorough understanding of the software development lifecycle and infrastructure engineering best practices. While not mandatory, preferred skills include exposure to databases, data analytics, and warehousing services on the cloud; familiarity with architectural and systems design best practices; experience in Professional Services; knowledge of the BFSI domain; and an understanding of Data, Snowflake, AI, and ML concepts.

Joining LUMIQ will give you the opportunity to work in an entrepreneurial culture and experience the startup hustle. You will be part of a collaborative and innovative work environment and receive a competitive salary package, group medical policies, equal employment opportunities, maternity leave, and opportunities for upskilling and exposure to the latest technologies. Additionally, 100% sponsorship for certifications is provided to support your professional growth.
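To give a flavor of the CI/CD and server-side scripting work described above, here is a minimal, hypothetical Python sketch (not from the posting) that builds and pushes a Docker image by shelling out to the docker CLI; the image name and registry are placeholder assumptions.

```python
# Hypothetical CI/CD helper: build and push a Docker image tag.
# Assumes the docker CLI is installed and the user is logged in to the registry.
import subprocess
import sys


def build_and_push(image: str, tag: str, context: str = ".") -> None:
    """Build the Docker image from `context`, tag it, and push it."""
    full_name = f"{image}:{tag}"
    subprocess.run(["docker", "build", "-t", full_name, context], check=True)
    subprocess.run(["docker", "push", full_name], check=True)


if __name__ == "__main__":
    # Usage example (placeholder registry/repo): python build_push.py v1.2.3
    tag = sys.argv[1] if len(sys.argv) > 1 else "latest"
    build_and_push("registry.example.com/team/app", tag)
```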
Posted 1 month ago
Experience: 10.0 - 14.0 years | Salary: 0 Lacs | Location: Haryana | On-site
You are a Senior-Level Subject Matter Expert (L4) specializing in Red Hat OpenShift Enterprise Administration within the Telco domain, with a focus on bare metal deployments. Your primary responsibility is to manage and enhance Telco-grade workloads for a global Tier-1 telecommunications provider, ensuring platform resilience, lifecycle governance, and continuous integration across critical services.

Your key responsibilities include serving as the OpenShift Bare Metal Admin SME for complex Telco workloads on private cloud infrastructure, overseeing end-to-end deployment, scaling, and lifecycle operations of Red Hat OpenShift on bare metal environments, and owning upgrade readiness and rollback strategies. You will collaborate closely with OEM vendors on platform bug RCA and patch governance, enable monitoring, alerting, compliance enforcement, and automation for large-scale cluster environments, and ensure platform alignment with Telco service and regulatory requirements in coordination with network, security, and architecture teams. Additionally, you will integrate with CI/CD pipelines, GitOps, observability frameworks (such as Prometheus and Grafana), and ITSM tools, drive operational maturity through SOP creation, audit compliance, RCA publishing, and incident retrospectives, and mentor junior engineers while reviewing configurations that impact production-grade OpenShift clusters.

To excel in this role, you must have at least 8 years of direct experience managing Red Hat OpenShift v4.x on bare metal; strong expertise in Kubernetes, SDN, CNI plugins, CoreOS, and container runtimes (CRI-O/containerd); experience with BIOS provisioning, PXE boot environments, and bare metal cluster node onboarding; and familiarity with Telco Cloud principles, especially NFV/CNF workloads. Proficiency in RHEL administration, Ansible automation, Terraform (optional), and compliance remediation is essential, along with an understanding of storage backends (preferably Ceph), NTP/DNS/syslog integration, and cluster certificate renewal processes. Hands-on experience with Red Hat Satellite, Quay, multi-tenant OpenShift workloads, and security features such as SELinux, SCCs, RBAC, and namespace isolation is also required.

Preferred certifications include Red Hat Certified Specialist in OpenShift Administration (EX280) and Red Hat Certified System Administrator (RHCSA), with Certified Kubernetes Administrator (CKA) as a bonus.
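As an illustration of the monitoring and cluster certificate renewal concerns mentioned above, the following hypothetical Python sketch (not part of the posting) reports how many days remain before a TLS certificate expires on an assumed API endpoint; the hostname, port, and warning threshold are placeholders.

```python
# Hypothetical check: warn when a TLS certificate is close to expiry.
# Hostname, port, and threshold are placeholder assumptions for the example.
import socket
import ssl
from datetime import datetime, timezone


def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Return the number of days until the server certificate expires."""
    context = ssl.create_default_context()  # verifies the certificate chain
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    delta = not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.days


if __name__ == "__main__":
    remaining = days_until_cert_expiry("api.cluster.example.com", 6443)
    status = "OK" if remaining > 30 else "RENEW SOON"
    print(f"{status}: certificate expires in {remaining} days")
```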
Posted 1 month ago
Experience: 5.0 - 9.0 years | Salary: 0 Lacs | Location: Maharashtra | On-site
As a liaison between Development teams and Platform (PaaS) teams, you will be responsible for translating requirements into technical tasks or support requests. You will use coding languages or scripting methodologies to solve problems with custom workflows. Your role involves documenting problems, articulating solutions or workarounds, and being a key contributor on projects, brainstorming the best way to tackle complex technological infrastructure, security, or development problems. You will be expected to learn methodologies for performing incremental testing on code, using a test-driven development (TDD) approach where possible. Strong oral and written communication skills with a keen sense of customer service, problem-solving, and troubleshooting skills are essential, and being process-oriented with excellent documentation skills is crucial. Knowledge of best practices for a microservice architecture in an always-up, always-available service is required. Experience with or knowledge of Agile software development methodologies and security best practices in a containerized or cloud-based architecture is preferred, and familiarity with event-driven architecture and related concepts is a plus.

In terms of experience, familiarity with container orchestration services, preferably Kubernetes, is necessary. Competency with container runtimes such as Docker, CRI-O, Mesos, and rkt (CoreOS), and working knowledge of Kubernetes templating tools such as Helm or Kustomize, is expected. Proficiency in infrastructure scripting/templating solutions such as Bash, Go, or Python is required. Demonstrated experience with infrastructure-as-code tools such as Terraform, CloudFormation, Chef, Puppet, SaltStack, Ansible, or equivalent is a must. Competency in administering and deploying development lifecycle tooling such as Git, Jira, GitLab, CircleCI, or Jenkins is essential. Knowledge of logging and monitoring tools such as Splunk, Logz.io, Prometheus, Grafana, or full suites like Datadog or New Relic is advantageous. Significant experience with multiple Linux operating systems on both virtual and containerized platforms is expected. Experience with Infrastructure as Code principles utilizing GitOps and secrets management tools such as Vault, AWS Secrets Manager, Azure Key Vault, or equivalent would be beneficial.
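To illustrate the secrets-management integration mentioned above, here is a minimal, hypothetical Python sketch (not from the posting) that reads a key from HashiCorp Vault's KV v2 HTTP API with requests; the Vault address, token source, and secret path are placeholder assumptions.

```python
# Hypothetical example: read a secret from Vault's KV v2 HTTP API.
# VAULT_ADDR, VAULT_TOKEN, and the secret path are placeholder assumptions.
import os

import requests


def read_kv2_secret(path: str, mount: str = "secret") -> dict:
    """Fetch the data stored at `path` under a KV v2 mount."""
    addr = os.environ["VAULT_ADDR"]    # e.g. https://vault.example.com:8200
    token = os.environ["VAULT_TOKEN"]  # supplied by the environment or CI job
    url = f"{addr}/v1/{mount}/data/{path}"
    resp = requests.get(url, headers={"X-Vault-Token": token}, timeout=10)
    resp.raise_for_status()
    # KV v2 nests the secret payload under data.data
    return resp.json()["data"]["data"]


if __name__ == "__main__":
    creds = read_kv2_secret("myapp/database")
    print(sorted(creds.keys()))  # list key names only; avoid printing values
```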
Posted 1 month ago