5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Role Overview:
Hiring a Principal Engineer to join our Engineering Excellence organization, reporting to the VP of Engineering Excellence. In this strategic enabler role, you'll drive the evolution of our developer toolchains, build systems, and CI/CD pipelines, empowering engineering teams to deliver high-quality software faster and more efficiently. Rather than directly executing all initiatives, you'll influence, align, and coordinate cross-functional teams to adopt best practices, modern automation solutions, and scalable workflows. Your impact will be measured by how effectively you enable teams to deliver, not by doing it all yourself.

You must have the technical skills and experience to evaluate how we operate today, assess and recommend tooling choices, and coach teams on good practice. You'll partner with DevOps teams, engineering teams, engineering leaders, architects, and security to streamline the end-to-end development experience, from local dev environments through to production delivery. You'll identify friction points, guide toolchain modernization, and drive adoption of shared frameworks and standards.

Responsibilities:
- Define and promote a company-wide strategy for developer tooling and CI/CD.
- Influence and coordinate engineering teams to align on scalable, secure development workflows.
- Identify bottlenecks in the developer experience and lead initiatives to address them.
- Drive adoption of modern, automation-first practices across build, test, and release.
- Partner with DevOps teams to deliver reliable, self-service tooling.
- Foster a culture of engineering excellence through mentorship, guidance, and collaboration.

Requirements:
- Proven experience with developer tooling, CI/CD pipelines, and build automation at scale.
- Strong influencing and communication skills across technical and non-technical stakeholders.
- Hands-on knowledge of tools like GitHub Actions, Jenkins, ArgoCD, Gradle, or Maven.
- Passion for improving developer productivity, consistency, and release velocity.
- Systems-thinking mindset with a focus on security, scalability, and maintainability.
Posted 1 month ago
2.0 - 3.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens' CTIO division.

What you'll do:
- Implement secure, resilient, and cost-efficient architecture for our cloud-native platform service.
- Build and maintain a cloud-native platform infrastructure following the "infrastructure as code" principle.
- Maintain and optimize the application layer of a multi-DC environment.
- Deliver solutions, architectures, and automation for Sapiens applications.
- Conduct research to bring innovative solutions to a complex environment, improving processes and the tech stack.
- Build application and infrastructure logging and monitoring solutions.

Must-have skills:
- 2 to 3 years of experience as a DevOps Engineer.
- Windows/Linux - 2+ years of experience administering Linux servers.
- Kubernetes - hands-on experience in developing, deploying, tuning, and debugging applications on Kubernetes.
- Experience designing and implementing CI/CD pipelines and automation solutions like GitHub/ArgoCD; Azure DevOps is a plus.
- Cloud - hands-on experience working on public cloud (Azure, AWS).
- Code - vast scripting experience in PowerShell, Python, and Bash.
- Applications - vast experience working with Java web applications.
- IaC - at least 1 year of experience with at least one automation tool (Ansible/Terraform).
- Security knowledge of web security aspects such as WAF, certificates, OS hardening, security policies, and VPNs is an advantage.
- Monitoring - good understanding of a monitoring stack: ELK/Grafana/Prometheus/DataDog/Azure Monitoring.
- Experience with a live production environment.
- Accountability, ownership, and independence.
- Great verbal and written communication skills.

Good-to-have skills:
- Experience with Packer/Chocolatey.
- Knowledge of Azure Blueprints.
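The GitHub/ArgoCD CI/CD pattern listed above typically centers on a declarative Argo CD Application manifest that keeps a cluster in sync with a Git repo. A minimal sketch, assuming a GitOps layout; the repo URL, path, and namespace are hypothetical placeholders:

```yaml
# Minimal Argo CD Application: syncs Kubernetes manifests from Git to a cluster.
# repoURL, path, and namespace are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/example-service.git
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: example-service
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `automated` sync enabled, merging to `main` is the deployment action; Argo CD reconciles the cluster to match the repo.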
Posted 1 month ago
5.0 - 10.0 years
7 - 13 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Senior Software Engineer for DevHub - Full Stack (React, TypeScript, Node.js, and JavaScript)

What will you do?
- Create and enhance developer workflows to improve user experience.
- Design, develop, and maintain custom RHDH plugins.
- Collaborate with the team on new plugin development and integration.
- Manage platform maintenance, including patching, upgrades, releases, bug fixes, and vulnerability resolution.
- Deploy applications and updates using ArgoCD on OpenShift (OCP).
- Collaborate with internal stakeholders to understand requirements and develop solutions.
- Proactively communicate updates and challenges to team members and stakeholders.
- Actively participate in OSS communities, sharing knowledge and collaborating on projects.
- Stay updated on AI developments and explore how to integrate them within the platform.

What will you bring?
- 5+ years of experience in full-stack software development.
- Proficiency in React and TypeScript, with strong JavaScript skills. Python experience is a plus.
- Experience with a backend framework (Node.js preferred) and frontend frameworks like React.
- Experience integrating with REST APIs and handling RESTful data.
- Understanding of monorepo development and tooling (e.g., Yarn Workspaces).
- Hands-on experience with ArgoCD, OpenShift (OCP), and Kubernetes.
- Familiarity with CI/CD pipelines and version control (GitLab/GitHub).
- Strong interest or experience in AI technologies.
- Proven ability to interact and communicate effectively with both technical and non-technical stakeholders.

The following are considered a plus:
- Experience working on critical, high-visibility projects within an enterprise environment.
- Understanding of the Red Hat ecosystem and internal IT tooling.
- Active involvement in open source projects is highly preferred.
- Experience with plugin development within developer portal environments.
- Prior experience with Red Hat Developer Hub, Backstage, or similar internal developer portals.
- Passion for open-source contribution and community engagement.
Posted 1 month ago
9.0 - 12.0 years
10 - 14 Lacs
Bengaluru
Remote
Greetings!!! We have an urgent opening: Apigee Platform Engineer (Remote).

Role: Apigee Platform Engineer
Location: Remote
Duration: Long-term contract
Budget: 13 LPA
Notice period: Immediate to 15 days

JD - Key Responsibilities

API Platform Support & Operations
- Provide basic support for Apigee API proxies, policies, and configurations.
- Assist in Apigee Hybrid runtime management, monitoring API performance and availability using Apigee analytics and logging tools.
- Troubleshoot common issues related to API errors, latency, the Hybrid runtime, and authentication failures.
- Help maintain API documentation and update it as needed.

Deployment & Automation
- Support API deployment activities using ArgoCD CI/CD pipelines under the guidance of senior engineers.
- Perform routine checks and ensure APIs are deployed and functioning as per standards.

Security & Best Practices
- Ensure API endpoints follow basic security guidelines (e.g., OAuth, API key usage).
- Follow the organization's platform management practices.

Collaboration & Learning
- Work closely with senior Apigee engineers, developers, and DevOps teams to understand platform needs.

Experience:
- 3-5 years of experience in Apigee.
- Apigee Hybrid experience is a plus.

Technical Skills:
- Familiarity with REST APIs and tools like Postman.
- Basic knowledge of programming (Java, JavaScript, or Python).
- Understanding of API authentication protocols (OAuth, JWT) is an advantage.
- Exposure to DevOps/CI-CD tools is preferred (Jenkins, GitLab, ArgoCD, etc.).

If you're interested, please send your resume to suhas@iitjobs.com.
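As a quick illustration of the JWT format mentioned under authentication protocols: a JWT is three base64url-encoded segments (`header.payload.signature`). A minimal sketch that builds a sample unsigned token and decodes its claims; the token and claim values are hand-made for illustration, not a real credential, and real services must verify the signature before trusting claims:

```python
import base64
import json

def b64url_encode(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWT segments are."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_claims(token: str) -> dict:
    """Decode the payload segment of a JWT. Does NOT verify the signature --
    production code must verify it (e.g. against a JWKS key) first."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample (unsigned) token purely for illustration.
header = {"alg": "none", "typ": "JWT"}
claims = {"sub": "api-client-42", "scope": "read:orders"}
token = f"{b64url_encode(header)}.{b64url_encode(claims)}."

print(decode_claims(token))  # {'sub': 'api-client-42', 'scope': 'read:orders'}
```

In an API gateway such as Apigee, an equivalent decode-and-verify step is what a VerifyJWT-style policy performs before routing the request.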
Posted 1 month ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
The Opportunity

"FICO is seeking an AWS Cloud Engineer who thrives working in a fast-paced, state-of-the-art cloud environment. This position will be heavily involved with the migration of our existing products, as well as the development of new products within our cloud platform." - VP, Cloud Engineering

What You'll Contribute
- Design, maintain, and expand our infrastructure (maintained as IaC).
- Oversee systems and resources for alerting, logging, backups, disaster recovery, and monitoring.
- Work jointly with other software engineering teams in building fault-tolerant and resilient services that are in line with our infrastructure's best practices.
- Improve the performance and durability of our CI/CD pipelines.
- Think with security in mind all the time.
- You may be asked to be on-call to assist with engineering projects.

What We're Seeking
- Bachelor's degree in Computer Science or a related field, or relevant experience.
- Ability to design and implement highly automated and holistic solutions.
- Ability to act as a tech lead for the team, leading by example and driving projects.
- 8+ years of relevant experience in the cloud domain.
- Hands-on experience with a cloud provider (preferably AWS), maintaining or deploying production application infrastructure.
- Significant experience with distributed systems and container orchestration (specifically Kubernetes deployments).
- Proficiency in developing and maintaining CI/CD pipelines using GitHub, GitHub Actions, Jenkins, Helm, Harness, ArgoCD, etc.
- Strong grasp of Infrastructure as Code (IaC), preferably using Terraform or Crossplane compositions.
- Experience hosting/supporting the Atlassian suite of tools, specifically Jira, Confluence, and Bitbucket.
- Scripting knowledge in Python/Ruby/Bash.
- Strong automation mindset and experience using APIs to automate administrative tasks.
- Hands-on experience with monitoring and logging tools, like Splunk.
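A CI/CD pipeline of the kind described above, when built on GitHub Actions, is defined declaratively in the repository. A minimal build-test-deploy sketch; the test command, chart path, and service name are hypothetical placeholders:

```yaml
# .github/workflows/ci.yml -- minimal test-then-deploy pipeline sketch.
# The pytest step and Helm chart path are hypothetical placeholders.
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
  deploy:
    needs: test            # gate the deploy on the test job passing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: helm upgrade --install my-service ./chart --namespace prod
```

The `needs: test` edge is what turns two independent jobs into a pipeline: a failed test job blocks the deploy job.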
Posted 2 months ago
11.0 - 13.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your role and responsibilities
- Lead the design, development, and deployment of scalable, secure backend systems using Java, J2EE, and GoLang.
- Architect and implement robust RESTful APIs and microservices aligned with enterprise cloud-native standards.
- Collaborate closely with DevOps, QA, and frontend teams to deliver end-to-end product functionality.
- Set coding standards, influence architectural direction, and drive adoption of best practices across backend systems.
- Own performance tuning, monitoring, and high availability for backend services using tools like Prometheus, ELK, and Grafana.
- Implement security, compliance, and privacy-by-design principles in backend systems.
- Lead incident response and resolution of complex production issues across multi-cloud environments (e.g., AWS, Azure, OCP).
- Mentor and guide junior developers and contribute to team-wide knowledge sharing and skill development.
- Actively participate in Agile ceremonies and contribute to continuous delivery and process improvement.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise
- 11+ years of backend software development experience focused on scalable, secure, cloud-native enterprise systems.
- Deep expertise in Java, J2EE, and GoLang for building distributed backend systems.
- Advanced experience in architecting and implementing RESTful APIs, service meshes, and inter-service communication.
- Expert in Postgres or an equivalent RDBMS: data modeling, indexing, and performance optimization at scale.
- Proven track record with microservices architecture, including Docker, Kubernetes, and service deployment patterns.
- Expert-level familiarity with backend-focused CI/CD tooling (Jenkins, GitLab CI/CD, ArgoCD) and IaC tools (Terraform, CloudFormation).
- Strong knowledge of monitoring/logging tools such as Prometheus, Grafana, ELK, and Splunk, focusing on backend telemetry and observability.
- Experience deploying applications on cloud platforms: AWS (EKS, ECS, Lambda, CloudFormation), Azure, or GCP.
- Familiarity with DevSecOps, secure coding practices, and compliance-aware architecture for regulated environments.
- Proficiency in integration, load, and unit testing using JMeter, RestAssured, JUnit, etc.
- Leadership in backend architecture, performance tuning, platform modernization, and mentoring of technical teams.
- Effective cross-functional collaboration skills in multi-team, multi-region environments.

Preferred technical and professional experience
- Deep understanding of backend architecture patterns including microservices, event-driven architecture, and domain-driven design.
- Experience implementing security- and privacy-by-design principles in cloud-native backend systems.
- Hands-on expertise with cryptographic protocols and standards such as TLS and FIPS, and experience integrating with Java security frameworks (e.g., JCE, Spring Security).
- Strong grasp of secure coding practices, with experience identifying and mitigating OWASP Top 10 vulnerabilities.
- Exposure to designing and developing shared platform services or backend frameworks reused across products or tenants (e.g., in multi-tenant SaaS environments).
- Familiarity with API security patterns, including OAuth2, JWT, and API gateways (e.g., Kong, Apigee).
- Prior experience working on compliance-oriented systems (e.g., SOC2, HIPAA, FedRAMP) or architecting for high-assurance environments.
- Proficiency with shell scripting, Python, or Node.js for infrastructure automation or backend utilities.
Posted 2 months ago
5.0 - 7.0 years
7 - 9 Lacs
Pune
Work from Office
Overview
We are seeking an experienced Azure DevOps Engineer. As a key member of our technology team, you will play a crucial role in architecting, implementing, and optimizing our cloud-based infrastructure and DevOps practices. Your depth of experience will drive the successful deployment, monitoring, and management of our systems.

Responsibilities

Azure Cloud and AKS Expertise:
- Architect, implement, and maintain Azure-based solutions, with a specific focus on Azure Kubernetes Service (AKS) clusters.
- Use Terraform to provision and manage AKS clusters, virtual networks, and related resources.
- Ensure the security, scalability, and availability of AKS environments.

DevOps Implementation:
- Define and enhance CI/CD pipelines using Azure DevOps for various application stacks.
- Automate and streamline deployment processes, utilizing Infrastructure as Code (IaC) principles.
- Collaborate with development teams to integrate automated testing, code scanning, and security checks into pipelines.

Monitoring and Performance Optimization:
- Set up monitoring and alerting systems to ensure timely detection of and response to issues.
- Analyze system performance metrics and logs to identify bottlenecks and areas for improvement.
- Recommend and implement optimizations to enhance system performance and reliability.

Infrastructure as Code (IaC):
- Leverage tools like Terraform to define infrastructure deployments and updates as code.
- Implement version-controlled IaC practices to ensure consistency and reproducibility of environments.

Documentation and Knowledge Sharing:
- Document architecture, designs, and processes to maintain a well-documented knowledge base.
- Share expertise and knowledge with the team, fostering a culture of continuous learning.

Dev and GitOps Technologies and Tools:
Azure DevOps, Atlassian DevOps toolchain, GitHub, ArgoCD, Visual Studio Code, .NET Framework, Bamboo, JFrog

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.

Essential skills
- Expertise in architecting, deploying, and managing Azure resources and services.
- Proficiency in creating CI/CD pipelines using Azure DevOps.

Experience
- Minimum of 5 to 7 years of professional experience in IT, with at least 4 years focused on Azure Cloud and DevOps.
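Provisioning an AKS cluster with Terraform, as described in the responsibilities above, usually starts from a single `azurerm_kubernetes_cluster` resource. A minimal sketch; the resource group, names, region, and VM size are hypothetical placeholders:

```hcl
# Minimal AKS cluster via the azurerm provider.
# Names, location, and VM size are hypothetical placeholders.
resource "azurerm_resource_group" "aks" {
  name     = "rg-example-aks"
  location = "centralindia"
}

resource "azurerm_kubernetes_cluster" "main" {
  name                = "aks-example"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "aksexample"

  default_node_pool {
    name       = "system"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned" # managed identity instead of a service principal
  }
}
```

Because the cluster is declared as code, node counts, versions, and network settings change through reviewed commits and `terraform plan`/`apply` rather than portal clicks.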
Posted 2 months ago
3.0 - 4.0 years
10 - 20 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Description:
The RedHat DevOps Engineer will implement and manage OpenShift infrastructure, ensuring the smooth operation of containerized workloads and CI/CD pipelines. This role focuses on deployment automation, cluster management, and performance optimization within the OpenShift ecosystem.

Responsibilities:
- Deploy and maintain OpenShift infrastructure while ensuring high availability and scalability.
- Manage and optimize OpenShift CI/CD pipelines (GitOps, ArgoCD, Tekton) to streamline application delivery.
- Implement Kubernetes-to-OpenShift migrations, ensuring compatibility and best practices.
- Automate deployments and infrastructure provisioning using Terraform and Ansible.
- Configure and fine-tune OpenShift clusters for performance and security.
- Establish monitoring, logging, and alerting solutions for proactive platform management.
- Troubleshoot and resolve OpenShift- and Kubernetes-related performance and operational issues.

Required Expertise:
- Strong knowledge of Azure and OpenShift, with hands-on experience managing containerized workloads.
- Proficiency in Kubernetes, Terraform, Ansible, and Docker for infrastructure and deployment automation.
- Experience in CI/CD pipeline management using GitOps (ArgoCD, Tekton, Jenkins, etc.).
- Strong understanding of container security, RBAC, and networking in OpenShift.
- Hands-on experience with performance tuning, monitoring, and troubleshooting OpenShift clusters.

Experience: 3-4 years
Location: Remote, New Delhi, Mumbai, Bengaluru
Posted 2 months ago
2.0 - 7.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Sapiens is on the lookout for a Senior DevOps Engineer to become a key player in our Bangalore team. If you're a seasoned DevOps pro, ready to take your career to new heights with an established, globally successful company, this role could be the perfect fit.

Location: Bangalore
Working Model: Our flexible work arrangement combines both remote and in-office work, optimizing flexibility and productivity. This position will be part of Sapiens' Digital (Data Suite) R&D division.

What you'll do:
- Implement secure, resilient, and cost-efficient architecture for our cloud-native platform service.
- Build and maintain a cloud-native platform infrastructure following the "infrastructure as code" principle.
- Maintain and optimize the application layer of a multi-DC environment.
- Deliver solutions, architectures, and automation for Sapiens applications.
- Conduct research to bring innovative solutions to a complex environment, improving processes and the tech stack.
- Build application and infrastructure logging and monitoring solutions.

Must-have skills:
- 5-8 years of experience as a DevOps Engineer.
- Windows/Linux - 5 to 8 years of experience administering Linux servers.
- Kubernetes - hands-on experience in developing, deploying, tuning, and debugging applications on Kubernetes.
- Experience designing and implementing CI/CD pipelines and automation solutions like GitHub/ArgoCD; Azure DevOps is a plus.
- Cloud - hands-on experience working on public cloud (Azure, AWS).
- Code - vast scripting experience in PowerShell, Python, and Bash.
- Applications - vast experience working with Java web applications.
- IaC - at least 2 years of experience with at least one automation tool (Ansible/Terraform).
- Security knowledge of web security aspects such as WAF, certificates, OS hardening, security policies, and VPNs is an advantage.
- Monitoring - good understanding of a monitoring stack: ELK/Grafana/Prometheus/DataDog/Azure Monitoring.
- Experience with a live production environment.
- Accountability, ownership, and independence.
- Great verbal and written communication skills.

Good-to-have skills:
- Experience with Packer/Chocolatey.
- Knowledge of Azure Blueprints.
Posted 2 months ago
3.0 - 4.0 years
10 - 20 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
The DevOps Engineer will implement and manage OpenShift infrastructure, ensuring the smooth operation of containerized workloads and CI/CD pipelines. This role focuses on deployment automation, cluster management, and performance optimization within the OpenShift ecosystem.

Responsibilities:
- Deploy and maintain OpenShift infrastructure while ensuring high availability and scalability.
- Manage and optimize OpenShift CI/CD pipelines (GitOps, ArgoCD, Tekton) to streamline application delivery.
- Implement Kubernetes-to-OpenShift migrations, ensuring compatibility and best practices.
- Automate deployments and infrastructure provisioning using Terraform and Ansible.
- Configure and fine-tune OpenShift clusters for performance and security.
- Establish monitoring, logging, and alerting solutions for proactive platform management.
- Troubleshoot and resolve OpenShift- and Kubernetes-related performance and operational issues.

Required Expertise:
- Strong knowledge of Azure and OpenShift, with hands-on experience managing containerized workloads.
- Proficiency in Kubernetes, Terraform, Ansible, and Docker for infrastructure and deployment automation.
- Experience in CI/CD pipeline management using GitOps (ArgoCD, Tekton, Jenkins, etc.).
- Strong understanding of container security, RBAC, and networking in OpenShift.
- Hands-on experience with performance tuning, monitoring, and troubleshooting OpenShift clusters.

Location: Remote, Hyderabad, Ahmedabad, Pune, Chennai, Kolkata
Posted 2 months ago
7.0 - 9.0 years
0 Lacs
India
On-site
Introduction
At IBM, work is more than a job - it's a calling: to build, to design, to code, to consult, to think along with clients and sell, to make markets, to invent, to collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your role and responsibilities
Software developers at IBM are the backbone of our overall strategy, and software development is the essential activity that drives the success of IBM and our clients worldwide. At IBM, you will use the latest software development tools, techniques, and technologies and work with leading minds in the industry to build products, path-breaking technologies, and solutions that you can be proud of.

Do you have the skills and passion for building the future? If yes, come and be part of a niche team at IBM Software Labs focused on building an AI-driven digital labor platform, Watson Orchestrate, an AI platform that offers "digeys" (aka digital employees) with custom skills that can automate today's businesses.

We seek a DevOps technical leader/architect with robust expertise in designing distributed SaaS platforms and the associated end-to-end build, deployment, and CI/CD pipelines, frameworks, and tooling, along with experience in quickly isolating problems and identifying root causes in complex production systems. The ideal candidate would have rich experience in understanding enterprise architecture and complex systems, and be able to architect solutions that ease deployment, identify issues via monitoring, and ensure the system is always highly available, reliable, and resilient.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 7+ years of experience, with at least 5+ years as a DevOps/SRE architect.
- Has designed, implemented, and supported complex distributed SaaS platforms.
- Deep understanding of and working experience with Kubernetes, containers, Red Hat OpenShift clusters on AWS, AWS services, ArgoCD, Jenkins, Grafana, and other pipeline and monitoring tools.
- Troubleshoots systematically and has a deep sense of ownership.
- Maintains personal responsibility and commitment to address and respond to incidents quickly.
- Passionate about automation and innovations that improve productivity and reliability.
- Experience in technically coaching and mentoring junior SRE/DevOps engineers.

Preferred technical and professional experience
- Good communication, collaboration, and negotiation skills, and technical leadership qualities.
- Strong Go skills.
Posted 2 months ago
8.0 - 12.0 years
13 - 23 Lacs
Hyderabad, Chennai
Work from Office
- DevOps tools expertise: GitLab, Jenkins, ArgoCD, etc.
- Artifact management using JFrog
- Application security automation testing
- Public cloud: Google Cloud and AWS
- Cloud DevOps platform migration project
Posted 2 months ago
6.0 - 8.0 years
40 - 50 Lacs
Mumbai, Pune
Hybrid
Congratulations, you have taken the first step towards bagging a career-defining role. Join the team of superheroes that safeguard data wherever it goes.

What should you know about us?
Seclore protects and controls digital assets to help enterprises prevent data theft and achieve compliance. Permissions and access to digital assets can be granularly assigned and revoked, or dynamically set at the enterprise level, including when shared with external parties. Asset discovery and automated policy enforcement allow enterprises to adapt to changing security threats and regulatory requirements in real time and at scale. Know more about us at www.seclore.com

You would love our tribe: If you are a risk-taker, innovator, and fearless problem solver who loves solving challenges of data security, then this is the place for you!

Role: Lead Product Engineer - Developer Productivity
Experience: 6 - 8 Years
Location: Mumbai/Pune

A sneak peek into the role:
We are seeking a highly motivated and experienced Lead, Developer Productivity & Platform Engineering to spearhead our efforts in building, scaling, and continuously improving our internal developer platform. In this critical role, you will be responsible for empowering our development teams with the tools, infrastructure, and processes necessary to achieve exceptional productivity, accelerate software delivery, and enhance their overall experience. You will drive the vision, strategy, and execution of our IDP initiatives, with a strong focus on measuring and improving developer effectiveness.

Here's what you will get to explore:

Leadership:
This role blends the responsibilities of an individual contributor with the need to lead a team as the practice grows. While the primary focus is on individual contributions and expertise, the role also requires guiding, mentoring, and coordinating the work of others. Foster a collaborative, innovative, and results-oriented team culture.
Define clear roles, responsibilities, and performance expectations for team members.

Platform Vision, Strategy & Roadmap:
- Define and articulate a clear vision, strategy, and roadmap for our internal developer platform (IDP), aligning with overall engineering and business objectives.
- Identify and prioritize key features and improvements for the IDP based on developer needs and productivity goals.
- Stay abreast of industry trends and emerging technologies in platform engineering, developer experience, and IDPs (e.g., Backstage).

Collaboration & Stakeholder Management:
- Work closely with application development teams, product managers, security teams, operations, and other stakeholders to understand their pain points, needs, and requirements for the IDP.
- Effectively communicate the value and progress of the IDP to both technical and non-technical audiences.

IDP Design, Development & Maintenance:
- Lead the design, development, and maintenance of core components of our internal developer platform, emphasizing self-service capabilities, automation, standardization, and a seamless developer experience.
- Drive the adoption of Infrastructure as Code (IaC), Continuous Integration/Continuous Delivery (CI/CD), and robust observability practices within the platform.
- Ensure the IDP is scalable, reliable, secure, and cost-effective.

Focus on Developer Productivity & Measurement:
- Define and track key metrics to measure the impact of the IDP on developer productivity (e.g., deployment frequency, lead time for changes, time to recovery, developer satisfaction).
- Implement mechanisms for collecting and analyzing data related to developer workflows and platform usage.
- Identify and implement solutions to streamline developer workflows, reduce toil, and accelerate application delivery based on data and feedback.
- Potentially lead initiatives to integrate and leverage tools like Backstage to enhance developer experience and provide a centralized platform.
Tooling & Integration:
- Evaluate and integrate relevant tools and technologies into the IDP ecosystem, including CI/CD systems, monitoring tools, logging solutions, security scanners, and potentially IDP frameworks like Backstage.
- Ensure seamless integration between different platform components and existing development tools.

We can see the next Entrepreneur at Seclore if you have:
- 6+ years of relevant experience in software engineering, platform engineering, or DevOps roles, with increasing levels of responsibility.
- Proven experience leading and managing engineering teams, including hiring, mentoring, and performance management.
- Strong understanding of the software development lifecycle and common developer workflows.
- Deep technical expertise in cloud platforms (e.g., AWS, Azure, GCP) and cloud-native technologies (e.g., Kubernetes, Docker, serverless).
- Extensive experience with Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Significant experience designing and implementing CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, CircleCI, Argo CD, or Flux CD.
- Solid understanding of observability principles and hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Datadog), logging solutions (e.g., ELK stack, Splunk), and distributed tracing (e.g., Jaeger, Zipkin).
- Strong understanding of security best practices for cloud environments and containerized applications, and experience with security scanning tools and secrets management.
- Experience in managing and configuring code quality tools like SonarQube.
- Experience in managing and configuring Git tools like GitLab.
- Proficiency in at least one programming language (e.g., Python, Go) for automation.
- Understanding of API design principles (REST, GraphQL) and experience building and consuming APIs.
- Experience with data collection and analysis to identify trends and measure the impact of platform initiatives.
- Excellent communication, collaboration, and interpersonal skills, with the ability to influence and build consensus across teams.
- Strong problem-solving and analytical abilities.
- Experience working in an Agile development environment.
- Prior experience building and maintaining an Internal Developer Platform (IDP).
- Hands-on experience with IDP frameworks like Backstage, including setup, configuration, plugin development, and integration with other tools.
- Familiarity with developer productivity frameworks and methodologies.
- Experience with other programming languages commonly used by development teams (e.g., Java, Node.js, C++).
- Experience with service mesh technologies.
- Knowledge of cost management and optimization in the cloud.
- Experience in defining and tracking developer productivity metrics.
- Experience with data visualization tools (e.g., Grafana, Tableau).

Why do we call Seclorites Entrepreneurs, not Employees?
We value and support those who take the initiative and calculate risks. We have the attitude of a problem solver and an aptitude that is tech agnostic. You get to work with the smartest minds in the business. We are thriving, not just living. At Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive.

Excited to be the next Entrepreneur? Apply today! Don't have some of the above points in your resume at the moment? Don't worry. We will help you build it. Let's build the future of data security at Seclore together.
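The developer-productivity metrics named in this role (deployment frequency, lead time for changes) are commonly computed from deployment records. A minimal sketch with made-up sample data; the field names and timestamps are hypothetical, not from any real platform:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed vs deployed.
deployments = [
    {"committed": datetime(2024, 5, 1, 9, 0), "deployed": datetime(2024, 5, 1, 15, 0)},
    {"committed": datetime(2024, 5, 2, 10, 0), "deployed": datetime(2024, 5, 3, 10, 0)},
    {"committed": datetime(2024, 5, 5, 8, 0), "deployed": datetime(2024, 5, 5, 20, 0)},
]

def lead_time_hours(records) -> float:
    """Mean lead time for changes: commit -> production, in hours."""
    deltas = [(r["deployed"] - r["committed"]).total_seconds() / 3600 for r in records]
    return sum(deltas) / len(deltas)

def deploy_frequency_per_week(records) -> float:
    """Deployments per week over the span covered by the records."""
    span = max(r["deployed"] for r in records) - min(r["deployed"] for r in records)
    weeks = max(span / timedelta(days=7), 1e-9)  # avoid divide-by-zero on one record
    return len(records) / weeks

print(round(lead_time_hours(deployments), 1))           # mean commit-to-deploy hours
print(round(deploy_frequency_per_week(deployments), 2)) # deploys per week
```

In practice the records would be pulled from the CI/CD system's API rather than hard-coded, but the aggregation is the same.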
Posted 2 months ago
1.0 - 3.0 years
3 - 6 Lacs
Chennai
Work from Office
What you'll be doing: You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies. Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Designing and configuring alerts to ensure timely responses to critical events. Working with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure. Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols. Configuring IAM roles, VPC security, encryption, and compliance policies. Continuously optimizing cloud infrastructure for performance, scalability, and cost-effectiveness. Using tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies. Working closely with the engineering and operations teams to design and implement cloud-based solutions. Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics. What we're looking for... You'll need to have: Bachelor's degree or one or more years of work experience.
Experience in Kubernetes administration. Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI. GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory. Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards. Backend development experience with languages including Golang (preferred), Spring Boot, and Python. Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices. Familiarity with cloud cost optimization (e.g., Kubecost). Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, metrics-server. Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis. Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.). Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh-related issues. Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA Gatekeeper policies. Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc. Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting. Certified Kubernetes Administrator (CKA). Demonstrated very strong troubleshooting and problem-solving skills. Excellent verbal and written communication skills. Even better if you have one or more of the following: Certified Kubernetes Application Developer (CKAD). Red Hat Certified OpenShift Administrator. Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc. Database experience (RDBMS, NoSQL, etc.).
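The posting's emphasis on inspecting cluster resources and determining cluster health can be illustrated with a minimal sketch. It assumes the JSON document produced by `kubectl get pods -o json` and simply flags pods outside the Running/Succeeded phases; the stubbed payload stands in for a live cluster.

```python
import json

def unhealthy_pods(kubectl_json: str) -> list:
    """Return names of pods whose phase is neither Running nor Succeeded.

    Expects the JSON document produced by `kubectl get pods -o json`.
    """
    doc = json.loads(kubectl_json)
    bad = []
    for item in doc.get("items", []):
        phase = item.get("status", {}).get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            bad.append(item["metadata"]["name"])
    return bad

# Stubbed kubectl payload for illustration (no live cluster needed):
sample = json.dumps({"items": [
    {"metadata": {"name": "web-1"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "job-2"}, "status": {"phase": "Failed"}},
]})
print(unhealthy_pods(sample))  # ['job-2']
```

A real health check would also look at container restart counts and readiness conditions; phase alone is a coarse signal.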
Posted 2 months ago
1.0 - 6.0 years
3 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift. Key Responsibilities: Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments. Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD). Troubleshoot Kubernetes/OpenShift-related issues and optimize performance. Implement security policies and best practices for containerized workloads. Work with developers to containerize applications and manage microservices. Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
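For the GitOps-style deployments this role mentions (ArgoCD), here is a hedged sketch of what declaring an application looks like, rendered as a Python dict rather than YAML. The field names follow the `argoproj.io/v1alpha1` Application schema; the repository URL, path, and namespace values are placeholders.

```python
def argo_application(name, repo_url, path, dest_namespace, project="default"):
    """Render a minimal Argo CD Application manifest as a dict.

    Field names follow the argoproj.io/v1alpha1 Application schema;
    the concrete values passed in below are illustrative placeholders.
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": name, "namespace": "argocd"},
        "spec": {
            "project": project,
            "source": {"repoURL": repo_url, "path": path,
                       "targetRevision": "HEAD"},
            "destination": {"server": "https://kubernetes.default.svc",
                            "namespace": dest_namespace},
            # Automated sync with pruning and self-heal enabled
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

app = argo_application("web", "https://git.example.com/platform/web.git",
                       "deploy/overlays/prod", "web-prod")
print(app["spec"]["destination"]["namespace"])  # web-prod
```

In practice this manifest would be serialized to YAML and committed to the GitOps repository that Argo CD watches.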
Posted 2 months ago
1.0 - 5.0 years
3 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift. Key Responsibilities: Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments. Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD). Troubleshoot Kubernetes/OpenShift-related issues and optimize performance. Implement security policies and best practices for containerized workloads. Work with developers to containerize applications and manage microservices. Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
Posted 2 months ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Requirements (must-have qualifications): Solid cloud infrastructure background and operational troubleshooting and problem-solving experience. Strong software development experience in Python. Experience in building and maintaining code distribution through automated pipelines. Experience in deploying and managing (IaaS) infrastructure in private/public cloud using OpenStack. Experience with Ansible or Puppet for configuration management. IaC experience: Terraform, Ansible, Git, GitLab, Jenkins, Helm, ArgoCD, Conjur/Vault.
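One concrete flavor of the IaC experience this posting lists: Terraform's machine-readable plan output can be summarized programmatically. A minimal sketch, assuming the `terraform plan -json` line-delimited stream in which planned changes carry `type == "planned_change"`; the sample stream below is fabricated for illustration.

```python
import json

def summarize_plan(json_lines: str) -> dict:
    """Count planned actions from `terraform plan -json` output.

    Assumes the machine-readable UI stream where each line is a JSON
    object and planned changes are tagged type == "planned_change".
    """
    counts = {}
    for line in json_lines.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "planned_change":
            action = event["change"]["action"]
            counts[action] = counts.get(action, 0) + 1
    return counts

# Fabricated sample stream standing in for real terraform output:
stream = "\n".join([
    json.dumps({"type": "planned_change", "change": {"action": "create"}}),
    json.dumps({"type": "planned_change", "change": {"action": "update"}}),
    json.dumps({"type": "planned_change", "change": {"action": "create"}}),
    json.dumps({"type": "version", "terraform": "1.7.0"}),
])
print(summarize_plan(stream))  # {'create': 2, 'update': 1}
```

A pipeline gate could use such a summary to, say, require manual approval when any "delete" action is planned.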
Posted 2 months ago
4.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About the Team: The Data Platform and Observability team is based in Pleasanton, CA; Boston, MA; Atlanta, GA; Dublin, Ireland; and Chennai, India. Our focus is on the development of large-scale distributed data systems to support critical Workday products and provide real-time insights across Workday's platforms, infrastructure and applications. The team provides platforms that process hundreds of terabytes of data that enable core Workday products and use cases like core HCM, Fins, AI/ML SKUs, internal data products and Observability. If you enjoy writing efficient software or tuning and scaling large distributed systems, you will enjoy working with us. Do you want to tackle exciting challenges at massive scale across private and public clouds for our 10000+ global customers? Do you want to work with world-class engineers and facilitate the development of the next generation of distributed systems platforms? If so, we should chat.
About the Role: The Messaging, Streaming and Caching team is a full-service distributed systems engineering team. We architect and provide async messaging, streaming, and NoSQL platforms and solutions that power the Workday products and SKUs ranging from core HCM, Fins, Integrations, and AI/ML. We develop client libraries and SDKs that make it easy for teams to build Workday products. We develop automation to deploy and run hundreds of clusters, and we also operate and tune our clusters as well. As a team member you will play a key role in improving our services and encouraging their adoption within Workday's infrastructure, both in our private cloud and public cloud. As a member of this team you will design and build new capabilities from inception to deployment to exploit the full power of the core middleware infrastructure and services, and work hand in hand with our application and service teams! Primary Responsibilities: Design, build, and enhance critical distributed services, including Kafka, Redis, RabbitMQ, etc. Design, develop, build, deploy and maintain core distributed services using a combination of open-source and proprietary stacks across diverse infrastructure environments (Kubernetes, OpenStack, Bare Metal, etc.). Design and develop core software modules for streaming, messaging and caching. Construct observability modules, alerts and automation for dashboard lifecycle management for the distributed services. Build, deploy and operate infrastructure components in production environments. Champion all aspects of streaming, messaging and caching with a focus on resiliency and operational excellence. Evaluate and implement new open-source and cloud-native tools and technologies as needed. Participate in the on-call rotation to support the distributed systems platforms. Manage and optimize Workday distributed services in AWS, GCP and private cloud environments.
About You: You are a senior software engineer with a distributed systems background and significant experience in distributed systems products like Kafka, Redis, RabbitMQ or Zookeeper. You have independently led product features and deployed on large-scale NoSQL clusters. Basic Qualifications: 4-12 years of software engineering experience using one or more of the following: Java/Scala, Golang. 4+ years of distributed systems experience. 3+ years of development and DevOps experience in designing and operating large-scale deployments of distributed NoSQL and messaging systems. 1+ year of leading a NoSQL-technology-related product right from conception to deployment and maintenance. Preferred Qualifications: A consistent track record of technical project leadership and success involving collaborators and interested partners across the enterprise. Expertise in developing distributed system software and deployments that perform well and degrade gracefully under excessive load. Hands-on experience with at least one or more distributed systems technologies like Kafka/RabbitMQ, Redis, Cassandra. Experience learning complex open-source service internals via code inspection. Extensive experience with modern software development tools including CI/CD and methodologies like Agile. Expertise with configuration management using Chef and service deployment on Kubernetes via Helm and ArgoCD. Experience with Linux system internals and tuning. Experience with distributed system performance analysis and optimization. Strong written and oral communication skills and the ability to explain esoteric technical details clearly to engineers without a similar background. Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer, including individuals with disabilities and protected veterans.
Are you being referred to one of our roles If so, ask your connection at Workday about our Employee Referral process!
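The caching side of this role often comes down to distributing keys across nodes. A minimal consistent-hashing sketch for sharding cache keys, illustrative only: production Redis and Kafka clients ship their own partitioning schemes, and the virtual-node count here is an arbitrary choice.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring for sharding cache keys across nodes.

    Each physical node is placed on the ring many times (virtual nodes)
    so that adding or removing a node only remaps a small slice of keys.
    """
    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                h = self._hash("%s#%d" % (node, i))
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise from the key's hash to the next ring entry."""
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, ""))
        if idx == len(self._ring):
            idx = 0  # wrap around the ring
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
# The same key always maps to the same node:
assigned = ring.node_for("user:42")
print(assigned in {"cache-a", "cache-b", "cache-c"})  # True
```

The design choice worth noting is the virtual nodes: without them, three physical nodes would split the hash space very unevenly.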
Posted 2 months ago
3.0 - 5.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Summary: Executing direction from leadership, delivering results that align with strategic objectives, communicating critical information to other teams, managing vendor relationships, developing processes that align with organizational goals, and applying the specific technical skills required for managing a process. In this role, you will use your expertise and technical skills around cloud computing to design a scalable platform to be used across the organization. You will help build viable long-term infrastructure solutions. Roles & Responsibilities: Core Responsibilities: Completely hands-on; help us build the next-gen platform based on the technologies mentioned below. Suggest improvements in automation, CI/CD practices, security, and platform services. Drive evaluation of different tools and execute technical feasibility assessments. Years of Experience: 3+ years of experience with cloud technologies. Skill Set Required: Primary Skills: Experience working in a GCP Cloud environment. Working knowledge of containerization. Experience setting up and working with Kubernetes clusters in production, including customizing the Kubernetes setup. Worked on configuring and deploying Rancher, RKE, Flux. Dabbled with open-source tools like Terraform/Vault/Jenkins, etc. Knowledge of CI/CD automation and tools. Proficient in shell/bash scripting. Comfortable in Go/Python programming. Strong communication skills, both verbal and written, to develop technical documentation and presentations. Secondary Skills: Good knowledge of Linux, and knowledge of setting up HA distributed streaming platforms such as Kafka/NoSQL databases, plus Prometheus, ELK and Pinpoint.
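The secondary skills here mention Prometheus; its HTTP API returns a well-known JSON envelope that platform tooling can parse. A minimal sketch, assuming the standard `/api/v1/query` instant-query response format; the response body below is a fabricated example, not output from a live server.

```python
import json

def parse_instant_query(body: str) -> dict:
    """Map instance labels to values from a Prometheus instant-query response.

    Assumes the standard /api/v1/query envelope:
    {"status": "success", "data": {"resultType": "vector", "result": [...]}}
    where each sample's "value" is [timestamp, "string_value"].
    """
    doc = json.loads(body)
    if doc.get("status") != "success":
        raise ValueError("query failed")
    out = {}
    for sample in doc["data"]["result"]:
        name = sample["metric"].get("instance", "<none>")
        out[name] = float(sample["value"][1])
    return out

# Fabricated response body for illustration:
resp = json.dumps({"status": "success", "data": {"resultType": "vector",
    "result": [{"metric": {"instance": "node1:9100"},
                "value": [1714560000, "0.93"]}]}})
print(parse_instant_query(resp))  # {'node1:9100': 0.93}
```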
Posted 2 months ago
1 - 6 years
5 - 8 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift. Key Responsibilities: Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD) Troubleshoot Kubernetes/OpenShift-related issues and optimize performance Implement security policies and best practices for containerized workloads Work with developers to containerize applications and manage microservices Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools
Posted 2 months ago
1 - 5 years
3 - 6 Lacs
Bengaluru
Work from Office
We are seeking an experienced OpenShift Engineer to design, deploy, and manage containerized applications on Red Hat OpenShift. Key Responsibilities: Deploy, configure, and manage OpenShift clusters in hybrid/multi-cloud environments. Automate deployments using CI/CD pipelines (Jenkins, GitLab CI/CD, ArgoCD). Troubleshoot Kubernetes/OpenShift-related issues and optimize performance. Implement security policies and best practices for containerized workloads. Work with developers to containerize applications and manage microservices. Monitor and manage OpenShift clusters using Prometheus, Grafana, and logging tools.
Posted 2 months ago
5 - 9 years
0 Lacs
Bengaluru
Work from Office
Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client service offerings below are used to create the Internet solutions that make networks possible, providing easy access to information anywhere, at any time. Job Title: DevOps Engineer. Location: Bangalore. Duration: 5 Months. Work Type: Onsite. Job Description: 5+ years of experience are required. Requirements (must-have qualifications): Solid cloud infrastructure background and operational troubleshooting and problem-solving experience. Strong software development experience in Python. Experience in building and maintaining code distribution through automated pipelines. Experience in deploying and managing (IaaS) infrastructure in private/public cloud using OpenStack. Experience with Ansible or Puppet for configuration management. IaC experience: Terraform, Ansible, Git, GitLab, Jenkins, Helm, ArgoCD, Conjur/Vault. TekWissen Group is an equal opportunity employer supporting workforce diversity.
Posted 2 months ago
5 - 8 years
9 - 19 Lacs
Hyderabad, Ahmedabad
Work from Office
JD: DevOps Engineer. Roles and Responsibilities: Responsible for managing capacity across public and private cloud resource pools, including automating scale-down/up of environments. Improve cloud product reliability, availability, maintainability, and cost/benefit, including developing fault-tolerant tools to ensure the general robustness of the cloud infrastructure. Design and implement CI/CD pipeline elements to provide automated compilation, assembly, and testing of containerized and non-containerized components. Design and implement infrastructure solutions on GCP that are scalable, secure, and highly available. Automate infrastructure deployment and management using Terraform, Ansible, or equivalent tools. Create and maintain CI/CD pipelines for our applications. Monitor and troubleshoot system and application issues to ensure high availability and reliability. Work closely with development teams to identify and address infrastructure issues. Collaborate with security teams to ensure infrastructure is compliant with company policies and industry standards. Participate in on-call rotations to provide 24/7 support for production systems. Continuously evaluate and recommend new technologies and tools to improve infrastructure efficiency and performance. Mentor and guide junior DevOps engineers. Other duties as assigned. Requirements, Minimum Special Certifications or Technical Skills: Proficient in two or more software languages (e.g., Python, Java, Go) with respect to designing, coding, testing, and software delivery. Strong knowledge of CI/CD, Jenkins and GitHub Actions. More of an application DevOps engineer than an infra DevOps engineer. Strong knowledge of Maven and SonarQube. Strong knowledge of scripting and some knowledge of Java. Strong knowledge of ArgoCD and Helm. Hands-on experience with Google Cloud Platform (GCP) and its services such as Compute Engine, Cloud Storage, Kubernetes Engine, Cloud SQL, Cloud Functions, etc.
Strong understanding of infrastructure-as-code principles and tools such as Terraform, Ansible, or equivalent. Experience with CI/CD tools such as Jenkins, GitLab CI, or equivalent. Strong understanding of networking concepts such as DNS, TCP/IP, and load balancing.
Posted 2 months ago
8 - 12 years
25 - 30 Lacs
Mumbai
Work from Office
Job Summary: We are seeking a skilled and motivated System Programmer to join our IT Infrastructure team. This role is responsible for the installation, configuration, maintenance, and performance of critical enterprise systems including Linux servers, Apache HTTP Server, and Oracle WebLogic. The ideal candidate will have strong scripting abilities and experience with writing SQL queries to support operational and development teams. Key Responsibilities: Install, configure, and maintain Linux operating systems, Apache HTTP Server, and Oracle WebLogic application servers in development, test, and production environments. Perform regular system patching and software updates to ensure platform security and stability. Develop and maintain automation scripts (e.g., Bash, Python, or similar) to streamline system management tasks. Write and optimize SQL queries to support reporting, troubleshooting, and system integration needs. Monitor system performance and implement tuning improvements to maximize availability and efficiency. Work closely with development, QA, and operations teams to support application deployments and troubleshoot system-related issues. Maintain accurate system documentation, including configurations, procedures, and troubleshooting guides. Participate in an on-call rotation and respond to incidents as required. Required Qualifications: Overall 8-12 years of experience. Proven experience with Linux system administration (RHEL, CentOS, or equivalent). Hands-on experience with Apache HTTP Server and Oracle WebLogic. Proficiency in scripting languages such as Bash, Python, or Perl. Strong understanding of SQL and relational databases (e.g., Oracle, MySQL). Familiarity with system monitoring tools and performance tuning. Knowledge of security best practices and patch management procedures. Excellent troubleshooting, analytical, and problem-solving skills. Strong communication skills and ability to work in a collaborative team environment.
Preferred Qualifications: Experience with CI/CD pipelines, Ansible, ArgoCD, or other automation tools. Exposure to cloud environments (e.g., AWS, Azure) or container technologies (e.g., Docker, Kubernetes).
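This role calls for SQL to support reporting and troubleshooting. A minimal, self-contained sketch using SQLite with a hypothetical `heartbeats` table (schema and data invented for illustration) to find hosts whose last check-in is stale.

```python
import sqlite3

# Hypothetical schema for illustration: hosts report a last-seen timestamp.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE heartbeats (host TEXT, last_seen TEXT);
    INSERT INTO heartbeats VALUES
        ('web01', '2024-05-01 10:00:00'),
        ('web02', '2024-05-01 09:00:00');
""")

# Parameterized query: ISO-style timestamps compare correctly as strings.
STALE_SQL = """
    SELECT host FROM heartbeats
    WHERE last_seen < :cutoff
    ORDER BY host
"""
stale = [row[0] for row in
         conn.execute(STALE_SQL, {"cutoff": "2024-05-01 09:30:00"})]
print(stale)  # ['web02']
```

Against Oracle or MySQL the same query shape applies, though you would use real timestamp columns rather than text.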
Posted 2 months ago
3 - 8 years
5 - 10 Lacs
Hyderabad, Chennai
Work from Office
The Impact you will have in this role: The Systems Engineering family is responsible for the entire technical effort to evolve and verify solutions that satisfy client needs. The primary focus is centered on reducing risk and improving the efficiency, performance, stability, security, and quality of all systems and platforms. The Systems Engineering role specializes in analysis, evaluation, design, testing, implementation, support, and debugging of all middleware, mainframe, and distributed platforms, tools, and systems of the firm. Your Primary Responsibilities: Proficiency using the Linux platform and hands-on experience with scripting languages (shell scripting, Python, Ansible) is a must. Experience with AWS services, handling Infrastructure as Code using Terraform, and pipelines. Experience with containerization using platforms like Docker, Kubernetes, OCP. Experience using code repositories like GitHub and Bitbucket, plus Jenkins and CI/CD pipelines. Knowledge of SSL/TLS certificates, Autosys job scheduling, and basic networking concepts like firewalls and load balancing. Experience working with DataPower setup, install and configuration is desirable. Experience working with vendors: opening tickets and following up with them to resolve issues. Experience working with Sterling Connect:Direct and Control Center Monitor configuration and setup is desirable. Qualifications: Minimum of 3+ years of related experience. Bachelor's degree preferred or equivalent experience. Talents Needed for Success: Programming skills using Python and Unix shell scripting are required. Knowledge of CI/CD tool sets is required (Jenkins, Ansible, Terraform, ArgoCD). Familiarity with multi-cloud or hybrid-cloud environments. Fosters a culture where honesty and transparency are expected. Stays current on changes in their own specialist area and seeks out learning opportunities to ensure knowledge is up to date. Collaborates well within and across teams. Communicates openly with team members and others.
Open to learn and adapt to new technologies and tools. Strong problem-solving and communication skills.
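The responsibilities above mention SSL/TLS certificates; a common automation task is alerting before expiry. A minimal sketch, assuming the `notAfter` string format that Python's `ssl` module returns from `getpeercert()` (e.g. 'Jun  1 12:00:00 2030 GMT'); no live connection is made, and the fixed "now" below exists only to make the example deterministic.

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter timestamp.

    Assumes the 'Jun  1 12:00:00 2030 GMT' format that Python's
    ssl.SSLSocket.getpeercert() uses for the notAfter field.
    """
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

# Deterministic example with a pinned clock (illustrative values):
fixed_now = datetime(2030, 5, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jun  1 12:00:00 2030 GMT", now=fixed_now))  # 31
```

In a cron-driven check, the return value would be compared against a threshold (say 30 days) to decide whether to page.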
Posted 2 months ago