
1569 Gitops Jobs - Page 12

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior DevOps Engineer

We are seeking a skilled and motivated DevOps Engineer to join our growing engineering team. In this role, you will drive the development and optimization of our CI/CD pipelines, cloud infrastructure, and deployment workflows. Your work will play a vital role in enhancing system reliability, scalability, and performance while supporting seamless collaboration between our development and operations teams. This is a high-impact role offering the opportunity to work on cutting-edge tools and practices and to contribute to the technical foundation of a fast-scaling tech platform.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitLab to automate deployment processes.
- Develop and manage infrastructure as code (IaC) using Terraform for scalable and reliable cloud infrastructure.
- Apply GitOps principles using ArgoCD for declarative Kubernetes configuration and application delivery.
- Manage and monitor cloud infrastructure (AWS, Azure, or Google Cloud) to ensure high availability and security.
- Implement robust monitoring, logging, and alerting systems to detect and respond to performance issues.
- Collaborate with engineering teams to improve application reliability, scalability, and performance.
- Identify and adopt emerging tools and best practices to enhance DevOps efficiency and workflows.
- Participate in incident response and troubleshoot production issues to ensure minimal downtime.
- Support cross-functional teams with DevOps-related technical guidance and enablement.
- Document infrastructure architecture, processes, and configurations for transparency and maintainability.
- Ensure compliance with internal security policies and industry-standard regulations.

Skills
- 5 to 8 years of experience as a DevOps Engineer or in a related role.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in scripting languages such as Python, Shell, or Ruby.
- Strong experience with GitLab CI/CD or similar continuous integration tools.
- In-depth knowledge of Terraform and infrastructure-as-code practices.
- Practical experience with GitOps workflows and tools such as ArgoCD.
- Hands-on expertise in containerization and orchestration using Docker and Kubernetes.
- Familiarity with major cloud providers such as AWS, Azure, or Google Cloud.
- Strong problem-solving and debugging skills.
- Excellent communication and collaboration skills in a cross-functional team setup.
- Ability to thrive in a fast-paced, agile environment.

Preferred Qualifications
- Certifications such as AWS Certified DevOps Engineer or Certified Kubernetes Administrator (CKA).
- Exposure to microservices architecture and distributed systems.
- Working knowledge of security best practices and tools.
- Familiarity with Agile development methodologies.

Key Competencies
- A strong sense of ownership and accountability.
- Analytical thinking with a continuous improvement mindset.
- Comfort working with complex infrastructure and rapid deployment cycles.
- Team player with a proactive, solution-oriented attitude.

What We Offer
- A dynamic, fast-paced work environment with real impact on products used by millions.
- Ownership of projects and the freedom to innovate and take initiative.
- A collaborative culture that encourages learning, knowledge sharing, and creativity.
- Exposure to the latest tools, technologies, and industry best practices.
- Clear paths for career growth and personal development.
- Employee Stock Options (ESOPs) as part of long-term rewards.

(ref:hirist.tech)
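A common building block behind the pipeline and incident-response duties described above is a deployment health gate: after a rollout, poll a service's health check and only promote the release once it passes. A minimal, tool-agnostic sketch in Python (the function name, parameters, and the simulated probe are illustrative assumptions, not from the posting):

```python
import time

def wait_for_healthy(check, attempts=5, base_delay=1.0):
    """Poll `check` (a zero-argument callable returning bool) with
    exponential backoff; return True as soon as it passes."""
    for attempt in range(attempts):
        if check():
            return True
        # Back off base_delay, 2*base_delay, 4*base_delay, ... between attempts.
        time.sleep(base_delay * (2 ** attempt))
    return False

# Example: a service that becomes healthy on its third probe.
probes = {"count": 0}

def probe():
    probes["count"] += 1
    return probes["count"] >= 3

print(wait_for_healthy(probe, attempts=5, base_delay=0.01))  # True
```

In a real pipeline the gate would wrap an HTTP health endpoint or an ArgoCD sync-status query, and a False result would trigger the rollback path.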

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Project Role: Technology Support Engineer
Project Role Description: Resolve incidents and problems across multiple business system components and ensure operational stability. Create and implement Requests for Change (RFC) and update knowledge base articles to support effective troubleshooting. Collaborate with vendors and help service management teams with issue analysis and resolution.
Must-have skills: DevOps
Good-to-have skills: Google Cloud Platform Administration, Infrastructure as Code (IaC), GitHub Actions
Minimum experience required: 12 years
Educational qualification: 15 years of full-time education

Summary: We are seeking an experienced Senior DevOps & Cloud Automation Architect to design, lead, and optimize infrastructure automation and CI/CD strategies across multiple cloud environments (Azure, AWS, GCP). The ideal candidate will be hands-on with Azure DevOps, GitHub Actions, and Terraform, and have a strong foundation in platform engineering, security automation, and modern DevOps practices. You will work closely with engineering, security, and cloud teams to create reusable, scalable, and secure DevOps frameworks that accelerate cloud delivery and compliance.

Roles & Responsibilities:
- Architect end-to-end CI/CD pipelines using Azure DevOps and GitHub Actions for deploying applications and infrastructure across Azure, AWS, and GCP.
- Define and implement reusable pipeline templates, environments, release gates, and approvals for secure DevOps.
- Integrate testing, security scans (SAST/DAST), and compliance checks into all deployment workflows.
- Lead IaC initiatives using Terraform, including module creation, state management (remote backends), and policy-as-code integration.
- Design standardized Terraform modules for multi-cloud services (VMs, AKS/EKS/GKE, networking, storage, IAM).
- Implement guardrails and governance using Sentinel, OPA (Open Policy Agent), or Terraform Cloud/Enterprise.
- Architect secure and scalable solutions across Azure, AWS, and GCP with automation-first principles.
- Design infrastructure blueprints using Bicep (Azure), CloudFormation (AWS), and Deployment Manager (GCP) when needed.
- Automate resource provisioning, configuration management, and secrets handling (e.g., Azure Key Vault, AWS Secrets Manager).
- Implement logging, monitoring, and alerting using cloud-native tools and integrations (e.g., Azure Monitor, AWS CloudWatch, Datadog, ELK).
- Enforce security and compliance policies using DevSecOps tools, static code analysis, and pre-deployment gates.
- Support incident response automation and root cause analysis tooling integration.
- Collaborate with application teams to enable self-service deployments with secure guardrails.
- Define and enforce multi-cloud governance, cost control, and resource tagging strategies.
- Lead technical design discussions, PoCs, and architecture reviews across cloud and DevOps teams.

Professional & Technical Skills:
- 10+ years of experience in DevOps, cloud architecture, or platform engineering.
- Hands-on expertise in Azure DevOps Pipelines, GitHub Actions, and Terraform (0.13+).
- Deep understanding of multi-cloud platforms: Azure, AWS, and/or GCP.
- Strong experience with IaC versioning, state management, and module registry design.
- Proficiency in PowerShell, Bash, and Python scripting for automation tasks.
- Familiarity with GitOps, secrets management, and policy-as-code.
- CI/CD security and integration with tools like SonarQube, Checkov, Snyk, AquaSec, or Twistlock.
- Microsoft Certified: Azure DevOps Engineer Expert.
- HashiCorp Certified: Terraform Associate.
- AWS/GCP Professional Cloud Architect.
- Experience with ArgoCD, Flux, or Spinnaker is a plus.
- Container orchestration with Kubernetes (AKS/EKS/GKE) and Helm charts.
- Strong communication and architectural documentation skills.
- Proactive problem solver with a strategic mindset.
- Ability to mentor DevOps engineers and lead cross-functional DevSecOps initiatives.
- Comfortable with agile, product-driven environments and iterative delivery.

Additional Information:
- The candidate should have a minimum of 12 years of experience in DevOps.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.
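The resource-tagging governance mentioned above is typically enforced as a policy check before deployment, in the spirit of the policy-as-code tools the posting names (OPA, Sentinel). A minimal sketch in Python; the required tag names and the resource dictionary shape are assumptions for illustration:

```python
# Tags every resource must carry before it may be provisioned (illustrative policy).
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource):
    """Return the required tags absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

def tag_policy_report(resources):
    """Map resource name -> sorted missing tags, for each non-compliant resource."""
    return {r["name"]: sorted(missing_tags(r))
            for r in resources if missing_tags(r)}

resources = [
    {"name": "vm-web-01", "tags": {"owner": "platform", "cost-center": "cc-42",
                                   "environment": "prod"}},
    {"name": "bucket-logs", "tags": {"owner": "platform"}},
]
print(tag_policy_report(resources))
# {'bucket-logs': ['cost-center', 'environment']}
```

A non-empty report would fail the release gate; real implementations express the same rule in Rego (OPA) or Sentinel policy language rather than Python.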

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Site Reliability Engineering
Good-to-have skills: Google Cloud Data Services, Microsoft Azure Analytics Services
Minimum experience required: 12 years
Educational qualification: 15 years of full-time education

Job Title: SRE and Automation Architect
Location: [Insert Location or Remote]
Experience Level: 10+ Years
Employment Type: Full-Time

Job Summary: We are looking for a seasoned Site Reliability Engineering (SRE) and Automation Architect to lead the design and implementation of highly available, reliable, and automated platforms and operations. The ideal candidate will bridge the gap between development and operations, driving infrastructure automation, observability, resiliency engineering, and SRE best practices at scale across multi-cloud and hybrid environments. This role requires deep technical expertise in cloud platforms (Azure/AWS/GCP), CI/CD pipelines, IaC, SLO/SLI implementation, and incident management automation.

Key Responsibilities:

Platform Reliability & Architecture:
- Architect highly available, resilient, and self-healing systems and services.
- Define and implement SLOs, SLIs, error budgets, and performance benchmarks across platforms.
- Drive observability standards including logging, metrics, and distributed tracing.

Automation Strategy:
- Lead design and implementation of end-to-end automation across infrastructure provisioning, configuration management, CI/CD pipelines, and incident response.
- Build reusable IaC modules using tools like Terraform, Ansible, Pulumi, or Bicep.
- Automate environment creation, scaling, patching, and compliance using scripts and DevOps toolchains.

DevOps & CI/CD:
- Architect and maintain CI/CD pipelines using Azure DevOps and GitHub Actions.
- Ensure secure and reliable software deployments by implementing automated testing, canary deployments, blue-green strategies, and rollback automation.

Monitoring & Incident Response:
- Define standards for monitoring, alerting, and incident management using tools like Prometheus, Grafana, ELK, Datadog, Splunk, or Azure Monitor.
- Build auto-remediation runbooks and event-driven workflows using platforms like StackStorm, Azure Logic Apps, PagerDuty, or OpsGenie.
- Facilitate blameless post-mortems and continuous improvement processes.

Security, Compliance & Cost Optimization:
- Integrate security checks and policy-as-code into automation and deployment pipelines (e.g., with OPA, Sentinel, or Azure Policy).
- Optimize cost through right-sizing, autoscaling, and usage-based automation.

Collaboration & Leadership:
- Act as the SRE and automation thought leader across development, infrastructure, and operations teams.
- Mentor engineers and advocate for modern SRE principles such as toil reduction, error budgeting, and release engineering.
- Collaborate with architecture teams to align reliability with business and technical goals.

Required Skills & Experience:
- 10+ years of experience in infrastructure, DevOps, or SRE roles, with at least 3 years in an architect-level role.
- Deep expertise in cloud platforms: Azure (preferred), AWS, or GCP.
- Strong experience with IaC (Terraform, ARM/Bicep, Ansible) and automation scripting (Python, Bash, PowerShell).
- Hands-on experience with CI/CD tools and container orchestration (Kubernetes, Helm, Istio).
- Proven ability to design and manage high-availability and disaster recovery strategies.
- Strong observability experience with APM tools, log aggregation, and distributed tracing.
- Knowledge of incident response automation and auto-remediation frameworks.

Preferred Qualifications:
- Certifications: Azure DevOps Expert, GCP SRE, AWS DevOps Engineer, or Certified Kubernetes Administrator (CKA).
- Experience with GitOps tools like Flux or ArgoCD.
- Familiarity with service meshes and chaos engineering (e.g., Chaos Monkey, Litmus).
- Understanding of FinOps, cloud governance, and security automation.

Soft Skills:
- Strategic mindset with attention to detail.
- Excellent problem-solving and analytical skills.
- Strong communication and documentation skills.
- Passion for automation, scalability, and improving developer productivity.
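The SLO, SLI, and error-budget responsibilities described above come down to simple arithmetic that is worth making concrete. A short Python sketch (the SLO target and request counts are illustrative numbers, not from the posting):

```python
def sli(good, total):
    """Availability SLI: fraction of good events (1.0 when there is no traffic)."""
    return good / total if total else 1.0

def error_budget_remaining(slo, good, total):
    """Fraction of the error budget left over a window; negative means overspent."""
    allowed_bad = (1 - slo) * total
    actual_bad = total - good
    return 1 - actual_bad / allowed_bad if allowed_bad else 0.0

def allowed_downtime_minutes(slo, days=30):
    """Total downtime a time-based SLO permits over the window."""
    return (1 - slo) * days * 24 * 60

# A 99.9% SLO over 30 days permits about 43.2 minutes of downtime.
print(round(allowed_downtime_minutes(0.999), 1))  # 43.2
# 500 bad requests out of 1,000,000 against a 99.9% SLO: half the budget spent.
print(round(error_budget_remaining(0.999, good=999_500, total=1_000_000), 3))  # 0.5
```

Burn-rate alerting builds directly on this: alert when the budget is being consumed faster than the window allows.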

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Marvell

Marvell's semiconductor solutions are the essential building blocks of the data infrastructure that connects our world. Across enterprise, cloud and AI, automotive, and carrier architectures, our innovative technology is enabling new possibilities. At Marvell, you can affect the arc of individual lives, lift the trajectory of entire industries, and fuel the transformative potential of tomorrow. For those looking to make their mark on purposeful and enduring innovation, above and beyond fleeting trends, Marvell is a place to thrive, learn, and lead.

Your Team, Your Impact

As a key CAD member of Marvell Central Engineering, you will play a leading role in developing the next-generation automated design flow and its add-on tools. You will have the opportunity to use your extensive design and CAD knowledge to help define the whole organization's design infrastructure, methodology, and workflows.

What You Can Expect
- Design, implement, and maintain large-scale HPC clusters for EDA workloads, ensuring high availability, fault tolerance, and efficient resource utilization.
- Manage, configure, and optimize LSF job scheduling systems to support diverse verification workflows.
- Develop, automate, and monitor deployment, configuration, and operational processes for EDA infrastructure.
- Collaborate with EDA engineers and designers to refine verification flows to run optimally on the grid.
- Implement and advance CI/CD pipelines to streamline the deployment, testing, and monitoring of infrastructure and EDA flows.
- Provide troubleshooting and support for users and the infrastructure.
- Monitor infrastructure health, performance, and usage; proactively identify, resolve, and document issues.
- Ensure compliance with security best practices, license management, and data protection requirements.
- Contribute to architectural innovation and process improvement for future scalability and efficiency.
- Participate in incident management teams for prompt issue resolution.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
- Proficiency in programming or scripting languages such as Python, Bash, or Perl for automation and workflow development.
- Working knowledge of Linux system administration and cluster troubleshooting.
- Familiarity with infrastructure-as-code, configuration management, and monitoring; DevOps and SRE concepts; CI/CD and GitOps.
- Strong communication and collaboration skills; ability to work in cross-functional teams.
- Track record of identifying and implementing infrastructure optimizations for efficiency, throughput, and reliability.

Preferred Qualifications
- Experience with cloud-based EDA infrastructure or hybrid HPC environments.
- Familiarity with regression management tools and workflow automation specific to silicon verification.
- Experience with HPC cluster management, especially using LSF/Platform LSF, in a chip verification or EDA context.

Key Attributes
- Analytical, detail-oriented, and proactive in identifying and solving technical problems.
- Passion for continuous learning and embracing new technologies and methods.
- Strong organizational abilities and commitment to documentation and process improvement.

This role is essential in ensuring that our chip verification teams have a robust, high-performance, and adaptable infrastructure to accelerate silicon innovation.

Additional Compensation and Benefit Elements

With competitive compensation and great benefits, you will enjoy our workstyle within an environment of shared collaboration, transparency, and inclusivity. We're dedicated to giving our people the tools and resources they need to succeed in doing work that matters, and to grow and develop with us. For additional information on what it's like to work at Marvell, visit our Careers page.
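Monitoring cluster health and scheduler behaviour, as described above, usually starts with simple statistics over job records, for example tail latency of queue wait times to spot scheduling bottlenecks. A Python sketch using a nearest-rank percentile; the wait-time data is fabricated for illustration and nothing here is LSF-specific:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of the data at or below it."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

# Queue wait times in seconds for a batch of finished jobs (illustrative).
waits = [3, 5, 4, 7, 120, 6, 5, 8, 4, 300]
print(percentile(waits, 50))  # 5
print(percentile(waits, 95))  # 300
```

A healthy median with a blown-out p95, as in this toy data, points at a subset of jobs starving in the queue rather than uniform overload.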
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

About KYFEX: KYFEX is a leading AI consulting firm dedicated to harnessing the power of artificial intelligence to revolutionize business operations across the globe. Our expertise in Large Language Models (LLMs) and AI infrastructure positions us at the cutting edge of AI technology, enabling us to offer unparalleled solutions to our clients. As we continue to grow, we're seeking a skilled Remote AWS Infrastructure/DevOps Engineer to join our dynamic team and contribute to our mission of delivering scalable, secure, and reliable AI infrastructure solutions.

Job Responsibilities:
- Design, implement, and manage scalable AWS infrastructure to support LLM deployments and AI workloads for our diverse client base.
- Build and maintain CI/CD pipelines for automated deployment of AI models and applications across multiple environments.
- Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or CDK to ensure reproducible and version-controlled infrastructure.
- Optimize cloud infrastructure costs while maintaining high performance for compute-intensive AI/ML workloads.
- Design and implement robust monitoring, logging, and alerting systems to ensure 99.99% uptime for production AI services.
- Collaborate with ML engineers and data scientists to containerize and orchestrate AI models using Docker and Kubernetes/EKS.
- Implement security best practices, including network segmentation, IAM policies, and compliance frameworks (SOC 2, HIPAA, FedRAMP) for enterprise clients.
- Manage and optimize GPU-enabled infrastructure for training and inference of large language models.
- Develop disaster recovery strategies and implement backup solutions for critical AI infrastructure and model artifacts.
- Automate operational tasks and create self-healing systems to reduce manual intervention.

Minimum Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 4+ years of hands-on experience with AWS services, particularly EC2, ECS/EKS, Lambda, S3, RDS, and VPC.
- Strong expertise in Infrastructure as Code tools (Terraform, CloudFormation, or AWS CDK).
- Proficiency in scripting languages (Python, Bash) and automation frameworks.
- Solid experience with containerization (Docker) and orchestration platforms (Kubernetes).
- Deep understanding of CI/CD principles and tools (Jenkins, GitLab CI, GitHub Actions, or AWS CodePipeline).
- Experience with monitoring and observability tools (CloudWatch, Prometheus, Grafana, Datadog, or New Relic).
- Strong understanding of networking concepts, security best practices, and Linux system administration.
- Demonstrated ability to work independently in a remote setting, managing complex infrastructure projects.

Preferred Skills:
- AWS certifications (Solutions Architect, DevOps Engineer, or Security Specialty).
- Experience with ML/AI infrastructure, including GPU instance management and ML platforms (SageMaker, MLflow).
- Familiarity with serverless architectures and event-driven systems.
- Experience with HashiCorp tools (Vault, Consul, Packer).
- Knowledge of database administration (PostgreSQL, MongoDB, Redis) and data streaming technologies (Kafka, Kinesis).
- Experience implementing air-gapped or hybrid cloud solutions for high-security environments.
- Understanding of FinOps practices and cloud cost optimization strategies.
- Experience with GitOps workflows and tools (ArgoCD, Flux).
- Strong communication skills, with the ability to document complex infrastructure designs and explain technical concepts to diverse stakeholders.

Why Join KYFEX?
- Work at the forefront of AI technology with a team of experts passionate about innovation.
- Enjoy the flexibility and benefits of a fully remote position.
- Build infrastructure that powers cutting-edge AI solutions with real-world impact.
- Benefit from a culture of continuous learning, professional development, and collaborative achievement.
To Apply: Interested candidates are invited to submit their resume, a cover letter detailing their experience with AWS infrastructure and DevOps practices, and any relevant project samples or GitHub links to careers@kyfex.com with "Remote AWS Infrastructure/DevOps Engineer Application" as the subject line. KYFEX is committed to diversity and inclusion and encourages applications from all qualified individuals, including those from diverse backgrounds and underrepresented groups.
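Cost optimization for compute-heavy workloads, as called out in the responsibilities above, often reduces to choosing the cheapest instance type that satisfies a resource requirement. A hedged Python sketch; the instance names and hourly prices below are illustrative placeholders, not current AWS pricing:

```python
CATALOG = [  # illustrative on-demand prices, USD/hour (not real pricing)
    {"name": "cpu.small",  "vcpu": 2,  "mem_gib": 8,   "usd_hour": 0.10},
    {"name": "cpu.large",  "vcpu": 8,  "mem_gib": 32,  "usd_hour": 0.40},
    {"name": "gpu.single", "vcpu": 8,  "mem_gib": 61,  "gpus": 1, "usd_hour": 0.90},
    {"name": "gpu.quad",   "vcpu": 32, "mem_gib": 244, "gpus": 4, "usd_hour": 3.60},
]

def cheapest_fit(vcpu, mem_gib, gpus=0):
    """Name of the cheapest catalog entry meeting the requested vCPU, memory,
    and GPU counts, or None if nothing fits."""
    fits = [i for i in CATALOG
            if i["vcpu"] >= vcpu and i["mem_gib"] >= mem_gib
            and i.get("gpus", 0) >= gpus]
    return min(fits, key=lambda i: i["usd_hour"])["name"] if fits else None

def monthly_cost(usd_hour, hours=730):
    """Approximate monthly cost assuming ~730 hours per month."""
    return usd_hour * hours

print(cheapest_fit(vcpu=4, mem_gib=16))          # cpu.large
print(cheapest_fit(vcpu=4, mem_gib=16, gpus=1))  # gpu.single
```

The same shape of check, fed with live pricing and utilization data, is the core of a right-sizing recommendation: if p95 utilization fits a cheaper entry, flag the resource.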

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Who We Are

At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.

Responsibilities
- Build and manage platform infrastructure and oversee application deployments in cloud environments.
- Assist in developing and implementing DevOps processes, tools, and best practices.
- Collaborate with development and operations teams to ensure software is built, deployed, and maintained reliably and efficiently.
- Automate tasks and processes to improve operational efficiency and reduce manual effort.
- Monitor, troubleshoot, and resolve system issues to ensure high availability and performance.
- Stay informed about emerging DevOps trends, tools, and technologies to support continuous improvement.

Basic Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 3+ years of experience in DevOps, with a focus on AWS cloud services.
- Familiarity with AWS services such as EC2, S3, RDS, Lambda, IAM, and AWS CDK.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, AWS CodePipeline) and deployment strategies such as blue-green and canary.
- Knowledge of infrastructure-as-code (IaC) tools like Terraform or AWS CloudFormation.
- Proficiency in scripting and automation using Python, Bash, TypeScript, or similar languages.
- Exposure to containerization tools (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Basic understanding of monitoring and logging tools (e.g., CloudWatch, ELK Stack, Splunk, Prometheus/Grafana, Datadog).
- Knowledge of networking fundamentals and concepts: TCP/IP, DNS, SSL, certificate management, encryption, firewalls, security, and system administration in cloud environments.

Additional Qualifications
- AWS certifications (e.g., AWS Certified Cloud Practitioner, AWS Solutions Architect – Associate).
- Experience with configuration management tools like Ansible, Chef, or Puppet.
- Familiarity with GitOps principles and tools (e.g., ArgoCD, Flux).
- Understanding of microservices architecture and serverless computing.
- Basic experience with performance tuning and cost optimization in AWS.
- Experience managing Kafka as a messaging platform and familiarity with high-scale NoSQL solutions like MongoDB.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Good communication and documentation skills.

Preferred Soft Skills
- Willingness to learn and adapt to new tools and technologies.
- Strong organizational skills and attention to detail.
- Fintech experience will be a great asset.
- Proactive mindset with a focus on continuous improvement.

Goldman Sachs Engineering Culture

At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
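The blue-green and canary deployment strategies mentioned above rest on a simple decision rule: promote the canary only if its error rate stays within a tolerance of the baseline's. A minimal sketch in Python (the tolerance and request counts are illustrative assumptions):

```python
def error_rate(errors, total):
    """Fraction of failed requests; 0.0 when there is no traffic."""
    return errors / total if total else 0.0

def canary_verdict(baseline, canary, tolerance=0.01):
    """baseline and canary are (errors, total) pairs.
    Returns 'promote' if the canary's error rate is within `tolerance`
    of the baseline's, else 'rollback'."""
    base_rate = error_rate(*baseline)
    canary_rate = error_rate(*canary)
    return "promote" if canary_rate <= base_rate + tolerance else "rollback"

# Baseline: 20 errors in 10,000 requests; canary: 5 errors in 2,000 requests.
print(canary_verdict((20, 10_000), (5, 2_000)))    # promote
print(canary_verdict((20, 10_000), (150, 2_000)))  # rollback
```

Production canary analyzers add statistical significance tests and latency comparisons, but the promote/rollback gate has this shape.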

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Marvell

Marvell's semiconductor solutions are the essential building blocks of the data infrastructure that connects our world. Across enterprise, cloud and AI, automotive, and carrier architectures, our innovative technology is enabling new possibilities. At Marvell, you can affect the arc of individual lives, lift the trajectory of entire industries, and fuel the transformative potential of tomorrow. For those looking to make their mark on purposeful and enduring innovation, above and beyond fleeting trends, Marvell is a place to thrive, learn, and lead.

Your Team, Your Impact

As a key CAD member of Marvell Central Engineering, you will play a leading role in developing the next-generation automated design flow and its add-on tools. You will have the opportunity to use your extensive design and CAD knowledge to help define the whole organization's design infrastructure, methodology and workflows.

What You Can Expect
- Design, implement, and maintain large-scale HPC clusters for EDA workloads, ensuring high availability, fault tolerance, and efficient resource utilization.
- Manage, configure, and optimize LSF job scheduling systems to support diverse verification workflows.
- Develop, automate, and monitor deployment, configuration, and operational processes for EDA infrastructure.
- Collaborate with EDA engineers and designers to refine verification flows to run optimally on the grid.
- Implement and advance CI/CD pipelines to streamline the deployment, testing, and monitoring of infrastructure and EDA flows.
- Provide troubleshooting and support for users and the infrastructure.
- Monitor infrastructure health, performance, and usage; proactively identify, resolve, and document issues.
- Ensure compliance with security best practices, license management, and data protection requirements.
- Contribute to architectural innovation and process improvement for future scalability and efficiency.
- Participate in incident management teams for prompt issue resolution.

What We're Looking For
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
- Proficiency in programming or scripting languages such as Python, Bash, or Perl for automation and workflow development.
- Working knowledge of Linux system administration and cluster troubleshooting.
- Familiarity with infrastructure-as-code, configuration management, and monitoring; DevOps and SRE concepts; CI/CD and GitOps.
- Strong communication and collaboration skills; ability to work in cross-functional teams.
- Track record of identifying and implementing infrastructure optimizations for efficiency, throughput, and reliability.

Preferred Qualifications:
- Experience with cloud-based EDA infrastructure or hybrid HPC environments.
- Familiarity with regression management tools and workflow automation specific to silicon verification.
- Experience with HPC cluster management, especially using LSF/Platform LSF, in a chip verification or EDA context.

Key Attributes:
- Analytical, detail-oriented, and proactive in identifying and solving technical problems.
- Passion for continuous learning and embracing new technologies and methods.
- Strong organizational abilities and commitment to documentation and process improvement.

This role is essential in ensuring that our chip verification teams have a robust, high-performance, and adaptable infrastructure to accelerate silicon innovation.

Additional Compensation and Benefit Elements

With competitive compensation and great benefits, you will enjoy our workstyle within an environment of shared collaboration, transparency, and inclusivity. We're dedicated to giving our people the tools and resources they need to succeed in doing work that matters, and to grow and develop with us. For additional information on what it's like to work at Marvell, visit our page.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Thiruporur, Tamil Nadu, India

On-site

Job Description The Test Architect for the Packet Core (PaCo) domain is a highly experienced and technically skilled individual responsible for designing, implementing, and maintaining the overall test strategy and architecture for PaCo products and services. This role requires a deep understanding of testing methodologies, automation frameworks, and the PaCo domain itself. The successful candidate will lead the evolution of our testing infrastructure, ensuring comprehensive test coverage, high automation rates, and efficient test execution. This role also requires the ability to provide full-stack development support to enhance testing capabilities and tools. How You Will Contribute And What You Will Learn Test Strategy and Architecture: Define and maintain the overall test strategy and architecture for PaCo products, encompassing unit, integration, system, performance, and security testing. This includes defining test methodologies, selecting appropriate tools and technologies, and establishing clear testing processes. Test Automation: Lead the development and implementation of automated testing frameworks and solutions. This includes designing and developing automated test scripts, integrating them into CI/CD pipelines, and ensuring high test coverage. Test Environment Management: Oversee the management and maintenance of test environments, ensuring they accurately reflect production environments and are readily available for testing activities. Performance and Scalability Testing: Design and execute performance and scalability tests to ensure PaCo products meet performance requirements under various load conditions. Security Testing: Incorporate security testing into the overall test strategy, identifying and mitigating potential security vulnerabilities. Full-Stack Development Support: Provide full-stack development support to enhance testing capabilities. 
This includes developing custom test tools, scripts, and applications using appropriate programming languages and technologies (e.g., Python, Java, JavaScript, etc.) to address specific testing needs and improve automation. This may involve working with databases, APIs, and UI elements. Team Leadership and Mentorship: Guide and mentor junior test engineers, providing technical expertise and support. Collaboration: Collaborate effectively with development teams, product managers, and other stakeholders to ensure alignment on testing goals and priorities. Key Skills And Experience You have: Master’s or bachelor’s degree in computer science Engineering, or a related field. Master's degree preferred with 8+ years of experience in software testing, with at least 5 years in a test architect or similar role. Experience in the telecommunications industry, specifically with Packet Core (PaCo) technologies with AI/ML based testing Extensive experience in designing and implementing automated testing frameworks and deep understanding of testing methodologies (e.g., Agile, Waterfall). Experience with various testing tools and technologies (e.g., Selenium, JMeter, Appium, etc.). Strong programming skills in at least two of the following: Python, Java, JavaScript, C++, or similar languages. Experience with CI/CD pipelines and tools (e.g., NCD, Gitops, GitLab CI). It would be nice if you also had: Experience with Kubernetes and containerized environments. Experience with network protocols & Simulators (e.g., 5G, 4G, VoIP, Spirent). About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. 
We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work. What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer. Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere; Gender-Equality Index by Bloomberg; Workplace Pride Global Benchmark. At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.
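The automation-framework responsibilities this role describes can be sketched concretely. Below is a minimal, hypothetical table-driven test runner of the kind such a framework might build on; the scenario names and the latency-budget check are invented for illustration and are not Nokia tooling.

```python
# A minimal table-driven test runner: each case is (name, input, expected),
# and the returned summary is suitable for gating a CI stage.
def run_suite(cases, fn):
    failures = []
    for name, arg, expected in cases:
        got = fn(arg)
        if got != expected:
            failures.append((name, expected, got))
    return len(cases) - len(failures), failures

# Hypothetical system-under-test: check a session-setup latency (ms)
# against a performance budget, standing in for a real PaCo check.
def within_budget(latency_ms, budget_ms=150):
    return latency_ms <= budget_ms

cases = [
    ("nominal", 90, True),
    ("at-limit", 150, True),
    ("overload", 400, False),
]
passed, failures = run_suite(cases, within_budget)  # all three cases pass
```

In a real framework the cases would come from test data files and `fn` would drive the system under test, but the table-driven shape and the pass/fail summary are the part that plugs into a pipeline.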

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

gurgaon

Remote

Who We Are Simpplr is the AI-powered platform that unifies the digital workplace – bringing together engagement, enablement, and services to transform the employee experience. It streamlines communication, simplifies interactions, automates workflows, and elevates the everyday experience of work. The platform is intuitive, highly extensible, and built to integrate seamlessly with your existing technology. More than 1,000 leading organizations – including AAA, the NHS, Penske, and Moderna – trust Simpplr to foster a more aligned and productive workforce. Headquartered in Silicon Valley with global offices, Simpplr is backed by Norwest Ventures, Sapphire Ventures, Salesforce Ventures, and Tola Capital. Learn more at simpplr.com. The Opportunity Location: Gurugram We are looking for a Senior Software Engineer, Front End (with TypeScript) with 5+ years of experience to help us build the foundation of and deliver a new product line. We’ll be leaning on your experience in building amazing user experiences on the front end. The products we build have a direct impact on our users' happiness, success and satisfaction at work. Successful candidates will be working closely in a cross-functional team with other senior engineers, product managers and our UI/UX designers. They will be responsible for owning development projects from start to completion; this includes helping plan features, build services and deploy infrastructure. The products we're building are all brand new, so we have a modern tech stack implementing lots of best practices. Our apps are React Micro Front-Ends (MFE) with a multitude of backend services in Node.js, all deployed through GitOps. We're fans of infrastructure-as-code, automating toil and spending time refactoring if we need to.
Your Job Responsibilities What you will be doing: Work in a talented cross-functional team to develop new user-facing features using TypeScript on the frontend (React). Contribute to our shared UI library used by many engineering teams across the whole company. Write automated unit tests and end-to-end tests for your code and services. Quality is incredibly important to us and everybody is responsible for it. Participate in agile ceremonies; regularly and sustainably delivering value in two-week sprints. Help influence the overall architecture and direction of the codebase as well as the wider product. Help establish best practices, guidelines, and processes to allow the team to focus on what they do best - building the application. Mentor and guide other team members to help them grow in their careers. Your Skillset What makes you a great fit for the team: You like to deliver great user experiences. You are user focused – we solve our customers' problems together; everybody has a say in planning, design & execution. 5+ years of overall experience. 4+ years of experience with React; you should have a solid understanding of how and why it uses a virtual DOM. Experience using modern modular CSS strategies, e.g., styled-components, Emotion, etc. (we use CSS Modules), and why globally scoped styles are bad. You love identifying new technologies, patterns, and techniques and planning out how we can apply them to improve productivity, code quality and user experience. Affinity for profiling and analyzing code to identify areas for improvement. You should have a high-level understanding of how both React and browser internals work to ensure our frontend stays performant and doesn't leak memory. Good understanding of CI/CD, unit testing (with Jest), and automated end-to-end testing using a framework like Cypress. Strong knowledge and understanding of functional programming patterns.
Excited by working in a fast-paced startup environment. We’d especially love to hear from you if: You have proven excellence in writing readable and efficient TypeScript code. You have a good track record of project leadership and mentorship of software engineers. You have experience working with micro-frontends in production. You are familiar with feature flag tools such as Harness. You have worked with frameworks/libraries such as css-modules and Next.js. #LI-DNI Simpplr’s Hub-Hybrid-Remote Model: At Simpplr we believe that when work is good, life is better, and that belief guides all we do, including how we approach our flexible work model. Simpplr operates with a Hub-Hybrid-Remote model. This model is role-based with exceptions and provides employees with the flexibility that many have told us they want. Hub - 100% work from Simpplr office. Role requires Simpplifier to be in the office full-time. Hybrid - Hybrid work from home and office. Role dictates the ability to work from home, plus benefit from in-person collaboration on a regular basis. Remote - 100% remote. Role can be done anywhere within your country of hire, as long as the requirements of the role are met.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

hyderābād

On-site

We work on Apple-scale opportunities and challenges. We are engineers at heart. We like solving technical problems. We believe a good engineer has the curiosity to dig into the inner workings of technology and is always experimenting, reading and in constant learning mode. If you are a software engineer with a passion for code who loves digging deeper into any technology, knowing the internals, and is fascinated by distributed systems architecture, we want to hear from you. Description We are seeking a highly skilled LLM Ops and ML Ops Engineer to lead the deployment, scaling, monitoring, and optimization of large language models (LLMs) across diverse environments. This role is critical to ensuring our machine learning systems are production-ready, high-performing, and resilient. The ideal candidate will have deep expertise in Python or Go programming, a comprehensive understanding of LLM internals, and hands-on experience with various inference engines and deployment strategies. The person should be able to balance multiple competing priorities and deliver solutions in a timely manner, understand complex architectures, and be comfortable working with multiple teams. KEY RESPONSIBILITIES: - Design and build scalable infrastructure for fine-tuning and deploying large language models. - Develop and optimize inference pipelines using popular frameworks and engines (e.g., TensorRT, vLLM, Triton Inference Server). - Implement observability solutions for model performance, latency, throughput, GPU/TPU utilization, and memory efficiency. - Own the end-to-end lifecycle of LLMs in production, from experimentation to continuous integration and continuous deployment (CI/CD). - Collaborate with research scientists, ML engineers, and backend teams to operationalize groundbreaking LLM architectures.
- Automate and harden model deployment workflows using Python, Kubernetes, containers, and orchestration tools like Argo Workflows and GitOps. - Design reproducible model packaging, versioning, and rollback strategies for large-scale serving. - Stay current with advances in LLM inference acceleration, quantization, distillation, and model compilation techniques (e.g., GGUF, AWQ, FP8). Minimum Qualifications 5+ years of experience in LLM/ML Ops, DevOps, or infrastructure engineering with a focus on machine learning systems. Advanced proficiency in Python/Go, with the ability to write clean, performant, and maintainable production code. Deep understanding of transformer architectures, LLM tokenization, attention mechanisms, memory management, and batching strategies. Proven experience deploying and optimizing LLMs using multiple inference engines. Strong background in containerization and orchestration (Kubernetes, Helm). Familiarity with monitoring tools (e.g., Prometheus, Grafana), logging frameworks, and performance profiling. Preferred Qualifications Experience integrating LLMs into microservices or edge inference platforms. Experience with Ray distributed inference. Hands-on experience with quantization libraries. Contributions to open-source ML infrastructure or LLM optimization tools. Familiarity with cloud platforms (AWS, GCP) and infrastructure-as-code (Terraform). Exposure to secure and compliant model deployment workflows. Submit CV
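The batching strategies mentioned in the qualifications above can be illustrated with a toy sketch: a greedy, token-bounded batcher in the spirit of (but far simpler than) what serving engines such as vLLM or Triton do internally. The token counts and budget below are made up.

```python
# Greedy token-bounded batching: pack request indices into batches whose
# total token count stays within a budget. A simplified illustration of
# LLM-serving batch formation, not any engine's actual algorithm.
def batch_requests(token_counts, max_tokens):
    batches, current, total = [], [], 0
    for i, n in enumerate(token_counts):
        if n > max_tokens:
            raise ValueError(f"request {i} exceeds the batch budget")
        if total + n > max_tokens:
            batches.append(current)  # close the full batch
            current, total = [], 0
        current.append(i)
        total += n
    if current:
        batches.append(current)
    return batches

# Five requests with these prompt lengths, packed under a 100-token budget.
batches = batch_requests([60, 50, 40, 90, 10], max_tokens=100)
```

Real engines go further (continuous batching, padding-aware memory accounting, latency deadlines), but the core trade-off (larger batches raise throughput, smaller ones cut latency) is already visible in this greedy form.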

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

chennai

On-site

Company Description R25_0009780 NIQ, a leader in understanding consumer buying behavior, is looking for a Senior Platform Engineer to join our Enterprise Platform Engineering team in Chennai, India. You'll be a crucial part of our mission to deliver world-class corporate technologies for over 30,000 global employees, supporting a wide range of critical services, including our AI/ML initiatives. This is a full-time position where you'll play a key role in a highly skilled team. You will design, build, and maintain the core frameworks and platforms that power NIQ, working with a diverse and cutting-edge tech stack. Job Description Design and architect scalable, resilient platforms that empower other engineering teams to confidently deploy and run their services. Build and maintain robust platform frameworks that support various engineering needs, including data science and machine learning workflows. Collaborate closely with application development, data science, and site reliability engineering (SRE) teams to deliver effective solutions. Deepen your expertise in core platform technologies like Kubernetes (EKS, AKS), Helm, Terraform, and GitOps tools (ArgoCD). Ensure seamless deployment and operation of platforms by building and maintaining CI/CD pipelines (GitLab, Azure Pipelines). Proactively monitor, analyze, and optimize system performance and security using tools like Prometheus, Grafana, and Datadog. Continuously improve platform reliability, scalability, and availability by applying SRE principles like SLOs and error budgets. Create and maintain comprehensive documentation for all platforms and frameworks. Qualifications 5+ years of experience in software development or DevOps, with at least 2 years specifically in platform engineering. Strong hands-on experience with Docker, Kubernetes, and GitOps tooling. Proficiency in cloud platforms, with experience in AWS and Azure.
Familiarity with monitoring and observability tools like Coralogix, Prometheus, Grafana, Datadog, or OpenTelemetry. Solid understanding of CI/CD pipelines leveraging tools like GitLab or GitHub Actions. Experience with Infrastructure as Code (Terraform, Ansible) and configuration management tools. Knowledge of networking concepts (TCP/IP, DNS, REST APIs) and API management tools. Experience with building and managing ML platforms is a plus, specifically with tools like MLflow or Kubeflow. Proficiency in scripting languages like Python or Bash. Excellent communication skills, both verbal and written, with the ability to clearly articulate complex technical concepts. A team-oriented mindset with the ability to work effectively both collaboratively and independently. A Bachelor’s degree in Computer Science or Computer Engineering, or equivalent practical work experience. Additional Information We offer a flexible working mode in Chennai #LI-hybrid Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion At NIQ, we are steadfast in our commitment to fostering an inclusive workplace that mirrors the rich diversity of the communities and markets we serve.
We believe that embracing a wide range of perspectives drives innovation and excellence. All employment decisions at NIQ are made without regard to race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, genetic information, marital status, veteran status, or any other characteristic protected by applicable laws. We invite individuals who share our dedication to inclusivity and equity to join us in making a meaningful impact. To learn more about our ongoing efforts in diversity and inclusion, please visit https://nielseniq.com/global/en/news-center/diversity-inclusion.
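The SRE principles this role mentions (SLOs and error budgets) reduce to simple arithmetic: the budget is whatever fraction of the window the SLO allows to fail. A minimal sketch, assuming a 30-day rolling window:

```python
# Error budget from an availability SLO: a 99.9% target over a 30-day
# window leaves 0.1% of that window as the allowance for downtime.
def error_budget_minutes(slo, window_days=30):
    total_minutes = window_days * 24 * 60  # 43,200 minutes in 30 days
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)  # about 43.2 minutes per 30 days
```

Teams then spend that budget deliberately: if little is consumed, they can ship faster; if it is exhausted, they pause risky releases until reliability recovers.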

Posted 2 weeks ago

Apply

5.0 years

3 - 4 Lacs

calcutta

On-site

Job Information Number of Positions 4 Title DevOps Engineer Date Opened 08/20/2025 Job Type Full time Industry Technology Work Experience 5+ Years City Kolkata State/Province West Bengal Country India Zip/Postal Code 700010 Job Description A. Job Purpose The Cloud Engineer–DevOps designs, sets up, and operates scalable cloud infrastructure and CI/CD pipelines. This position maintains high availability, security, and automation in development and production environments while working with cross-functional teams to streamline deployment workflows. B. Duties and Responsibilities Design, deploy, and operate cloud infrastructure (AWS/Azure/GCP) with IaC (Terraform, CloudFormation). Manage container orchestration (Docker, Kubernetes, ECS/EKS). Monitor and optimize AWS infrastructure for performance, cost, and security, implementing automated monitoring and alerting systems. Analyze manual processes to identify automation opportunities and improve operational efficiency. Create and execute scalable continuous integration and continuous delivery pipelines (Jenkins, GitHub Actions, GitLab CI/CD, CodePipeline, CodeBuild). Streamline build, test, and deployment procedures for various applications and environments. Deploy, maintain, and manage AWS production systems, ensuring availability, reliability, security, and scalability. Develop and maintain RESTful APIs and microservices using high-level programming languages such as Python, Java, or Node.js. Work with relational (e.g., MSSQL, PostgreSQL, RDS) and non-relational (e.g., MongoDB, ElastiCache) databases. Collaborate with software developers, designers, product managers, and other stakeholders to translate requirements into technical solutions. Debug and resolve technical issues related to DevOps processes, infrastructure, and application performance. Identify and address bottlenecks, performance issues, and security vulnerabilities, recommending effective solutions. C.
Qualifications and Work Experience Bachelor's degree in Computer Science, Information Technology, or related field. 5+ years of experience in Cloud Engineer–DevOps, DevOps, SRE, or cloud engineering roles. AWS Certified Developer – Associate or AWS Certified DevOps Engineer – Professional certification (preferred). Hands-on containerization (e.g., Docker, ECS, EKS) and scripting languages (e.g., Python, Bash) experience. Proven experience deploying CI/CD pipelines into production environments. D. Essential Skills Technical: Excellent skills in cloud services, Infrastructure as Code tools, CI/CD tools, Docker, and Kubernetes. Advanced scripting (Python/Bash) and automation capabilities. Networking, security, and database management knowledge. Good troubleshooting and debugging skills in Linux environments. Good communication, documentation, and collaboration skills. Knowledge of monitoring and observability tools (Prometheus, Grafana, ELK Stack). Professional Skills: Good communication and collaboration skills across technical and non-technical teams. Capacity to work independently and handle multiple projects at once. Strategic understanding of compliance requirements and security best practices. Leadership and mentoring abilities. Requirements Must Have Skills: Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, CloudWatch, ALB/NLB, etc.)
Expertise in Kubernetes (EKS) – cluster setup, scaling, upgrades, and troubleshooting Proficiency in Python scripting/automation (boto3, automation frameworks) Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, AWS CodePipeline) Infrastructure as Code ( Terraform / CloudFormation ) Strong Linux administration and shell scripting skills GitOps (ArgoCD, Flux) Monitoring & logging (Prometheus, Grafana, ELK/EFK, CloudWatch) Service mesh (Istio, Linkerd) Security best practices (IAM, Secrets Management, Vulnerability Scanning) Soft Skills: Problem-solving & troubleshooting ability Team collaboration & communication skills Good documentation practices Benefits Cutting-edge Technology Exposure – Hands-on with AWS, Kubernetes (EKS), Python automation, and AI-driven DevOps solutions (e.g., AI-based monitoring, auto-healing, and optimization in EKS). Career Growth & Certifications – Support for AWS/Kubernetes certifications, training programs, and continuous learning. Challenging Projects – Opportunity to design and manage scalable, secure, and high-performance cloud + AI-powered DevOps solutions. Collaborative Culture – Work with a skilled team of cloud and DevOps engineers in a knowledge-sharing, growth-oriented environment. Job Stability & Recognition – Be part of a fast-growing cloud services company with enterprise clients and next-gen technology adoption.
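The GitOps tooling listed above (ArgoCD, Flux) is built around one idea: continuously reconcile the desired state declared in Git with the live cluster state. A toy sketch of the diffing step at the heart of that loop, with hypothetical resource names and fields (not the actual Argo CD or Flux data model):

```python
# GitOps-style state diff: compare desired state (from Git) against live
# state (from the cluster) and report what to create, update, or delete.
def diff_state(desired, live):
    to_create = sorted(set(desired) - set(live))
    to_delete = sorted(set(live) - set(desired))
    to_update = sorted(k for k in desired.keys() & live.keys()
                       if desired[k] != live[k])
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Hypothetical deployments keyed by name; specs are simplified to replicas.
desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
live = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
plan = diff_state(desired, live)
```

Real controllers re-run this comparison on every sync interval and apply the resulting plan, which is what makes the Git repository, not kubectl sessions, the source of truth.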

Posted 2 weeks ago

Apply

0 years

0 Lacs

gurugram, haryana, india

On-site

CI/CD (Continuous Integration/Delivery/Deployment) The Core Requirements For The Job Include The Following Tools: Jenkins, GitHub Actions, GitLab CI, CircleCI, ArgoCD, Spinnaker. Concepts: Pipeline design (build, test, deploy), Blue-green/canary deployments, Rollbacks and artifact versioning, GitOps practices. Infrastructure As Code (IaC) Tools: Terraform, Pulumi, AWS CloudFormation, Ansible, Helm. Skills: Writing modular IaC code. Secret and state management. Policy enforcement (OPA, Sentinel). DRY patterns and IaC testing (e.g., Terratest). Cloud Platforms Platforms: AWS, Azure, GCP, OCI. Skills: VPC/networking setup, IAM policies, Managed services (EKS, GKE, AKS, RDS, Lambda), Billing, cost control, tagging governance, Cloud automation with CLI/SDKs. Containerization And Orchestration Tools: Docker, Podman, Kubernetes, OpenShift. Skills: Dockerfile optimization, multi-stage builds, Helm charts, Kustomize, K8s RBAC, admission controllers, pod security policies, Service mesh (Istio, Linkerd). Security And Compliance Tools: HashiCorp Vault, AWS Secrets Manager, Aqua, Snyk. Practices: Image scanning and runtime protection, Least privilege access models, Network policies, TLS enforcement, Audit logging, and compliance automation. Observability And Monitoring Tools: Prometheus, Grafana, ELK stack, Datadog, New Relic. Skills: Metrics, tracing, log aggregation, alerting thresholds and SLOs, Distributed tracing (Jaeger, OpenTelemetry). Reliability And Resilience Engineering Concepts and Tools: SRE practices, error budgets, Chaos engineering (Gremlin, LitmusChaos), Auto-scaling, self-healing infrastructure, Service Level Objectives (SLO/SLI) Platform Engineering (DevEx Focused) Tools: Backstage, Internal Developer Portals, Terraform Cloud. Practices: Golden paths and reusable blueprints, Self-service pipelines, Developer onboarding automation, Platform as a Product mindset. Source Control And Collaboration Tools: Git, Bitbucket, GitHub, GitLab.
Practices: Branching strategies (Git Flow, trunk-based), Code reviews, merge policies, commit signing, and DCO enforcement. Scripting And Automation Languages: Bash, Python, Go, PowerShell. Skills: Writing CLI tools, Cron jobs and job runners, ChatOps and automation bots (Slack, MS Teams). This job was posted by Bhavya Chauhan from CloudTechner.
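The blue-green/canary concepts in the CI/CD list above ultimately come down to a promote-or-roll-back decision. A minimal sketch of canary analysis with illustrative thresholds; real tools such as Argo Rollouts or Spinnaker use richer statistical checks, and the numbers here are invented:

```python
# Canary analysis sketch: promote the canary only if its error rate stays
# within a tolerance of the stable baseline, and only once it has seen
# enough traffic to make the comparison meaningful.
def should_promote(stable_errors, stable_total, canary_errors, canary_total,
                   max_ratio=1.5, min_requests=100):
    if canary_total < min_requests:
        return False  # not enough canary traffic to judge yet
    stable_rate = stable_errors / stable_total
    canary_rate = canary_errors / canary_total
    return canary_rate <= stable_rate * max_ratio

# Stable fleet: 10 errors in 10,000 requests (0.1% baseline).
ok = should_promote(10, 10_000, 1, 1_000)    # canary at 0.1%: promote
bad = should_promote(10, 10_000, 20, 1_000)  # canary at 2%: roll back
```

The same predicate works for blue-green cutovers; the difference is only that traffic shifts all at once after the check instead of gradually.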

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

chennai, tamil nadu, india

On-site

We are hiring for Cloud Solutions. Role Description Required Skills & Qualifications: Technical Expertise: Advanced knowledge of Ansible and Terraform for IaC implementation. Strong experience with CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Proficiency in scripting languages like Python, Bash, or PowerShell. Experience with VMware automation platforms and the Aria stack to support cloud and virtualization initiatives. Experience working with Nutanix environments (AHV, Prism, Calm) for infrastructure automation. Familiarity with version control systems like Git and GitOps workflows. Hands-on experience with at least one major cloud hyperscaler, such as OCI. Solid understanding and administration of containerization tools like Docker and orchestration platforms such as Red Hat OpenShift. Experience: Minimum 3 years of experience in infrastructure automation on the VMware Private Cloud stack or the OCI public cloud platform, cloud engineering, or DevOps roles. Proven experience in deploying and managing IaC at scale with Terraform and configuration management using Ansible. Certifications: Certifications in Terraform (HashiCorp Certified: Terraform Associate), OCI, or Kubernetes (CKA/CKAD/OpenShift Specialist EX280). VMware certifications. Nutanix Certified Associate (NCA) and Nutanix Certified Professional - Multicloud Infrastructure (NCP-MCI). Share your CV at sanskriti@far1.tech or WhatsApp at +917563840512.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As an experienced DevOps Engineer joining our development team, you will play a crucial role in the evolution of our Platform Orchestration product. Your expertise will be utilized to work on software incorporating cutting-edge technologies and integration frameworks. At our organization, we prioritize staff training, investment, and career growth, ensuring you have the opportunity to enhance your skills and experience through exposure to various software validation techniques and industry-standard engineering processes. Your contributions will include building and maintaining CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices, managing Kubernetes infrastructure (specifically AWS EKS), Helm charts, and service mesh configurations (Istio). You will utilize tools like kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting. Evaluating the security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software will be a key responsibility. Supporting development and QA teams with code merge, build, install, and deployment environments, you will ensure continuous improvement of the software automation pipeline to enhance build and integration efficiency. Additionally, overseeing and maintaining the health of software repositories and build tools, ensuring successful and continuous software builds will be part of your role. Verifying final software release configurations against specifications, architecture, and documentation, as well as performing fulfillment and release activities for timely and reliable deployments, will also fall within your purview. To thrive in this role, we are seeking candidates with a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, coupled with 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms.
Deep knowledge of AWS Cloud Services, including EKS, IAM, CloudWatch, S3, and Secrets Manager, is essential. Your expertise with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize, along with experience in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools, will be beneficial. Proficiency in scripting/programming languages such as Ruby, Groovy, and Java is desired, as well as experience with infrastructure provisioning tools like Docker and CloudFormation. In return, we offer an inclusive culture that reflects our core values, providing you with the opportunity to make an impact, develop professionally, and participate in valuable learning experiences. You will benefit from highly competitive compensation, benefits, and rewards programs that recognize and encourage your best work every day. Our engaging work environment promotes work/life balance, offers employee resource groups and social events to foster interaction and camaraderie. Join us in shaping the future of our Platform Orchestration product and growing your skills in a supportive and dynamic team environment.
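The real-time workload inspection mentioned above (kubectl, Lens, or other dashboards) boils down to classifying pod status. A toy summary function over a tiny, invented subset of the pod-status shape (not the actual Kubernetes API schema), of the kind one might script while troubleshooting:

```python
# Summarize pod health the way one would eyeball `kubectl get pods`:
# count pods that are fully ready, still coming up, or failed.
def summarize_pods(pods):
    summary = {"healthy": 0, "pending": 0, "failed": 0}
    for pod in pods:
        phase = pod.get("phase")
        if phase == "Running" and pod.get("ready") and all(pod["ready"]):
            summary["healthy"] += 1
        elif phase in ("Pending", "Running"):
            # Pending, or Running with some containers not yet ready.
            summary["pending"] += 1
        else:
            summary["failed"] += 1
    return summary

# Illustrative records: phase plus per-container readiness flags.
pods = [
    {"phase": "Running", "ready": [True, True]},
    {"phase": "Running", "ready": [True, False]},
    {"phase": "Pending", "ready": []},
    {"phase": "Failed", "ready": []},
]
summary = summarize_pods(pods)
```

In practice the input would come from `kubectl get pods -o json` (phase lives under `status.phase`, readiness under `status.containerStatuses`), but the classification logic is the same.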

Posted 2 weeks ago

Apply

0 years

0 Lacs

pune, maharashtra, india

On-site

Join us as a DevOps Engineer at Barclays, where you will be responsible for supporting the successful delivery of location strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our API First digital strategy, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure APIs, ensuring seamless delivery of our digital solutions. To be successful as a DevOps Engineer you should have experience with: Proficiency in implementing and maintaining CI/CD pipelines using Jenkins, GitLab CI, or similar tools Strong hands-on experience with Git version control and Git workflows Experience with Helm for Kubernetes application packaging and deployment on OpenShift Solid understanding of infrastructure as code using tools Practical knowledge of branching strategies, pull request workflows, and code review processes Experience deploying and managing applications in cloud environments with focus on OpenShift Working knowledge of scripting languages such as Python, Bash, or PowerShell for automation Experience implementing and maintaining monitoring and observability solutions Understanding of networking concepts including subnets, routing, load balancing, and security groups Knowledge of security best practices for cloud and container environments Ability to troubleshoot and resolve infrastructure and deployment issues Experience working in Agile development environments with focus on DevOps practices Understanding of microservices architecture and related deployment patterns Good knowledge of logging, metrics collection, and alerting systems Some Other Highly Valued Skills May Include Experience with advanced Git operations and repository management Knowledge of GitOps workflows and tools Practical experience with API ecosystem components including gateways, service meshes, and proxies Understanding of database administration and data persistence in 
containerized environments Familiarity with compliance and security scanning tools (SonarQube, Veracode) Understanding of high availability and disaster recovery concepts Experience with performance testing and optimization at infrastructure level Familiarity with event-driven architectures and message brokers Knowledge of ITIL-based change management processes You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune. Purpose of the role To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities Development and delivery of high-quality software solutions by using industry aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. Collaboration with peers, participate in code reviews, and promote a culture of code quality and knowledge sharing. Stay informed of industry technology trends and innovations and actively contribute to the organization’s technology communities to foster a culture of technical excellence and growth. Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. 
Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes. Consult on complex issues; providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business aligned support areas to keep up to speed with business activity and the business strategy.
Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practises (in other areas, teams, companies, etc).to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Software Engineer within the Release Engineering team at Sumo Logic, your primary responsibility will be to develop and maintain automated tooling for the release processes of all services. You will play a crucial role in establishing automated delivery pipelines to enable autonomous teams to create independently deployable services. Your contribution is essential in enhancing software delivery and advancing Sumo Logic's internal Platform-as-a-Service strategy.

Your key responsibilities will include:
- Taking ownership of the delivery pipeline and release automation framework for all Sumo services
- Collaborating with teams during design and development phases to ensure best practices are followed
- Mentoring a team of Engineers across different seniority levels and enhancing software development processes
- Evaluating, testing, and providing technology and design recommendations to executives
- Creating detailed design documents and system implementation documentation
- Ensuring engineering teams are equipped to deliver high-quality software efficiently and reliably
- Improving and maintaining infrastructure and tooling for development, testing, and debugging

Qualifications required:
- Bachelor's or Master's degree in Computer Science or a related discipline
- Strong influencing skills to guide architectural decisions effectively
- Collaborative working style to make informed decisions with other engineers
- Bias towards action to drive progress and enable forward momentum
- Flexibility and willingness to adapt, learn, and evolve with changing requirements

Technical skills needed:
- 4+ years of experience in designing, developing, and using release automation tooling, DevOps, CI/CD, etc.
- 2+ years of software development experience in Java, Scala, Golang, or similar languages
- 3+ years of experience with software delivery technologies like Jenkins, including developing CI/CD pipelines and knowledge of build tools like make, Gradle, npm, etc.
- Proficiency in cloud technologies such as AWS, Azure, GCP
- Familiarity with Infrastructure-as-Code and tools like Terraform
- Expertise in scripting languages such as Groovy, Python, Bash, etc.
- Knowledge of monitoring tools like Prometheus, Grafana, or similar solutions
- Understanding of GitOps and ArgoCD concepts/workflows
- Awareness of the security and compliance aspects of DevSecOps

Sumo Logic, Inc. is dedicated to empowering individuals who drive modern digital businesses. Through the Sumo Logic SaaS Analytics Log Platform, we support customers in delivering reliable and secure cloud-native applications. Our platform assists practitioners and developers in ensuring application reliability, security against modern threats, and gaining insights into cloud infrastructures. Customers globally rely on Sumo Logic for real-time analytics and insights across observability and security solutions.

Please visit www.sumologic.com for more information about Sumo Logic and its offerings.
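Release automation tooling of the kind this role describes usually includes version management for tagged artifacts. A minimal sketch in Python (a hypothetical helper, not Sumo Logic's actual tooling) of semantic-version bumping:

```python
def bump_version(version: str, part: str) -> str:
    """Return the next semantic version, resetting the lower components.

    `part` is one of "major", "minor", "patch".
    """
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown version part: {part!r}")
```

A release pipeline might call this after a successful build, e.g. `bump_version("1.4.2", "minor")` yields `"1.5.0"` for the next release tag.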

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

gurgaon, haryana, india

On-site

Who We Are
BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world's 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.

What You'll Do
BCG X develops innovative, AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components and single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally.
This Will Include
Serving as a leader within BCG X, and specifically the KEY Impact Management by BCG X Tribe (Transformation and Post-Merger-Integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap, making architectural decisions, and mentoring engineers
Influencing and serving as a key decision maker in BCG X technology selection & strategy
An active "hands-on" role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe
Responsibility for the technology roadmap of existing and new components delivered
Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited for the goals, including open source (e.g. Node, Django, Flask, Python) where needed

What You'll Bring
8+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally an agile environment) with exposure to a variety of technologies and solutions, including at least 5 years' experience in an Architect role.
Experience with a wide range of application and data architectures, platforms, and tools, including: Service-Oriented Architecture, Clean Architecture, Software as a Service, Web Services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, data science experience, etc.
Thoroughly up to date in technology:
Modern cloud architectures, including AWS, Azure, GCP, Kubernetes
Very strong particularly in .NET, C#, MS SQL Server, and Angular technologies
Open-source stacks including NodeJs, React, Angular, and Flask are good to have
CI/CD / DevSecOps / GitOps toolchains and development approaches
Knowledge of machine learning & AI frameworks
Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow

At least a Bachelor's degree; a Master's degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Cares deeply about product quality, reliability, and scalability
Passion for the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements. #BCGXjob

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

greater kolkata area

On-site

Job Description
Job Purpose
The Cloud Engineer–DevOps designs, sets up, and operates scalable cloud infrastructure and CI/CD pipelines. This position maintains high availability, security, and automation in development and production environments while working with cross-functional teams to streamline deployment workflows.

Duties and Responsibilities
Design, deploy, and operate cloud infrastructure (AWS/Azure/GCP) with IaC (Terraform, CloudFormation).
Manage container orchestration (Docker, Kubernetes, ECS/EKS).
Monitor and optimize AWS infrastructure for performance, cost, and security, implementing automated monitoring and alerting systems.
Analyze manual processes to identify automation opportunities and improve operational efficiency.
Create and execute scalable continuous integration and continuous delivery pipelines (Jenkins, GitHub Actions, GitLab CI/CD, CodePipeline, CodeBuild).
Streamline build, test, and deployment procedures for various applications and environments.
Deploy, maintain, and manage AWS production systems, ensuring availability, reliability, security, and scalability.
Develop and maintain RESTful APIs and microservices using high-level programming languages such as Python, Java, or Node.js.
Work with relational (e.g., MSSQL, PostgreSQL, RDS) and non-relational (e.g., MongoDB, ElastiCache) databases.
Collaborate with software developers, designers, product managers, and other stakeholders to translate requirements into technical solutions.
Debug and resolve technical issues related to DevOps processes, infrastructure, and application performance.
Identify and address bottlenecks, performance issues, and security vulnerabilities, recommending effective solutions.

Qualifications and Work Experience
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of experience in Cloud Engineer–DevOps, DevOps, SRE, or cloud engineering roles.
AWS Certified Developer – Associate or AWS Certified DevOps Engineer – Professional certification (preferred).
Hands-on experience with containerization (e.g., Docker, ECS, EKS) and scripting languages (e.g., Python, Bash).
Proven experience deploying CI/CD pipelines into production environments.

Essential Skills
Technical
Excellent skills in cloud services, Infrastructure as Code tools, CI/CD tools, Docker, and Kubernetes.
Advanced scripting (Python/Bash) and automation capabilities.
Networking, security, and database management knowledge.
Good troubleshooting and debugging skills in Linux environments.
Good communication, documentation, and collaboration skills.
Knowledge of monitoring and observability tools (Prometheus, Grafana, ELK Stack).

Professional Skills
Good communication and collaboration skills with technical and non-technical teams.
Capacity to work independently and handle multiple projects at once.
Strategic understanding of compliance requirements and security best practices.
Leadership and mentoring abilities.

Requirements
Must Have Skills:
Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, CloudWatch, ALB/NLB, etc.)
Expertise in Kubernetes (EKS) – cluster setup, scaling, upgrades, and troubleshooting
Proficiency in Python scripting/automation (boto3, automation frameworks)
Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, AWS CodePipeline)
Infrastructure as Code (Terraform / CloudFormation)
Strong Linux administration and shell scripting skills
GitOps (ArgoCD, Flux)
Monitoring & logging (Prometheus, Grafana, ELK/EFK, CloudWatch)
Service mesh (Istio, Linkerd)
Security best practices (IAM, Secrets Management, Vulnerability Scanning)

Soft Skills
Problem-solving & troubleshooting ability
Team collaboration & communication skills
Good documentation practices

Benefits
Cutting-edge Technology Exposure – Hands-on with AWS, Kubernetes (EKS), Python automation, and AI-driven DevOps solutions (e.g., AI-based monitoring, auto-healing, and optimization in EKS).
Career Growth & Certifications – Support for AWS/Kubernetes certifications, training programs, and continuous learning.
Challenging Projects – Opportunity to design and manage scalable, secure, and high-performance cloud + AI-powered DevOps solutions.
Collaborative Culture – Work with a skilled team of cloud and DevOps engineers in a knowledge-sharing, growth-oriented environment.
Job Stability & Recognition – Be part of a fast-growing cloud services company with enterprise clients and next-gen technology adoption.
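The Python/boto3 automation and tagging-governance work this role calls for often boils down to auditing API responses. A minimal sketch (the audit function and sample data are hypothetical; only the `Reservations` shape follows boto3's documented EC2 `describe_instances()` response) that flags instances missing required cost-allocation tags:

```python
def untagged_instances(reservations, required_tags):
    """Return IDs of instances missing any of the required tag keys.

    `reservations` mirrors the shape of boto3's
    ec2.describe_instances()["Reservations"] response.
    """
    offenders = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            tag_keys = {t["Key"] for t in instance.get("Tags", [])}
            if not set(required_tags) <= tag_keys:
                offenders.append(instance["InstanceId"])
    return offenders
```

In practice the reservations list would come from a paginated `describe_instances` call, and the offender list would feed an alert or a remediation job.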

Posted 2 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Join Aristocrat as a DevOps Engineer - Technical Lead and be a part of an exceptionally skilled team that's crafting the future of gaming technology! At Aristocrat, we are dedicated to bringing happiness to life through the power of play. Our mission drives us to innovate, collaborate, and deliver world-class experiences for our customers and players. As a DevOps Engineer - Technical Lead, you will play a crucial role in our organization, ensuring flawless operations and seamless deployments across various platforms. This is an outstanding opportunity to work with ground-breaking technology and a collaborative team that values excellence and creativity.

What You'll Do
Take care of the GCP, AWS, and Azure cloud infrastructure, including provisioning, alerting, and monitoring.
Build and manage private networks, establishing robust networking solutions.
Handle firewalls and VPN tunnels to ensure secure communications.
Design and document processes for versioning, deployment, and code migration between environments.
Apply your excellent knowledge of Docker, Kubernetes, Terraform, and Ansible.
Apply your scripting skills in Python and Shell/Bash, and CI/CD tools like GitOps, Jenkins Pipelines/Groovy, and Azure Pipelines.
Bring strong experience in Linux OS to the table.
Provide 24x7 production support (L2/L3) and ensure seamless server, storage, and network operations.
Use monitoring tools like Grafana, Prometheus, and Datadog to maintain system health.
Leverage logging tools such as Coralogix, ELK, and Splunk for effective troubleshooting.
Apply intermediate experience with VMware.
Use JIRA/Confluence or other defect tracking/wiki systems to keep projects on track.
(Good to have) Experience with Istio or a service mesh.
Collaborate with a geographically dispersed team and quickly grasp functional aspects with minimal mentorship.

What We're Looking For
6+ years of proven experience in DevOps or related fields.
Strong analytical and creative problem-solving skills.
Ability to challenge the status quo and suggest improvements.
Demonstrates a very high level of accuracy and attention to detail.
Strong interpersonal skills and ability to work within a team.
Ability to drive discussions towards successful conclusions.
Articulate and able to express ideas and issues clearly without inhibitions.

Join us at Aristocrat and contribute to crafting ambitious, world-class gaming experiences. Let's bring happiness to life together!

Why Aristocrat
Aristocrat is a world leader in gaming content and technology, and a top-tier publisher of free-to-play mobile games. We deliver great performance for our B2B customers and bring joy to the lives of the millions of people who love to play our casino and mobile games. And while we focus on fun, we never forget our responsibilities. We strive to lead the way in responsible gameplay, and to lift the bar in company governance, employee wellbeing and sustainability. We're a diverse business united by shared values and an inspiring mission to bring joy to life through the power of play. We aim to create an environment where individual differences are valued, and all employees have the opportunity to realize their potential. We welcome and encourage applications from all people regardless of age, gender, race, ethnicity, cultural background, disability status or LGBTQ+ identity. EEO M/F/D/V

World Leader in Gaming Entertainment
Robust benefits package
Global career opportunities

Our Values
All about the Player
Talent Unleashed
Collective Brilliance
Good Business Good Citizen

Travel Expectations
None

Additional Information
Depending on the nature of your role, you may be required to register with the Nevada Gaming Control Board (NGCB) and/or other gaming jurisdictions in which we operate. At this time, we are unable to sponsor work visas for this position.
Candidates must be authorized to work in the job posting location for this position on a full-time basis without the need for current or future visa sponsorship.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Company Description
R25_0009780 NIQ, a leader in understanding consumer buying behavior, is looking for a Senior Platform Engineer to join our Enterprise Platform Engineering team in Chennai, India. You'll be a crucial part of our mission to deliver world-class corporate technologies for over 30,000 global employees, supporting a wide range of critical services, including our AI/ML initiatives. This is a full-time position where you'll play a key role in a highly skilled team. You will design, build, and maintain the core frameworks and platforms that power NIQ, working with a diverse and cutting-edge tech stack.

Job Description
Design and architect scalable, resilient platforms that empower other engineering teams to confidently deploy and run their services.
Build and maintain robust platform frameworks that support various engineering needs, including data science and machine learning workflows.
Collaborate closely with application development, data science, and site reliability engineering (SRE) teams to deliver effective solutions.
Deepen your expertise in core platform technologies like Kubernetes (EKS, AKS), Helm, Terraform, and GitOps tools (ArgoCD).
Ensure seamless deployment and operation of platforms by building and maintaining CI/CD pipelines (GitLab, Azure Pipelines).
Proactively monitor, analyze, and optimize system performance and security using tools like Prometheus, Grafana, and Datadog.
Continuously improve platform reliability, scalability, and availability by applying SRE principles like SLOs and error budgets.
Create and maintain comprehensive documentation for all platforms and frameworks.

Qualifications
5+ years of experience in software development or DevOps, with at least 2 years specifically in platform engineering.
Strong hands-on experience with Docker, Kubernetes, and GitOps tooling.
Proficiency in cloud platforms, with experience in AWS and Azure.
Familiarity with monitoring and observability tools like Coralogix, Prometheus, Grafana, Datadog, or OpenTelemetry.
Solid understanding of CI/CD pipelines leveraging tools like GitLab or GitHub Actions.
Experience with Infrastructure as Code (Terraform, Ansible) and configuration management tools.
Knowledge of networking concepts (TCP/IP, DNS, REST APIs) and API management tools.
Experience with building and managing ML platforms is a plus, specifically with tools like MLflow or Kubeflow.
Proficiency in scripting languages like Python or Bash.
Excellent communication skills, both verbal and written, with the ability to clearly articulate complex technical concepts.
A team-oriented mindset with the ability to work effectively both collaboratively and independently.
A Bachelor's degree in Computer Science or Computer Engineering, or equivalent practical work experience.

Additional Information
We offer a flexible working mode in Chennai.

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
At NIQ, we are steadfast in our commitment to fostering an inclusive workplace that mirrors the rich diversity of the communities and markets we serve.
We believe that embracing a wide range of perspectives drives innovation and excellence. All employment decisions at NIQ are made without regard to race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, genetic information, marital status, veteran status, or any other characteristic protected by applicable laws. We invite individuals who share our dedication to inclusivity and equity to join us in making a meaningful impact. To learn more about our ongoing efforts in diversity and inclusion, please visit https://nielseniq.com/global/en/news-center/diversity-inclusion
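The SRE principles this posting names, SLOs and error budgets, reduce to simple arithmetic: an availability target implies an allowed number of failures per window, and the budget is how much of that allowance remains. A minimal sketch (a hypothetical helper, not NIQ's tooling):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of an SLO error budget still unspent in a window.

    slo: availability target, e.g. 0.999 for "three nines".
    A 99% SLO over 10,000 requests allows 100 failures; 25 failures
    leaves 75% of the budget.
    """
    allowed_failures = (1.0 - slo) * total_requests
    if allowed_failures == 0:
        # A 100% SLO has no budget: any failure exhausts it.
        return 0.0 if failed_requests else 1.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)
```

A platform team would typically feed counts like these from Prometheus into alerting rules, paging when the remaining budget burns down too fast.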

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

kochi, kerala, india

On-site

Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your Role And Responsibilities
We are seeking a DevOps Automation Engineer with strong experience in Kubernetes/OpenShift, ArgoCD, CI/CD automation, cloud infrastructure, and scripting. You will be instrumental in building and maintaining scalable, secure, and automated deployment pipelines, while working closely with development and operations teams to streamline the software delivery process.
Design, develop, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, Argo CD, and/or Tekton to enable robust, scalable, and secure delivery processes.
Manage and optimize container orchestration platforms including Kubernetes and Red Hat OpenShift, with hands-on expertise in deploying, scaling, and troubleshooting workloads.
Lead GitOps-based deployments using ArgoCD for declarative, version-controlled infrastructure and application delivery.
Work across cloud environments such as IBM, AWS, and Azure to support hybrid and multi-cloud deployments.
Implement and maintain infrastructure provisioning and automation using Terraform, Ansible, and Helm, ensuring consistency and reliability.

Required Technical And Professional Expertise
5+ years of experience.
Proven experience in designing and supporting CI/CD pipelines and DevOps workflows.
Strong understanding of Kubernetes and/or OpenShift operations and troubleshooting.
Hands-on experience with cloud platforms (IBM, AWS, Azure).
Proficiency with IaC tools (Terraform, Ansible, Helm) and configuration management practices.
Strong grasp of automation best practices, GitOps methodologies, and modern DevOps principles.
Excellent collaboration skills, with the ability to work across development, operations, and product teams to deliver resilient infrastructure and streamlined delivery pipelines.

Preferred Technical And Professional Experience
Certifications in Kubernetes (CKA/CKAD), AWS, or Red Hat OpenShift.
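The GitOps deployments this role describes rest on one core operation: diffing the desired state in Git against the live state in the cluster, and syncing when they differ. A toy Python sketch of that comparison (illustrative only; ArgoCD's real diffing is far more involved):

```python
def detect_drift(desired: dict, live: dict, prefix: str = "") -> list:
    """List dotted paths where the live state differs from the desired state.

    A toy version of the desired-vs-live diff a GitOps controller
    such as ArgoCD performs before deciding whether to sync.
    """
    drift = []
    for key, want in desired.items():
        path = f"{prefix}{key}"
        have = live.get(key)
        if isinstance(want, dict) and isinstance(have, dict):
            # Recurse into nested objects, extending the dotted path.
            drift.extend(detect_drift(want, have, prefix=f"{path}."))
        elif have != want:
            drift.append(path)
    return drift
```

For example, if Git declares `replicas: 3` but the cluster reports `replicas: 2`, the function returns `["spec.replicas"]`, which is the signal to reconcile.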

Posted 2 weeks ago

Apply

0 years

0 Lacs

gurgaon, haryana, india

On-site

CI/CD (Continuous Integration/Delivery/Deployment)
The core requirements for the job include the following:
Tools: Jenkins, GitHub Actions, GitLab CI, CircleCI, ArgoCD, Spinnaker.
Concepts: Pipeline design (build, test, deploy), blue-green / canary deployments, rollbacks and artifact versioning, GitOps practices.

Infrastructure as Code (IaC)
Tools: Terraform, Pulumi, AWS CloudFormation, Ansible, Helm.
Skills: Writing modular IaC code. Secret and state management. Policy enforcement (OPA, Sentinel). DRY patterns and IaC testing (e.g., Terratest).

Cloud Platforms
Platforms: AWS, Azure, GCP, OCI.
Skills: VPC/networking setup, IAM policies, managed services (EKS, GKE, AKS, RDS, Lambda), billing, cost control, tagging governance, cloud automation with CLI/SDKs.

Containerization and Orchestration
Tools: Docker, Podman, Kubernetes, OpenShift.
Skills: Dockerfile optimization, multi-stage builds, Helm charts, Kustomize, K8s RBAC, admission controllers, pod security policies, service mesh (Istio, Linkerd).

Security and Compliance
Tools: HashiCorp Vault, AWS Secrets Manager, Aqua, Snyk.
Practices: Image scanning and runtime protection, least-privilege access models, network policies, TLS enforcement, audit logging, and compliance automation.

Observability and Monitoring
Tools: Prometheus, Grafana, ELK stack, Datadog, New Relic.
Skills: Metrics, tracing, log aggregation, alerting thresholds and SLOs, distributed tracing (Jaeger, OpenTelemetry).

Reliability and Resilience Engineering
Concepts and Tools: SRE practices, error budgets, chaos engineering (Gremlin, LitmusChaos), auto-scaling, self-healing infrastructure, Service Level Objectives (SLO/SLI).

Platform Engineering (DevEx Focused)
Tools: Backstage, Internal Developer Portals, Terraform Cloud.
Practices: Golden paths and reusable blueprints, self-service pipelines, developer onboarding automation, a Platform as a Product mindset.

Source Control and Collaboration
Tools: Git, Bitbucket, GitHub, GitLab.
Practices: Branching strategies (Git Flow, trunk-based), code reviews, merge policies, commit signing, and DCO enforcement.

Scripting and Automation
Languages: Bash, Python, Go, PowerShell.
Skills: Writing CLI tools, cron jobs and job runners, ChatOps and automation bots (Slack, MS Teams).

This job was posted by Bhavya Chauhan from CloudTechner.
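Among the CI/CD concepts listed above, a canary deployment shifts traffic to the new version in progressively larger steps, checking health before each increase. A minimal Python sketch of the step schedule (a hypothetical helper; real tools like ArgoCD or Spinnaker drive this from declarative config):

```python
def canary_steps(start: int = 5, factor: int = 2, limit: int = 100):
    """Yield traffic percentages for a progressive canary rollout.

    The canary's share of traffic grows by `factor` each step until it
    reaches `limit` (full cutover). A rollback simply stops iterating
    and resets the canary's share to zero.
    """
    pct = start
    while pct < limit:
        yield pct
        pct = min(pct * factor, limit)
    yield limit
```

With the defaults this produces the schedule 5% → 10% → 20% → 40% → 80% → 100%; between steps, a pipeline would evaluate error-rate metrics and roll back if they regress.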

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

pune, maharashtra, india

Remote

About Prismforce
Prismforce is a Vertical SaaS company revolutionizing the Talent Supply Chain for global Technology, R&D/Engineering, and IT Services companies. Our AI-powered product suite enhances business performance by enabling operational flexibility, accelerating decision-making, and boosting profitability. Our mission is to become the leading industry cloud/SaaS platform for tech services and talent organizations worldwide. We're hiring a DevOps Lead with 10+ years of experience to lead and scale our DevOps/Platform Engineering team (currently 5 engineers). This is a hands-on leadership role where you'll drive infrastructure strategy, build a secure and scalable platform, and enable rapid product delivery.

Job Description
Role: DevOps Lead
Reporting to: Sr VP Technology
Location: Mumbai/Bangalore/Pune/Kolkata

What You'll Do
Lead the design and evolution of our AWS-first cloud infrastructure (EKS, VPC, IAM, Terraform, etc.)
Build and operate Kubernetes platforms, CI/CD pipelines, observability tooling, and internal developer workflows
Drive SRE practices: define SLAs/SLOs, set up alerting/on-call, manage incidents
Champion automation, security, and reliability across the stack
Collaborate with engineering, security, and product to support scale and delivery velocity
Contribute to our multi-cloud readiness as we grow

✅ We're Looking For
10+ years in DevOps/SRE/Infra roles, with 3+ years leading teams or projects
Deep experience with AWS, Kubernetes, CI/CD systems, and Terraform
Strong focus on security, automation, observability, and scalability
Familiarity with other clouds (GCP, Azure), GitOps (ArgoCD), FinOps, or platform engineering is a big plus
Excellent communicator and mentor — you thrive in fast-paced, collaborative environments

🌟 Why Join Us
High-impact leadership role at a pivotal growth stage
Influence our infra and engineering culture from the ground up
Work with a sharp, driven team on real-world scaling challenges
Competitive salary, meaningful equity, remote-first flexibility

What Makes Us Unique
First-Mover Advantage: We are the only Vertical SaaS product company addressing Talent Supply Chain challenges in the IT services industry.
Innovative Product Suite: Our solutions offer forward-thinking features that outshine traditional ERP systems.
Strategic Expertise: Guided by an advisory board of ex-CXOs from top global IT firms, providing unmatched industry insights.
Experienced Leadership: Our founding team brings deep expertise from leading firms like McKinsey, Deloitte, Amazon, Infosys, TCS, and Uber.
Diverse and Growing Team: We have grown to 160+ employees across India, with hubs in Mumbai, Pune, Bangalore, and Kolkata.
Strong Financial Backing: Series A-funded by Sequoia, with global IT companies using our product as a core solution.

Why Join Prismforce
Competitive Compensation: We offer an attractive salary and benefits package that rewards your contributions.
Innovative Projects: Work on pioneering projects with cutting-edge technologies transforming the Talent Supply Chain.
Collaborative Environment: Thrive in a dynamic, inclusive culture that values teamwork and innovation.
Growth Opportunities: Continuous learning and development are core to our philosophy, helping you advance your career.
Flexible Work: Enjoy flexible work arrangements that balance your work-life needs.
By joining Prismforce, you'll become part of a rapidly expanding, innovative company that's reshaping the future of tech services and talent management.

Perks & Benefits
Work with the best in the industry: a high-pedigree leadership team that will challenge you, build on your strengths, and invest in your personal development
Insurance coverage: Group Mediclaim cover for self, spouse, kids, and parents, plus a Group Term Life Insurance policy for self
Flexible policies
Retiral benefits
Hybrid work model
Self-driven career progression tool

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

pune, maharashtra, india

On-site

Hello Visionary!
We know that the only way a business thrives is if our people are growing. That's why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?

We are seeking a Full Stack Developer with hands-on experience in Angular v15+ and Go, specializing in cloud-native application development. You will design and build innovative, scalable, and secure solutions for Smart Building applications, working within a SAFe Agile environment.

You'll make a difference by:
Designing, developing, and maintaining cloud-native applications using Angular and Go.
Building responsive UIs with HTML5, CSS, JavaScript, and TypeScript.
Effectively investigating and reporting software defects and providing solutions.
Implementing automated testing (unit, integration, contract) using Jasmine/Jest, Cypress, or Playwright.
Leading and maintaining CI/CD pipelines, ensuring high code quality through Test-Driven Development (TDD).
Utilizing container technologies like Docker and orchestration tools like Kubernetes (GitOps experience is a plus).
Driving innovation by contributing new ideas, PoCs, or participating in internal hackathons.

You'll win us over by:
Holding a graduate BE / B.Tech / MCA / M.Tech / M.Sc with a good academic record.
3-5 years of experience in software development with a strong focus on Angular and Go (Golang).
Strong knowledge of object-oriented programming and modern testing practices.
Proficiency in HTML5, CSS, JavaScript, TypeScript, Node.js, SQLite.
Exposure to networking concepts, GitHub, JSON handling, REST APIs, Web of Things, and communication protocols.
Working knowledge of Angular (intermediate or above) and full-stack technologies.
Familiarity with distributed systems, message queues, and API design best practices.
Experience with observability tools for logging, monitoring, and tracing.
Passion for innovation and building quick PoCs in a startup-like environment.

Personal Attributes:
Excellent problem-solving and communication skills; able to articulate technical ideas clearly to stakeholders.
Adaptable to fast-paced environments with a solution-oriented, startup mindset.
Proactive and self-driven, with a strong sense of ownership and accountability.
Actively seeks clarification and asks questions rather than waiting for instructions.

Create a better #TomorrowWithUs!
This role, based in Pune, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow.

Find out more about Siemens careers at: www.siemens.com/careers

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies