
421 GitOps Jobs - Page 10

JobPe aggregates listings for easy access; you apply directly on the original job portal.

1.0 - 3.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Source: Naukri

Working Mode: Hybrid
Payroll: IDESLABS
Location: Pan India
PF detection is mandatory.

Job Description:

Snowflake administration experience:
- Managing user access, roles, and security protocols
- Setting up and maintaining database replication and failover procedures
- Setting up programmatic access

OpenSearch experience:
- Deploying and scaling OpenSearch domains
- Managing security and access controls
- Setting up monitoring and alerting

General AWS skills:
- Infrastructure as Code (CloudFormation)
- Experience building cloud-native infrastructure, applications, and services on AWS and Azure
- Hands-on experience managing Kubernetes clusters (administrative knowledge), ideally AWS EKS and/or Azure AKS
- Experience with Istio or other service mesh technologies
- Experience with container technology and best practices, including container and supply-chain security
- Experience with declarative infrastructure as code, with tools like Terraform and Crossplane
- Experience with GitOps, with tools like ArgoCD
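The GitOps-with-ArgoCD requirement above usually comes down to managing manifests like the following. A minimal sketch of an ArgoCD Application; the application name, repository URL, and paths are all hypothetical, not taken from the listing:

```yaml
# Hypothetical ArgoCD Application: name, repo URL, and paths are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service          # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/deploy-configs.git  # hypothetical repo
    targetRevision: main
    path: environments/prod/example-service
  destination:
    server: https://kubernetes.default.svc   # the cluster ArgoCD itself runs in
    namespace: example-service
  syncPolicy:
    automated:
      prune: true       # delete resources that were removed from Git
      selfHeal: true    # revert manual drift back to the state declared in Git
```

With `automated` sync enabled, the cluster continuously converges to whatever the Git repository declares, which is the core of the GitOps model the listing refers to.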

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description

Join us as a Software Engineer:
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
- It's a chance to hone your existing technical skills and advance your career
- We're offering this role at associate level

What you'll do

In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be:
- Designing, deploying, and managing Kubernetes clusters using Amazon EKS
- Developing and maintaining Helm charts for deploying containerized applications
- Building and managing Docker images and registries for microservices
- Automating infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation)
- Monitoring and troubleshooting Kubernetes workloads and cluster health
- Supporting CI/CD pipelines for containerized applications
- Collaborating with development and DevOps teams to ensure seamless application delivery
- Ensuring security best practices are followed in container orchestration and cloud environments
- Optimizing performance and cost of cloud infrastructure

The skills you'll need

You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers.
You'll need experience in the Java full stack, including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch
- Strong expertise in Kubernetes architecture, networking, and resource management
- Proficiency in Docker and container lifecycle management
- Experience in writing and maintaining Helm charts for complex applications
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions
- Solid understanding of Linux systems, shell scripting, and networking concepts
- Experience with monitoring tools like Prometheus, Grafana, or Datadog
- Knowledge of security practices in cloud and container environments

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer
- Experience with service mesh technologies (e.g., Istio, Linkerd)
- Familiarity with GitOps practices and tools like ArgoCD or Flux
- Experience with logging and observability tools (e.g., ELK stack, Fluentd)
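The "writing and maintaining Helm charts" requirement typically means authoring templates like the sketch below. This is a minimal illustration, assuming a hypothetical chart named "web" whose image, tag, replica count, and port are supplied by the chart's values.yaml:

```yaml
# Hypothetical Helm template (templates/deployment.yaml) for a chart named "web".
# All parameters (replicaCount, image.*, service.port) are assumed to be
# defined in values.yaml; none of these names come from the listing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web      # release name keeps installs distinct
  labels:
    app.kubernetes.io/name: web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```

Keeping every environment-specific value in values.yaml is what makes the same chart reusable across dev, staging, and production.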

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Summary: We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS)
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments
- Support deployment of infrastructure Lambda functions
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment
- Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments
- Ensure auditability and observability of pipeline states
- Implement security best practices, audit, and compliance requirements within the infrastructure
- Provide technical leadership, mentorship, and training to engineering staff
- Engage with clients to understand their technical and business requirements, and provide tailored solutions
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads
- Troubleshoot and resolve complex infrastructure issues
- Potentially participate in pre-sales activities and provide technical expertise to sales teams

Qualifications:
- 10+ years of experience in an Infrastructure Engineer or similar role
- Extensive experience with Amazon Web Services (AWS)
- Proven ability to architect for scale, availability, and high-performance workloads
- Ability to plan and execute zero-disruption migrations
- Experience with enterprise IAM and familiarity with authentication technologies such as OAuth2 and OIDC
- Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD)
- Solid understanding of Git, branching models, CI/CD pipelines, and deployment strategies
- Experience with security, audit, and compliance best practices
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders
- Experience in technical leadership, mentoring, team-forming, and fostering self-organization and ownership
- Experience with client relationship management and project planning

Certifications: Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional). Software development experience (for example, Terraform, Python). Experience with machine learning infrastructure.

Education: B.Tech/B.E. in Computer Science, a related field, or equivalent experience.

Contact: Sandeep Kumar, sandeep.vinaganti@quesscorp.com
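The Helm/Kustomize-plus-ArgoCD workflow mentioned above is commonly structured as a shared base plus per-environment overlays. A minimal Kustomize overlay sketch; the directory layout, patch file, and image name are assumptions for illustration:

```yaml
# Hypothetical Kustomize overlay (overlays/prod/kustomization.yaml).
# The base directory, patch file, and image name are illustrative.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base               # shared manifests common to all environments
patches:
  - path: replica-count.yaml # prod-only patch, e.g. scaling up replicas
images:
  - name: example-org/api    # hypothetical image name used in the base
    newTag: v1.4.2           # pin the tag that has been promoted to prod
```

Pointing an ArgoCD Application at `overlays/prod` then makes promotion to production a Git change (bumping `newTag`), which keeps the pipeline state auditable as the listing asks.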

Posted 1 week ago

Apply

6.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Source: LinkedIn

Everything we do is powered by our customers! Featured on Deloitte's Technology Fast 500 list and G2's leaderboard, Maropost offers a connected experience that our customers anticipate, transforming marketing, merchandising, and operations with commerce tools designed to scale with fast-growing businesses. With a relentless focus on our customers' success, we are motivated by curiosity, creativity, and collaboration to power 5,000+ global brands. Driven by a customer-first mentality, we empower businesses to achieve their goals and grow alongside us. If you're ready to make a significant impact and be part of our transformative journey, Maropost is the place for you. Become a part of Maropost today and help shape the future of commerce!

About The Position

We are seeking an experienced QA Lead to lead our Quality Assurance function and drive high standards across our testing lifecycle. As a QA Lead, you will be responsible for defining test strategies, mentoring QA team members, and ensuring the successful delivery of robust, scalable, and reliable software products. You'll collaborate closely with cross-functional teams, including Product, Engineering, and DevOps, to implement best practices and ensure seamless integration of QA processes throughout the Agile development lifecycle. This is a hands-on leadership role that blends test planning, automation strategy, team management, and process optimization. You'll be instrumental in building a culture of quality across the organization.

What You'll Be Responsible For
- Lead and mentor a team of QA Analysts and Automation Engineers, ensuring high levels of performance, engagement, and growth
- Define, implement, and evolve comprehensive QA strategies and frameworks aligned with business goals
- Oversee the planning, design, and execution of manual and automated test cases across functional, integration, system, and regression testing levels
- Collaborate with Product and Engineering teams to understand requirements and create effective test plans and scenarios
- Establish and manage test metrics, reporting dashboards, and quality KPIs to track progress and highlight risks
- Champion the adoption of test automation, ensuring appropriate tools, frameworks, and best practices are in place and followed
- Conduct regular code reviews and quality checks on automated test scripts
- Ensure effective integration of QA processes into CI/CD pipelines and Agile workflows
- Lead defect triage meetings, ensuring timely identification, tracking, and resolution of issues
- Foster a culture of continuous improvement through retrospectives, feedback loops, and training sessions
- Ensure compliance with accessibility standards, security practices, and performance benchmarks

What You'll Bring To Maropost
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 6+ years of professional experience in software QA, with at least 2 years in a leadership or mentoring capacity
- Proven track record in managing QA teams and driving quality initiatives across the SDLC
- Strong expertise in both manual and automated testing practices
- Hands-on experience with test automation tools like Selenium, Cypress, Playwright, or equivalent
- Experience with performance testing tools such as JMeter or LoadRunner is a plus
- Proficiency in Agile methodologies (Scrum/Kanban) and working in CI/CD environments
- Excellent understanding of QA metrics and reporting tools
- Strong problem-solving skills and the ability to make data-driven decisions
- Familiarity with API testing tools (e.g., Postman, REST Assured)
- Proficiency in version control systems like Git and familiarity with GitOps practices
- Exceptional communication, organizational, and leadership skills
- Ability to work across teams, manage priorities, and meet deadlines in a dynamic environment
Preferred Qualifications
- Experience testing SaaS platforms, marketing tech, or e-commerce solutions
- Exposure to cloud platforms (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes)
- Familiarity with test data management, service virtualization, and mocking frameworks
- Experience implementing shift-left testing practices and quality gates in pipelines

What's in it for you? You will have the autonomy to take ownership of your role and contribute to the growth and success of our brand. If you are driven to make an immediate impact, achieve results, and thrive in a high-performing team, and you want to grow in a dynamic and rewarding environment, you belong at Maropost!
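The "quality gates in pipelines" practice listed above is often implemented as a CI workflow that blocks merges when end-to-end tests fail. A hedged sketch using GitHub Actions with Playwright (one of the automation tools the listing names); the workflow name, commands, and Node version are assumptions, not from the listing:

```yaml
# Hypothetical GitHub Actions workflow acting as a quality gate on pull requests.
name: qa-gate
on:
  pull_request:
    branches: [main]
jobs:
  e2e-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                            # install pinned dependencies
      - run: npx playwright install --with-deps # browsers for headless runs
      - run: npx playwright test               # failing tests block the merge
      - uses: actions/upload-artifact@v4
        if: failure()                          # keep traces for defect triage
        with:
          name: playwright-report
          path: playwright-report/
```

Marking this job as a required status check in the repository settings is what turns it from a report into an actual gate.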

Posted 1 week ago

Apply

4.0 years

4 Lacs

Ahmedabad

Remote

Job Information
Department: Product Development
Date Opened: 06/03/2025
Job Type: Full time
Industry: Technology
Work Experience: 4-5 years
Salary: 420000
City: Ahmedabad
State/Province: Gujarat
Country: India
Zip/Postal Code: 380006

About Us
Immunity Networks & Technologies Pvt. Ltd. is a leading Indian IT networking and cybersecurity company committed to delivering secure, scalable, and performance-driven solutions for businesses of all sizes. Established with a vision to empower organizations through cutting-edge technology, Immunity Networks specializes in network infrastructure, wireless solutions, firewall deployments, and managed security services. With a robust presence across Maharashtra and Gujarat, we cater to enterprise, SMB, and government clients, offering reliable products, technical expertise, and end-to-end support.

Job Description
We're looking for a Cloud & DB Admin who can manage Docker deployments, secure database systems, and monitor overall system health. You will play a critical role in scaling and securing the infrastructure. The right candidate will have solid experience with GitHub, Linux deployment and maintenance, and tuning and monitoring services at the OS and MySQL level. Experience with Docker and container instances is a plus.
Requirements
- 3–5 years of hands-on experience managing Linux-based servers (Ubuntu, Debian, or CentOS), preferably for SaaS or web-based platforms
- Proficient with GitHub and CI/CD workflows, including automated deployments, rollback, and GitOps practices
- Solid experience in MySQL/MariaDB administration, including:
  - Schema design and optimization
  - Query tuning and indexing
  - Backup and restore strategies
  - Master-slave or primary-replica replication
- Strong knowledge of Docker: building, deploying, and managing containers across development and production environments
- Experience with process monitoring and alerting tools: Prometheus + Grafana, Netdata, or similar
- Good understanding of Linux OS internals and performance tuning, including memory, I/O, disk usage, and network bottlenecks
- Ability to harden servers for production use: firewall settings, fail2ban, SSH access control, SELinux/AppArmor basics
- Experience managing production-grade web services, ideally including PHP/Laravel environments
- Familiarity with SSL management, NGINX/Apache configuration, and basic load balancing setups
- (Bonus) Knowledge of FreeRADIUS schema tuning, logs, and monitoring
- (Bonus) Basic scripting skills (e.g., Bash, Python) to automate common admin tasks
- Strong documentation habits for runbooks, incident response, and change logs
- Comfortable working in remote teams, attending standups, and updating task management systems (e.g., ClickUp, Jira)

Benefits
- Ownership of full infrastructure setup
- Sponsored certifications (AWS, Linux, DB Admin)
- Access to premium monitoring tools
- High-visibility role with performance bonuses
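The Prometheus + Grafana monitoring-and-alerting requirement typically involves rule files like the sketch below. This assumes node_exporter metrics are being scraped; the group name, threshold, and labels are illustrative:

```yaml
# Hypothetical Prometheus alerting rule for disk pressure, assuming
# node_exporter is running on the monitored hosts.
groups:
  - name: host-health
    rules:
      - alert: DiskSpaceLow
        expr: |
          (node_filesystem_avail_bytes{mountpoint="/"}
            / node_filesystem_size_bytes{mountpoint="/"}) < 0.10
        for: 10m                 # condition must hold 10 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "Less than 10% disk space left on {{ $labels.instance }}"
```

The `for:` clause suppresses flapping alerts from short spikes, a common tuning knob when wiring rules into Grafana or Alertmanager notifications.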

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

About Tide
At Tide, we are building a business management platform designed to save small businesses time and money. We provide our members with business accounts and related banking services, but also a comprehensive set of connected administrative solutions from invoicing to accounting. Launched in 2017, Tide is now used by over 1 million small businesses across the world and is available to UK, Indian and German SMEs. Headquartered in central London, with offices in Sofia, Hyderabad, Delhi, Berlin and Belgrade, Tide employs over 2,000 employees. Tide is rapidly growing, expanding into new products and markets and always looking for passionate and driven people. Join us in our mission to empower small businesses and help them save time and money.

About The Team
Our 40+ engineering teams are working on designing, creating and running the rich product catalogue across our business areas (e.g. Payments Services, Business Services). We have a long roadmap ahead of us and always have interesting problems to tackle. We trust and empower our engineers to make real technical decisions that affect multiple teams and shape the future of Tide's Global One Platform. It's an exceptional opportunity to make a real difference by taking ownership of engineering practices in a rapidly expanding company! We work in small autonomous teams, grouped under common domains owning the full lifecycle of some microservices in Tide's service catalogue. Our engineers self-organize, gather together to discuss technical challenges, and set their own guidelines in the different Communities of Practice regardless of where they currently stand in our Growth Framework.

About The Role
- Contribute to our event-driven microservice architecture (currently 200+ services owned by 40+ teams). You will define and maintain the services your team owns (you design it, you build it, you run it, you scale it globally)
- Use Java 17, Spring Boot and jOOQ to build your services
- Expose and consume RESTful APIs.
- We value good API design and treat our APIs as products (in the world of Open Banking, they are often going to be public!)
- Use SNS+SQS and Kafka to send events
- Utilise PostgreSQL via Aurora as your primary datastore (we are heavy AWS users)
- Deploy your services to production as often as you need to (this usually means multiple times per day!). This is enabled by our CI/CD pipelines powered by GitHub with GitHub Actions, and solid JUnit/Pact testing (new joiners are encouraged to have something deployed to production in their first 2 weeks)
- Experience modern GitOps using ArgoCD. Our Cloud team uses Docker, Terraform, and EKS/Kubernetes to run the platform
- Have Datadog as your best friend to monitor your services and investigate issues
- Collaborate closely with Product Owners to understand our users' needs, business opportunities and regulatory requirements, and translate them into well-engineered solutions

What We Are Looking For
- Some experience building server-side applications and detailed knowledge of the relevant programming languages for your stack. You don't need to know Java, but bear in mind that most of our services are written in Java, so you need to be willing to learn it when you have to change something there!
- Sound knowledge of a backend framework (e.g. Spring/Spring Boot) that you've used to write microservices that expose and consume RESTful APIs
- Experience engineering scalable and reliable solutions in a cloud-native environment (the most important thing for us is understanding the fundamentals of CI/CD, practical Agile so to speak)
- A mindset of delivering secure, well-tested and well-documented software that integrates with various third-party providers and partners (we do that a lot in the fintech industry)

Our Tech Stack
- Java 17, Spring Boot and jOOQ to build the RESTful APIs of our microservices
- Event-driven architecture with messages over SNS+SQS and Kafka to make them reliable
- Primary datastores are MySQL and PostgreSQL via RDS or Aurora (we are heavy AWS users)
- Docker, Terraform, EKS/Kubernetes used by the Cloud team to run the platform
- Datadog, Elasticsearch/Fluentd/Kibana and Rollbar to keep it running
- GitHub with GitHub Actions for SonarCloud, Snyk and solid JUnit/Pact testing to power the CI/CD pipelines

What You Will Get In Return
- Competitive salary
- Self & family health insurance
- Term & life insurance
- OPD benefits
- Mental wellbeing support through Plumm
- Learning & development budget
- WFH setup allowance
- 25 annual leaves
- Family & friendly leaves

Tidean Ways Of Working
At Tide, we champion a flexible workplace model that supports both in-person and remote work to cater to the specific needs of our different teams. While remote work is supported, we believe in the power of face-to-face interactions to foster team spirit and collaboration. Our offices are designed as hubs for innovation and team-building, where we encourage regular in-person gatherings to foster a strong sense of community.

TIDE IS A PLACE FOR EVERYONE
At Tide, we believe that we can only succeed if we let our differences enrich our culture. Our Tideans come from a variety of backgrounds and experience levels.
We consider everyone irrespective of their ethnicity, religion, sexual orientation, gender identity, family or parental status, national origin, veteran, neurodiversity or differently-abled status. We celebrate diversity in our workforce as a cornerstone of our success. Our commitment to a broad spectrum of ideas and backgrounds is what enables us to build products that resonate with our members' diverse needs and lives. We are One Team and foster a transparent and inclusive environment, where everyone's voice is heard.

Your personal data will be processed by Tide for recruitment purposes and in accordance with Tide's Recruitment Privacy Notice.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

Overview: We are seeking a skilled and proactive Support Engineer with deep expertise in Azure cloud services, Kubernetes, and DevOps practices, and 5+ years of industry experience with these technologies. The ideal candidate will have experience working with Azure services, including Kubernetes, API management, monitoring tools, and various cloud infrastructure services. You will be responsible for providing technical support, managing cloud-based systems, troubleshooting complex issues, and ensuring smooth operation and optimization of services within the Azure ecosystem.

Key Responsibilities
- Provide technical support for Azure-based cloud services, including Azure Kubernetes Service (AKS), Azure API Management, Application Gateway, Web Application Firewall, and Azure Monitor with KQL queries
- Manage and troubleshoot various Azure services such as Event Hub, Azure SQL, Application Insights, Virtual Networks, and WAF
- Work with Kubernetes environments: troubleshoot deployments, utilize Helm charts, check resource utilization, and manage GitOps processes
- Utilize Terraform to automate cloud infrastructure provisioning, configuration, and management
- Troubleshoot and resolve issues in MongoDB and Microsoft SQL Server databases, ensuring high availability and performance
- Monitor cloud infrastructure health using Grafana and Azure Monitor, providing insights and proactive alerts
- Provide root-cause analysis for technical incidents, and propose and implement corrective actions to prevent recurrence
- Continuously optimize cloud services and infrastructure to improve performance, scalability, and security

Required Skills & Qualifications
- Azure certification (e.g., Azure Solutions Architect, Azure Administrator) with hands-on experience in Azure services such as AKS, API Management, Application Gateway, WAF, and others
- Any Kubernetes certification (e.g., CKAD or CKA), with strong hands-on expertise in Kubernetes, Helm charts, and GitOps principles for managing and troubleshooting deployments
- Hands-on experience with Terraform for infrastructure automation and configuration management
- Proven experience with MongoDB and Microsoft SQL Server, including deployment, maintenance, performance tuning, and troubleshooting
- Familiarity with Grafana for monitoring, alerting, and visualization of cloud-based services
- Experience using Azure DevOps tools, including Repos and Pipelines for CI/CD automation and source code management
- Strong knowledge of Azure Monitor, KQL queries, Event Hub, and Application Insights for troubleshooting and monitoring cloud infrastructure
- Solid understanding of Virtual Networks, WAF, firewalls, and other related Azure networking tools
- Excellent troubleshooting, analytical, and problem-solving skills
- Strong written and verbal communication skills, with the ability to explain complex technical issues to non-technical stakeholders
- Ability to work in a fast-paced environment and manage multiple priorities effectively

Preferred Skills
- Experience with cloud security best practices in Azure
- Knowledge of infrastructure as code (IaC) concepts and tools
- Familiarity with containerized applications and Docker

Education
Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
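The Azure DevOps Pipelines skill above commonly pairs with AKS and Helm, roughly as in the following sketch. The service connection, resource group, cluster, chart path, and release name are all hypothetical placeholders, not details from the listing:

```yaml
# Hypothetical Azure DevOps pipeline (azure-pipelines.yml) deploying a
# Helm chart to AKS; all names below are illustrative.
trigger:
  branches:
    include: [main]
pool:
  vmImage: ubuntu-latest
steps:
  - task: HelmInstaller@1            # make a pinned helm version available
    inputs:
      helmVersionToInstall: 3.14.0
  - task: HelmDeploy@0               # runs helm upgrade --install on the cluster
    inputs:
      connectionType: Azure Resource Manager
      azureSubscription: example-service-connection   # hypothetical connection
      azureResourceGroup: example-rg
      kubernetesCluster: example-aks
      command: upgrade
      chartType: FilePath
      chartPath: charts/web          # hypothetical chart location in the repo
      releaseName: web
```

Storing this file in the repo means the deployment process itself is version-controlled, which is the GitOps-adjacent practice this role expects.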

Posted 1 week ago

Apply

9.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Source: LinkedIn

Experience: 6–9 Years
Education: BE/BTech

Key Responsibilities:
- Develop and maintain Jenkins pipelines using Groovy and Jenkinsfile (Pipeline as Code)
- Manage and deploy Kubernetes clusters and containerized applications
- Implement DevOps best practices around automation, monitoring, and infrastructure as code
- Work with tools including Git, SonarQube, FossID, Elastic Stack, Helm, Prometheus, Grafana, Istio, and JFrog Artifactory
- Manage secrets and configurations using tools like HashiCorp Vault and Doppler
- Collaborate with development and security teams to maintain CI/CD pipelines and microservices architecture

Required Skills:
- Jenkins, Groovy scripting, Docker, Kubernetes
- GitOps tools and container orchestration
- Prometheus, Grafana, Elastic Stack for monitoring/logging
- HashiCorp Vault, Doppler for secrets management
- Strong understanding of CI/CD, DevSecOps, and cloud-native architecture
- Excellent analytical, communication, and team collaboration skills

Posted 1 week ago

Apply

4.0 - 7.0 years

4 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Qualifications:
- Basic experience in data engineering and distributed data processing
- Proficiency in Python (PySpark) is essential; knowledge of Scala and Java is beneficial
- Familiarity with Spark optimization and scalable data processing
- Foundation in developing data transformation pipelines
- Experience with Kubernetes, GitOps, and modern cloud stacks
- Interest in AI & ML technologies and industry trends
- Good communication skills for effective collaboration in a global team
- Eagerness to learn about SAP Data Processing solutions and platform initiatives

Posted 1 week ago

Apply

4.0 - 8.0 years

5 - 7 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Qualifications:
- Solid experience in data engineering, distributed data processing, and SAP HANA or related databases
- Proficiency in Python (PySpark) is required; Scala and Java are beneficial
- Experience in Spark optimization and scalable data processing
- Expertise in developing data transformation pipelines
- Familiarity with Kubernetes, GitOps, and modern cloud stacks
- Understanding of AI & ML technologies and industry trends
- Excellent communication skills for teamwork in a global context
- Background in SAP Data Processing solutions is a plus

Posted 1 week ago

Apply

9.0 - 13.0 years

9 - 13 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Qualifications:
- Extensive experience in data engineering and distributed data processing, with expertise in SAP HANA or similar databases
- Proficiency in Python (PySpark) is essential; knowledge of Scala and Java is advantageous
- Advanced understanding of Spark optimization and scalable data processing techniques
- Proven experience in architecting data transformation pipelines
- Knowledge of Kubernetes, GitOps, and modern cloud stacks
- Strong understanding of AI & ML technologies and industry trends
- Effective communication skills within a global, multi-cultural environment
- Proven track record of leadership in data processing and platform initiatives

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Source: LinkedIn

We're looking for a Cloud & DB Admin who can manage Docker deployments, secure database systems, and monitor overall system health. You will play a critical role in scaling and securing the infrastructure. The right candidate will have solid experience with GitHub, Linux deployment and maintenance, and tuning and monitoring services at the OS and MySQL level. Experience with Docker and container instances is a plus.

Requirements
- 3–5 years of hands-on experience managing Linux-based servers (Ubuntu, Debian, or CentOS), preferably for SaaS or web-based platforms
- Proficient with GitHub and CI/CD workflows, including automated deployments, rollback, and GitOps practices
- Solid experience in MySQL/MariaDB administration, including:
  - Schema design and optimization
  - Query tuning and indexing
  - Backup and restore strategies
  - Master-slave or primary-replica replication
- Strong knowledge of Docker: building, deploying, and managing containers across development and production environments
- Experience with process monitoring and alerting tools: Prometheus + Grafana, Netdata, or similar
- Good understanding of Linux OS internals and performance tuning, including memory, I/O, disk usage, and network bottlenecks
- Ability to harden servers for production use: firewall settings, fail2ban, SSH access control, SELinux/AppArmor basics
- Experience managing production-grade web services, ideally including PHP/Laravel environments
- Familiarity with SSL management, NGINX/Apache configuration, and basic load balancing setups
- (Bonus) Knowledge of FreeRADIUS schema tuning, logs, and monitoring
- (Bonus) Basic scripting skills (e.g., Bash, Python) to automate common admin tasks
- Strong documentation habits for runbooks, incident response, and change logs
- Comfortable working in remote teams, attending standups, and updating task management systems (e.g., ClickUp, Jira)

Benefits
- Ownership of full infrastructure setup
- Sponsored certifications (AWS, Linux, DB Admin)
- Access to premium monitoring tools
- High-visibility role with performance bonuses

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What You'll Be Doing...

As a Test Engineer on Verizon's SDN Planning team, you'll build test and automation pipelines and other tooling for a highly reliable infrastructure running critical network functions. You'll be:
- Developing and maintaining test environments and infrastructure
- Monitoring network performance and identifying potential issues
- Contributing to the continuous improvement of our QE processes
- Documenting test plans, test cases, and test results
- Collaborating effectively with cross-functional teams, including development, operations, and product management
- Thoroughly understanding the architecture and workflow of core telecom wireline, fiber, and Ethernet backhaul technologies
- Interacting with product management, software/hardware development teams, and Systems & Technology to understand customer and product requirements
- Assisting in the development of automated test environments and QA strategies
- Driving, executing, and coordinating test activities with vendors for issue resolution, config/template redesign, etc.
- Designing, developing, and executing comprehensive regression test suites for network elements and services, including functional, performance, and stability testing
- Automating test cases and contributing to the development of our test automation framework
Collaborating with developers to reproduce and resolve bugs.
Participating in code reviews and contributing to improving code quality.

What We’re Looking For…

We are seeking a highly motivated and experienced Software Engineer to join our team, focusing on Quality Engineering (QE) regression testing and operational support within our telecom/networking domain. This role is critical to ensuring the reliability and performance of our network infrastructure and services. You will work closely with development, operations, and other QE engineers to design, develop, and execute test plans, identify and resolve issues, and contribute to the continuous improvement of our testing processes. You will also leverage your operations experience to provide support and troubleshooting for production systems.

You'll Need To Have:

Bachelor’s degree or four or more years of work experience.
Four or more years of relevant work experience.
Four or more years of testing experience in the networking domain.
Proficiency in platform and protocol testing across networking and routing protocols: VLANs, IP routing, TCP/IP, MPLS, BGP/OSPF, IS-IS, QoS, L1-L2-L3 protocols, and Optical Networks (OTN) for transport backhaul of wireline and wireless networks.
Proficiency in manual and automated regression, functional, and E2E testing, progressing to automation testing across multiple products and vendor domains.
Strong analytical skills, working closely with development teams to triage failures and analyze bugs.
A good understanding of Linux, virtualisation, containerisation, and cloud technologies.
Knowledge of Agile processes and DevOps: CI/CD, GitOps, Jenkins, and other test tools.
Experience in analysing test results, identifying root causes of defects, and reporting issues clearly and concisely.

Even better if you have one or more of the following:

Experience with a high-performance, high-availability environment.
Experience with network technologies like SDN/NFV.
Strong analytical and debugging skills.
Good communication and presentation skills.
Relevant certifications.

Where you’ll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
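As a sketch of the result analysis the role above describes (triaging regression runs and identifying root causes of failures), the function below compares a suite run against a known-good baseline. The test names, result format, and buckets are illustrative assumptions, not Verizon's actual tooling.

```python
# Hypothetical triage of regression results against a baseline run.
# Input format ({test_name: "pass"|"fail"}) is an assumption for illustration.

def triage(baseline: dict, current: dict) -> dict:
    """Compare two runs and bucket the differences for triage."""
    regressions = sorted(t for t, r in current.items()
                         if r == "fail" and baseline.get(t) == "pass")
    fixed = sorted(t for t, r in current.items()
                   if r == "pass" and baseline.get(t) == "fail")
    still_failing = sorted(t for t, r in current.items()
                           if r == "fail" and baseline.get(t) == "fail")
    executed = len(current)
    passed = sum(1 for r in current.values() if r == "pass")
    return {
        "pass_rate": round(passed / executed, 3) if executed else 0.0,
        "regressions": regressions,      # new failures: triage with dev first
        "fixed": fixed,                  # confirmed fixes since the baseline
        "still_failing": still_failing,  # known issues, likely already ticketed
    }

baseline = {"bgp_basic": "pass", "ospf_adj": "pass", "qos_marking": "fail"}
current  = {"bgp_basic": "pass", "ospf_adj": "fail", "qos_marking": "fail"}
print(triage(baseline, current)["regressions"])  # ['ospf_adj']
```

Separating new regressions from already-known failures is what lets a team report issues "clearly and concisely" instead of re-triaging the same defects each run.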

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India
Category: Information Technology
Hire Type: Employee
Job ID: 9330
Date posted: 02/24/2025

We Are:

At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:

You are a forward-thinking Cloud DevOps Engineer with a passion for modernizing infrastructure and enhancing the capabilities of CI/CD pipelines, containerization strategies, and hybrid cloud deployments. You thrive in environments where you can leverage your expertise in cloud infrastructure, distributed processing workloads, and AI-driven automation. Your collaborative spirit drives you to work closely with development, data, and GenAI teams to build resilient, scalable, and intelligent DevOps solutions. You are adept at integrating cutting-edge technologies and best practices to enhance both traditional and AI-driven workloads. Your proactive approach and problem-solving skills make you an invaluable asset to any team.

What You’ll Be Doing:

Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments.
Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting.
Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management.
Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags).
Containerizing applications using Docker & Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters).
Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul).
Implementing AI-enhanced observability for containerized services using AIOps-based monitoring.
Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK.
Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray.
Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infra recommendations.

The Impact You Will Have:

Enhancing the efficiency and reliability of CI/CD pipelines and deployments.
Driving the adoption of AI-driven automation to reduce downtime and improve system resilience.
Enabling seamless application portability across on-prem and cloud environments.
Implementing advanced observability solutions to proactively detect and resolve issues.
Optimizing resource allocation and job scheduling for distributed processing workloads.
Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads.

What You’ll Need:

5+ years of experience in DevOps, Cloud Engineering, or SRE.
Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.).
Strong experience with Kubernetes, container orchestration, and service meshes.
Proficiency in Terraform, CloudFormation, Pulumi, or other Infrastructure as Code (IaC) tools.
Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem).
Strong scripting skills in Python, Bash, or Go.
Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar).

Who You Are:

You are a collaborative and innovative professional with a strong technical background and a passion for continuous learning. You excel in problem-solving and thrive in dynamic environments where you can apply your expertise to drive significant improvements. Your excellent communication skills enable you to work effectively with diverse teams, and your commitment to excellence ensures that you consistently deliver high-quality results.
The Team You’ll Be A Part Of: You will join a dynamic team focused on optimizing cloud infrastructure and enhancing workloads to contribute to overall operational efficiency. This team is dedicated to driving the modernization and optimization of Infrastructure CI/CD pipelines and hybrid cloud deployments, ensuring that Synopsys remains at the forefront of technological innovation. Rewards and Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
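The "progressive delivery strategies (Canary, Blue-Green)" this posting mentions ultimately reduce to a promote-or-rollback decision per evaluation interval. Below is a minimal, hedged sketch of that decision; the metric (error rate), thresholds, and traffic floor are illustrative assumptions, not Synopsys internals or any specific tool's algorithm.

```python
# Illustrative canary-analysis decision for one evaluation interval.

def canary_decision(baseline_error_rate: float, canary_error_rate: float,
                    min_requests: int, canary_requests: int,
                    tolerance: float = 0.01) -> str:
    """Return 'promote', 'rollback', or 'wait' for a canary rollout step."""
    if canary_requests < min_requests:
        return "wait"        # not enough traffic yet for a meaningful signal
    if canary_error_rate > baseline_error_rate + tolerance:
        return "rollback"    # canary is measurably worse than the baseline
    return "promote"         # within tolerance: shift more traffic to canary

print(canary_decision(0.002, 0.001, min_requests=500, canary_requests=800))  # promote
print(canary_decision(0.002, 0.050, min_requests=500, canary_requests=800))  # rollback
print(canary_decision(0.002, 0.050, min_requests=500, canary_requests=100))  # wait
```

Tools like Argo Rollouts automate exactly this loop, but driven by real metric queries rather than hard-coded numbers.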

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


What You’ll Be Doing…

You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions, and you will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.

Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Designing and configuring alerts to ensure timely responses to critical events.
Working with the development and Operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols. Configuring IAM roles, VPC security, encryption, and compliance policies.
Continuously optimizing cloud infrastructure for performance, scalability, and cost-effectiveness. Using tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
Working closely with the engineering and operations teams to design and implement cloud-based solutions.
Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics.

What We’re Looking For...

You’ll need to have:

Bachelor’s degree or one or more years of work experience.
Experience in Kubernetes administration.
Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards.
Backend development experience with languages including Golang (preferred), Spring Boot, and Python.
Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices.
Familiarity with cloud cost optimization (e.g. Kubecost).
Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server.
Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
Demonstrated expertise with the K8S ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.).
Strong development experience with K8S tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
Development and working experience with Service Mesh lifecycle management, and with configuring and troubleshooting applications deployed on Service Mesh and Service Mesh related issues.
Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA & Gatekeeper policies.
Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, etc.
Demonstrated expertise with the K8S security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting.
Certified Kubernetes Administrator (CKA).
Demonstrated very strong troubleshooting and problem-solving skills.
Excellent verbal and written communication skills.

Even better if you have one or more of the following:

Certified Kubernetes Application Developer (CKAD).
Red Hat Certified OpenShift Administrator.
Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals.
Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
Database experience (RDBMS, NoSQL, etc.).

Where you’ll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
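"Determining cluster health" in the listing above usually starts with the JSON shape that `kubectl get pods -o json` returns. The sketch below summarises pod phases from that structure; the sample data is fabricated, and real tooling would also inspect container statuses and conditions, not just the top-level phase.

```python
# Hedged sketch: find pods whose top-level phase is not healthy.
# Pod phases in Kubernetes are Pending, Running, Succeeded, Failed, Unknown.

def unhealthy_pods(pod_list: dict) -> list:
    """Return (namespace, name, phase) for pods not Running/Succeeded."""
    bad = []
    for pod in pod_list.get("items", []):
        phase = pod.get("status", {}).get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            meta = pod.get("metadata", {})
            bad.append((meta.get("namespace"), meta.get("name"), phase))
    return bad

sample = {"items": [
    {"metadata": {"namespace": "prod", "name": "api-0"},
     "status": {"phase": "Running"}},
    {"metadata": {"namespace": "prod", "name": "worker-1"},
     "status": {"phase": "Pending"}},
]}
print(unhealthy_pods(sample))  # [('prod', 'worker-1', 'Pending')]
```

The same traversal works unchanged on output from the Kubernetes API server, which is what makes quick triage scripts like this practical on-call.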

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Gurgaon

On-site

Who We Are

BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients, no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines, from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.

What You'll Do

BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally.
This will include:

Serving as a leader within BCG X, and specifically the KEY Impact Management by BCG X Tribe (transformation and post-merger-integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap and architectural decisions, and mentoring engineers.
Influencing and serving as a key decision maker in BCG X technology selection and strategy.
Playing an active, hands-on role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly.
Taking overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe.
Owning the technology roadmap of existing and new components delivered.
Architecting and implementing backend and frontend solutions, primarily using .NET, C#, MS SQL Server, and Angular, plus other technologies best suited for the goals, including open source (i.e. Node, Django, Flask, Python) where needed.

What You'll Bring

10+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally an agile environment) with exposure to a variety of technologies and solutions, including at least 5 years' experience in an Architect role.
Experience with a wide range of application and data architectures, platforms, and tools, including: Service-Oriented Architecture, Clean Architecture, Software as a Service, web services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, data science experience, etc.
Thoroughly up to date in technology:

Modern cloud architectures including AWS, Azure, GCP, Kubernetes
Very strong particularly in .NET, C#, MS SQL Server, and Angular technologies
Open source stacks including NodeJs, React, Angular, and Flask are good to have
CI/CD / DevSecOps / GitOps toolchains and development approaches
Knowledge of machine learning & AI frameworks
Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow

At least a Bachelor's degree; Master’s degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Deep care for product quality, reliability, and scalability
Passion for the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a highly skilled and motivated Cloud Security Engineer with hands-on experience in WAAP (Web Application & API Protection) solutions such as Cloudflare or Akamai. The ideal candidate will have a deep understanding of web and API security, IaC automation, and CDN-based DDoS protection strategies. This role demands strong collaboration skills to work cross-functionally with development, networking, and security teams.

Key Responsibilities:

1. Cloudflare / Akamai Security Management:
Configure and manage WAAP policies, WAF rules, bot protection, and rate limiting features.
Monitor and respond to security events, DDoS alerts, and bot traffic anomalies.
Optimize security posture while minimizing performance impact.

2. Web Application & API Security:
Apply security best practices based on the OWASP Top 10 and OWASP API Security Top 10.
Conduct threat modeling and security reviews for web and API endpoints.
Assist in secure design and testing during the application development lifecycle.

3. Infrastructure as Code (IaC):
Implement and manage Terraform modules for deploying and enforcing security configurations.
Automate WAAP/WAF provisioning and policy management as code.
Integrate security checks into CI/CD pipelines.

4. Network & CDN Security:
Design and manage DNS, CDN configurations, and origin protection strategies.
Respond to Layer 3/4 DDoS attacks using mitigation controls and rate limits.
Collaborate on resilient edge security architectures.

5. Cross-Functional Collaboration & Troubleshooting:
Work closely with developers, DevOps, network, and security teams to support application and infrastructure migrations.
Troubleshoot WAAP policy conflicts, latency issues, and misconfigured firewall rules.
Communicate security risks and mitigation strategies to stakeholders clearly and effectively.

Qualifications:

5+ years of experience with Cloudflare, Akamai, or similar WAAP platforms.
Strong grasp of web protocols (HTTP/S, TLS), DNS, and CDN architectures.
Proficiency with Terraform and GitOps practices.
Familiarity with CI/CD workflows and DevSecOps practices.
Excellent analytical and communication skills.
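The "rate limiting features" in WAAP platforms are commonly built on a token-bucket algorithm. Cloudflare and Akamai expose this declaratively, so the sketch below is not their API; it is a minimal illustration of the underlying admit-or-throttle logic, with a simulated clock so the behaviour is deterministic.

```python
# Illustrative token-bucket rate limiter: `rate_per_sec` sustained requests,
# with bursts up to `burst`. Time is passed in explicitly for testability.

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Admit one request at time `now` (seconds); False means throttle."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, burst=2)
decisions = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 3.0)]
print(decisions)  # [True, True, False, True]
```

The third request is throttled because the two-token burst is spent and only 0.2 s of refill has accrued; by t=3.0 the bucket has refilled.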

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Job Title: Azure DevOps Engineer
Location: Pune
Experience: 5-7 Years

Job Description

5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience.
Platform Thinking: Strong understanding of platform engineering principles, developer experience, and self-service capabilities.
Azure Expertise: Advanced knowledge of Azure services including compute, networking, storage, and managed services.
Infrastructure as Code: Proficient in Terraform, ARM templates, or Azure Bicep, with hands-on experience in large-scale deployments.

DevOps and Automation
CI/CD Pipelines: Expert-level experience with Azure DevOps, GitHub Actions, or Jenkins.
Automation Scripting: Strong programming skills in Python, PowerShell, or Bash for automation and tooling.
Git Workflows: Advanced understanding of Git branching strategies, pull requests, and code review processes.

Cloud Architecture and Security
Cloud Architecture: Deep understanding of cloud design patterns, microservices, and distributed systems.
Security Best Practices: Implementation of security scanning, compliance automation, and zero-trust principles.
Networking: Advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity.
Identity Management: Experience with Azure Active Directory, RBAC, and identity governance.

Monitoring and Observability
Azure Monitor: Advanced experience with Azure Monitor, Log Analytics, and Application Insights.
Metrics and Alerting: Implementation of comprehensive monitoring strategies and incident response.
Logging Solutions: Experience with centralized logging and log analysis platforms.
Performance Optimization: Proactive performance monitoring and optimization techniques.

Roles And Responsibilities

Platform Development and Management
Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently.
Create and maintain platform abstractions that simplify complex infrastructure for development teams.
Develop internal developer platforms (IDP) with standardized templates, workflows, and guardrails.
Implement platform-as-a-service (PaaS) solutions using Azure native services.
Establish platform standards, best practices, and governance frameworks.

Infrastructure as Code (IaC)
Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep.
Create reusable infrastructure modules and templates for consistent environment provisioning.
Implement GitOps workflows for infrastructure deployment and management.
Maintain infrastructure state management and drift detection mechanisms.
Establish infrastructure testing and validation frameworks.

DevOps and CI/CD
Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools.
Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks.
Create deployment strategies including blue-green, canary, and rolling deployments.
Establish branching strategies and release management processes.
Implement secrets management and secure deployment practices.

Platform Operations and Reliability
Implement monitoring, logging, and observability solutions for platform services.
Establish SLAs and SLOs for platform services and developer experience metrics.
Create self-healing and auto-scaling capabilities for platform components.
Implement disaster recovery and business continuity strategies.
Maintain platform security posture and compliance requirements.

Preferred Qualifications
Bachelor’s degree in computer science or a related field (or equivalent work experience).
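The "drift detection mechanisms" responsibility above amounts to diffing the desired state declared in IaC against what the cloud API actually reports. Tools like `terraform plan` do this comprehensively; the sketch below is a deliberately simplified illustration, and the resource names and property shapes are fabricated.

```python
# Hedged sketch of IaC drift detection over {resource_id: properties} maps.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Report resources missing from the cloud, unmanaged, or changed."""
    missing = sorted(set(desired) - set(actual))    # declared, not deployed
    unmanaged = sorted(set(actual) - set(desired))  # deployed, not declared
    changed = sorted(
        rid for rid in set(desired) & set(actual)
        if desired[rid] != actual[rid]              # properties have drifted
    )
    return {"missing": missing, "unmanaged": unmanaged, "changed": changed}

desired = {"vm-web": {"size": "Standard_B2s"}, "nsg-web": {"rules": 3}}
actual  = {"vm-web": {"size": "Standard_B4ms"}, "storage-x": {"tier": "Hot"}}
print(detect_drift(desired, actual))
# {'missing': ['nsg-web'], 'unmanaged': ['storage-x'], 'changed': ['vm-web']}
```

In a GitOps workflow, a non-empty report like this typically either triggers reconciliation back to the declared state or opens a review for the out-of-band change.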

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site


🚀 We're Hiring: DevOps Specialist – Azure
📍 Location: Kochi | Chennai | Pune
🕒 Experience: 6+ Years

Are you someone who thinks in YAML, breathes automation, and dreams in containers? We're on the hunt for a DevOps Specialist who can build, secure, and scale modern cloud infrastructure on Microsoft Azure.

🔧 What You’ll Work On:
Deploy and manage scalable Azure environments
Build efficient CI/CD pipelines with GitHub Actions
Implement GitOps with ArgoCD
Own cloud security, monitoring, and automation
Define cloud architecture strategies that deliver

🛠️ Tech You’ll Tackle:
AKS | Docker | Kubernetes | GitHub Actions
Prometheus | Grafana | Loki | Terraform | Ansible
OpenTelemetry | Azure EventHub | Service Bus | Key Vault

✅ What You Bring:
✔ 6+ years of DevOps experience
✔ Deep hands-on Azure expertise
✔ Passion for clean, scalable infrastructure
✔ Proactive mindset and automation-first thinking

If this sounds like your kind of role, or you know someone perfect for it, let's connect! 📩 Apply now.

#Hiring #DevOps #Azure #CICD #Kubernetes #GitOps #CloudJobs #TechJobs #DevOpsEngineer #Terraform #Grafana #OpenTelemetry

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Key Responsibilities:

Architect and implement container orchestration solutions using Kubernetes in production-grade environments.
Lead the design and integration of OpenStack with Kubernetes-based platforms.
Collaborate with infrastructure, DevOps, and software teams to design cloud-native applications and CI/CD pipelines.
Define architectural standards, best practices, and governance models for Kubernetes-based workloads.
Assess current system architecture and recommend improvements or migrations to Kubernetes.
Mentor and guide junior engineers and DevOps teams on Kubernetes and cloud-native tools.
Troubleshoot complex infrastructure and containerization issues.

Key Requirements:

8+ years of experience in IT architecture, with at least 4+ years working on Kubernetes.
Deep understanding of Kubernetes architecture (control plane, kubelet, etcd, CNI plugins, etc.).
Strong hands-on experience with containerization technologies like Docker and container runtimes.
Proven experience working with OpenStack and integrating it with container platforms.
Solid knowledge of cloud infrastructure, networking, and persistent storage in Kubernetes.
Familiarity with Helm, Istio, service mesh, and other cloud-native tools is a plus.
Experience with CI/CD pipelines, infrastructure as code (e.g., Terraform), and GitOps practices.
Excellent problem-solving skills and ability to work in fast-paced environments.

Preferred Qualifications:

Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD).
Experience with multiple cloud platforms (AWS, Azure, GCP, or private cloud).
Background in networking or storage architecture is highly desirable.
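One piece of the "Kubernetes architecture (control plane...)" knowledge this role asks for is how the scheduler filters nodes: a pod only fits where its resource requests do not exceed the node's remaining allocatable capacity. The toy check below illustrates that filtering step only; the real scheduler also applies taints, affinity, and scoring, and the numbers here (millicores/MiB as plain integers) are a simplification.

```python
# Toy version of the scheduler's resource fit predicate.

def fits(node_allocatable: dict, node_used: dict, pod_request: dict) -> bool:
    """True if the pod's cpu/memory requests fit the node's free capacity."""
    for resource in ("cpu", "memory"):
        free = node_allocatable.get(resource, 0) - node_used.get(resource, 0)
        if pod_request.get(resource, 0) > free:
            return False
    return True

node_alloc = {"cpu": 4000, "memory": 8192}   # 4 cores, 8 GiB (simplified units)
node_used  = {"cpu": 3500, "memory": 6144}   # requests already placed
print(fits(node_alloc, node_used, {"cpu": 400, "memory": 1024}))  # True
print(fits(node_alloc, node_used, {"cpu": 800, "memory": 1024}))  # False
```

This is also why accurate resource requests matter in practice: the scheduler places pods by requests, not by actual usage.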

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat

Remote


Job Information
Department Name: Product Development
Date Opened: 06/03/2025
Job Type: Full time
Industry: Technology
Work Experience: 4-5 years
Salary: 420000
City: Ahmadabad
State/Province: Gujarat
Country: India
Zip/Postal Code: 380006

About Us

Immunity Networks & Technologies Pvt. Ltd. is a leading Indian IT networking and cybersecurity company committed to delivering secure, scalable, and performance-driven solutions for businesses of all sizes. Established with a vision to empower organizations through cutting-edge technology, Immunity Networks specializes in network infrastructure, wireless solutions, firewall deployments, and managed security services. With a robust presence across Maharashtra and Gujarat, we cater to enterprise, SMB, and government clients, offering reliable products, technical expertise, and end-to-end support.

Job Description

We're looking for a Cloud & DB Admin who can manage Docker deployments, secure database systems, and monitor overall system health. You will play a critical role in scaling and securing the infrastructure. The right candidate will have solid experience with GitHub, Linux deployment and maintenance, and fine-tuning and monitoring services at the OS and MySQL level. Experience with Docker and container instances is a plus.
Requirements

3–5 years of hands-on experience managing Linux-based servers (Ubuntu, Debian, or CentOS), preferably for SaaS or web-based platforms
Proficient with GitHub and CI/CD workflows, including automated deployments, rollback, and GitOps practices
Solid experience in MySQL/MariaDB administration, including:
  Schema design and optimization
  Query tuning and indexing
  Backup and restore strategies
  Master-slave or primary-replica replication
Strong knowledge of Docker: building, deploying, and managing containers across development and production environments
Experience with process monitoring and alerting tools: Prometheus + Grafana, Netdata, or similar
Good understanding of Linux OS internals and performance tuning, including memory, I/O, disk usage, and network bottlenecks
Ability to harden servers for production use: firewall settings, fail2ban, SSH access control, SELinux/AppArmor basics
Experience managing production-grade web services, ideally including PHP/Laravel environments
Familiarity with SSL management, NGINX/Apache configuration, and basic load balancing setups
(Bonus) Knowledge of FreeRADIUS schema tuning, logs, and monitoring
(Bonus) Basic scripting skills (e.g., Bash, Python) to automate common admin tasks
Strong documentation habits for runbooks, incident response, and change logs
Comfortable working in remote teams, attending standups, and updating task management systems (e.g., ClickUp, Jira)

Benefits

Ownership of full infrastructure setup
Certifications sponsored (AWS, Linux, DB Admin)
Access to premium monitoring tools
High visibility role with performance bonuses
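The "backup and restore strategies" item above usually includes a retention policy deciding which old backups to prune. Below is a hedged sketch of a simple daily-plus-weekly (grandfather-father-son style) policy; the keep counts and the choice of Monday as the weekly anchor are illustrative assumptions that a real policy would derive from RPO/RTO requirements.

```python
# Illustrative backup retention: keep the newest N dailies plus M weeklies.
from datetime import date, timedelta

def backups_to_keep(backup_dates, keep_daily=7, keep_weekly=4):
    """Keep the newest `keep_daily` backups plus `keep_weekly` Monday backups;
    everything else is eligible for pruning."""
    newest_first = sorted(backup_dates, reverse=True)
    keep = set(newest_first[:keep_daily])                 # recent dailies
    mondays = [d for d in newest_first if d.weekday() == 0]
    keep.update(mondays[:keep_weekly])                    # weekly anchors
    return sorted(keep)

# 30 consecutive daily backups starting 2025-01-01 (a Wednesday).
dates = [date(2025, 1, 1) + timedelta(days=i) for i in range(30)]
kept = backups_to_keep(dates)
print(len(kept), min(kept))  # 10 2025-01-06
```

Running the pruning decision as pure logic like this, separate from the deletion step, makes it easy to dry-run and to cover with tests before it ever touches real backup files.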

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh

On-site


Noida, Uttar Pradesh, India
Job ID: 762745

Join our Team

About this opportunity:

We are seeking a Senior OpenShift Engineer to lead the migration, modernization, and management of enterprise container platforms using Red Hat OpenShift. This role involves migrating legacy applications to OpenShift, optimizing workloads, and ensuring high availability across hybrid and multi-cloud environments. The ideal candidate will be skilled in container orchestration, DevOps automation, and cloud-native transformations.

What you will do:

Lead migration projects to move workloads from legacy platforms (on-prem running on KVM/VMware/OpenStack, on-prem Kubernetes, OpenShift 3.x) to OpenShift 4.x.
Assess and optimize monolithic applications for containerization and microservices architecture.
Develop strategies for stateful and stateless application migrations with minimal downtime.
Work with developers and architects to refactor or replatform applications for cloud-native environments.
Implement migration automation using Ansible, Helm, or OpenShift GitOps (ArgoCD/FluxCD).
Design, deploy, and manage scalable, highly available OpenShift clusters across on-prem and cloud.
Implement multi-cluster, hybrid cloud, and multi-cloud OpenShift architectures.
Define resource quotas, auto-scaling policies, and workload optimizations for performance tuning.
Oversee OpenShift upgrades, patching, and lifecycle management.

The skills you bring:

Deep hands-on experience with Red Hat OpenShift (OCP 4.x+), Kubernetes, and Docker.
Strong knowledge of application migration strategies (Lift & Shift, Replatforming, Refactoring).
Proficiency in cloud-native application development and microservices.
Expertise in cloud platforms (AWS, Azure, GCP) with OpenShift deployments.
Advanced scripting and automation using Bash, Python, Ansible, or Terraform.
Experience with GitOps methodologies (ArgoCD, FluxCD) and Infrastructure as Code (IaC).
Certifications (preferred but not mandatory):

Red Hat Certified Specialist in OpenShift Administration (EX280)
Certified Kubernetes Administrator (CKA)
AWS/Azure/GCP Kubernetes/OpenShift-related certifications

Strong problem-solving skills with a strategic mindset for complex migrations.
Experience in leading technical projects and mentoring engineers.
Excellent communication and documentation skills.

Why join Ericsson?

At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, building solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 762745
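The migration strategy described above (stateful and stateless applications moved with minimal downtime) is often organized into waves: stateless workloads first, stateful ones later, and blocked workloads deferred. The sketch below illustrates that wave planning as pure logic; the workload attributes and wave rules are assumptions for illustration, not an Ericsson or Red Hat methodology.

```python
# Hedged sketch: group workloads into migration waves for an OpenShift move.

def plan_waves(workloads):
    """Group {name, stateful, blocked} workload dicts into ordered waves."""
    waves = {"wave1_stateless": [], "wave2_stateful": [], "deferred": []}
    for w in sorted(workloads, key=lambda w: w["name"]):
        if w.get("blocked"):
            waves["deferred"].append(w["name"])      # unresolved dependency
        elif w.get("stateful"):
            waves["wave2_stateful"].append(w["name"])  # needs storage planning
        else:
            waves["wave1_stateless"].append(w["name"])  # lowest-risk movers
    return waves

apps = [
    {"name": "web-frontend", "stateful": False, "blocked": False},
    {"name": "orders-db", "stateful": True, "blocked": False},
    {"name": "legacy-billing", "stateful": True, "blocked": True},
]
print(plan_waves(apps)["wave1_stateless"])  # ['web-frontend']
```

Putting stateless workloads in the first wave is a common de-risking choice: they can be rolled back by redeploying, with no data migration to unwind.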

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position: DevOps Engineer
Experience Required: 4+ years
Employment Type: Full-Time
Location: Pune

Role Summary: We are seeking skilled DevOps Engineers with at least 4 years of experience in managing cloud infrastructure, automation, and modern CI/CD workflows. This role requires strong hands-on expertise in designing, deploying, and maintaining scalable cloud environments using Infrastructure-as-Code (IaC) principles. Candidates must be comfortable working with container technologies, cloud security, networking, and monitoring tools to ensure system efficiency and reliability in large-scale applications.

Key Responsibilities:
  • Design and manage cloud infrastructure using platforms like AWS, Azure, or GCP.
  • Write and maintain Infrastructure-as-Code (IaC) using tools such as Terraform or CloudFormation.
  • Develop and manage CI/CD pipelines with tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
  • Deploy and manage containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
  • Ensure security compliance with frameworks including SOC 2, PCI, HIPAA, GDPR, and HITRUST.
  • Lead and support cloud migration projects from on-premise to cloud infrastructure.
  • Implement and fine-tune monitoring and alerting systems using tools such as Datadog, Dynatrace, CloudWatch, Prometheus, ELK, or Splunk.
  • Automate infrastructure setup and configuration with Ansible, Chef, Puppet, or equivalent tools.
  • Diagnose and resolve complex issues involving cloud performance, networking, and server management.
  • Collaborate across development, security, and operations teams to enhance DevSecOps practices.

Required Skills & Experience:
  • 3+ years in a DevOps, cloud infrastructure, or platform engineering role.
  • Strong knowledge and hands-on experience with AWS Cloud.
  • In-depth experience with Kubernetes, ECS, OpenShift, and container orchestration.
  • Skilled in writing IaC using Terraform, CloudFormation, or similar tools.
  • Proficiency in automation using Python, Bash, or PowerShell.
  • Familiarity with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
  • Solid background in Linux distributions (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server environments.
  • Strong grasp of networking concepts: VPCs, subnets, load balancers, firewalls, and security groups.
  • Experience working with monitoring/logging platforms such as Datadog, Prometheus, ELK, Dynatrace, etc.
  • Excellent communication skills and a collaborative mindset.
  • Understanding of cloud security practices including IAM policies, WAF, GuardDuty, and vulnerability management.

Preferred/Good-to-Have Skills:
  • Exposure to cloud-native security platforms (e.g., AWS Security Hub, Azure Security Center, Google SCC).
  • Familiarity with regulatory compliance standards like SOC 2, PCI, HIPAA, GDPR, and HITRUST.
  • Experience managing Windows Server environments in tandem with Linux.
  • Understanding of centralized logging tools such as Splunk, Fluentd, or AWS OpenSearch.
  • Knowledge of GitOps methodologies using tools like ArgoCD or Flux.
  • Background in penetration testing, threat detection, and security assessments.
  • Proven experience with cloud cost optimization strategies.
  • A passion for coaching, mentoring, and sharing DevOps best practices within the team.
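The networking concepts this role calls for (VPC CIDRs, subnets) can be explored directly with Python's standard `ipaddress` module, which candidates often use for exactly this kind of reasoning. A small illustration of carving a VPC CIDR into subnets; the CIDR values are made up for the example:

```python
import ipaddress

# Carve a hypothetical VPC CIDR into four equal /26 subnets.
vpc = ipaddress.ip_network("10.0.0.0/24")
subnets = list(vpc.subnets(new_prefix=26))
print([str(s) for s in subnets])
# → ['10.0.0.0/26', '10.0.0.64/26', '10.0.0.128/26', '10.0.0.192/26']

# Check whether an instance's address falls inside a given subnet.
addr = ipaddress.ip_address("10.0.0.70")
print(addr in subnets[1])  # → True (10.0.0.64/26 spans .64–.127)
```

The same module also answers overlap questions (`vpc.overlaps(other)`), which is handy when planning peered VPCs.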

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Position: DevOps with GitHub Actions

Job Description: Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
  • Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
  • Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
  • Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python.
  • AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
  • Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
  • Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
  • Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
  • Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.

Required Skills & Qualifications:
  • Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.

Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips)
Time Type: Full time
Job Category: Engineering Services

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.

Job Description

Join us as a Software Engineer
  • This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
  • Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
  • It's a chance to hone your existing technical skills and advance your career
  • We're offering this role at associate level

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be:
  • Designing, deploying, and managing Kubernetes clusters using Amazon EKS.
  • Developing and maintaining Helm charts for deploying containerized applications.
  • Building and managing Docker images and registries for microservices.
  • Automating infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
  • Monitoring and troubleshooting Kubernetes workloads and cluster health.
  • Supporting CI/CD pipelines for containerized applications.
  • Collaborating with development and DevOps teams to ensure seamless application delivery.
  • Ensuring security best practices are followed in container orchestration and cloud environments.
  • Optimizing performance and cost of cloud infrastructure.

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in a Java full stack including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka and API development.

You'll also need:
  • 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
  • Strong expertise in Kubernetes architecture, networking, and resource management.
  • Proficiency in Docker and container lifecycle management.
  • Experience in writing and maintaining Helm charts for complex applications.
  • Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
  • Solid understanding of Linux systems, shell scripting, and networking concepts.
  • Experience with monitoring tools like Prometheus, Grafana, or Datadog.
  • Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
  • AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
  • Experience with service mesh technologies (e.g., Istio, Linkerd).
  • Familiarity with GitOps practices and tools like ArgoCD or Flux.
  • Experience with logging and observability tools (e.g., ELK stack, Fluentd).
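The "Helm chart development" this role emphasizes is, at its core, rendering parameterized manifests from per-environment values. A toy sketch of that templating idea in Python using the standard library's `string.Template`; the manifest fields and value names are invented for illustration, and real Helm charts use Go templates driven by `values.yaml`:

```python
from string import Template

# A toy "chart template": one parameterized manifest, many environments.
manifest = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

# Per-environment "values", analogous to values.yaml overrides.
dev = {"name": "web-dev", "replicas": 1}
prod = {"name": "web-prod", "replicas": 4}

print(manifest.substitute(prod))
```

Rendering the same template with `dev` or `prod` values yields environment-specific manifests, which is the workflow `helm template` and `helm install --values` automate at scale.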

Posted 2 weeks ago

Apply

Exploring GitOps Jobs in India

With the increasing adoption of DevOps practices in the tech industry, GitOps has emerged as a popular approach for managing infrastructure and deployments. Job opportunities in the field of GitOps are on the rise in India, with many companies looking for professionals who are skilled in this area.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Gurgaon
  5. Chennai

Average Salary Range

The average salary range for GitOps professionals in India varies based on experience level:
  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

In the GitOps field, a typical career path may include roles such as:
  1. Junior GitOps Engineer
  2. GitOps Engineer
  3. Senior GitOps Engineer
  4. GitOps Architect
  5. GitOps Manager

Related Skills

Besides GitOps expertise, professionals in this field are often expected to have knowledge of:
  • DevOps practices
  • Infrastructure as Code (IaC) tools like Terraform
  • Containerization technologies like Docker and Kubernetes
  • Continuous Integration/Continuous Deployment (CI/CD) pipelines

Interview Questions

  • What is GitOps and how does it differ from traditional DevOps practices? (medium)
  • Explain the role of Git in the GitOps workflow. (basic)
  • How would you implement a rollback strategy in a GitOps environment? (medium)
  • What are some common challenges faced while implementing GitOps in an organization? (medium)
  • Describe your experience with GitOps tools like Argo CD or Flux. (medium)
  • How do you ensure security in a GitOps pipeline? (advanced)
  • What are the benefits of using GitOps for infrastructure management? (basic)
  • Explain the concept of declarative infrastructure in the context of GitOps. (medium)
  • How do you handle secrets and sensitive information in a GitOps workflow? (medium)
  • What is the difference between GitOps and ChatOps? (basic)
  • Describe a scenario where you had to troubleshoot a failed deployment in a GitOps pipeline. (medium)
  • How do you monitor and audit changes made to infrastructure using GitOps? (advanced)
  • What are some best practices for version controlling infrastructure code in GitOps? (medium)
  • How can you ensure high availability and scalability in a GitOps setup? (medium)
  • Explain the concept of GitOps synchronization and reconciliation. (medium)
  • How do you handle configuration drift in a GitOps environment? (advanced)
  • What are the limitations of GitOps and how can they be mitigated? (advanced)
  • Describe your experience with GitOps observability tools. (medium)
  • How do you collaborate with development teams in a GitOps workflow? (basic)
  • What are some key metrics you would track to measure the success of a GitOps implementation? (medium)
  • How do you automate the testing process in a GitOps pipeline? (medium)
  • Explain the concept of progressive delivery and how it can be implemented in GitOps. (advanced)
  • What are some strategies for disaster recovery in a GitOps setup? (advanced)
  • How do you handle dependencies between different components in a GitOps deployment? (medium)
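Several of the questions above (synchronization and reconciliation, configuration drift, rollback) revolve around the same core loop: a controller repeatedly compares the desired state declared in Git with the observed state of the cluster and acts on the difference. A minimal, tool-agnostic sketch of that loop in Python; the resource names and dict-based "states" are illustrative stand-ins, not any real Argo CD or Flux API:

```python
# Minimal sketch of a GitOps reconciliation step. Desired state would
# normally come from manifests in a Git repo and observed state from the
# cluster API; plain dicts stand in for both here.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to make `observed` match `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")
        elif observed[name] != spec:          # configuration drift detected
            actions.append(f"update {name}")
    for name in observed:
        if name not in desired:               # prune undeclared resources
            actions.append(f"delete {name}")
    return actions

# Desired state (from Git) vs. observed state (from the cluster):
desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
observed = {"web": {"replicas": 2}, "legacy-job": {"replicas": 1}}

print(reconcile(desired, observed))
# → ['update web', 'create worker', 'delete legacy-job']
```

Running this loop continuously is what keeps drift from accumulating, and it also explains why rollback in GitOps is just `git revert`: reverting the commit changes the desired state, and the next reconciliation converges the cluster back.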

Closing Remark

As the demand for GitOps professionals continues to grow in India, now is the perfect time to upskill and prepare for exciting job opportunities in this field. Stay updated with the latest trends, practice your technical skills, and approach interviews with confidence to land your dream GitOps job. Good luck!
