Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: Cloud Automation DevOps Engineer Job Location: Noida (Hybrid) Job Summary: We are seeking a highly skilled and innovative Cloud Automation DevOps Engineer to join our Automation Center of Excellence (CoE). The ideal candidate will have hands-on experience in designing and developing solution demos, setting up labs, and piloting/implementing scalable automation solutions across IT Operations (ITOps), DevOps, and CloudOps domains using a combination of traditional automation frameworks and cutting-edge technologies, including Agentic AI and Generative AI-driven automation platforms. Key Responsibilities: Design and develop end-to-end automation workflows across cloud infrastructure, CI/CD pipelines, IT service management, monitoring, and incident response. Lead the architecture and implementation of intelligent automation use cases using GenAI, Agentic AI, and orchestration tools. Integrate automation solutions across multi-cloud environments (AWS, Azure, GCP) and hybrid infrastructure. Collaborate with platform and operations teams to build reusable automation templates, libraries, and toolkits. Drive assessment, use case discovery, feasibility analysis, POCs, and prototyping of intelligent automation initiatives across ITOps, DevOps, and CloudOps. Develop and maintain Infrastructure as Code (IaC) using tools such as Terraform, Ansible, and CloudFormation. Implement observability-driven automation workflows integrating tools such as Prometheus, Datadog, Dynatrace, Splunk, or SolarWinds. Maintain best practices in security, compliance, and governance for automation solutions. Document solutions and provide knowledge transfer to internal presales teams as part of the CoE enablement model. Required Skills and Experience: Proven experience (6+ years) in automation across DevOps, ITOps, and CloudOps domains. Strong proficiency in automation tools: Terraform, Ansible, Jenkins, GitOps tools, Python/shell/JavaScript scripting, and REST APIs. Experience with cloud platforms: AWS, Azure, GCP, including automation of provisioning, scaling, and monitoring. Hands-on experience building automation using GenAI models (e.g., OpenAI, LangChain) and Agentic AI frameworks. Familiarity with AIOps tools and platforms and their integration into automation workflows. Experience with container orchestration (Kubernetes, Docker) and related automation. Strong understanding of CI/CD pipelines, version control (Git), and release engineering. Excellent problem-solving skills and the ability to lead PoCs and pilot automation projects. Preferred Qualifications: Exposure to ITSM tools (ServiceNow, BMC Helix) with automation integrations. Experience in building modular automation components or automation-as-a-service models. Prior experience working in an Automation Center of Excellence or contributing to enterprise-wide automation strategies. Knowledge of security automation and compliance controls in cloud-native environments. Why Join Us: Be a key contributor to building an enterprise-grade Automation CoE focused on next-gen intelligent automation. Work on innovative use cases integrating traditional automation with Agentic AI and GenAI solutions. Collaborate with cross-functional teams on high-impact initiatives across global operations. Apply now and be part of the future of automation!
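For illustration only, here is a minimal sketch of the kind of REST-API-driven ITOps automation this posting describes: poll a service health endpoint and trigger a remediation hook when it looks unhealthy. It assumes Python with the `requests` library; the endpoint and webhook URLs are hypothetical placeholders, not anything named in the posting.

```python
import requests

HEALTH_URL = "https://example.internal/api/v1/health"          # hypothetical health endpoint
REMEDIATION_WEBHOOK = "https://example.internal/hooks/restart"  # hypothetical remediation webhook


def check_and_remediate(timeout: float = 5.0) -> bool:
    """Poll a service health endpoint; fire a remediation webhook if it looks unhealthy."""
    try:
        resp = requests.get(HEALTH_URL, timeout=timeout)
        healthy = resp.status_code == 200 and resp.json().get("status") == "ok"
    except (requests.RequestException, ValueError):
        # Network failure or non-JSON body both count as unhealthy here.
        healthy = False

    if not healthy:
        # Kick off the (hypothetical) remediation workflow, e.g. a restart job.
        requests.post(REMEDIATION_WEBHOOK, json={"action": "restart"}, timeout=timeout)
    return healthy


if __name__ == "__main__":
    print("healthy" if check_and_remediate() else "remediation triggered")
```

In a real automation CoE this check would typically run from a scheduler or an AIOps platform rather than ad hoc, but the pattern (observe via API, decide, act via API) is the same.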
Posted 2 weeks ago
8.0 years
3 - 8 Lacs
Chennai
On-site
Meet the Team The Cisco AI Software & Platform Group incubates and delivers Generative AI-based solutions to reinvent Cisco's existing products and how customers interact with them. Our Group is also introducing new offerings that help customers roll out Generative AI at scale while doing so responsibly. Ultimately, we are doing so through internal platforms that unlock the benefits of this technology for Cisco teams and partners across our Security, Enterprise Networking, Collaboration and Splunk portfolios. Your Impact Cisco is looking for a highly experienced and innovative DevOps Engineer to join our global DevOps team. In this critical role, you will architect and build scalable, secure cloud infrastructure and lead the adoption of best-in-class DevOps practices across the organization. You will work closely with cross-functional teams to ensure flawless code delivery, system reliability, and operational excellence for our SaaS platforms. Key Responsibilities Implement end-to-end CI/CD workflows in a large-scale, distributed environment—enabling both on-demand and scheduled builds and deployments with zero downtime. Scale microservice-based platforms across multiple geographic regions, with a focus on container orchestration (Kubernetes), cost optimization, and automated scaling solutions. Lead cross-functional DevOps projects from inception to completion, defining success criteria, coordinating execution, and measuring outcomes against clear KPIs. Collaborate with engineering, QA, and product teams to streamline delivery pipelines and promote DevOps best practices throughout the development lifecycle. Advocate automation in every layer of the infrastructure stack using Infrastructure as Code (IaC) principles and tools such as Terraform, Helm, and GitOps frameworks. Continuously evaluate and adopt emerging DevOps tools and technologies to increase system resilience, reduce operational overhead, and enhance developer productivity. Participate in on-call rotation. Serve as a subject matter expert within the DevOps organization—promoting a culture of ownership, collaboration, and continuous improvement. Minimum Qualifications Bachelor's degree in Computer Science, Engineering, or a related field/industry plus 8 years of DevOps experience; Master's plus 5 years of related experience; or PhD plus 3 years of related experience. Strong understanding of CI/CD pipelines and automation tools. Knowledge of cloud platforms (AWS, Azure, GCP). Solid scripting and automation skills (e.g., Python, Bash, Go). Preferred Qualifications: Deep expertise in CI/CD tooling and practices, including hands-on experience with systems like Jenkins, GitLab, ArgoCD, or similar. Strong proficiency in Kubernetes, Docker, and cloud-native patterns in AWS, Azure, or GCP. Demonstrated success scaling containerized microservices across regions with cost and performance optimizations. Proven leadership in cross-functional engineering projects with measurable outcomes. Excellent communication, collaboration, and mentoring skills. #WeAreCisco, where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all. Our passion is connection—we celebrate our employees’ diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers where learning and development are encouraged and supported at every stage.
Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best, but be their best. We understand our outstanding opportunity to bring communities together and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer—80 hours each year—allows us to give back to causes we are passionate about, and nearly 86% do! Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us! Message to applicants applying to work in the U.S. and/or Canada: When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process. U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings. Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-Exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco’s flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco’s Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next such that the maximum number of sick time hours an employee may have available is 160 hours. Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community. Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. 
For quota-based incentive pay, Cisco typically pays as follows: 0.75% of incentive target for each 1% of revenue attainment up to 50% of quota; 1.5% of incentive target for each 1% of attainment between 50% and 75%; 1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation. For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
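Read literally, the tiered schedule above is simple arithmetic; this hedged Python sketch expresses it, treating the above-100% rate as exactly 1% per 1% of attainment, where the text only guarantees "at or above 1%". At exactly 100% attainment the tiers work out to 100% of the incentive target (50×0.75 + 25×1.5 + 25×1.0 = 100).

```python
def quota_incentive_payout(attainment_pct: float) -> float:
    """Incentive payout as a percentage of target, per the tiers quoted above."""
    tiers = [            # (upper bound of tier in % attainment, payout rate per 1% of attainment)
        (50, 0.75),
        (75, 1.5),
        (100, 1.0),
        (float("inf"), 1.0),  # no cap above 100%; the posting says "at or above 1%"
    ]
    payout, lower = 0.0, 0.0
    for upper, rate in tiers:
        span = max(0.0, min(attainment_pct, upper) - lower)  # portion of attainment in this tier
        payout += span * rate
        lower = upper
    return payout


# Sanity check: 100% attainment pays exactly 100% of target under these tiers.
assert abs(quota_incentive_payout(100) - 100.0) < 1e-9
print(quota_incentive_payout(60))   # 52.5% of target at 60% attainment
```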
Posted 2 weeks ago
0 years
0 Lacs
Chennai
Remote
Chennai, India Hyderabad, India Job ID: R-1058074 Apply prior to the end date: June 3rd, 2025 When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing... You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have a high level of technical expertise and daily hands-on implementation working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts, or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services Automation and testing of different platform deployments, maintenance and decommissioning Full Stack Development Participate in POC (Proof of Concept) technical evaluations for new technologies for use in the cloud What we’re looking for... You’ll need to have: Bachelor's degree or four or more years of work experience. Three or more years of relevant Kubernetes-centric development experience Address Jira tickets opened by platform customers Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI RBAC and Pod Security Standards, Quotas, LimitRanges, OPA & Gatekeeper Policies Expertise in one or more of the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory Proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, logging solutions (Fluentd/Elastic/Fluentbit/OTEL/ADOT/Splunk) to include creating/customizing metrics and/or logging dashboards Infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, Over-provisioning, CoreDNS, metrics-server Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis Working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh related issues Demonstrated expertise with the K8S ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.) Experience creating self-healing automation scripts/pipelines Bash scripting experience to include automation scripting (netshoot, RBAC lookup, etc.) Demonstrated strong troubleshooting and problem-solving skills Demonstrated expertise with the K8S security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.)
Strong troubleshooting and problem-solving skills Certified Kubernetes Administrator (CKA) Excellent cross-collaboration and communication skills Even better if you have one or more of the following: GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model Working experience with security tools such as Sysdig, Crowdstrike, Black Duck, Xray, etc. Networking of microservices - solid understanding of Kubernetes networking and troubleshooting Experience with monitoring tools like New Relic; working experience with Kiali and Jaeger lifecycle management and assisting app teams on how they could leverage these tools for their observability needs K8s SRE tools for troubleshooting Certified Kubernetes Administrator (CKA) Certified Kubernetes Application Developer (CKAD) Red Hat Certified OpenShift Administrator If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.
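The Verizon posting above leans heavily on inspecting cluster resources, determining cluster health, and self-healing automation scripts. As an illustration only, here is a minimal Python sketch of that kind of check, assuming the official `kubernetes` client library is installed and a kubeconfig is available; nothing here is Verizon's actual tooling.

```python
from kubernetes import client, config


def report_unhealthy_pods() -> list[str]:
    """List pods that are not Running or Succeeded across all namespaces."""
    config.load_kube_config()  # local kubeconfig; use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
    return problems


if __name__ == "__main__":
    for line in report_unhealthy_pods():
        print(line)
```

A real self-healing pipeline would go further (restart deployments, open Jira tickets, or page on-call), but listing unhealthy pods is the usual starting point for such scripts.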
Posted 2 weeks ago
12.0 years
5 - 9 Lacs
Bengaluru
On-site
Meet the Team The Cisco AI Software & Platform Group incubates and delivers Generative AI-based solutions to reinvent Cisco's existing products and how customers interact with them. Our Group is also introducing new offerings that help customers roll out Generative AI at scale while doing so responsibly. Ultimately, we are doing so through internal platforms that unlock the benefits of this technology for Cisco teams and partners across our Security, Enterprise Networking, Collaboration and Splunk portfolios. Your Impact Cisco is looking for a highly experienced and innovative DevOps Engineer to join our global DevOps team. In this critical role, you will architect and build scalable, secure cloud infrastructure and lead the adoption of best-in-class DevOps practices across the organization. You will work closely with cross-functional teams to ensure flawless code delivery, system reliability, and operational excellence for our SaaS platforms. Key Responsibilities Design and implement end-to-end CI/CD workflows in a large-scale, distributed environment—enabling both on-demand and scheduled builds and deployments with zero downtime. Architect and scale microservice-based platforms across multiple geographic regions, with a focus on container orchestration (Kubernetes), cost optimization, and automated scaling solutions. Lead complex, cross-functional DevOps projects from inception to completion, defining success criteria, coordinating execution, and measuring outcomes against clear KPIs. Collaborate with engineering, QA, and product teams to streamline delivery pipelines and promote DevOps best practices throughout the development lifecycle. Champion automation in every layer of the infrastructure stack using Infrastructure as Code (IaC) principles and tools such as Terraform, Helm, and GitOps frameworks. Continuously evaluate and adopt emerging DevOps tools and technologies to increase system resilience, reduce operational overhead, and enhance developer productivity. Participate in on-call rotation. Serve as a mentor and strategic advisor within the DevOps organization—promoting a culture of ownership, collaboration, and continuous improvement. Minimum Qualifications Bachelor's degree in Computer Science, Engineering, or a related field/industry plus 12 years of DevOps experience; Master's plus 8 years of related experience; or PhD plus 5 years of related experience. Strong understanding of CI/CD pipelines and automation tools. Knowledge of cloud platforms (AWS, Azure, GCP). Solid scripting and automation skills (e.g., Python, Bash, Go). Preferred Qualifications: Deep expertise in CI/CD tooling and practices, including hands-on experience with systems like Jenkins, GitLab, ArgoCD, or similar. Strong proficiency in Kubernetes, Docker, and cloud-native patterns in AWS, Azure, or GCP. Proven success scaling containerized microservices across regions with cost and performance optimizations. Proven leadership in cross-functional engineering projects with measurable outcomes. Excellent communication, collaboration, and mentoring skills. #WeAreCisco, where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all. Our passion is connection—we celebrate our employees’ diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers where learning and development are encouraged and supported at every stage.
Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best, but be their best. We understand our outstanding opportunity to bring communities together and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer—80 hours each year—allows us to give back to causes we are passionate about, and nearly 86% do! Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us! Message to applicants applying to work in the U.S. and/or Canada: When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process. U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings. Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-Exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco’s flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco’s Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next such that the maximum number of sick time hours an employee may have available is 160 hours. Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community. Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. 
For quota-based incentive pay, Cisco typically pays as follows: 0.75% of incentive target for each 1% of revenue attainment up to 50% of quota; 1.5% of incentive target for each 1% of attainment between 50% and 75%; 1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation. For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
Posted 2 weeks ago
2.0 years
5 - 9 Lacs
Pune
On-site
Every day, Global Payments makes it possible for millions of people to move money between buyers and sellers using our payments solutions for credit, debit, prepaid and merchant services. Our worldwide team helps over 3 million companies, more than 1,300 financial institutions and over 600 million cardholders grow with confidence and achieve amazing results. We are driven by our passion for success and we are proud to deliver best-in-class payment technology and software solutions. Join our dynamic team and make your mark on the payments technology landscape of tomorrow. Summary of This Role Responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Creates a bridge between development and operations by applying a software engineering mindset to system administration topics. Splits time between operations/on-call duties and developing systems and software that help increase site reliability and performance. What Part Will You Play? Chaos engineering - you’re expected to think laterally about how our systems might fail in theory, design tests to demonstrate how they behave in practice, and then formulate and implement remediation plans, as appropriate. Pushing our systems to their limits, and then coming up with designs for how to get them to the next performance tier. Use practices from DevOps and GitOps to improve automation and processes to make self service possible. Safeguarding reliability. Ensuring that our services are highly available, resilient against disasters, self-monitoring, and self-healing. Running “game days” to test assumptions about reliability and learn what will break before it matters to customers. Reviewing designs with an eye toward increasing the holistic stability of our platform and identifying potential risks. Building systems to proactively monitor the health, performance and security of our production and non-production virtualized infrastructure. Improving our monitoring and alerting systems to make sure engineers get paged when it matters (and don’t get paged when it doesn’t). Troubleshooting systems and network issues, alongside our Technical Operations Team. Evolving our SDLC, practices, and tooling to account for Site Reliability considerations and best practices. Developing runbooks and improving documentation. What Are We Looking For in This Role? Minimum Qualifications BS in Computer Science, Information Technology, Business / Management Information Systems or related field Typically minimum of 2 years relevant experience Preferred Qualifications Nothing provided What Are Our Desired Skills and Capabilities? Skills / Knowledge - Developing professional expertise, applies company policies and procedures to resolve a variety of issues. Job Complexity - Works on problems of moderate scope where analysis of situations or data requires a review of a variety of factors. Exercises judgment within defined procedures and practices to determine appropriate action. Builds productive internal/external working relationships. Supervision - Normally receives general instructions on routine work, detailed instructions on new projects or assignments. Experience in Public and Private Clouds, Jenkins, Terraform, Ansible, OpenShift, Kubernetes or AWS EKS Global Payments Inc. is an equal opportunity employer. 
Global Payments provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex (including pregnancy), national origin, ancestry, age, marital status, sexual orientation, gender identity or expression, disability, veteran status, genetic information or any other basis protected by law. If you wish to request reasonable accommodations related to applying for employment or provide feedback about the accessibility of this website, please contact jobs@globalpay.com.
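The SRE posting above emphasizes "improving our monitoring and alerting systems to make sure engineers get paged when it matters (and don't get paged when it doesn't)". As a rough illustration of that kind of check, here is a hedged Python sketch that queries Prometheus' standard HTTP query API with the `requests` library; the Prometheus address, the metric expression, and the threshold are illustrative assumptions, not anything specified in the posting.

```python
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"        # hypothetical address
QUERY = 'avg(rate(node_cpu_seconds_total{mode!="idle"}[5m]))'      # fleet-wide CPU busy fraction
THRESHOLD = 0.85


def cpu_over_threshold() -> bool:
    """Query Prometheus and report whether average CPU busy time breaches the threshold."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    if not results:
        return False  # no samples returned; a production check would also alert on missing data
    value = float(results[0]["value"][1])  # each result carries a [timestamp, value] pair
    return value >= THRESHOLD


if __name__ == "__main__":
    print("page someone" if cpu_over_threshold() else "all quiet")
```

In practice this logic would live in an Alertmanager rule rather than a script, but the query-then-decide pattern is the same one an SRE iterates on when tuning paging thresholds.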
Posted 2 weeks ago
3.0 years
0 Lacs
Calcutta
On-site
Walk-In Interview on 4 June 2025 (11:00-17:00) We are seeking a DevOps Engineer with 3+ years of experience specializing in AWS, Git, and VPS management. The ideal candidate will be responsible for automating deployments, managing cloud infrastructure, and optimizing CI/CD pipelines for seamless development and operations. Key Responsibilities: ✅ AWS Infrastructure Management – Deploy, configure, and optimize AWS services (EC2, S3, RDS, Lambda, etc.). ✅ Version Control & GitOps – Manage repositories, branching strategies, and workflows using Git/GitHub/GitLab. ✅ VPS Administration – Configure, maintain, and optimize VPS servers for high availability and performance. ✅ CI/CD Pipeline Development – Implement automated Git-based CI/CD workflows for smooth software releases. ✅ Containerization & Orchestration – Deploy applications using Docker and Kubernetes. ✅ Infrastructure as Code (IaC) – Automate deployments using Terraform or CloudFormation. ✅ Monitoring & Security – Implement logging, monitoring, and security best practices. Required Skills & Experience: 3+ years of experience in AWS, Git, and VPS management. Strong knowledge of AWS services (EC2, VPC, IAM, S3, CloudWatch, etc.). Expertise in Git and GitOps workflows. Hands-on experience with VPS hosting, Nginx, Apache, and server management. Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI). Knowledge of Infrastructure as Code (Terraform, CloudFormation). Strong scripting skills (Bash, Python, or Go). Preferred Qualifications: Experience with server security hardening on VPS servers. Familiarity with AWS Lambda & Serverless architecture. Knowledge of DevSecOps best practices. Bring your updated resume and be in formal attire. Job Types: Full-time, Permanent, Contractual / Temporary Benefits: Provident Fund Schedule: Day shift Work Location: In person
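For context on the AWS infrastructure management duties this walk-in posting lists, here is a small, hedged Python sketch of a routine inventory check using `boto3`, assuming AWS credentials are already configured; the region is an arbitrary placeholder.

```python
import boto3


def running_instances(region: str = "ap-south-1") -> list[dict]:
    """Return a small inventory of running EC2 instances in one region."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    inventory = []
    for reservation in reservations:
        for instance in reservation["Instances"]:
            inventory.append(
                {
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "az": instance["Placement"]["AvailabilityZone"],
                }
            )
    return inventory


if __name__ == "__main__":
    for item in running_instances():
        print(item)
```

Housekeeping scripts like this are usually the building blocks behind the posting's "automating deployments, managing cloud infrastructure" responsibilities, feeding into CI/CD jobs or scheduled audits.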
Posted 2 weeks ago
8.0 - 10.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Summary As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business and your career. About the Role As a Senior DevOps Engineer, you will be responsible for building and supporting AWS infrastructure used to host a platform offering audit solutions. This engineer is constantly looking to optimize systems and services for security, automation, and performance/availability, while ensuring the solutions developed adhere and align to architecture standards. This individual is responsible for ensuring that technology systems and related procedures adhere to organizational values. The person will also assist developers with technical issues in the initiation, planning, and execution phases of projects. These activities include: the definition of needs, benefits, and technical strategy; research & development within the project life cycle; technical analysis and design; and support of operations staff in executing, testing and rolling out the solutions. This role will be responsible for: Plan, deploy, and maintain critical business applications in prod/non-prod AWS environments Design and implement appropriate environments for those applications, engineer suitable release management procedures and provide production support Influence broader technology groups in adopting Cloud technologies, processes, and best practices Drive improvements to processes and design enhancements to automation to continuously improve production environments Maintain and contribute to our knowledge base and documentation Provide leadership, technical support, user support, technical orientation, and technical education activities to project teams and staff Manage change requests between development, staging, and production environments Provision and configure hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements Perform daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems and key processes, reviewing system and application logs, and verifying completion of automated processes Perform ongoing performance tuning, infrastructure upgrades, and resource optimization as required Provide Tier II support for incidents and requests from various constituencies Investigate and troubleshoot issues Research, develop, and implement innovative and, where possible, automated approaches for system administration tasks About you You are a fit for the Senior DevOps Engineer role if your background includes: Required: 8+ years at a senior DevOps level. Knowledge of the Azure and AWS cloud platforms, including S3, CloudFront, CloudFormation, RDS, OpenSearch, and ActiveMQ. Knowledge of CI/CD, preferably with AWS developer tools. Scripting knowledge, preferably in Python, Bash, or PowerShell. Have contributed as a DevOps engineer responsible for planning, building and deploying cloud-based solutions. Knowledge of building and deploying containers and Kubernetes.
(Exposure to AWS EKS is also preferable.) Knowledge of Infrastructure as Code tools such as Bicep, Terraform, or Ansible. Knowledge of GitHub Actions, PowerShell, and GitOps. Nice to have: Experience building and deploying .NET Core and Java-based solutions. Strong understanding of an API-first strategy. Knowledge and some experience implementing a testing strategy in a continuous deployment environment. Have owned and operated continuous delivery deployments. Have set up monitoring tools and disaster recovery plans to ensure business continuity.
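The daily system monitoring and performance-tuning duties above typically start with pulling metrics out of CloudWatch. Here is a hedged Python sketch using `boto3` (assuming configured AWS credentials; the RDS instance identifier and region are placeholders, not details from the posting).

```python
from datetime import datetime, timedelta, timezone

import boto3


def rds_cpu_last_hour(db_instance_id: str, region: str = "us-east-1") -> float:
    """Average RDS CPU utilisation over the last hour, in percent."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,                # 5-minute buckets
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if not points:
        return 0.0                 # no data returned; a real check would flag this too
    return sum(p["Average"] for p in points) / len(points)


if __name__ == "__main__":
    # "my-audit-db" is a hypothetical instance identifier used only for the example.
    print(f"avg CPU: {rds_cpu_last_hour('my-audit-db'):.1f}%")
```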
Posted 2 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary of This Role Responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Creates a bridge between development and operations by applying a software engineering mindset to system administration topics. Splits time between operations/on-call duties and developing systems and software that help increase site reliability and performance. What Part Will You Play? Chaos engineering - you’re expected to think laterally about how our systems might fail in theory, design tests to demonstrate how they behave in practice, and then formulate and implement remediation plans, as appropriate. Pushing our systems to their limits, and then coming up with designs for how to get them to the next performance tier. Use practices from DevOps and GitOps to improve automation and processes to make self service possible. Safeguarding reliability. Ensuring that our services are highly available, resilient against disasters, self-monitoring, and self-healing. Running “game days” to test assumptions about reliability and learn what will break before it matters to customers. Reviewing designs with an eye toward increasing the holistic stability of our platform and identifying potential risks. Building systems to proactively monitor the health, performance and security of our production and non-production virtualized infrastructure. Improving our monitoring and alerting systems to make sure engineers get paged when it matters (and don’t get paged when it doesn’t). Troubleshooting systems and network issues, alongside our Technical Operations Team. Evolving our SDLC, practices, and tooling to account for Site Reliability considerations and best practices. Developing runbooks and improving documentation. What Are We Looking For in This Role? Minimum Qualifications BS in Computer Science, Information Technology, Business / Management Information Systems or related field Typically minimum of 2 years relevant experience Preferred Qualifications Nothing provided What Are Our Desired Skills and Capabilities? Skills / Knowledge - Developing professional expertise, applies company policies and procedures to resolve a variety of issues. Job Complexity - Works on problems of moderate scope where analysis of situations or data requires a review of a variety of factors. Exercises judgment within defined procedures and practices to determine appropriate action. Builds productive internal/external working relationships. Supervision - Normally receives general instructions on routine work, detailed instructions on new projects or assignments. Experience in Public and Private Clouds, Jenkins, Terraform, Ansible, OpenShift, Kubernetes or AWS EKS
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Experience Range: 8 to 12 years Location: Bangalore OpenShift Expertise (Azure Red Hat OpenShift): Lead the design, deployment, and optimization of OpenShift clusters, ensuring high availability, security, and scalability; handle installation, upgrades, administration, and troubleshooting. • Strong expertise in Azure Kubernetes Service, containerization (Docker), and cloud-native development. • Platform Automation: Develop and maintain Infrastructure as Code (IaC) using Terraform and implement GitOps practices with ArgoCD.
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Pune, Maharashtra
On-site
Role: DevOps Engineer / Platform Engineer Job Type: Full time Location: Pune (On-site) Salary: 15 to 20 LPA Time: IST - Normal Shift Role Overview We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications. Key Responsibilities Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP. Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, BitBucket Pipelines, or AWS CodePipeline. Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker. Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure. Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST. Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk. Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools. Troubleshoot complex networking, Linux/Windows server issues, and cloud-related performance bottlenecks. Collaborate with development, security, and operations teams to streamline the DevSecOps process. Must-Have Skills 3+ years of experience in DevOps, cloud infrastructure, or platform engineering. Expertise in at least one major cloud provider: AWS, Azure, or GCP. Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies. Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools. Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation. Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or BitBucket Pipelines. Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration. Expertise in networking (VPCs, Subnets, Load Balancing, Security Groups, Firewalls). Experience in log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, Dynatrace. Strong communication skills to work with cross-functional teams and external customers. Knowledge of Cloud Security best practices, including IAM, WAF, GuardDuty, CVE scanning, vulnerability management. Good-to-Have Skills Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center). Experience in compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST). Exposure to Windows Server administration alongside Linux environments. Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch). GitOps experience with tools like ArgoCD or Flux. Background in penetration testing, intrusion detection, and vulnerability scanning. Experience in cost optimization strategies for cloud infrastructure. Passion for mentoring teams and sharing DevOps best practices.
Job Types: Full-time, Permanent Pay: ₹1,500,000.00 - ₹2,000,000.00 per year Schedule: Day shift Evening shift Monday to Friday Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: DevOps: 4 years (Required) Terraform: 1 year (Required) Work Location: In person
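Roles like the one above pair Infrastructure-as-Code with CI/CD guardrails. As an illustration only, here is a small Python sketch of one such guardrail: it inspects a Terraform plan that has been exported to JSON (the assumed workflow is `terraform plan -out plan.tfplan` followed by `terraform show -json plan.tfplan > plan.json`) and fails the pipeline if any resource would be deleted or replaced. The file names are placeholders.

```python
import json
import sys


def destructive_changes(plan_json_path: str) -> list[str]:
    """Flag resources a Terraform plan would delete or replace."""
    with open(plan_json_path) as f:
        plan = json.load(f)
    flagged = []
    for rc in plan.get("resource_changes", []):
        actions = rc.get("change", {}).get("actions", [])
        if "delete" in actions:  # plain deletes and delete-and-recreate replacements
            flagged.append(f"{rc['address']}: {'/'.join(actions)}")
    return flagged


if __name__ == "__main__":
    # Usage: python check_plan.py plan.json
    issues = destructive_changes(sys.argv[1])
    for issue in issues:
        print("DESTRUCTIVE:", issue)
    sys.exit(1 if issues else 0)
```

Running this as a pipeline step before `terraform apply` gives reviewers an explicit gate on destructive infrastructure changes, which is one common way teams operationalise the "IaC best practices" the posting asks for.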
Posted 2 weeks ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law. What You'll Do BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally. This Will Include Serving as a leader within BCG X and specifically the KEY Impact Management by BCG X Tribe (Transformation, Post-Merger-Integration related software and data products), overseeing the delivery of high-quality software: driving technical roadmap, architectural decisions and mentoring engineers Influencing and serving as a key decision maker in BCG X technology selection & strategy Active “hands-on” role, building intelligent analytical products to solve problems, write elegant code, and iterate quickly Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe. Responsible for technology roadmap of existing and new components delivered. Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited for the goals, including open-source options (e.g., Node, Django, Flask, Python) where needed. What You'll Bring 10+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally agile environment) with exposure to a variety of technologies and solutions, with at least 5 years' experience in an Architect role. Experience with a wide range of Application and Data architectures, platforms and tools including: Service Oriented Architecture, Clean Architecture, Software as a Service, Web Services, Object-Oriented Languages (like C# or Java), SQL Databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, and data science experience.
Thoroughly up to date in technology: Modern cloud architectures including AWS, Azure, GCP, Kubernetes Very strong particularly in .NET, C#, MS SQL Server, Angular technologies Open-source stacks including NodeJS, React, Angular, and Flask are good to have CI/CD / DevSecOps / GitOps toolchains and development approaches Knowledge of machine learning & AI frameworks Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow At least a Bachelor's degree; Master's degree and/or MBA preferred Team player with excellent work habits and interpersonal skills Care deeply about product quality, reliability, and scalability Passion for the people and culture side of engineering teams Outstanding written and oral communications skills The ability to travel, depending on project requirements. #BCGXjob Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
Posted 2 weeks ago
0.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India Job ID 763678 Join our Team About this opportunity As a Senior Engineer - CNIS, you will be responsible for the design, deployment, administration, and lifecycle management of Ericsson's CNIS-based infrastructure. This includes deep hands-on work in Kubernetes platforms, cloud-native networking, Rook/Ceph storage, and CNF onboarding for 5G core networks and telecom workloads. What you will do CNIS Deployment & Operations Install and configure CNIS platforms on bare-metal or virtual infrastructure. Set up and manage ECCD/Kubernetes clusters using tools like Rancher (RKE2), CEE, or kubeadm. Configure CNI plugins (Multus, Calico, SR-IOV) and CSI drivers (Rook/Ceph). Manage day-to-day operations for CNFs, including Helm deployments, scaling, upgrades, and troubleshooting of existing operational issues. Infrastructure & Storage Manage Rook-Ceph clusters and troubleshoot persistent storage issues. Monitor hardware health (compute/storage/network) and ensure infrastructure compliance. Support cluster high availability, resilience, and performance tuning. Networking & Security Hands-on experience with SDI/Juniper/Extreme SLX switches. Configure pod and service networking, BGP peering, VLAN/VXLAN overlays, and L2/L3 IP planning. Implement and audit network policies, RBAC, and security postures using standard Kubernetes practices. Integrate LDAP/Keycloak/SSO for user authentication and IAM. Hands-on experience with ESM. Monitoring & Troubleshooting Integrate Prometheus/Grafana, EFK/ELK, or Loki for observability. Use kubectl, ceph, and Linux tools for troubleshooting nodes, pods, storage, and networking. Analyse CNF behaviour during failures and generate RCA documentation. Use Netcool & OMC for monitoring. CNF Lifecycle & DevOps Collaborate with CNF vendors to validate Helm charts and deployment manifests. Use GitOps tools (e.g., ArgoCD, FluxCD) for deployment automation. Work closely with DevOps and CI/CD pipelines for CNF rollout and validation. You will bring 3–7 years of experience in cloud-native infrastructure or telco platforms. Strong hands-on Kubernetes experience (certifications like CKA/CNCF preferred). Expertise in: CNIs (Multus, Calico, SR-IOV) Storage (Rook/Ceph, CSI) Linux (RHEL/CentOS/Ubuntu) Networking (L2/L3, BGP, VLAN, IPAM, Juniper/Extreme switches) Helm, YAML, Git Ericsson tools: ENM, CENX, EO-EVNFM, OMC & ESM. Familiarity with the Ericsson CNIS framework is strongly preferred. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Noida Req ID: 763678
Posted 2 weeks ago
4.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Prospecta Founded in 2002 in Sydney, Australia, with additional offices in India, North America, Canada, and a local presence in Europe, the UK, and Southeast Asia, Prospecta began with a mission to provide top-tier data management and automation software for enterprise clients. Over the years, we have grown into a leading data management software company. Our flagship product, MDO (Master Data Online), is an enterprise Master Data Management (MDM) platform that facilitates comprehensive data management processes—from creating accurate, compliant, and relevant master data to efficient data disposal. We have established robust processes in asset-intensive industries such as Energy and Utilities, Oil and Gas, Mining, Infrastructure, and Manufacturing. Culture at Prospecta At Prospecta, our culture is centred around growth and the excitement of embracing new challenges. We have a passionate team that collaborates seamlessly to create value for our customers. Our diverse backgrounds make Prospecta an exhilarating place to work, bringing a rich tapestry of perspectives and ideas. We strive to foster an environment that is focused on both professional and personal development. Career progression here isn't just about climbing a ladder—it's about experiencing a continuous flow of exciting, meaningful opportunities that enhance personal development and technical mastery, all under the mentorship of exceptional leaders. Our interconnected organizational structure focuses on agility, responsiveness, and achieving tangible outcomes. If you're someone who thrives in a dynamic environment, enjoys wearing multiple hats, and is willing to go the extra mile to achieve goals, Prospecta is the workplace for you. We courageously push boundaries in everything we do, while sharing a sense of fun and celebrating both small and big wins. Key Responsibilities 1. Disaster Recovery & High Availability Design and implement robust disaster recovery (DR) strategies, including backup/restore procedures and DR drills. Ensure high availability across infrastructure components, minimizing downtime and data loss. 2. Monitoring & Observability Set up and maintain monitoring solutions using New Relic, Prometheus, and Grafana. Analyze metrics, logs, and traces to identify performance bottlenecks and optimize system health. Develop alerts, dashboards, and incident response runbooks for proactive issue resolution. 3. Containerization & Orchestration Lead the design, deployment, and management of Kubernetes/OpenShift clusters. Manage containerized workloads, including Deployments, ConfigMaps, Secrets, and Operators. Oversee RBAC configurations and security best practices within Kubernetes/OpenShift. 4. Messaging & Streaming Work with messaging and event streaming technologies such as RabbitMQ (RMQ), Kafka, and AWS SNS. Ensure data reliability, optimize throughput, and handle scaling challenges. 5. AWS Infrastructure & Cost Optimization Architect and manage AWS services including VPC, EC2, RDS, S3, IAM, SSM, AWS Secrets Manager, and OpenSearch. Implement cost optimization strategies across AWS resources (e.g., reserved instances, right-sizing). Configure load balancing, tunnels, and security groups to ensure secure and scalable infrastructure. 6. Infrastructure as Code & GitOps Utilize Terraform (and/or other IaC tools) to define, build, and manage infrastructure in a repeatable, version-controlled manner. Implement GitOps best practices to streamline IaC workflows and maintain consistency across environments.
7. CI/CD Pipelines Build and maintain CI/CD pipelines using Tekton on OpenShift or comparable tools. Automate code testing, packaging, and deployment to support rapid, reliable product releases. 8. Scripting & Automation Develop and maintain Bash and Python scripts for automating routine tasks. Create custom tooling for complex automations, and ensure robust error handling and logging. 9. Linux Systems & Networking Administer and troubleshoot Linux systems across multiple distributions and environments. Configure and secure web servers such as Nginx, including advanced setups (e.g., basic auth at specific endpoints). 10. Additional Tools & Services Manage Sonatype Nexus or similar artifact repositories. Implement gateway or API management solutions (e.g., OpenShift 3scale) for routing, security, and scaling of microservices. Integrate Redis caches, ensuring optimal performance and data durability. Use Lua or other scripting languages as needed for custom integrations. 11. Collaboration & Knowledge Sharing Collaborate with cross-functional teams (Development, QA, Infrastructure) to define requirements and deliver solutions. Conduct training and mentorship for junior engineers on best practices, technologies, and tools. Continuously research emerging technologies, sharing insights and thought leadership with the team. Qualifications 4-5 years of hands-on DevOps experience. Proven expertise in AWS services (VPC, EC2, RDS, S3, IAM, SSM, AWS Secrets Manager, OpenSearch). Advanced knowledge of Kubernetes/OpenShift (Deployments, ConfigMaps, Secrets, Operators, RBAC). Experience in disaster recovery planning and implementation. Proficiency with monitoring tools (New Relic, Prometheus, Grafana). Hands-on experience with CI/CD pipelines (Tekton, Jenkins, or similar). Ability to use Terraform and other IaC tools effectively. Strong scripting skills in Bash, Python, and/or Lua. Competence with Linux system administration (networking, performance tuning, security). Understanding of messaging systems (RabbitMQ, Kafka, or AWS SNS). Familiarity with Nginx configuration and management. Demonstrable skill in cost optimization strategies and best practices within cloud environments. Excellent communication and teamwork skills, with a drive to mentor and teach. What will you get: Growth Path: At Prospecta, your career journey is one of growth and opportunity. Here, depending on your career journey, you can either kickstart your career or accelerate your professional development in a dynamic environment. Your success is our priority, and as you demonstrate your abilities and achieve results, you'll have the chance to advance into a lead role. We're committed to helping you elevate your experience and skillsets, providing you with the tools, support, and opportunities to reach new heights in your career. Benefits: Competitive salary; Comprehensive health insurance; Generous paid time off; Flexible hybrid working model; Ongoing learning & career development; Onsite work opportunities; and Annual company events and workshops.
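The messaging responsibilities above center on reliability with RabbitMQ and Kafka. As an illustration only, here is a hedged Python sketch of a durable RabbitMQ publish using the `pika` library, assuming a reachable broker; the host, queue name, and payload are placeholders, not details from the posting.

```python
import json

import pika


def publish_event(queue: str, payload: dict, host: str = "localhost") -> None:
    """Publish a persistent message to a durable RabbitMQ queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)        # queue survives broker restarts
    channel.basic_publish(
        exchange="",                                        # default exchange routes by queue name
        routing_key=queue,
        body=json.dumps(payload).encode(),
        properties=pika.BasicProperties(delivery_mode=2),   # mark the message persistent
    )
    connection.close()


if __name__ == "__main__":
    publish_event("orders", {"order_id": 42, "status": "created"})
```

Declaring the queue durable and marking messages persistent is the usual first step toward the "data reliability" goal named in the posting; publisher confirms and consumer acknowledgements would be the next ones.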
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Job Title: DevOps Engineer Employment Type: Full-Time, 5 Days/Week Working Hours: Flexible Compensation: ₹8–12 LPA Location: Remote (India-based candidates only) Why Join Us? Remote-First: Work from anywhere in India with complete scheduling flexibility. Growth & Learning: Exposure to cutting-edge cloud-native, automation, and DevSecOps technologies. Innovative Culture: Collaborative, inclusive environment that values innovation, ownership, and continuous improvement. Perks: Unlimited leave policy, health and wellness support, and regular virtual team-building events. Role Overview We’re hiring a DevOps Engineer to architect, implement, and manage the CI/CD infrastructure, cloud deployments, and automation workflows that power our AI-driven platforms. You’ll work closely with software engineers, QA, and product teams to improve development velocity, system reliability, and operational efficiency. Key Responsibilities Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI/CD. Manage infrastructure as code (IaC) using Terraform, Pulumi, or CloudFormation. Set up, monitor, and scale containerized applications using Docker and Kubernetes (EKS/GKE/AKS). Automate build, test, and deployment processes to support agile software delivery. Monitor system performance and uptime using Prometheus, Grafana, ELK Stack, or Datadog. Collaborate with development and QA teams to integrate automated testing and ensure release readiness. Implement and manage cloud infrastructure across AWS, Azure, or Google Cloud Platform (GCP). Ensure security and compliance by integrating DevSecOps practices and tools like Aqua Security, Snyk, or Trivy. Respond to incidents, troubleshoot environments, and optimize system performance and resilience. Required Skills & Qualifications 3+ years of hands-on experience in a DevOps, Site Reliability Engineer (SRE), or infrastructure automation role. Strong experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD). Proficient in managing container orchestration platforms (Docker, Kubernetes). Hands-on experience with cloud services: AWS, Azure, or GCP. Knowledge of scripting languages like Bash, Python, or Go. Expertise in IaC tools like Terraform or CloudFormation. Strong grasp of Linux systems administration and networking fundamentals. Experience with monitoring and logging frameworks (ELK, Grafana, Prometheus). Familiarity with Git workflows, release management, and Agile development processes. Nice to Have Experience with service mesh technologies like Istio or Linkerd. Exposure to serverless architectures (AWS Lambda, Google Cloud Functions). Familiarity with security compliance frameworks (SOC2, ISO 27001). Relevant certifications: AWS Certified DevOps Engineer, CKA/CKAD, HashiCorp Certified: Terraform Associate. Hiring Process Phone Screen – Introduction and background check. Technical Interview – Deep dive into DevOps skills, cloud systems, and live problem-solving. Culture-Fit Interview – Meet the leadership team to assess alignment with our values and mission. Keywords for SEO DevOps Engineer, DevOps jobs India, CI/CD automation, Docker, Kubernetes, Terraform, Jenkins, GitHub Actions, AWS DevOps, Azure DevOps, SRE jobs remote, Infrastructure as Code, cloud infrastructure, DevSecOps, Site Reliability Engineering, remote DevOps jobs, Python scripting, monitoring tools, cloud-native engineering, EKS, GKE, AKS, Grafana, Prometheus, GitOps, Kubernetes jobs India.
If you're passionate about automation, cloud-native architecture, and scaling AI-powered platforms, we'd love to hear from you. Apply now with your resume and a short note on your DevOps experience! Show more Show less
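To make the monitoring and alerting duties above concrete, here is a small illustrative sketch (not part of the posting) of an availability check against a Prometheus HTTP API. The server URL, the PromQL expression, and the 99.9% target are assumed values for the example.

```python
"""Hedged example: SLO availability check via the Prometheus HTTP API."""
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus.internal:9090"  # assumed endpoint
# Assumed PromQL: fraction of non-5xx requests over the last hour.
QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[1h])) '
    "/ sum(rate(http_requests_total[1h]))"
)
SLO_TARGET = 0.999  # assumed availability objective


def query_prometheus(expr: str) -> float:
    """Run an instant query and return the first sample's value."""
    url = f"{PROMETHEUS_URL}/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    result = payload["data"]["result"]
    if not result:
        raise RuntimeError("query returned no samples")
    return float(result[0]["value"][1])


if __name__ == "__main__":
    availability = query_prometheus(QUERY)
    status = "OK" if availability >= SLO_TARGET else "BREACH"
    print(f"availability={availability:.5f} target={SLO_TARGET} -> {status}")
```

A check like this can run as a scheduled CI job or Kubernetes CronJob and raise an alert on a breach.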
Posted 2 weeks ago
2.0 years
0 Lacs
India
On-site
Junior Software Engineer Requirements Professional working proficiency in English (ILR-3 or CEFR B2) Strong sense of ownership, urgency, and drive Excellent problem-solving skills Experience in modern application design and event-driven architectures Strong understanding of cloud architecture and distributed system design 2+ years of technical experience in building back-end APIs using C# or equivalent languages (e.g., C, C++, Rust, Java, Python, Go) 2+ years of experience implementing cloud-based applications Experience implementing API governance concerns (including authentication, authorization, resource modeling, endpoint design, etc.) Strong knowledge of Git fundamentals and branching strategies Bachelor’s degree in computer science or equivalent experience Preferred Requirements Experience in leading teams and delivering software projects Knowledge of modern DevOps technologies and processes, including Kubernetes, GitOps, Terraform, etc. Experience with AWS cloud infrastructure Strong knowledge of SQL Server, PostgreSQL, and/or Apache Kafka Responsibilities Write high-quality, maintainable, and reusable code consistent with SOLID principles Operate as part of an agile development team, actively participating in grooming and planning discussions and sharing ownership of the problem and solution Tackle complex system integration challenges with internal and external team members Show more Show less
Posted 2 weeks ago
4.0 years
0 Lacs
Surat, Gujarat, India
Remote
Job Title: Lead DevOps Engineer Experience Required: 4 to 5 years in DevOps or related fields Employment Type: Full-time About The Role We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence. Responsibilities: Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP). CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality. Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible. Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog. Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes. Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals. Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise. Incident Management: Oversee production system reliability, including root cause analysis and performance optimization. Skills & Qualifications: Technical Expertise: Strong proficiency in cloud platforms like AWS, Azure, or GCP. Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes). Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi. Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI. Proficiency in scripting languages (e.g., Python, Bash). Soft Skills: Excellent communication and leadership skills. Strong analytical and problem-solving abilities. Proven ability to manage and lead a team. Experience: 4+ years of experience in DevOps or Site Reliability Engineering (SRE). 4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration. Strong understanding of microservices, APIs, and serverless architectures. Nice to Have: Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar. Experience with GitOps tools such as ArgoCD or Flux. Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001). Perks & Benefits: Competitive salary and performance bonuses. Comprehensive health insurance for you and your family. Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise. Flexible working hours and remote work options. Collaborative and inclusive work culture. (ref:hirist.tech) Show more Show less
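For candidates preparing for a role like this, a minimal sketch of the Terraform-driven provisioning automation described above may help; it is illustrative only, the directory layout is an assumption, and a production pipeline would add remote state, plan review, and approvals.

```python
"""Hedged example: drive Terraform non-interactively from Python."""
import subprocess
import sys

WORKDIR = "infra/environments/dev"  # assumed repository layout


def run(args: list[str]) -> None:
    """Run a Terraform command in the target directory, failing fast on error."""
    print("+", " ".join(args))
    subprocess.run(args, cwd=WORKDIR, check=True)


def plan_and_apply() -> None:
    run(["terraform", "init", "-input=false"])
    # With -detailed-exitcode, exit code 2 means "changes pending".
    plan = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode", "-out=tfplan"],
        cwd=WORKDIR,
    )
    if plan.returncode == 0:
        print("No changes; nothing to apply.")
        return
    if plan.returncode != 2:
        sys.exit("terraform plan failed")
    run(["terraform", "apply", "-input=false", "tfplan"])


if __name__ == "__main__":
    plan_and_apply()
```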
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
1. Python, Bash, PowerShell 2. REST API programming, system integration, workflow creation 3. Jenkins, GitOps, CI/CD pipelines 4. Ansible and Terraform (knowledge or hands-on experience) 5. Knowledge of Linux system administration, monitoring, and networking 6. Familiarity with source code management tools (Git/GitLab) and DevOps practices 7. Public Cloud (AWS/GCP) knowledge will be an added advantage. A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! You will be expected to work closely with SRE Leads to gather requirements and develop automation solutions. The ideal candidate has a strong background in Python and Bash scripting, IaC, GitOps practices, cloud infrastructure, and monitoring systems. Show more Show less
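As a hedged illustration of the Python and REST API integration work this role calls for, the sketch below polls a service health endpoint and opens an incident through a second REST API. Both URLs, the token handling, and the payload fields are hypothetical placeholders, not real systems.

```python
"""Hedged example: REST "glue" automation between a health check and an ITSM API."""
import json
import urllib.request

HEALTH_URL = "https://app.example.internal/healthz"            # hypothetical
TICKET_URL = "https://itsm.example.internal/api/v1/incidents"  # hypothetical
API_TOKEN = "REPLACE_ME"  # in practice, injected from a secrets manager


def is_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def open_incident(summary: str) -> None:
    body = json.dumps({"summary": summary, "severity": "high"}).encode()
    req = urllib.request.Request(
        TICKET_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("incident created, HTTP status:", resp.status)


if __name__ == "__main__":
    if not is_healthy():
        open_incident("healthz check failed for app.example.internal")
```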
Posted 2 weeks ago
2.0 - 5.0 years
7 - 12 Lacs
Gurugram
Work from Office
We are looking for a Red Hat OpenShift Engineer with 3+ years of hands-on experience in Red Hat OpenShift. The ideal candidate will be responsible for managing, configuring, and maintaining container orchestration and cloud infrastructure environments to support enterprise-grade applications and services. Key Responsibilities: Deploy, configure, and maintain OpenShift clusters in production and development environments. Monitor system performance, availability, and capacity planning. Automate infrastructure provisioning and application deployment using CI/CD pipelines. Troubleshoot and resolve issues related to container orchestration, cloud networking, and virtualized environments. Implement security best practices for containerized and cloud-native applications. Collaborate with development, QA, and operations teams to ensure seamless delivery pipelines. Create and maintain documentation related to architecture, processes, and troubleshooting. Required Skills: Strong hands-on experience with Red Hat OpenShift (v4.x preferred). Experience with Kubernetes concepts, Helm charts, and Operators. Familiarity with Linux system administration (RHEL/CentOS). Proficiency in scripting and automation tools like Bash, Python, or Ansible. Understanding of CI/CD pipelines and tools like Jenkins, GitLab CI, or Tekton. Knowledge of cloud networking, load balancers, firewalls, and DNS. Preferred Qualifications: RHCSA/RHCE or OpenShift certification (EX280/EX180). Exposure to monitoring tools such as Prometheus, Grafana, or the ELK stack. Experience with GitOps workflows (e.g., ArgoCD or Flux). Basic understanding of ITIL processes and DevOps culture. Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Data is at the core of modern business, yet many teams struggle with its overwhelming volume and complexity. At Atlan, we’re changing that. As the world’s first active metadata platform, we help organisations transform data chaos into clarity and seamless collaboration. From Fortune 500 leaders to hyper-growth startups, from automotive innovators redefining mobility to healthcare organisations saving lives, and from Wall Street powerhouses to Silicon Valley trailblazers — we empower ambitious teams across industries to unlock the full potential of their data. Recognised as leaders by Gartner and Forrester and backed by Insight Partners, Atlan is at the forefront of reimagining how humans and data work together. Joining us means becoming part of a movement to shape a future where data drives extraordinary outcomes. We're seeking a versatile Cloud Platform Engineer passionate about building and maintaining a highly reliable, scalable, and cloud-native infrastructure. You'll be vital in bridging the gap between development, operations, and SRE, ensuring our applications run smoothly on Kubernetes across multiple cloud platforms. Your deep understanding of Kubernetes, cloud technologies, and automation will be instrumental in empowering our teams to deliver high-quality software quickly and reliably. What will you do? Design, deploy, and operate Kubernetes clusters across AWS, Azure, and GCP. Optimize cluster performance, ensure high availability, and implement robust security practices. Build and maintain cloud-native infrastructure components (load balancers, networking, storage, etc.) to support applications running on Kubernetes. Leverage Infrastructure as Code (IaC) with Terraform to automate and manage infrastructure provisioning and configuration. Embrace GitOps principles using ArgoCD to automate deployments and configuration changes and ensure consistency between the desired and actual system state. Establish comprehensive monitoring, logging, and alerting systems to gain insights into platform health and performance. Troubleshoot incidents swiftly and apply SRE principles to improve reliability and resilience. Develop automation scripts and tools (Python, Go, or other languages) to streamline workflows, eliminate manual tasks, and reduce operational overhead. Partner closely with development teams to understand their needs, provide guidance on platform best practices, and enable smooth integration and deployment of their applications. Implement and maintain stringent security measures for Kubernetes and cloud environments, ensuring compliance with industry standards and data protection regulations. Analyze resource usage and implement optimization strategies to maximize performance while controlling cloud costs. Participate in an on-call rotation, troubleshooting and resolving production issues promptly. What makes you a match? 3+ years of experience working with Kubernetes in production environments. Deep understanding of cluster operations, networking, storage, and security within Kubernetes. Strong knowledge of AWS, Azure, and GCP, including core services, networking concepts, and security best practices. Proven experience implementing GitOps workflows with ArgoCD and managing infrastructure using Terraform. Fluency in at least one programming language (Python, Go, Java) for automation, scripting, and tool development. Familiarity with SRE practices like SLOs (Service Level Objectives), error budgeting, and blameless postmortems. 
Excellent analytical and troubleshooting skills to identify and resolve issues in complex cloud environments. Ability to communicate effectively with development, operations, and security teams to drive cross-functional initiatives. Ability to work from 8.30 PM to 5.30 AM IST to provide coverage for US time zones. Why Atlan for You? At Atlan, we believe the future belongs to the humans of data. From curing diseases to advancing space exploration, data teams are powering humanity's greatest achievements. Yet, working with data can be chaotic—our mission is to transform that experience. We're reimagining how data teams collaborate by building the home they deserve, enabling them to create winning data cultures and drive meaningful progress. Joining Atlan Means Ownership from Day One: Whether you're an intern or a full-time teammate, you’ll own impactful projects, chart your growth, and collaborate with some of the best minds in the industry. Limitless Opportunities: At Atlan, your growth has no boundaries. If you’re ready to take initiative, the sky’s the limit. A Global Data Community: We’re deeply embedded in the modern data stack, contributing to open-source projects, sponsoring meet-ups, and empowering team members to grow through conferences and learning opportunities. As a fast-growing, fully remote company trusted by global leaders like Cisco, Nasdaq, and HubSpot, we’re creating a category-defining platform for data and AI governance. Backed by top investors, we’ve achieved 7X revenue growth in two years and are building a talented team spanning 15+ countries. If you’re ready to do your life’s best work and help shape the future of data collaboration, join Atlan and become part of a mission to empower the humans of data to achieve more, together. We are an equal opportunity employer At Atlan, we’re committed to helping data teams do their lives’ best work. We believe that diversity and authenticity are the cornerstones of innovation, and by embracing varied perspectives and experiences, we can create a workplace where everyone thrives. Atlan is proud to be an equal opportunity employer and does not discriminate based on race, color, religion, national origin, age, disability, sex, gender identity or expression, sexual orientation, marital status, military or veteran status, or any other characteristic protected by law. Show more Show less
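A small illustrative sketch related to the Kubernetes reliability work this posting describes: comparing desired versus ready replicas for every Deployment in a namespace via kubectl. The namespace name is an assumed example; a production setup would use the Kubernetes API client and feed results into alerting rather than print them.

```python
"""Hedged example: Deployment readiness report for one namespace via kubectl."""
import json
import subprocess

NAMESPACE = "payments"  # assumed namespace


def get_deployments(namespace: str) -> list[dict]:
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)["items"]


def report(namespace: str) -> None:
    for dep in get_deployments(namespace):
        name = dep["metadata"]["name"]
        desired = dep["spec"].get("replicas", 1)
        ready = dep["status"].get("readyReplicas", 0)
        flag = "OK" if ready >= desired else "DEGRADED"
        print(f"{namespace}/{name}: {ready}/{desired} ready -> {flag}")


if __name__ == "__main__":
    report(NAMESPACE)
```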
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
Remote
Hyderabad, India Chennai, India Job ID: R-1055178 Apply prior to the end date: May 31st, 2025 When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing... You will be part of a World Class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE, and GKE) at a large scale for Global Technology Solutions at Verizon, a top 20 Fortune 500 company. This individual will have sound technical expertise and daily hands-on implementation, working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Jenkins, Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts, or any other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services. Automation and testing of different platform deployments, maintenance and decommissioning. Full Stack Development. What we’re looking for... You’ll need to have: Bachelor’s degree or two or more years of experience. Address Jira tickets opened by platform customers. GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model. Expertise in SDLC and Agile development. Design, develop and implement scalable React/Node based applications (full stack developer). Experience with development with HTTP/RESTful APIs, microservices. Experience with serverless Lambda development, AWS EventBridge, AWS Step Functions, DynamoDB, Python, RDBMS, NoSQL, etc. Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc. Familiarity integrating with existing web application portals and backend development experience with languages to include Golang (preferred), Spring Boot, and Python. Experience with GitLab, GitLab CI/CD, Jenkins, Helm, Terraform, Artifactory. Development of K8S tools/components which may include standalone utilities/plugins, cert-manager plugins, etc. Development and working experience with Service Mesh lifecycle management and configuring, troubleshooting applications deployed on Service Mesh and Service Mesh related issues. Experience with Terraform and/or Ansible. Bash scripting experience. Effective code review, quality, performance tuning experience, Test Driven Development. Certified Kubernetes Application Developer (CKAD). Excellent cross-collaboration and communication skills. Even better if you have one or more of the following: GitOps CI/CD workflows (ArgoCD, Flux) and working in an Agile ceremonies model. Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc. 
Networking of microservices: solid understanding of Kubernetes networking and troubleshooting. Experience with monitoring tools like New Relic; working experience with Kiali and Jaeger, including lifecycle management and assisting app teams on how they could leverage these tools for their observability needs. K8S SRE tools for troubleshooting. Certified Kubernetes Administrator (CKA), Certified Kubernetes Security Specialist (CKS), Red Hat Certified OpenShift Administrator. Your benefits package will vary depending on the country in which you work, subject to business approval. Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours: 40. Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 2 weeks ago
5.0 years
6 - 10 Lacs
Gurgaon
On-site
Job Information Date Opened 05/30/2025 Job Type Full time Industry Financial Services Work Experience 5+ years City Gurgaon State/Province Haryana Country India Zip/Postal Code 122002 About Us indiagold has built a product & technology platform that enables regulated entities to launch or grow their asset-backed products across geographies, without investing in operations, technology, or people, or taking any valuation, storage, or transit risks. Our use of deep-tech is changing how asset-backed loans have been done traditionally. Some examples of our innovation are – lending against digital gold, 100% paperless/digital loan onboarding process, computer vision to test gold purity as opposed to manual testing, auto-scheduling of feet-on-street, customer self-onboarding, gold locker model to expand TAM & launch zero-touch gold loans, zero network business app & many more. We are a rapidly growing team passionate about solving massive challenges around financial well-being. We are a rapidly growing organisation with empowered opportunities across Sales, Business Development, Partnerships, Sales Operations, Credit, Pricing, Customer Service, Business Product, Design, Product, Engineering, People & Finance across several cities. We value the right aptitude & attitude over past experience in a related role, so feel free to reach out if you believe we can be good for each other. Job Description About the Role We are seeking a Staff Software Engineer to lead and mentor engineering teams while driving the architecture and development of robust, scalable backend systems and cloud infrastructure. This is a senior hands-on role with a strong focus on technical leadership, system design, and cross-functional collaboration across development, DevOps, and platform teams. Key Responsibilities Mentor engineering teams to uphold high coding standards and best practices in backend and full-stack development using Java, Spring Boot, Node.js, Python, and React. Guide architectural decisions to ensure performance, scalability, and reliability of systems. Architect and optimize relational data models and queries using MySQL. Define and evolve cloud infrastructure using Infrastructure as Code (Terraform) across AWS or GCP. Lead DevOps teams in building and managing CI/CD pipelines, Kubernetes clusters, and related cloud-native tooling. Drive best practices in observability using tools like Grafana, Prometheus, OpenTelemetry, and centralized logging frameworks (e.g., ELK, CloudWatch, Stackdriver). Provide architectural leadership for microservices-based systems deployed via Kubernetes, including tools like ArgoCD for GitOps-based deployment strategies. Design and implement event-driven systems that are reliable, scalable, and easy to maintain. Own security and compliance responsibilities in cloud-native environments, ensuring alignment with frameworks such as ISO 27001, CISA, and CICRA. Ensure robust design and troubleshooting of container and Kubernetes networking, including service discovery, ingress, and inter-service communication. Collaborate with product and platform teams to define long-term technical strategies and implementation plans. Perform code reviews, lead technical design discussions, and contribute to engineering-wide initiatives. Requirements Required Qualifications 7+ years of software engineering experience with a focus on backend development and system architecture. Deep expertise in Java and Spring Boot, with strong working knowledge of Node.js, Python, and React.js. 
Proficiency in MySQL and experience designing complex relational databases. Hands-on experience with Terraform and managing infrastructure across AWS or GCP. Strong understanding of containerization, Kubernetes, and CI/CD pipelines. Solid grasp of container and Kubernetes networking principles and troubleshooting techniques. Experience with GitOps tools such as ArgoCD and other Kubernetes ecosystem components. Deep knowledge of observability practices, including metrics, logging, and distributed tracing. Experience designing and implementing event-driven architectures using modern tooling (e.g., Kafka, Pub/Sub, etc.). Demonstrated experience in owning and implementing security and compliance measures, with practical exposure to standards like ISO 27001, CISA, and CICRA. Excellent communication skills and a proven ability to lead cross-functional technical efforts. Preferred (Optional) Qualifications Contributions to open-source projects or technical blogs. Experience leading or supporting compliance audits such as ISO 27001, SOC 2, or similar. Exposure to service mesh technologies (e.g., Istio, Linkerd). Experience with policy enforcement in Kubernetes (e.g., OPA/Gatekeeper, Kyverno). Benefits Why Join Us? Lead impactful engineering initiatives and mentor talented developers. Work with a modern, cloud-native stack across AWS, GCP, Kubernetes, and Terraform. Contribute to architectural evolution and long-term technical strategy. Competitive compensation, benefits, and flexible work options. Inclusive and collaborative engineering culture.
Posted 2 weeks ago
3.0 years
3 - 5 Lacs
Chennai
On-site
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. athenahealth is a progressive, innovation-driven software product company. We partner with healthcare organizations across the care continuum to drive clinical and financial results. Our expert teams build modern technology on an open, connected ecosystem, yielding insights that make a difference for our customers and their patients. We maintain a unique values-driven employee culture and offer a flexible work-life balance. As evidence of our rapid growth and industry leadership, we were acquired by the world’s leading private equity firm “Bain Capital” for $17bn! and we have many new strategic product initiatives We are headquartered in Boston , and our other offices are in Atlanta, Austin, Belfast, and Burlington. In India , we have offices in Chennai, Bangalore and Pune We are looking for a seasoned “DevOps Kubernetes Engineer – MTS” to join our high-impact Cloud Infrastructure Engineering division in Chennai. Our team ensures the continuous availability, scalability, and security of the technologies powering athenahealth’s critical medical office services. We manage thousands of servers, petabytes of storage, and handle massive volumes of web requests, supporting athenahealth’s mission of simplifying healthcare administration for doctors. Who You Are: A passionate and experienced engineer with a proven track record of identifying and resolving reliability and scalability challenges in large-scale, containerized applications. A curious and collaborative team player who thrives in a fast-paced environment, eager to explore, learn, and improve processes—particularly around Kubernetes deployments and management. An efficiency enthusiast, skilled at automating solutions and continuously innovating container orchestration and management. A nimble learner, capable of grasping complex Kubernetes concepts and an excellent communicator who can advocate for best practices in Kubernetes operations. The Cloud Infrastructure Engineering Team: Our team consists of passionate Platform Engineers and SREs dedicated to reliability, automation, and scalability. We leverage an agile framework to prioritize business needs and use both private and public cloud solutions based on data-driven decisions. We are relentless in automating repetitive tasks, freeing ourselves for impactful projects that propel the business forward. Our primary focus is leveraging Kubernetes for efficient and reliable deployment of containerized applications. Primary Responsibilities: Kubernetes Deployment & Automation Design, deploy, and manage highly available and scalable Kubernetes clusters on AWS EKS using Terraform and/or Cross plane. Implement Infrastructure-as-Code (IaC) best practices for managing EKS clusters and related infrastructure. Kubernetes Operations & GitOps Configure and maintain Kubernetes deployments, services, ingresses, and other resources using YAML manifests or GitOps workflows. Implement GitOps practices with FluxCD for automated deployments and configuration management of containerized applications. Reliability, Security & Scalability Proactively ensure the reliability, security, and scalability of AWS production systems, with a particular focus on Kubernetes clusters and containerized applications. Resolve complex problems across multiple platforms and application domains, using advanced system troubleshooting techniques. 
Operational Support & Monitoring Provide primary operational support and engineering expertise for all cloud and enterprise deployments, with a focus on Kubernetes. Monitor system performance, identify downtime incidents, and diagnose underlying causes, particularly related to Kubernetes cluster and container health. Cost Optimization Design and develop cost-effective Kubernetes solutions within allocated budgets, ensuring efficient resource utilization. Secondary Responsibilities: Collaboration & Process Improvement Work closely with developers, testers, and system administrators to ensure smooth deployments and operations of containerized applications. Champion the implementation of new processes, tools, and methodologies to enhance efficiency throughout the software development lifecycle (SDLC) and pipeline management. Security Integration Integrate robust security measures into the development lifecycle, considering the specific security requirements of containerized applications. Typical Qualifications: 3 to 5 years of experience building, scaling, and supporting highly available systems and services. 2+ years of experience managing and operating Kubernetes clusters in production. Proven experience in building and managing AWS platforms, with a strong focus on Amazon EKS (Elastic Kubernetes Service). Deep knowledge of Kubernetes architecture, core concepts, best practices, and security considerations. Expertise in Infrastructure-as-Code (IaC) tools like Terraform and Cross plane. Familiarity with GitOps principles and experience with FluxCD (a plus). Proficiency in at least one scripting/programming language (Python, Go, Ruby, Shell). Experience in Site Reliability Engineering (SRE) and DevOps principles, including CI/CD and version control (Bitbucket, GitHub, etc.). Familiarity with telemetry, observability, and modern monitoring tools (Prometheus, Alert manager, Grafana, etc.), particularly for Kubernetes monitoring. Strong expertise in system visibility to facilitate rapid detection and resolution of issues within Kubernetes clusters. Key Behaviors & Abilities Required: A strong ability to learn and adapt in a fast-paced environment, especially as Kubernetes and container orchestration technologies evolve. Excellent teamwork skills, collaborating effectively across cross-functional teams including developers, testers, and system administrators. Strong prioritization and problem-solving skills, adept at troubleshooting complex Kubernetes-related issues. Ability to manage multiple projects simultaneously, ensuring projects stay on track with clear progress updates. Ability to handle unexpected challenges while effectively context-switching between tasks. Willingness to participate in rotational on-call duties to ensure continuous monitoring and support of Kubernetes clusters. A strong work ethic and commitment to continuous learning and improvement in Kubernetes and container orchestration technologies About athenahealth Our vision: In an industry that becomes more complex by the day, we stand for simplicity. We offer IT solutions and expert services that eliminate the daily hurdles preventing healthcare providers from focusing entirely on their patients — powered by our vision to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. Our company culture: Our talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our vision. 
We are a diverse group of dreamers and do-ers with unique knowledge, expertise, backgrounds, and perspectives. We unite as mission-driven problem-solvers with a deep desire to achieve our vision and make our time here count. Our award-winning culture is built around shared values of inclusiveness, accountability, and support. Our DEI commitment: Our vision of accessible, high-quality, and sustainable healthcare for all requires addressing the inequities that stand in the way. That's one reason we prioritize diversity, equity, and inclusion in every aspect of our business, from attracting and sustaining a diverse workforce to maintaining an inclusive environment for athenistas, our partners, customers and the communities where we work and serve. What we can do for you: Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. We provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. Learn more about our culture and benefits here: athenahealth.com/careers https://www.athenahealth.com/careers/equal-opportunity
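To illustrate the GitOps principle behind the FluxCD work described in this posting, the sketch below uses kubectl's built-in diff to detect drift between manifests checked out from Git and the live cluster. The manifest path is an assumption, and in practice Flux performs this reconciliation automatically; the sketch is only a way to reason about the workflow.

```python
"""Hedged example: drift check between Git-tracked manifests and the live cluster."""
import subprocess
import sys

MANIFEST_DIR = "clusters/prod/apps"  # assumed path inside the Git checkout


def drift_detected(path: str) -> bool:
    # kubectl diff exits 0 when live state matches the manifests and 1 when it differs.
    result = subprocess.run(["kubectl", "diff", "-f", path, "--recursive"])
    if result.returncode not in (0, 1):
        sys.exit("kubectl diff failed")
    return result.returncode == 1


if __name__ == "__main__":
    if drift_detected(MANIFEST_DIR):
        print("Drift detected: cluster differs from Git; reconciliation needed.")
    else:
        print("Cluster matches the desired state in Git.")
```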
Posted 2 weeks ago
10.0 years
0 Lacs
India
Remote
📜 Project Summary We’re hiring a senior Network & Security Architect (contractor, not employee) to design a resilient, regulator-compliant banking enterprise network that spans dual data-centres, disaster-recovery sites, regional branches/ATMs, and hybrid-cloud workloads. Your HLD/LLD and playbooks will serve as the blueprint for our deployment team. 📡 Network Topology Requirements Data-Centre & DR Dual active-active DCs with spine-leaf fabric, MACsec on inter-DC links, isolated OOB network Campus / HQ Redundant core & distribution, Wi-Fi 6/6E access, NAC-enforced segmentation Branches & ATMs SD-WAN overlays (MPLS + LTE/5G) with local Internet break-out, zero-touch provisioning Cloud Edge Direct Connect / ExpressRoute / IPsec VPN-GW, micro-segmented VNET/VPCs Internet DMZ Reverse proxies, WAF, DDoS scrubber, SWIFT-CSP-isolated zone 🌐 Services to Be Supported Core Banking & Treasury (ISO 8583, MQ, micro-services APIs) Digital & Mobile Banking (Open-Banking APIs, web/mobile channels) Payments – RTGS/NEFT/IMPS/UPI, SWIFT, card-switch, POS Unified Comms – VoIP/SBC, contact-centre SIP, VC Enterprise IT – AD/Azure AD, M365, SaaS & SOC/SIEM feeds 🔐 Security-First Architecture Zero-Trust segmentation (macro + micro, user/device-aware) Next-Gen Firewalls & virtual NGFWs at every trust boundary Inline IPS / sandboxing for east-west and north-south traffic Layer-7 WAF & API GW in DMZ; TLS 1.3 everywhere Compliance: PCI-DSS 4.0, RBI/IRDA cyber controls, SWIFT CSP, ISO 27001 HA everywhere – clustered firewalls, ECMP, BGP GR, IPsec FVRF 🧠 Technical Requirements Routing/Switching: OSPF v2/v3, IS-IS, eBGP/iBGP, MP-BGP EVPN/VXLAN, MPLS L2/L3 VPN, Segment Routing (SR-MPLS/SRv6) Overlay & SD-WAN: DMVPN, SD-WAN (Viptela/Versa/Fortinet or similar) Automation: GitOps source-of-truth, Ansible/Terraform-ready design hooks Observability: gRPC telemetry, NetFlow/IPFIX, Syslog/SIEM pipelines Future-proof: IPv6-first; QoS placeholders (no policy config in scope) 📦 Deliverables HLD – logical & topological views, security zones, resiliency model LLD – device roles, interface matrices, VRF maps, protocol timers IPv4/IPv6 Address Plan – summarised, dual-stack, hierarchically allocated Security Architecture Guide – segmentation tables, object-based FW rules, crypto standards Routing & Service Flow Docs – Core Banking, SWIFT, Digital channels, UC, Branch/ATM paths Procedure Playbooks – onboarding branches/cloud VPCs, DR fail-over, patch-window checklist ❌ CLI configurations and QoS policies are out-of-scope (architecture only). 🧪 Mandatory Qualification Round Submit all required artefacts via this form: 👉 https://forms.office.com/r/4cCw88zP4c 🖼️ Digital Topology Diagram – DC, campus, branch, cloud edges & security zones 📝 One paragraph per major service – rationale, resiliency & security approach 📋 Routing, Overlay & Security Controls List – protocols, segmentation, crypto, automation hooks ✅ Service Checklist – confirm every item in the RFP is covered ⚠️ Only complete form submissions are reviewed. ❗ Important Eligibility Notice – Read Before Applying This contract demands proven senior-level expertise in banking/financial-sector network & security architecture . If you do not meet all Ideal Candidate criteria—hands-on banking designs and the certifications listed below— please do not apply . Junior or incomplete submissions will be disqualified without review. 
✅ Ideal Candidate 10 + years designing regulated financial networks & security Certifications: CCIE (Enterprise or Security) / JNCIE-SP and CISSP or CISM ; PCNSE or NSE 7 is a plus Demonstrable PCI-DSS 4.0 and SWIFT CSP project history Comfortable with NetDevOps tooling and hybrid-cloud fabrics 💰 Remuneration 💵💵 USD $$$$ + — premium project rate, fully commensurate with senior-level experience ⏳ Timeline 4 weeks (possible 1-week extension if agreed at kick-off) Note: This is a short-term, deliverable-based engagement. It is not a full-time role or permanent position. 📍 Work Mode Remote; overlap with IST business hours preferred 📬 How to Apply Complete the qualification form → https://forms.office.com/r/4cCw88zP4c . Short-listed candidates will be contacted for a technical interview and SOW alignment. Show more Show less
Posted 2 weeks ago
7.0 years
0 Lacs
India
On-site
Design, develop, and deploy IBM Case Manager solutions using IBM Cloud Pak for Business Automation (CP4BA) – Workflow on Azure Red Hat OpenShift (ARO) . Configure and customize IBM Case Manager workflows, case types, and security models within CP4BA . Integrate IBM Case Manager with other CP4BA components (such as FileNet, Business Automation Workflow, and Automation Decision Services). Implement containerized deployments of IBM Case Manager on ARO , ensuring scalability, performance, and security. Develop REST APIs , event-driven integrations, and microservices for case management automation. Troubleshoot and optimize IBM Case Manager performance in a Kubernetes/OpenShift environment. Collaborate with DevOps teams to implement CI/CD pipelines for IBM Case Manager deployments. Provide technical guidance on IBM CP4BA Workflow best practices, including case design, process automation, and user experience enhancements. Work with stakeholders to gather requirements and translate them into scalable case management solutions. Required Skills & Qualifications: 7+ years of experience with IBM Case Manager (on-prem or cloud). Hands-on experience with IBM Cloud Pak for Business Automation (CP4BA) – Workflow . Strong knowledge of Azure Red Hat OpenShift (ARO) or OpenShift Container Platform (OCP) . Experience deploying and managing IBM Case Manager in a Kubernetes/OpenShift environment. Proficiency in Java, JavaScript, REST APIs, and microservices architecture . Familiarity with IBM FileNet P8, Business Automation Workflow (BAW), and IBM Content Navigator . Understanding of DevOps practices, CI/CD pipelines (Tekton, Jenkins, ArgoCD), and GitOps . Knowledge of IBM ODM (Operational Decision Manager) is a plus. Strong problem-solving skills and ability to troubleshoot complex issues in cloud environments. IBM Certified Solution Developer/Architect for Cloud Pak for Business Automation (preferred). Nice to Have: Experience with Azure cloud services (AKS, Azure Storage, Azure AD). Knowledge of Agile/Scrum methodologies . Familiarity with IBM Watson AI/ML integrations for case management. Show more Show less
Posted 2 weeks ago
40.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Angular, Java, JavaScript, Spring framework, Spring Boot, Microservices, Web Services (REST API). Experience in MongoDB; Oracle DB and SQL experience; working experience in cloud-based application development. Expertise in JUnit and Performance Benchmarking of solutions. Should possess basic Unix/Linux knowledge to be able to write and understand basic shell scripts and basic Unix commands. Working knowledge of Docker / Kubernetes / OpenShift. Experience with CI/CD build pipelines and toolchain – Git, GitOps, Artifactory, JIRA experience. Career Level - IC3 Responsibilities Understand technology landscape Contribute to the application architecture Writing technical specifications Development activities Provide SIT, UAT support Production deployment support Coordination and communication with stakeholders as per the project requirements Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. Show more Show less
Posted 2 weeks ago
With the increasing adoption of DevOps practices in the tech industry, GitOps has emerged as a popular approach for managing infrastructure and deployments. Job opportunities in the field of GitOps are on the rise in India, with many companies looking for professionals who are skilled in this area.
The average salary range for GitOps professionals in India varies based on experience levels:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

In the GitOps field, a typical career path may include roles such as:
1. Junior GitOps Engineer
2. GitOps Engineer
3. Senior GitOps Engineer
4. GitOps Architect
5. GitOps Manager

Besides GitOps expertise, professionals in this field are often expected to have knowledge in:
- DevOps practices
- Infrastructure as Code (IaC) tools like Terraform
- Containerization technologies like Docker and Kubernetes
- Continuous Integration/Continuous Deployment (CI/CD) pipelines
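If you are new to GitOps, the following minimal Python sketch captures the core idea behind tools like Argo CD and Flux: the desired state is declared in Git, and a controller repeatedly reconciles the live system toward it. All three helper functions are hypothetical stand-ins, not a real controller implementation.

```python
"""Hedged example: the GitOps reconcile loop in miniature."""
import time


def read_desired_state_from_git() -> dict:
    # Hypothetical: in real GitOps this is a manifest checked out from a repository.
    return {"web": {"image": "registry.example.com/web:1.4.2", "replicas": 3}}


def read_live_state_from_cluster() -> dict:
    # Hypothetical: in practice this comes from the Kubernetes API server.
    return {"web": {"image": "registry.example.com/web:1.4.1", "replicas": 3}}


def apply_change(app: str, desired: dict) -> None:
    # Hypothetical: a real agent would patch the Deployment through the API server.
    print(f"reconciling {app} -> {desired}")


def reconcile_once() -> None:
    desired, live = read_desired_state_from_git(), read_live_state_from_cluster()
    for app, spec in desired.items():
        if live.get(app) != spec:
            apply_change(app, spec)


if __name__ == "__main__":
    for _ in range(3):  # a real controller loops indefinitely
        reconcile_once()
        time.sleep(30)
```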
As the demand for GitOps professionals continues to grow in India, now is the perfect time to upskill and prepare for exciting job opportunities in this field. Stay updated with the latest trends, practice your technical skills, and approach interviews with confidence to land your dream GitOps job. Good luck!