3.0 - 7.0 years
9 - 13 Lacs
Karimnagar
Work from Office
Primary Responsibilities
- Develop and maintain applications using Java or Go.
- Deploy workloads and manage their lifecycle on a cloud provider (GCP, AWS).
- Troubleshoot and debug issues reported by users or identified through testing, and implement effective solutions in a timely manner.
- Participate in defining new software architectures, products, and solutions.
- Conduct code reviews to ensure code quality, performance, and adherence to coding standards.
- Write clean, maintainable, and efficient code following industry best practices and coding standards.
- Collaborate with cross-functional teams for end-to-end feature development, taking ownership and accountability.

Knowledge, Skills, and Abilities
- Hands-on programming experience and proficiency in Java or Go.
- Experience with React, Node, and JavaScript is a plus.
- Experience with technologies such as Kubernetes, Artifactory, Terraform, and Helm.
- Excellent analytical, troubleshooting, and debugging skills.
- Experience with performance tuning in hybrid and cloud environments.
- Strong engineering and design skills, with a solid understanding of real-time, high-performance, scalable, distributed systems.

Qualifications
Typically requires a minimum of 6 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD with 1 year of experience; or equivalent work experience.
Posted 2 weeks ago
3.0 - 7.0 years
9 - 13 Lacs
Mumbai
Work from Office
Primary Responsibilities
- Develop and maintain applications using Java or Go.
- Deploy workloads and manage their lifecycle on a cloud provider (GCP, AWS).
- Troubleshoot and debug issues reported by users or identified through testing, and implement effective solutions in a timely manner.
- Participate in defining new software architectures, products, and solutions.
- Conduct code reviews to ensure code quality, performance, and adherence to coding standards.
- Write clean, maintainable, and efficient code following industry best practices and coding standards.
- Collaborate with cross-functional teams for end-to-end feature development, taking ownership and accountability.

Knowledge, Skills, and Abilities
- Hands-on programming experience and proficiency in Java or Go.
- Experience with React, Node, and JavaScript is a plus.
- Experience with technologies such as Kubernetes, Artifactory, Terraform, and Helm.
- Excellent analytical, troubleshooting, and debugging skills.
- Experience with performance tuning in hybrid and cloud environments.
- Strong engineering and design skills, with a solid understanding of real-time, high-performance, scalable, distributed systems.

Qualifications
Typically requires a minimum of 6 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD with 1 year of experience; or equivalent work experience.
Posted 2 weeks ago
3.0 - 7.0 years
9 - 13 Lacs
Vijayawada
Work from Office
Primary Responsibilities
- Develop and maintain applications using Java or Go.
- Deploy workloads and manage their lifecycle on a cloud provider (GCP, AWS).
- Troubleshoot and debug issues reported by users or identified through testing, and implement effective solutions in a timely manner.
- Participate in defining new software architectures, products, and solutions.
- Conduct code reviews to ensure code quality, performance, and adherence to coding standards.
- Write clean, maintainable, and efficient code following industry best practices and coding standards.
- Collaborate with cross-functional teams for end-to-end feature development, taking ownership and accountability.

Knowledge, Skills, and Abilities
- Hands-on programming experience and proficiency in Java or Go.
- Experience with React, Node, and JavaScript is a plus.
- Experience with technologies such as Kubernetes, Artifactory, Terraform, and Helm.
- Excellent analytical, troubleshooting, and debugging skills.
- Experience with performance tuning in hybrid and cloud environments.
- Strong engineering and design skills, with a solid understanding of real-time, high-performance, scalable, distributed systems.

Qualifications
Typically requires a minimum of 6 years of related experience with a Bachelor's degree; or 3 years and a Master's degree; or a PhD with 1 year of experience; or equivalent work experience.
Posted 2 weeks ago
3.0 - 5.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and operate CI/CD pipelines using Jenkins, Groovy, and related tools to support automation, deployment, and testing.
- Leverage Helm and Argo to deploy, manage, and automate Kubernetes-based services.
- Implement and enforce GitOps practices to streamline and secure the deployment process.
- Ensure that services meet compliance requirements, including SOC2, ITSS, FSCloud, and internal IBM security guidance.
- Write automation scripts, develop containers, and build microservices-based solutions using cloud-native technologies.
- Collaborate with cross-functional teams to enhance service reliability, security, and availability.
- Troubleshoot and resolve issues in production environments using runbooks and automated alerts.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise
- Bachelor's in Engineering, Computer Science, or relevant experience.
- 3-5 years of programming experience with Python and Go.
- Strong experience with CI/CD pipelines, Jenkins, and Groovy.
- Experience using Helm, Argo, and GitOps practices to manage Kubernetes-based services.
- In-depth experience with Kubernetes and cloud-native technologies.
- Familiarity with SOC2, ITSS, FSCloud, and internal IBM security guidance.
- 3-5 years of experience developing and operating highly available, distributed applications in production environments.
- Experience with service dependency management using Terraform or Ansible.

Preferred technical and professional experience
- Advanced experience with Kubernetes and Helm.
- Familiarity with SOC2, ITSS, FSCloud, and internal IBM security guidance compliance frameworks.
- Experience with PostgreSQL, Kafka, Elastic, MySQL, Redis, or MongoDB.
- 3-5 years of experience managing Linux environments with configuration management tools such as Chef, Puppet, or Ansible (Debian preferred).
- Deep knowledge of at least one major cloud provider (AWS, Azure, GCP) and experience with IaaS components such as VPC, Storage, and IAM.

We offer a dynamic and collaborative environment with a "You build it, you run it" culture. You will join a follow-the-sun rotation, taking ownership of system alerts and using your troubleshooting and analytical skills to resolve issues.
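The GitOps practices this posting calls for treat a Git repository as the single source of truth and continuously reconcile the cluster toward it. A minimal illustrative sketch of that reconcile loop in Python, using hypothetical state dictionaries rather than the actual Argo CD implementation:

```python
# Illustrative GitOps-style reconciliation: diff desired state (from Git)
# against observed state (from the cluster) and list the actions an
# operator like Argo CD would take. All names and data are hypothetical.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to drive observed state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")
        elif observed[name] != spec:
            actions.append(f"update {name}")
    for name in observed:
        if name not in desired:  # present in the cluster but not in Git: prune
            actions.append(f"delete {name}")
    return sorted(actions)

desired = {"api": {"image": "api:v2", "replicas": 3},
           "worker": {"image": "worker:v1", "replicas": 2}}
observed = {"api": {"image": "api:v1", "replicas": 3},
            "legacy": {"image": "legacy:v9", "replicas": 1}}

print(reconcile(desired, observed))
# → ['create worker', 'delete legacy', 'update api']
```

Real tools add sync waves, health checks, and pruning safeguards on top of this basic diff-and-apply idea.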
Posted 2 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Pune
Hybrid
Please find the JD below: AWS, EKS, OpenSearch, DynamoDB, Terraform, Dockerfile, CI/CD, Datadog, Helm
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
DevOps Engineer - OCP

FSS is seeking a highly skilled DevOps Engineer with hands-on experience in Red Hat OpenShift Container Platform (OCP) and associated tools such as Argo CD, Jenkins, and Data Grid. The ideal candidate will drive automation, manage containerized environments, and ensure smooth CI/CD pipelines across hybrid infrastructure to support our financial technology solutions.

Experience: 7-10 years
CTC: 20-30 LPA
Location: Chennai/Mumbai

Key Responsibilities:

OpenShift Platform Engineering:
- Deploy, manage, and maintain applications on OpenShift Container Platform.
- Configure and manage Operators, Helm charts, and OpenShift GitOps (Argo CD).
- Manage Red Hat Data Grid deployments and integrations.
- Support OCP cluster upgrades, patching, and troubleshooting.

CI/CD Implementation & Automation:
- Design, implement, and manage CI/CD pipelines using Jenkins and Argo CD.
- Ensure seamless code integration, testing, and deployment processes with development teams.

Infrastructure as Code (IaC):
- Automate infrastructure provisioning with tools like Terraform and Ansible.
- Manage hybrid infrastructure across on-prem and public clouds (AWS, Azure, or GCP).

Monitoring & Performance Optimization:
- Implement and manage observability stacks (Prometheus, Grafana, ELK, etc.) for OCP and underlying services.
- Proactively identify and resolve system performance bottlenecks.

Security & Compliance:
- Enforce security best practices in containerized and cloud environments.
- Conduct vulnerability assessments and ensure compliance with industry standards.

Collaboration & Support:
- Collaborate with developers, QA, and IT teams to optimize DevOps workflows.
- Provide ongoing support and incident response for production and non-production environments.

Required Skills & Qualifications:

Technical Skills:
- Strong hands-on experience with OpenShift (v4.x) administration and operations.
- Proficiency in CI/CD tools: Jenkins, Argo CD, GitHub Actions, GitLab CI/CD.
- Deep understanding of Kubernetes, Docker, and container orchestration.
- Experience with Red Hat Data Grid or other in-memory data grids.
- Skilled in IaC tools: Terraform, Ansible, CloudFormation.
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, Splunk).
- Proficient in scripting languages: Bash, Python, or Shell.
Posted 2 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
What you’ll do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Manage sole project priorities, deadlines, and deliverables.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What experience you need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
- 5+ years of experience with cloud technology: GCP, AWS, or Azure
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart
- Knowledge of or experience with Apache Beam for stream and batch data processing.
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to data visualization tools or platforms.
Posted 2 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Sr DevOps Engineer
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience: 7+ years

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering and operations, applying DevOps skills to find efficient ways of managing and operating applications. It requires a high level of responsibility and accountability to deliver technical solutions.

Summary
The Senior DevOps Engineer is responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments.

Experience Required: 5-8 years in DevOps engineering roles, with proven expertise in CI/CD, infrastructure automation, and Kubernetes.

Mandatory
- OS: Linux
- Cloud: GCP (Compute Engine, Load Balancing, GKE, IAM)
- CI/CD: Jenkins, GitHub Actions, Argo CD
- Containers: Docker, Kubernetes
- IaC: Terraform, Helm
- Monitoring: Prometheus, Grafana, ELK
- Security: Vault, Trivy, OWASP concepts

Nice To Have
- Service mesh (Istio), Pub/Sub, API gateway (Kong)
- Advanced scripting (Python, Bash, Node.js)
- SkyWalking, Rancher, Jira, Freshservice

Scope
- Own CI/CD strategy and configuration
- Implement DevSecOps practices
- Drive an automation-first culture

Roles and Responsibilities
- Design and implement end-to-end CI/CD pipelines using Jenkins, GitHub Actions, and Argo CD for production-grade deployments.
- Define branching strategies and workflow templates for development teams.
- Automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests across multiple environments.
- Implement and maintain container orchestration strategies on GKE, including Helm-based deployments.
- Manage the secrets lifecycle using Vault and integrate it with CI/CD for secure deployments.
- Integrate DevSecOps tools such as Trivy, SonarQube, and JFrog into CI/CD workflows.
- Collaborate with engineering leads to review deployment readiness and ensure quality gates are met.
- Monitor infrastructure health and capacity planning using Prometheus, Grafana, and Datadog; implement alerting rules.
- Implement auto-scaling, self-healing, and other resilience strategies in Kubernetes.
- Drive process documentation, review peer automation scripts, and mentor junior DevOps engineers.

Notice Period: Immediate to 30 days
Email: sharmila.m@aptita.com
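The auto-scaling responsibility above typically rests on Kubernetes' Horizontal Pod Autoscaler, whose core rule is desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A small sketch of that calculation (the replica bounds here are illustrative defaults, not Kubernetes' own):

```python
import math

def desired_replicas(current: int, current_metric: float,
                     target_metric: float, min_r: int = 1, max_r: int = 10) -> int:
    """HPA-style scaling decision: ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current * current_metric / target_metric)
    return max(min_r, min(max_r, desired))

# 4 pods at 90% average CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))   # → 6
# 4 pods at 20% against a 60% target -> scale in to 2.
print(desired_replicas(4, 20, 60))   # → 2
```

The real autoscaler adds tolerances, stabilization windows, and readiness handling around this formula, but the arithmetic at its center is the same.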
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: DevOps Engineer
Location: Bangalore
Experience: 8-12 years
Notice Period: 30-60 days

DevOps & Cloud Infrastructure Engineer
To help us build robust and scalable systems that improve the customer experience, we’re looking for a DevOps engineer who can be responsible for developing and provisioning infrastructure and observability platform tools such as Prometheus, Grafana, and distributed logging and tracing stacks. The ideal candidate will have a background in Shell scripting and Python, and will work with developers and engineers to ensure that infrastructure and observability practices and processes work as intended.

Objectives of this role
- Build and implement new DevOps tools and Terraform modules
- Work to automate and improve the development and release process
- Design and implement security controls at the infrastructure layer
- Automate releases across environments, including the disaster recovery region

Responsibilities
- Deploy updates and fixes, and provide Level 2 technical support
- Build tools to reduce the occurrence of errors and improve the customer experience
- Develop software to integrate with internal back-end systems
- Design and implement a distributed logging and tracing stack
- Develop scripts to automate metrics collection and operational dashboards
- Design procedures for system troubleshooting and maintenance

Required skills and qualifications
- Experience as a DevOps engineer or in a similar software engineering role
- Proficiency with the Git version control system
- Good knowledge of Shell scripting or Python
- Working knowledge of Terraform, databases, and SQL
- Working knowledge of Prometheus and Grafana
- Problem-solving attitude and collaborative team spirit

Preferred skills and qualifications
- Bachelor's degree in computer science, engineering, or a relevant field
- Experience in civil engineering or customer experience
- Experience in developing/engineering applications for a large company
- Prometheus and PromQL expressions
- Grafana dashboards, PagerDuty, Jaeger (any)
- OpenTelemetry, OpenTracing (any)
- Elasticsearch, Logstash, Kibana (ELK) stack a big plus
- Micrometer, Loki, Google BigQuery logging (any)
- Automating failover/scale-up/scale-down
- Automating operational and performance-testing activities
- Load testing, chaos testing a plus: k6, JMeter, Chaos Toolkit, Gremlin
- Hands-on AWS infrastructure-as-code
- One or more of Kubernetes, Helm, Ansible, Terraform
Posted 2 weeks ago
0.0 - 5.0 years
0 Lacs
Delhi, Delhi
Remote
Role: Senior DevOps Developer (SR1)
Location: Remote

Job Summary:
This is a full-time role for a Senior DevOps Developer (SR1). We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible, with a preference for the Delhi NCR region.

Responsibilities
- Lead comprehensive improvements to CI/CD systems and deployment pipelines.
- Design and implement resilient, secure, and scalable infrastructure solutions.
- Proactively identify and resolve infrastructure bottlenecks and performance challenges.
- Own deployment health, managing Service Level Objectives (SLOs) and Service Level Agreements (SLAs).
- Conduct thorough infrastructure audits and optimize cost-efficiency.
- Develop and maintain high-availability and robust rollback strategies.
- Collaborate closely with Development and QA teams to streamline release automation.
- Mentor mid-level and junior DevOps engineers, fostering skill development and best practices.
- Provide technical leadership and guidance in architectural decisions.
- Lead complex project components with minimal supervision.
- Develop risk mitigation strategies for infrastructure and deployment challenges.
- Propose innovative technological solutions aligned with business goals.

Requirements

Technical Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-5 years of professional DevOps experience with demonstrated progression.
- Advanced Linux administration and shell scripting expertise.
- Comprehensive Git workflow knowledge, including advanced branching and collaboration strategies.
- Deep Kubernetes knowledge, including Helm, StatefulSets, Horizontal Pod Autoscalers, and Network Policies.
- Advanced Terraform skills, including module development, remote backends, and workspace management.
- Extensive experience with AWS services (EC2, S3, IAM, VPC, CloudWatch).
- Advanced Docker and Kubernetes container optimization and deployment strategies.
- Expertise in writing and maintaining complex CI/CD pipelines using Jenkins and GitHub Actions.
- Advanced secrets management using AWS SSM and HashiCorp Vault.
- Comprehensive logging and alerting system setup (ELK stack, Prometheus, Alertmanager).
- Advanced cloud security implementation (IAM roles, Key Management Service, Web Application Firewall).
- GitOps implementation experience with tools like Argo CD and Flux.
- Performance tuning skills for infrastructure and containerized environments.
- Advanced observability practices covering metrics, logs, and distributed tracing.

Soft Skills
- Cross-functional communication excellence, with the ability to lead technical discussions.
- Strong mentorship capabilities for junior and mid-level team members.
- Advanced strategic thinking and the ability to propose innovative solutions.
- Excellent knowledge transfer skills through documentation and training.
- Ability to understand and align technical solutions with broader business strategy.
- Proactive problem-solving approach with a focus on continuous improvement.
- Strong leadership skills in guiding team performance and technical direction.
- Effective collaboration across development, QA, and business teams.
- Ability to make complex technical decisions with minimal supervision.
- Strategic approach to risk management and mitigation.

Additional Preferred Qualifications
- Experience with multi-cloud or hybrid-cloud environments.
- Exposure to incident management and on-call responsibilities.
- Advanced scripting skills in Groovy, Python, or Go for CI/CD.
- Experience with infrastructure testing tools like Terratest or InSpec.
- Advanced cost analysis and cloud cost optimization skills.
- Contributions to open-source projects or advanced technical certifications.
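Owning deployment health against SLOs, as this role requires, usually starts from the error budget: the fraction of time or requests a service is allowed to fail before the objective is breached. A minimal sketch of the arithmetic (the 30-day window and 99.9% target are just examples):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for an availability SLO over a window."""
    return (1 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of a request-based error budget still unspent."""
    allowed_failures = (1 - slo) * total_requests
    return 1 - failed_requests / allowed_failures

# A 99.9% availability SLO over 30 days permits about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))            # → 43.2
# 500 failures out of 1M requests against 99.9%: half the budget is spent.
print(round(budget_remaining(0.999, 1_000_000, 500), 3))  # → 0.5
```

Teams typically alert when the budget burn rate projects exhaustion before the window ends, rather than on each individual failure.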
What We Offer
- Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
- Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
- Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
- Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
- Career Advancement: Clear progression pathways as you develop skills within our growing organization
- Competitive Compensation: Attractive salary packages that recognize your contributions and expertise
Posted 2 weeks ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description: Analyst, Quality Engineering
Will be responsible for software quality engineering and test automation for Service Assurance applications. Required work includes collaborating with system engineers, participating in scrum, designing, creating, and executing test plans and test cases, test automation, and performance testing.

Required Skills:
- Strong hands-on experience in software testing of back-end applications and test automation using Robot Framework.
- Working experience in Linux/Unix and shell scripting.
- Experience in end-to-end test automation using Robot Framework.
- Hands-on experience in performance testing of back-end applications using industry-standard tools.
- Hands-on experience in programming languages such as Java and Python.
- Knowledge of containerization and orchestration with Docker, Helm, and Kubernetes.
- Experience with databases (SQL/NoSQL) such as Cassandra, Postgres, MySQL, and Snowflake.
- Knowledge of microservices architecture and deployment.
- Knowledge of real-time data streaming and messaging solutions like Kafka.
- Experience with tools like Maven, Git, Jenkins, and JFrog.
- Knowledge of key networking technologies such as 5G Core, RAN, Transport, IP Routing, Ethernet, and Access Wireline Networks.
- Data collection approaches through adoption of industry-standard methods and open-source capabilities, and dashboards through Grafana.
- Experience with the TICK stack (Telegraf, InfluxDB, Chronograf, and Kapacitor) is a plus.
- Experience with the ELK stack (Elasticsearch, Logstash, Kibana) is a plus.
- Knowledge of Azure cloud and hands-on experience in deployment and troubleshooting of microservices on AKS.
- Understanding of key protocols including HTTP, SNMP, DNS, and SSH.
- Any prior experience in a telecommunications industry setting is a plus.
- Excellent written and verbal communication.

Overall Experience: 3-6 years in relevant technologies.
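The performance testing this role calls for is usually judged on percentile latencies rather than averages, since a few slow requests can hide behind a healthy mean. An illustrative sketch of a p95 computation using the nearest-rank method (the sample data is made up):

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample at or below which
    at least p% of the observations fall."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical response times in milliseconds from one load-test run.
latencies = [12, 15, 11, 240, 14, 13, 18, 16, 17, 19,
             20, 22, 13, 14, 15, 16, 90, 18, 12, 11]

print(percentile(latencies, 50))  # → 15  (median looks healthy)
print(percentile(latencies, 95))  # → 90  (tail exposes the slow outliers)
```

Load tools like JMeter and k6 report these percentiles directly; computing one by hand mainly helps when validating raw result logs.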
Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job ID: R-67594
Date Posted: 07/17/2025

Benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development
Posted 2 weeks ago
0.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information
- Job Type: Permanent
- Date Opened: 07/17/2025
- Work Shift: 24/7
- Work Experience: 10-12 years
- Industry: IT Services
- Work Location: Chennai - OMR
- State/Province: Tamil Nadu
- City: Chennai
- Zip/Postal Code: 600113
- Country: India

Job Description
- Design and implement AWS architectures for complex, enterprise-level applications.
- Deploy, automate, manage, and maintain AWS cloud-based production systems.
- Deploy applications on AWS and ensure they run smoothly.
- Configure and fine-tune cloud infrastructure systems.
- Deploy and configure AWS services according to best practices.
- Monitor application performance and optimize AWS services to improve efficiency and reduce costs.
- Implement auto-scaling and load balancing to handle varying workloads.
- Ensure availability, performance, security, and scalability of AWS production systems.
- Manage the creation, release, and configuration of production systems.
- Build and set up new development tools and infrastructure.
- Troubleshoot systems and resolve problems across various application domains and platforms.
- Maintain reports and logs for AWS infrastructure.
- Implement and maintain security policies using AWS security tools and best practices.
- Monitor AWS infrastructure for security vulnerabilities and address them promptly.
- Ensure data integrity and privacy by implementing encryption and access controls.
- Write Terraform scripts to automate infrastructure provisioning.
- Build automated CI/CD pipelines using Kubernetes, Helm, Docker, and CircleCI.
- Implement backup and long-term storage solutions for the infrastructure.
- Build monitoring and log-aggregation dashboards and alerts for AWS infrastructure.
- Maintain application reliability and uptime throughout the application lifecycle.
- Identify technical problems and develop software updates and fixes.
- Provide critical system security by leveraging best practices and proven cloud security solutions.
- Provide recommendations for architecture and process improvements.
- Define and deploy systems for metrics, logging, and monitoring on the AWS platform.
- Design, maintain, and manage tools for automating different operational processes.

Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or other relevant AWS certifications.
- Proven experience in designing, implementing, and managing AWS cloud solutions.
- Proficiency in Infrastructure as Code tools (CloudFormation, Terraform).
- Strong understanding of cloud security principles and best practices.
- Experience with AWS services such as EC2, S3, RDS, Lambda, VPC, IAM, etc.
- Scripting skills (e.g., Python, Bash) for automation tasks.
- Experience with monitoring and logging tools for AWS (e.g., CloudWatch, AWS Config, AWS CloudTrail).
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative team environment.
- Excellent communication skills and the ability to convey complex technical concepts to non-technical stakeholders.
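The automation scripting this posting asks for (Python or Bash against cloud APIs) routinely wraps calls in retries with exponential backoff, since AWS and other providers throttle bursty clients. A hedged sketch of that pattern with a stand-in function instead of a real AWS SDK call:

```python
import time

def with_backoff(fn, retries: int = 4, base_delay: float = 0.5):
    """Call fn, retrying on failure with exponentially growing delays
    (base_delay * 2**attempt). A production script would also add jitter
    and retry only on throttling/transient error types."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # budget exhausted: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Stand-in for a throttled cloud API call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ThrottlingException")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # → ok
```

With boto3 specifically, much of this is configurable out of the box via the SDK's built-in retry modes, so hand-rolled loops like this are mainly for tools the SDK doesn't cover.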
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
The Senior DevOps, Platform, and Infra Security Engineer role on FICO's modern and innovative analytics and decision platform involves shaping next-generation security for FICO's Platform. You will address cutting-edge security challenges in a highly automated, complex, cloud- and microservices-driven environment, including design challenges and the continuous delivery of security functionality and features to the FICO Platform and the AI/ML capabilities built on top of it, as described by the VP of Engineering. In this role, you will secure the design of the next-generation FICO Platform, its capabilities, and its services, and provide full-stack security architecture design, from cloud infrastructure to application features, for FICO customers. Collaborating closely with product managers, architects, and developers, you will implement security controls within products. Your responsibilities will also include developing and maintaining Kyverno policies for enforcing security controls in Kubernetes environments, and defining and implementing policy-as-code best practices in collaboration with platform, DevOps, and application teams. You will stay updated on emerging threats, Kubernetes security features, and cloud-native security tools; define the controls and capabilities required to protect FICO products and environments; build and validate declarative threat models in a continuous, automated manner; and prepare the product for compliance attestations while ensuring adherence to security best practices. The ideal candidate has 10+ years of experience in architecture, security reviews, and requirement definition for complex product environments. Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper are preferred.
Familiarity with industry regulations, frameworks, and practices (e.g., PCI, ISO 27001, NIST) is required, as is experience in threat modeling, code reviews, security testing, vulnerability detection, and remediation methods. Hands-on experience with programming languages such as Java and Python, and with securing cloud environments (preferably AWS), is necessary. Experience deploying and securing containers, container orchestration, and mesh technologies (e.g., EKS, K8s, Istio), Crossplane for managing cloud infrastructure declaratively via Kubernetes, and certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable. Proficiency with CI/CD tools (e.g., GitHub Actions, GitLab CI, Jenkins, Crossplane) is important, as are the ability to independently drive transformational security projects across teams and organizations and experience with securing event-streaming platforms like Kafka or Pulsar. Hands-on experience with ML/AI model security, IaC (e.g., Terraform, CloudFormation, Helm), and CI/CD pipelines (e.g., GitHub, Jenkins, JFrog) will be beneficial. Joining FICO as a Senior DevOps, Platform, and Infra Security Engineer offers you an inclusive culture reflecting our core values, the opportunity to make an impact and develop professionally, highly competitive compensation and benefits programs, and an engaging, people-first work environment promoting work/life balance, employee resource groups, and social events to foster interaction and camaraderie.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
chandigarh
On-site
You will be a part of our team as a Junior DevOps Engineer, where you will contribute to building, maintaining, and optimizing our cloud-native infrastructure. Your role will involve collaborating with senior DevOps engineers and development teams to automate deployments, monitor systems, and ensure the high availability, scalability, and security of our applications. Your key responsibilities will include managing and optimizing Kubernetes (EKS) clusters, Docker containers, and Helm charts for deployments. You will support CI/CD pipelines using tools like Jenkins, Bitbucket, and GitHub Actions, and help deploy and manage applications using ArgoCD for GitOps workflows. Monitoring and troubleshooting infrastructure will be an essential part of your role, utilizing tools such as Grafana, Prometheus, Loki, and OpenTelemetry. Working with various AWS services like EKS, ECR, ALB, EC2, VPC, S3, and CloudFront will also be a crucial aspect to ensure reliable cloud infrastructure. Automating infrastructure provisioning using IaC tools like Terraform and Ansible will be another key responsibility. Additionally, you will assist in maintaining Docker image registries and collaborate with developers to enhance observability, logging, and alerting while adhering to security best practices for cloud and containerized environments. To excel in this role, you should have a basic understanding of Kubernetes, Docker, and Helm, along with familiarity with AWS cloud services like EKS, EC2, S3, VPC, and ALB. Exposure to CI/CD tools such as Jenkins, GitHub/Bitbucket pipelines, basic scripting skills (Bash, Python, or Groovy), and knowledge of observability tools like Prometheus, Grafana, and Loki will be beneficial. Understanding GitOps (ArgoCD) and infrastructure as code (IaC), experience with Terraform/CloudFormation, and knowledge of Linux administration and networking are also required skills. This is a full-time position that requires you to work in person. 
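The monitoring and alerting duties above often come down to an error ratio crossing a threshold over a time window. A toy sketch of that check in Python, mirroring the shape of a Prometheus-style rate comparison (the counter values and 5% threshold are hypothetical):

```python
def error_ratio(errors_then: int, errors_now: int,
                total_then: int, total_now: int) -> float:
    """Ratio of new errors to new requests between two counter scrapes,
    analogous to a PromQL rate(errors)/rate(requests) expression."""
    delta_total = total_now - total_then
    if delta_total == 0:
        return 0.0  # no traffic in the window: nothing to alert on
    return (errors_now - errors_then) / delta_total

def should_alert(ratio: float, threshold: float = 0.05) -> bool:
    return ratio > threshold

# Two scrapes five minutes apart: 30 new errors out of 400 new requests.
r = error_ratio(100, 130, 5000, 5400)
print(round(r, 3), should_alert(r))  # → 0.075 True
```

In practice the same comparison is written declaratively as a Prometheus alerting rule and routed through Alertmanager rather than computed in application code.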
If you are interested in this opportunity, please feel free to reach out to us at +91 6284554276.
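As a loose illustration of the IaC automation this posting calls for, the Python sketch below mimics the "plan" step at the heart of tools like Terraform: diffing desired resources against actual state to decide what to create, update, or delete. The resource names and attributes are invented for the example.

```python
# Hypothetical sketch of the "plan" step behind IaC tools like Terraform:
# diff desired resources against what actually exists. Names are illustrative.

def plan(desired: dict, actual: dict) -> dict:
    """Compare desired vs. actual resource maps and return a change plan."""
    create = sorted(k for k in desired if k not in actual)
    delete = sorted(k for k in actual if k not in desired)
    update = sorted(k for k in desired if k in actual and desired[k] != actual[k])
    return {"create": create, "update": update, "delete": delete}

desired = {
    "vpc-main": {"cidr": "10.0.0.0/16"},
    "bucket-logs": {"versioning": True},
}
actual = {
    "vpc-main": {"cidr": "10.0.0.0/8"},   # drifted from desired
    "bucket-old": {"versioning": False},  # no longer declared
}

print(plan(desired, actual))
# {'create': ['bucket-logs'], 'update': ['vpc-main'], 'delete': ['bucket-old']}
```

Real IaC tools add dependency ordering and state locking on top of this diff, but the create/update/delete split is the core idea.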
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Qualcomm India Private Limited is looking for a highly skilled and experienced MLOps Engineer to join their team and contribute to the development and maintenance of their ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will architect, deploy, and optimize the ML and data platform that supports training of machine learning models on NVIDIA DGX clusters and the Kubernetes platform. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial for the smooth operation and scalability of the ML infrastructure. You will collaborate with cross-functional teams, including data scientists, software engineers, and infrastructure specialists, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Your responsibilities will include:
- Architecting, developing, and maintaining the ML platform.
- Designing and implementing scalable infrastructure solutions for NVIDIA clusters on premises and on AWS Cloud.
- Collaborating with data scientists and software engineers to define requirements.
- Optimizing platform performance and scalability, and monitoring system performance.
- Implementing CI/CD pipelines and maintaining a monitoring stack using Prometheus and Grafana.
- Managing AWS services and implementing logging and monitoring solutions.
- Staying current with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proposing enhancements to the ML platform.
Qualcomm is looking for candidates with:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters.
- Proficiency with the Kubernetes platform and related technologies.
- Solid programming skills in languages such as Python and Go, and experience with relevant ML frameworks.
- An in-depth understanding of distributed computing and GPU acceleration techniques.
- Familiarity with containerization technologies and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows.
- Experience with AWS services and monitoring tools.
- Strong problem-solving, communication, and collaboration skills.

Qualcomm is an equal opportunity employer and is committed to providing reasonable accommodations to support individuals with disabilities during the hiring process. If you are interested in this role or require more information, please contact Qualcomm Careers.
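To make the GPU-capacity side of this role concrete, here is a toy first-fit placement of training jobs onto GPU nodes. It illustrates the kind of sizing arithmetic involved when operating DGX/Kubernetes clusters; it is not Qualcomm's scheduler, and the node and job names are made up.

```python
# Illustrative first-fit placement of training jobs onto GPU nodes.
# Not a real scheduler; data is invented.

def place_jobs(jobs, nodes):
    """Assign each job (name, gpus_needed) to the first node with capacity."""
    free = dict(nodes)                 # node -> free GPUs (copy, leave input intact)
    placement = {}
    for name, need in jobs:
        for node, avail in free.items():
            if avail >= need:
                free[node] -= need
                placement[name] = node
                break
        else:
            placement[name] = None     # unschedulable with current free capacity
    return placement

nodes = {"dgx-a": 8, "dgx-b": 8}
jobs = [("train-llm", 8), ("finetune", 4), ("eval", 2), ("big-job", 6)]
print(place_jobs(jobs, nodes))
# {'train-llm': 'dgx-a', 'finetune': 'dgx-b', 'eval': 'dgx-b', 'big-job': None}
```

Real cluster schedulers (e.g. the Kubernetes scheduler with GPU device plugins) add priorities, preemption, and topology awareness, but first-fit shows why fragmentation leaves jobs pending even when total GPUs suffice.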
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
You have 5+ years of overall experience in Cloud Operations, including a minimum of 5 years of hands-on experience with Google Cloud Platform (GCP) and at least 3 years of experience in Kubernetes administration. A GCP Professional certification is mandatory.

In this role, you will be responsible for managing and monitoring GCP infrastructure resources to ensure optimal performance, availability, and security. You will also administer Kubernetes clusters, handling deployment, scaling, upgrades, patching, and troubleshooting, and you will implement and maintain automation for provisioning, scaling, and monitoring using tools like Terraform, Helm, or similar.

Your key responsibilities will include:
- Responding to incidents, performing root cause analysis, and resolving issues within SLAs.
- Configuring logging, monitoring, and alerting solutions across GCP and Kubernetes environments.
- Supporting CI/CD pipelines and integrating Kubernetes deployments with DevOps processes.
- Maintaining detailed documentation of processes, configurations, and runbooks.
- Collaborating with Development, Security, and Architecture teams to ensure compliance and best practices.
- Participating in an on-call rotation and responding promptly to critical alerts.

Required Skills and Qualifications
- GCP Certified Professional (Cloud Architect, Cloud Engineer, or equivalent).
- Strong working knowledge of GCP services such as Compute Engine, GKE, Cloud Storage, IAM, VPC, and Cloud Monitoring.
- Solid experience in Kubernetes cluster administration.
- Proficiency with Infrastructure as Code tools like Terraform.
- Knowledge of containerization concepts and tools like Docker.
- Experience in monitoring and observability with tools like Prometheus, Grafana, and Stackdriver.
- Familiarity with incident management and ITIL processes.
- Ability to work in 24x7 operations with rotating shifts.
- Strong troubleshooting and problem-solving skills.

Nice-to-Have Skills
- Experience supporting multi-cloud environments.
- Scripting skills in Python, Bash, or Go.
- Exposure to other cloud platforms such as AWS and Azure.
- Familiarity with security controls and compliance frameworks.
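The SLA-driven incident work described above rests on simple availability arithmetic: an availability SLO implies a monthly downtime budget. A brief sketch, with illustrative figures:

```python
# Hedged example: monthly error budget implied by an availability SLO.
# Figures are illustrative, not any specific contract's terms.

def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Allowed downtime (minutes) for a given availability SLO over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1.0 - slo)

for slo in (0.999, 0.9995, 0.9999):
    print(f"{slo:.4%} SLO -> {error_budget_minutes(slo):.1f} min/month downtime budget")
```

At "three nines" (99.9%) that budget is about 43 minutes a month, which is why on-call response time matters so much at tighter SLOs.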
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a DevOps Engineer in Pune.

Job Title: DevOps Engineer
Location: Pune
Experience: 5-8 years
Notice: Immediate to 15 days
Mandatory skills: DevOps, Google Cloud Platform (GCP) services, Kubernetes, Terraform, CI/CD tools (Jenkins)

Experienced DevOps Engineer with strong hands-on expertise in Google Cloud Platform (GCP) services, Kubernetes, and Terraform.

Must-Have Skills
- Hands-on experience with Google Cloud Platform (GCP) services: Compute Engine, Cloud Storage, Cloud Functions, VPC, IAM, Cloud Monitoring
- Strong expertise in Kubernetes: deploying and managing workloads, writing Helm charts and manifests, and monitoring, scaling, and managing clusters
- Proficient in Terraform: writing reusable modules, managing infrastructure as code, and managing Terraform state and deployments
- Experience building and maintaining CI/CD pipelines
- Solid scripting skills (Bash, Python)
- Experience with Git, GitOps, and CI/CD tools (Jenkins, GitLab CI, ArgoCD)
- Familiarity with monitoring and logging tools (Prometheus, Grafana, Stackdriver)
- Good understanding of cloud networking and security best practices
- Familiarity with interpreting error codes, analyzing load balancer logs, and identifying and troubleshooting failures

Responsibilities
- Design and manage scalable infrastructure on GCP using Terraform and Kubernetes
- Build and manage CI/CD pipelines for automated deployments
- Monitor and improve system performance, availability, and security
- Automate operational processes and deployment workflows
- Collaborate with development and QA teams for smooth release cycles
- Troubleshoot issues using logs, error codes, and load balancer diagnostics

Nice To Have
- Exposure to Docker and container security
- Experience with other cloud platforms (AWS, Azure)
- Familiarity with configuration management tools (ref:hirist.tech)
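The troubleshooting duties above mention interpreting error codes from load balancer logs. A minimal sketch of that kind of triage, assuming a simplified log format in which the status code is the last whitespace-separated field:

```python
# Illustrative triage: tally HTTP status classes from simplified LB log lines.
# The log format (status code as the last field) is an assumption.

from collections import Counter

def status_summary(log_lines):
    """Count responses per status class (2xx/4xx/5xx) from simple LB logs."""
    counts = Counter()
    for line in log_lines:
        status = line.split()[-1]       # assume status code is the last field
        if status.isdigit():
            counts[status[0] + "xx"] += 1
    return dict(counts)

logs = [
    "GET /api/v1/users 200",
    "GET /api/v1/orders 502",
    "POST /api/v1/login 401",
    "GET /healthz 200",
    "GET /api/v1/orders 503",
]
print(status_summary(logs))
# {'2xx': 2, '5xx': 2, '4xx': 1}
```

A spike in 5xx against one backend path is usually the first signal to pull in before digging into pod logs or health checks.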
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Delhi
On-site
As a Lead Engineer specializing in 4G/5G Packet Core & IMS, your primary role will involve deployment, configuration, testing, integration, and support of 4G/5G Packet Core and IMS product stacks in cloud-native environments. You will collaborate closely with product engineering and customer teams to deliver telecom solutions compliant with industry standards.

Your key responsibilities will include deploying and configuring 4G EPC, 5G Core (NSA/SA), and IMS components in environments such as OpenStack, Kubernetes, or OpenShift. You will automate deployments using tools like Ansible, Helm, Terraform, and CI/CD pipelines to ensure secure, scalable, and resilient deployments across various environments. Additionally, you will be responsible for developing and executing test plans for functional, performance, and interoperability testing, using telecom tools like Spirent for validation and KPI tracking.

Integration of network elements such as MME, SGW, PGW, AMF, SMF, UPF, HSS, PCRF, and CSCF will be a crucial part of your role, ensuring seamless interworking of legacy and cloud-native network functions. You will also be involved in release management, version control, release cycles, rollout strategies, and maintaining detailed release documentation. Providing technical documentation such as high-level and low-level design documents, Methods of Procedure (MoPs) and SOPs, deployment guides, technical manuals, and release notes will also be part of your responsibilities. Field support and troubleshooting, collaboration with product development teams, and participation in sprint planning, reviews, and feedback sessions are integral aspects of this role.
Your qualifications should include a Bachelor's or Master's degree in Telecommunications, Computer Science, or a related field; a deep understanding of 4G/5G Packet Core and IMS architecture and protocols; experience with cloud-native platforms; strong scripting skills; excellent communication skills; and strong troubleshooting abilities.

This role presents a unique opportunity to be part of a strategic initiative to build an indigenous telecom technology stack, shaping India's telecom future with high-performance, standards-compliant LTE/5G core network solutions. Joining this cutting-edge team will allow you to contribute to the advancement of telecom technology in India.
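For a flavor of the KPI tracking mentioned under the testing responsibilities, here is a minimal success-rate check of the kind run against call-setup test campaigns. The attempt counts and the 99% threshold are illustrative assumptions, not values from any standard:

```python
# Hypothetical KPI check: call-setup success rate against a pass threshold.
# Numbers and threshold are illustrative only.

def kpi_pass(attempts: int, successes: int, threshold: float = 0.99):
    """Return (success_rate, passed) for a call-setup success-rate KPI."""
    rate = successes / attempts if attempts else 0.0
    return rate, rate >= threshold

rate, ok = kpi_pass(attempts=10_000, successes=9_950)
print(f"call setup success rate: {rate:.2%} -> {'PASS' if ok else 'FAIL'}")
```

In practice such KPIs are computed per interface and per network function (e.g. AMF registration success, PDU session establishment) rather than as a single number.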
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Andhra Pradesh
On-site
The role of Technical Architect for an IoT Platform requires a highly skilled individual with over 10 years of experience and expertise in Java Spring Boot, React.js, IoT system architecture, and a strong foundation in DevOps practices. As a Technical Architect, you will be responsible for designing scalable, secure, and high-performance IoT solutions. Your role will involve leading full-stack teams and collaborating with product, infrastructure, and data teams to ensure the successful implementation of IoT projects.

Your key responsibilities will include architecture and design tasks such as implementing scalable and secure IoT platform architecture, defining and maintaining architecture blueprints and technical documentation, leading technical decision-making, and ensuring adherence to best practices and coding standards. You will also architect microservices-based solutions using Spring Boot, integrate them with React-based front ends, and define data flows, event processing pipelines, and device communication protocols.

In terms of IoT domain expertise, you will architect solutions for real-time sensor data ingestion, processing, and storage; work closely with hardware and firmware teams on device-cloud communication; support multi-tenant, multi-protocol device integration; and guide the design of edge computing, telemetry, alerting, and digital twin models. Your role will also involve DevOps and infrastructure tasks such as defining CI/CD pipelines, managing containerization and orchestration, driving infrastructure automation, ensuring platform monitoring, logging, and observability, and enabling auto-scaling, load balancing, and zero-downtime deployments.

As a Technical Architect, you will be expected to demonstrate leadership by collaborating with product managers and business stakeholders, mentoring and leading a team of developers and engineers, conducting code and architecture reviews, setting goals and targets, organizing features and sprints, and providing coaching and professional development to team members.

Your technical skills and experience should include proficiency in backend technologies such as Java 11+/17, Spring Boot, Spring Cloud, REST APIs, JPA/Hibernate, and PostgreSQL, as well as frontend technologies like React.js, Redux, TypeScript, and Material-UI. Additionally, experience with messaging/streaming platforms, databases, DevOps tools, monitoring tools, cloud platforms, and other relevant technologies is required.

Other must-have qualifications include hands-on IoT project experience, experience designing and deploying multi-tenant SaaS platforms, knowledge of security best practices in IoT and cloud environments, and excellent problem-solving, communication, and team leadership skills. It would be beneficial if you have experience with edge computing frameworks, AI/ML model integration, industrial protocols, and digital twin concepts, along with relevant certifications in AWS/GCP, Kubernetes, or Spring. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is also required.

By joining us, you will have the opportunity to lead architecture for cutting-edge industrial IoT platforms, work with a passionate team in a fast-paced and innovative environment, and gain exposure to cross-disciplinary challenges in IoT, AI, and cloud-native technologies.
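As a toy illustration of the telemetry and alerting design this role guides, the sketch below runs threshold checks over a batch of sensor readings. The metric names, devices, and limits are assumptions made up for the example:

```python
# Illustrative threshold-based alerting pass over sensor telemetry.
# Field names and limits are invented for the sketch.

def check_telemetry(readings, limits):
    """Return alerts for readings outside per-metric (low, high) limits."""
    alerts = []
    for r in readings:
        lo, hi = limits[r["metric"]]
        if not (lo <= r["value"] <= hi):
            alerts.append(f"{r['device']}: {r['metric']}={r['value']} outside [{lo}, {hi}]")
    return alerts

limits = {"temp_c": (-10, 60), "vibration_g": (0, 2.5)}
readings = [
    {"device": "pump-01", "metric": "temp_c", "value": 72},
    {"device": "pump-01", "metric": "vibration_g", "value": 1.1},
    {"device": "fan-07", "metric": "temp_c", "value": 35},
]
for a in check_telemetry(readings, limits):
    print("ALERT:", a)
```

A production pipeline would evaluate these rules in a streaming engine with deduplication and hysteresis, but the per-metric threshold check is the building block.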
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
Salesforce is the global leader in customer relationship management (CRM) software, pioneering the shift to cloud computing. Today, Salesforce delivers the next generation of social, mobile, and cloud technologies to help companies revolutionize the way they sell, service, market, and innovate, enabling them to become customer-centric organizations. As the fastest-growing enterprise software company in the top 10, Salesforce has been recognized as the World's Most Innovative Company by Forbes and as one of Fortune's 100 Best Companies to Work For.

The CRM Database Sustaining Engineering Team at Salesforce is responsible for deploying and managing some of the largest and most trusted databases globally. Customers rely on this team to ensure the safety and high availability of their data. As a Database Cloud Engineer at Salesforce, you will have a mission-critical role in ensuring the reliability, scalability, and performance of Salesforce's extensive cloud database infrastructure, helping to power one of the largest Software-as-a-Service (SaaS) platforms globally.

We are seeking engineers with a DevOps mindset and deep expertise in databases to architect and operate secure, resilient, and high-performance database environments across public cloud platforms such as AWS and GCP. Collaboration across various domains, including systems, storage, networking, and applications, is essential to deliver cloud-native reliability solutions at massive scale.

The CRM Database Sustaining Engineering team is a dynamic and fast-paced global team that delivers and supports databases and cloud infrastructure to meet the evolving needs of the business. In this role, you will collaborate with other engineering teams to deliver innovative solutions in an agile, dynamic environment. As part of the global team, you will engage in 24/7 support responsibilities within Europe, requiring occasional flexibility in working hours to align globally.
You will be responsible for the reliability of Salesforce's cloud database, running on cutting-edge cloud technology.

Job Requirements:
- Bachelor's in Computer Science or Engineering, or equivalent experience.
- Minimum of 8+ years of experience as a Database Engineer or in a similar role.
- Expertise in database and SQL performance tuning in at least one relational database.
- Knowledge and hands-on experience with the Postgres database is advantageous.
- Broad and deep knowledge of at least two relational databases among Oracle, PostgreSQL, and MySQL.
- Working knowledge of cloud platforms such as AWS or GCP is highly desirable.
- Experience with cloud technologies like Docker, Spinnaker, Terraform, Helm, Jenkins, Git, etc.; exposure to Zookeeper fundamentals and Kubernetes is highly desirable.
- Proficiency in SQL and at least one procedural language such as Python, Go, or Java, with a basic understanding of C.
- Excellent problem-solving skills and experience with production incident management and root cause analysis.
- Experience with mission-critical distributed systems, including supporting database production infrastructure with 24x7x365 support responsibilities.
- Exposure to a fast-paced environment with a large-scale cloud infrastructure setup.
- Strong speaking, listening, and writing skills; attention to detail; and a proactive, self-starting attitude.

Preferred Qualifications:
- Hands-on DevOps experience, including CI/CD pipelines and container orchestration (Kubernetes, EKS/GKE).
- Cloud-native DevOps experience (CI/CD, EKS/GKE, cloud deployments).
- Familiarity with distributed coordination systems like Apache Zookeeper.
- Deep understanding of distributed systems, availability design patterns, and database internals.
- Monitoring and alerting expertise using tools like Grafana, Argus, or similar.
- Automation experience with tools like Spinnaker, Helm, and Infrastructure as Code frameworks.
- Ability to drive technical projects from idea to execution with minimal supervision.
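Much of the SQL performance tuning called for above starts with latency measurement. A minimal nearest-rank percentile sketch over made-up query timings (one of several percentile conventions; monitoring tools may interpolate instead):

```python
# Hedged sketch: p50/p95/p99 of query latencies via the nearest-rank method.
# Sample values are invented; real tooling often interpolates differently.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

latencies_ms = [12, 15, 11, 250, 14, 13, 16, 900, 12, 14]
for p in (50, 95, 99):
    print(f"p{p}: {percentile(latencies_ms, p)} ms")
```

The gap between p50 and p99 here (14 ms vs. 900 ms) is the usual signature of a few pathological queries hiding behind a healthy-looking average.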
Posted 3 weeks ago
5.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
Remote
Senior Software Engineer

Location: Remote, India
Job Type: Regular full-time
Division: Precision for Medicine
Business Unit: QuartzBio
Requisition Number: 5809

QuartzBio Overview
QuartzBio (www.quartz.bio) is a Software-as-a-Service (SaaS) solutions provider to the life sciences industry. We deliver innovative, data-enabling technologies (i.e., software) that provide biotech/pharma (R&D) teams with enterprise-level access to sample/biomarker data management solutions and analytics, information, insight, and reporting capabilities. Our end-to-end (from sample collection to biomarker data) suite of solutions is focused on providing sponsors information (data with context). We do this by connecting biospecimen, assay, and clinical data sources in a secure and scalable cloud-based infrastructure, enabling seamless, automated data management workflows, key insight development, improved collaboration, and the ability to make faster, more informed decisions.

Position Summary
As we continue to expand our software engineering team, we are seeking a highly experienced software engineer. You will work with a team of software engineers to design, develop, test, and maintain software applications. The successful candidate will have a strong understanding of software architecture, programming concepts, and tools, and will be able to work independently to solve complex technical problems. In your role as Senior DevOps Engineer, you will lead the design and implementation of scalable infrastructure solutions, mentor junior engineers, and drive automation and reliability across our AWS environments.

Key Responsibilities
- Manage projects and initiatives with moderate complexity.
- Collaborate with cross-functional teams to design, develop, test, and maintain software applications.
- Create design specifications, test plans, and automated test scripts for individual work scope.
- Develop software solutions that are scalable, maintainable, and secure.
- Analyze, maintain, and implement (including performance profiling) existing software applications, and develop specifications from business requirements.
- Understand the purpose of new features and help communicate that purpose to team members.
- Write and debug software systems in accordance with software development standards, including the Application Development Lifecycle.
- Debug and troubleshoot complex software issues and provide timely solutions.
- Implement new software features and enhancements.
- Ensure adherence to software development best practices and processes.
- Write clean, legible, efficient, and well-documented code.
- Lead code reviews and provide constructive feedback to peers.
- Support the work of peers through pair programming, code review, and mentorship.
- Mentor junior team members and provide guidance.
- Continuously improve technical skills and stay up to date with emerging technologies.
- Communicate effectively with team members and stakeholders.
- Contribute to strategic planning and decision-making.

When performing duties as Senior DevOps Engineer:
- Lead the development and maintenance of the Terraform IaC repository, ensuring modularity and scalability.
- Design and implement deployment strategies for microservices on Kubernetes (EKS) using Helm.
- Provision new applications and environments, ensuring consistency across dev, staging, and production.
- Optimize CI/CD pipelines in GitLab, integrating with Kubernetes and Docker workflows.
- Manage and monitor Kubernetes clusters, pods, and services.
- Collaborate with engineering teams to standardize development tools and deployment technologies.
- Mentor junior engineers and contribute to architectural decisions.
- Identify opportunities to streamline and automate IaC development processes.
- Other duties as assigned.

Qualifications
- Bachelor's degree in a related field and a minimum of 5 years of relevant work experience in cloud/infrastructure technologies, information technology (IT) consulting/support, systems administration, network operations, software development/support, or technology solutions.
- 2-4 years of experience working in a customer-facing role and leading projects.
- Excellent problem-solving and analytical skills.
- Strong written and verbal communication skills; ability to articulate ideas and write clear, concise reports.

Role qualifications for Senior DevOps Engineer:
- 5+ years of DevOps experience.
- Deep expertise in AWS services and Terraform.
- Strong scripting and automation skills.
- Experience with container orchestration (EKS, Kubernetes) and Helm charts.
- Experience with CI/CD tools, specifically GitLab.

Leadership Expectations
- Follows the company's principles and code of ethics on a day-to-day basis.
- Shows appreciation for the individual talents, differences, and abilities of fellow team members.
- Listens and responds with appropriate actions.
- Supports change initiatives and continuous process improvements.

It has come to our attention that some individuals or organizations are reaching out to job seekers and posing as potential employers presenting enticing employment offers. We want to emphasize that these offers are not associated with our company and may be fraudulent in nature. Please note that our organization will not extend a job offer without prior communication with our recruiting team, hiring managers, and a formal interview process. Apply Now
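One of the Senior DevOps duties listed is keeping dev, staging, and production environments consistent. A small sketch of that kind of parity check over invented configuration maps:

```python
# Illustrative parity check: do dev/staging/prod declare the same config keys?
# Environment configs are invented for the example.

def parity_report(envs: dict) -> dict:
    """For each env, list config keys missing relative to the union of all."""
    all_keys = set().union(*(cfg.keys() for cfg in envs.values()))
    return {env: sorted(all_keys - cfg.keys()) for env, cfg in envs.items()}

envs = {
    "dev":     {"replicas": 1, "image_tag": "latest", "debug": True},
    "staging": {"replicas": 2, "image_tag": "v1.4.2"},
    "prod":    {"replicas": 6, "image_tag": "v1.4.2"},
}
print(parity_report(envs))
# {'dev': [], 'staging': ['debug'], 'prod': ['debug']}
```

With Terraform or Helm, the same idea shows up as shared modules/charts with per-environment values files, so drift surfaces as a values diff rather than an ad hoc audit.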
Posted 3 weeks ago
2.0 - 8.0 years
0 Lacs
Karnataka
On-site
Dexcom Corporation is a pioneer and global leader in continuous glucose monitoring (CGM), with a mission to revolutionize diabetes management and improve health outcomes. With a vision to empower individuals to take control of their health, Dexcom is dedicated to providing personalized, actionable insights to address important health challenges. As part of the team, you will contribute to the innovative solutions that are transforming the healthcare industry and improving human health on a global scale.

Joining the high-growth and fast-paced environment at Dexcom, you will collaborate with leading-edge cloud and cybersecurity teams to develop cutting-edge diabetes medical device systems. As a Cloud Operations Engineer specializing in Google Cloud Platform (GCP) and PKI operations, you will play a crucial role in deploying and operating cloud-based services for the next generation of Dexcom products. Your responsibilities will include supporting the secure design, development, and deployment of products and services, working closely with various teams to ensure the PKI system is securely deployed and operated in the cloud.

In this role, you will:
- Support designs and architectures for enterprise cloud-based systems in GCP, ensuring careful planning and consideration of changes within the internal platform ecosystem.
- Deploy and support PKI, device identity management, and key management solutions for Dexcom products and services.
- Assist in software build and delivery processes using CI/CD tools, with a focus on automation and traceability.
- Monitor and maintain production systems, troubleshoot issues, and participate in on-call rotations to address service outages.
- Collaborate with stakeholders and development teams to integrate capabilities into the system architecture and align delivery plans with project timelines.
- Provide cloud training to team members and support system testing, integration, and deployment.

To be successful in this role, you should have experience with:
- CI/CD processes and GitHub Actions
- Scripting languages such as Bash and Python
- Automation of operational work
- Cloud development and containerized Docker applications
- GKE, Datadog, and cloud services
- Developing and deploying cloud-based systems via CI/CD pipelines and cloud-native tools

Knowledge of PKI fundamentals, cryptography, and cloud security expertise is preferred. By joining Dexcom, you will have the opportunity to work with life-changing CGM technology, access comprehensive benefits, grow on a global scale, and contribute to an innovative organization committed to its employees, customers, and communities.

Travel Required: 0-5%

Experience and Education Requirements:
- Bachelor's degree in a technical discipline with 5-8 years of related experience
- Master's degree with 2-5 years of equivalent industry experience
- PhD with 0-2 years of experience
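Given the PKI operations focus, here is a toy renewal-window check over a hypothetical certificate inventory. The 30-day window and certificate names are assumptions for the sketch, not Dexcom practice:

```python
# Hypothetical sketch: flag certificates entering their renewal window.
# Inventory and window size are invented.

from datetime import date, timedelta

def due_for_renewal(certs, today, window_days=30):
    """Return cert names expiring within `window_days` of `today`."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in certs.items() if expiry <= cutoff)

certs = {
    "device-identity-ca": date(2025, 7, 1),
    "api-tls":            date(2025, 1, 20),
    "firmware-signing":   date(2025, 2, 5),
}
print(due_for_renewal(certs, today=date(2025, 1, 15)))
# ['api-tls', 'firmware-signing']
```

In a real deployment the expiry dates would come from the certificates themselves (e.g. parsed from X.509 `notAfter`) and the check would feed an alerting pipeline rather than a print.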
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Software Developer with 4 to 5 years of experience in Java backend development using Spring Boot, you will leverage strong skills in cloud technologies (AWS/Azure) and DevOps practices. Your role will involve building and deploying scalable microservices, automating CI/CD pipelines, and ensuring high availability using Kubernetes and Docker. You must be adept at working in Agile environments and collaborating effectively across development, operations, and QA teams.

Your main responsibilities will include designing, developing, and maintaining RESTful backend services using Spring Boot. You will build and deploy applications on cloud platforms with a focus on security and scalability, create and manage CI/CD pipelines for automated testing and deployment, containerize applications using Docker, and orchestrate them with Kubernetes. Monitoring system performance, troubleshooting issues, and collaborating with QA, DevOps, and Product teams to deliver high-quality features will also be part of your daily tasks.

Key technical skills required for this role include proficiency in Java 8/11, Spring Boot, Spring MVC, Spring Data JPA, and Spring Security. Experience in RESTful API development, microservices architecture, and cloud deployment and monitoring strategies (AWS/Azure) is essential. Familiarity with CI/CD tools such as Jenkins, GitHub Actions, and GitLab CI/CD, with containerization via Docker, and with orchestration tools like Kubernetes and Helm will be highly valued.

Preferred qualifications for this position include a Bachelor's degree in Engineering or a related field, along with 4 to 5 years of relevant experience in software development.
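The posting centers on Java/Spring Boot, but the Kubernetes orchestration it mentions involves rolling-update arithmetic worth seeing once. A Python sketch of how a Deployment's percentage-based maxSurge/maxUnavailable values resolve (surge rounds up, unavailable rounds down, matching Kubernetes' documented behavior; replica counts are examples):

```python
# Illustrative sketch of Kubernetes rolling-update bounds resolution.
# maxSurge rounds up, maxUnavailable rounds down; values are examples.

import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    def resolve(v, round_up):
        if isinstance(v, str) and v.endswith("%"):
            frac = int(v[:-1]) / 100 * replicas
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(v)
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return {"max_pods": replicas + surge, "min_available": replicas - unavailable}

print(rolling_update_bounds(10))           # {'max_pods': 13, 'min_available': 8}
print(rolling_update_bounds(4, "1", "0"))  # {'max_pods': 5, 'min_available': 4}
```

The second call ("1"/"0") is the common zero-downtime configuration: never take a pod away before its replacement is ready.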
Posted 3 weeks ago
5.0 - 7.0 years
5 - 5 Lacs
Pune
Work from Office
Additional Comments: Wiz Cloud DevOps

We are looking for a Cloud DevOps Engineer with hands-on experience in cloud-native environments and a strong working knowledge of the Wiz Cloud Native Application Protection Platform (CNAPP). The ideal candidate will play a key role in delivering and managing Wiz capabilities, integrating it across multiple cloud platforms (AWS, GCP, Azure, AliCloud), and enhancing visibility and compliance within containerized environments. This is a cross-functional role that bridges DevOps and Security, enabling secure, compliant cloud deployments and improving the overall cloud and container security posture.

Key Responsibilities:
Wiz Management & Automation
- Automate day-to-day operations on the Wiz management plane (user onboarding, policy/setting updates, tenant migrations).
- Build infrastructure-as-code (IaC) for Wiz deployments and configurations using tools like Terraform and Helm.
- Automate Wiz integration with cloud platforms and container infrastructure.

Integration & Customization
- Onboard new cloud and container accounts into Wiz.
- Integrate Wiz with CI/CD pipelines, container registries, and ticketing systems.
- Develop custom reports for different stakeholders using the Wiz API, GraphQL, Python, etc.
- Support integration with downstream reporting tools by managing service accounts and API catalogs.

Security Operations Support
- Contribute to security runbooks, incident response procedures, and policy-as-code documentation.
- Patch and maintain the Wiz management infrastructure.
- Troubleshoot and resolve Wiz-related issues.

Key Requirements:
Cloud DevOps Expertise
- Experience building management planes for IT systems using GitOps and CI/CD pipelines, preferably with Google Cloud technologies (e.g., Cloud Run).
- Proven capability in automating operational tasks within the Wiz platform.

Technical Skills
- Strong scripting skills in Python or Go for creating custom reports and API integrations.
- Proficiency in at least one major cloud provider (GCP or AWS).
- Hands-on experience with IaC tools like Terraform and Helm.
- Familiarity with CI/CD tools such as GitHub Actions, Jenkins, AWS Systems Manager, or Cloud Run.

Wiz Platform Experience
- Deep understanding of Wiz onboarding, configuration, and platform management.
- Experience integrating Wiz into DevOps pipelines and ticketing systems.
- Knowledge of Wiz APIs and the ability to leverage them for custom workflows and reporting.

Debugging & Maintenance
- Experience troubleshooting issues within Wiz.
- Managing Wiz platform updates, patches, and integrations.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Security, or a related field.
- Experience: Relevant experience in Information Security, Cybersecurity, or Cloud DevOps roles.

Candidate Requirements:
- Available to join within 15-30 days.
- Currently serving a 30-day notice period (if applicable).
- Must be flexible to work UK business hours.

Good to Have:
- Exposure to security runbooks, incident response protocols, and policy-as-code.
- Knowledge of cloud-native application platforms.

Required Skills: GCP, AWS development knowledge, scripting using Python/Go, Cloud Native
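To illustrate the custom-reporting work described above, here is a sketch that aggregates security findings by severity for a stakeholder summary. The records are mock data in an invented shape, not a real Wiz API response:

```python
# Illustrative severity rollup for a stakeholder report.
# Findings are mock data, not a real CNAPP API payload.

from collections import Counter

def severity_report(findings):
    """Count open findings per severity, highest severity first."""
    order = ["CRITICAL", "HIGH", "MEDIUM", "LOW"]
    counts = Counter(f["severity"] for f in findings if f["status"] == "OPEN")
    return {sev: counts.get(sev, 0) for sev in order}

findings = [
    {"id": "f1", "severity": "CRITICAL", "status": "OPEN"},
    {"id": "f2", "severity": "HIGH", "status": "RESOLVED"},
    {"id": "f3", "severity": "HIGH", "status": "OPEN"},
    {"id": "f4", "severity": "LOW", "status": "OPEN"},
]
print(severity_report(findings))
# {'CRITICAL': 1, 'HIGH': 1, 'MEDIUM': 0, 'LOW': 1}
```

In practice, the findings list would be paged out of the vendor's GraphQL API and the rollup sliced per cloud account or team before landing in a downstream reporting tool.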
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Join our Team

About this opportunity: Looking to be a part of reinventing the future of technology? We have the perfect opportunity for you. As a key member of our Application Development and 3rd Level Support team, you'll be at the helm of ensuring cost-effective design, apt technology evolution, and the reliability of our applications and tools. Your role will be pivotal in maintaining the performance of our applications from both a product and end-to-end perspective, along with ensuring sufficient capacity to fulfil the growing business requirements and projections of our customers.

What you will do:
- Take up various application development support activities.
- Provide support for executing complex changes.
- Aid in incident restoration and problem management support.
- Perform application, engineering, and IS/IT specification analysis and design.
- Develop detailed project plans for solution development.
- Prepare low-level installation, integration, and test plans.
- Ensure software configuration and quality management.
- Facilitate application lifecycle, release and deployment, and capacity and performance management.

The skills you bring:
- Good development experience on Nokia Mediation (NCS22/24).
- Good hands-on experience with Perl, C, and Java programming, and knowledge of Kubernetes, cloud, and virtualization.
- Experience designing high-traffic, business-critical solutions.
- Prior experience handling critical production emergencies.
- Openness to working 24x7 and providing technical support to the support team when required.
- Hands-on experience with UNIX, Linux, clustering, Oracle, MySQL, PostgreSQL, and Shell and Python scripting.
- IP networking and client-server concepts.
- Good understanding of 5G, CHF, voice, SMS, GSM/IN call flows, CAMEL, and HLD/LLD design.
- Good debugging and troubleshooting skills, config tuning, and understanding of thread dumps.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible.
To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click here to find out all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 769482
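As a rough sketch of the mediation-style processing this role touches on (collecting and aggregating usage events per subscriber before rating and billing), with an invented record layout rather than Nokia Mediation's actual format:

```python
# Illustrative mediation-style aggregation: per-subscriber usage totals.
# Record layout is an assumption for the sketch.

def aggregate_usage(events):
    """Sum data volume (MB) and count sessions per subscriber."""
    totals = {}
    for sub, mb in events:
        vol, sessions = totals.get(sub, (0.0, 0))
        totals[sub] = (vol + mb, sessions + 1)
    return totals

events = [("msisdn-1001", 12.5), ("msisdn-1002", 3.0), ("msisdn-1001", 7.5)]
print(aggregate_usage(events))
# {'msisdn-1001': (20.0, 2), 'msisdn-1002': (3.0, 1)}
```

Production mediation adds duplicate detection, correlation of partial records, and output formatting for downstream charging systems (e.g. CHF in 5G), but collect-correlate-aggregate is the backbone.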
Posted 3 weeks ago