
1311 YAML Jobs - Page 10

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description and Requirements
The Enterprise Infrastructure Engineer supports Enterprise Configuration and Integration Technologies for IBM WebSphere Application Server (WAS) and WebSphere Liberty, ensuring platform stability and process improvements. Responsibilities include planning, support, and implementation of application platform infrastructure, including operational processes and procedures.

Job Responsibilities
- WebSphere 8.5/9.x and WebSphere Liberty administration and operations
- Installation and configuration of WebSphere environments; WAS Liberty environment builds
- Provide support, maintenance, and guidance for current WAS and WAS Liberty platforms, and advise on new versions of software platforms
- Troubleshoot identified and reported issues; support work across WAS or other hosting platforms as necessary
- Deployment, administration, and operational support of production, staging, test, and development environments for multiple projects using WebSphere Application Server
- Installation, clustering, performance tuning, and troubleshooting of application servers
- Web infrastructure build and deployment; working knowledge of production web hosting environments is required
- UNIX and shell scripting
- Monitor middleware performance and assist with ensuring middleware security
- Provide technical support to application developers when required, including promoting best practices, ensuring standardization across applications, and troubleshooting
- Technical leadership, with the ability to think strategically and communicate solutions effectively to a variety of stakeholders
- Debug production issues by analyzing logs directly and using tools such as Splunk
- Learn new technologies based on demand and help team members by coaching and assisting
- Good communication skills, with the ability to communicate clearly and effectively

Knowledge, Skills and Abilities
Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience: 7+ years of total experience, with at least 4 years in WebSphere and Liberty applications.
- WebSphere 8.5/9.x and WebSphere Liberty administration and operations
- Installation and configuration of WebSphere environments
- Support, maintenance, and guidance for current WAS and WAS Liberty platforms; advice on new versions of software platforms
- Apache / IHS (IBM HTTP Server)
- Ansible, Python
- Linux/Windows OS
- Communication, shell scripting
- Azure DevOps (AZDO) pipelines
- Integration of authentication and authorization methods
- Web-to-JVM communications
- SSL/TLS protocols, cipher suites, certificates, and keystores
- Integration with middleware technologies
- WebSphere ND administration, Liberty administration, troubleshooting
- Continuous Integration / Continuous Delivery (CI/CD)
- Experience creating and working on ServiceNow tasks/tickets

Good to have: OpenShift, JSON/YAML, Azure DevOps Run/Code, Ansible (automation), integration with database technologies.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World’s 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world’s leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers.
With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we’re inspired to transform the next century in financial services. At MetLife, it’s #AllTogetherPossible . Join us!
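For context on the log-analysis duty listed above (debugging production issues by analyzing logs directly), here is a minimal Python sketch of the kind of SystemOut.log triage a WAS administrator might run before reaching for Splunk. The log path and the message-ID pattern are generic assumptions, not MetLife specifics.

```python
import re
from collections import Counter
from pathlib import Path

# Assumed default WebSphere log location; adjust for your cell/node/server layout.
LOG_FILE = Path("/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log")

# WAS/Liberty message IDs typically end in E (error) or W (warning), e.g. SRVE0777E.
MSG_ID = re.compile(r"\b([A-Z]{4,5}\d{4}[EW])\b")

def summarize_errors(log_file: Path) -> Counter:
    """Count error/warning message IDs so the noisiest failures surface first."""
    counts = Counter()
    with log_file.open(errors="replace") as fh:
        for line in fh:
            match = MSG_ID.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    for msg_id, count in summarize_errors(LOG_FILE).most_common(10):
        print(f"{msg_id}: {count}")
```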

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

- Desirable to have at least 5 years of hands-on experience in SFCC (Salesforce Commerce Cloud).
- Hands-on experience working on SiteGenesis- and/or SFRA-based storefronts.
- Deep knowledge of Demandware or equivalent JavaScript-based scripting languages.
- Thorough knowledge of the retail domain and the various downstream and upstream systems catered to.
- Hands-on experience with third-party integrations from SFCC.
- Good knowledge of headless implementation concepts in SFCC, exposing APIs using OCAPI and SCAPI.
- Good knowledge of API concepts and technologies such as REST, JSON, XML, YAML, GraphQL, and Swagger.
- Test-Driven Development experience, with hands-on experience using mocking frameworks.
- Experience with data modelling.
- Good knowledge of agile methodology and Scrum ceremonies.
- Understanding of branching/merge strategies, code repository tools such as Git/Bitbucket, code reviews, etc.
- Hands-on experience integrating with code quality tools.

Professional Skill Requirements
- Proven ability to work creatively and analytically in a problem-solving environment.
- Desire to work in an information systems environment.
- Strong written and verbal communication skills, with the ability to communicate effectively with business and technology partners, peers, and senior management.
- Excellent interpersonal skills and the ability to work with multiple stakeholders to drive success.
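As context for the OCAPI/SCAPI requirement above, here is a minimal Python sketch of a Shop API-style product lookup over REST/JSON. The host, site ID, API version, and client ID are placeholders, and the endpoint shape should be verified against the Salesforce B2C Commerce documentation for your instance.

```python
import requests

# Placeholder values: host, site ID, API version and client ID are illustrative only.
HOST = "example.demandware.net"
SITE_ID = "RefArchGlobal"
OCAPI_VERSION = "v23_2"
CLIENT_ID = "your-ocapi-client-id"

def get_product(product_id: str) -> dict:
    """Fetch a product via an OCAPI Shop API style endpoint and return the JSON body."""
    url = f"https://{HOST}/s/{SITE_ID}/dw/shop/{OCAPI_VERSION}/products/{product_id}"
    response = requests.get(
        url,
        params={"client_id": CLIENT_ID, "expand": "prices,images"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    product = get_product("example-product-id")
    print(product.get("name"), product.get("price"))
```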

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

- Desirable to have at least 5 years of hands-on experience in SFCC (Salesforce Commerce Cloud).
- Hands-on experience working on SiteGenesis- and/or SFRA-based storefronts.
- Deep knowledge of Demandware or equivalent JavaScript-based scripting languages.
- Thorough knowledge of the retail domain and the various downstream and upstream systems catered to.
- Hands-on experience with third-party integrations from SFCC.
- Good knowledge of headless implementation concepts in SFCC, exposing APIs using OCAPI and SCAPI.
- Good knowledge of API concepts and technologies such as REST, JSON, XML, YAML, GraphQL, and Swagger.
- Test-Driven Development experience, with hands-on experience using mocking frameworks.
- Experience with data modelling.
- Good knowledge of agile methodology and Scrum ceremonies.
- Understanding of branching/merge strategies, code repository tools such as Git/Bitbucket, code reviews, etc.
- Hands-on experience integrating with code quality tools.

Professional Skill Requirements
- Proven ability to work creatively and analytically in a problem-solving environment.
- Desire to work in an information systems environment.
- Strong written and verbal communication skills, with the ability to communicate effectively with business and technology partners, peers, and senior management.
- Excellent interpersonal skills and the ability to work with multiple stakeholders to drive success.

Posted 2 weeks ago

Apply

4.0 years

3 - 18 Lacs

Jubilee Hills, Hyderabad, Telangana

On-site

Job Description: We are seeking a talented and experienced Spring Boot Developer to join our dynamic team. As a Spring Boot Developer, you will be responsible for developing and maintaining our web applications, collaborating with cross-functional teams, and implementing user interfaces that delight our users.

Qualifications:
- 4+ years of hands-on experience designing and developing microservices using Java/Spring Boot
- Must have worked with: Java 8, Spring, Spring MVC, Spring Boot, microservices, Maven, SQL, Oracle, MongoDB, RESTful APIs, and data structures
- Experience with API concepts and technologies such as REST, JSON, XML, SOAP, YAML, GraphQL, and Swagger
- Experience with Node.js
- Experience developing within agile methodology using a CI/CD pipeline
- Experience with 3-tier, n-tier, cloud computing, microservices architectures, and SOA
- Familiarity with containerization and deployment (e.g., Docker, Kubernetes)
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills

Key Responsibilities:
1. Design and develop microservices/APIs using Java/Spring Boot.
2. Understand the points of integration between the different systems and highlight the potential risks associated with the delivery of solutions.
3. Produce detailed functional and technical specifications.
4. Create work breakdown structures, perform critical path analysis, and estimate effort.
5. Identify and address performance bottlenecks to ensure smooth application performance.
6. Write unit tests and participate in code reviews.
7. Collaborate with fellow developers, designers, product managers, and other stakeholders to deliver high-quality software.
8. Maintain clear and concise documentation for code, processes, and best practices.
9. Stay up to date with the latest developments in backend technologies and share knowledge with the team.

Nice to Have:
- Experience with front-end technologies such as Angular or React
- Knowledge of performance optimization techniques
- Experience in healthcare technology (pharmaceutical and life sciences domain preferred)

Education: B.E./B.Tech./M.Tech in Computer Science or MCA is preferred, but relevant experience and skills are also highly valued.

Job Types: Full-time, Permanent
Pay: ₹387,871.55 - ₹1,811,823.41 per year
Location Type: In-person
Schedule: Monday to Friday
Ability to commute/relocate: Jubilee Hills, Hyderabad, Telangana: reliably commute or plan to relocate before starting work (preferred)
Work Location: In person
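To illustrate the Swagger/OpenAPI familiarity the listing asks for, here is a small Python sketch that reads a running service's OpenAPI document and lists its endpoints. The base URL and the /v3/api-docs path (the springdoc-openapi default) are assumptions; adjust them for the actual application.

```python
import requests

# Assumed springdoc-openapi default path on a locally running Spring Boot service;
# adjust the base URL and docs path for your application.
BASE_URL = "http://localhost:8080"
DOCS_PATH = "/v3/api-docs"

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def list_endpoints() -> None:
    """Print each path/method pair exposed in the service's OpenAPI (Swagger) document."""
    spec = requests.get(BASE_URL + DOCS_PATH, timeout=5).json()
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method.lower() in HTTP_METHODS:
                print(f"{method.upper():7} {path}")

if __name__ == "__main__":
    list_endpoints()
```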

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert
Location: Mohali (On-Site)
Employment Type: Full-Time
Experience: 6+ years
Qualification: B.Tech CSE

About the Role
We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements.

Key Responsibilities

MLOps & Model Deployment
• Design, implement, and maintain end-to-end ML pipelines from model development to production deployment
• Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions
• Implement model versioning, experiment tracking, and model registry systems
• Monitor model performance, detect drift, and implement automated retraining pipelines
• Manage feature stores and data pipelines for real-time and batch inference
• Build scalable ML infrastructure for high-volume data processing and analytics

Enterprise Cloud Infrastructure & DevOps
• Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance
• Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi
• Design and maintain Kubernetes clusters for containerized ML workloads
• Build and optimize Docker containers for ML applications and microservices
• Implement comprehensive monitoring, logging, and alerting systems
• Manage secrets, security, and enterprise compliance requirements

Data Engineering & Real-time Processing
• Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools
• Implement real-time data processing and streaming architectures
• Design data storage solutions for structured and unstructured data at scale
• Implement data validation, quality checks, and lineage tracking
• Manage data security, privacy, and enterprise compliance requirements
• Optimize data processing for performance and cost efficiency

Enterprise Platform Operations
• Ensure high availability (99.9%+) and performance of enterprise-grade platforms
• Implement auto-scaling solutions for variable ML workloads
• Manage multi-tenant architecture and data isolation
• Optimize resource utilization and cost management across environments
• Implement disaster recovery and backup strategies
• Build 24x7 monitoring and alerting systems for mission-critical applications

Required Qualifications

Experience & Education
• 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems
• Bachelor’s/Master’s degree in Computer Science, Engineering, or a related technical field
• Proven experience with enterprise-grade platforms or large-scale SaaS applications
• Experience with high-compliance environments and enterprise security requirements
• Strong background in data-intensive applications and real-time processing systems

Technical Skills

Core MLOps Technologies
• ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost
• MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases
• Model Serving: TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing
• Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet

DevOps & Cloud Technologies
• Cloud Platforms: AWS, Azure, or GCP with relevant certifications
• Containerization: Docker, Kubernetes (CKA/CKAD preferred)
• CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI
• IaC: Terraform, CloudFormation, Pulumi, Ansible
• Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic

Programming & Scripting
• Python (advanced) - primary language for ML operations and automation
• Bash/Shell scripting for automation and system administration
• YAML/JSON for configuration management and APIs
• SQL for data operations and analytics
• Basic understanding of Go or Java (advantage)

Data Technologies
• Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi
• Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis
• Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse
• Data Warehousing: Snowflake, BigQuery, Redshift, Databricks
• Data Versioning: DVC, LakeFS, Pachyderm

Preferred Qualifications

Advanced Technical Skills
• Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001)
• High-scale Processing: Experience with petabyte-scale data processing and real-time analytics
• Performance Optimization: Advanced system optimization, distributed computing, caching strategies
• API Development: REST/GraphQL APIs, microservices architecture, API gateways

Enterprise & Domain Experience
• Previous experience with enterprise clients or B2B SaaS platforms
• Experience with compliance-heavy industries (finance, healthcare, government)
• Understanding of data privacy regulations (GDPR, SOX, HIPAA)
• Experience with multi-tenant enterprise architectures

Leadership & Collaboration
• Experience mentoring junior engineers and technical team leadership
• Strong collaboration with data science teams, product managers, and enterprise clients
• Experience with agile methodologies and enterprise project management
• Understanding of business metrics, SLAs, and enterprise ROI

Growth Opportunities
• Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure
• Technical Growth: Work with cutting-edge enterprise AI/ML technologies
• Leadership: Opportunity to build and lead the DevOps/Infrastructure team
• Industry Exposure: Work with Government & MNC enterprise clients and cutting-edge technology stacks

Success Metrics & KPIs

Technical KPIs
• System Uptime: Maintain 99.9%+ availability for enterprise clients
• Deployment Frequency: Enable daily deployments with zero downtime
• Performance: Ensure optimal response times and system performance
• Cost Optimization: Achieve 20-30% annual infrastructure cost reduction
• Security: Zero security incidents and full compliance adherence

Business Impact
• Time to Market: Reduce deployment cycles and improve development velocity
• Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores
• Team Productivity: Improve engineering team efficiency by 40%+
• Scalability: Support rapid client base growth without infrastructure constraints

Why Join Us
Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities.

How to Apply
Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com or anju@aaizeltech.com.
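For context on the experiment-tracking and model-registry responsibilities above, here is a minimal MLflow sketch in Python. The tracking URI, experiment name, and toy model are illustrative only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed shared tracking server; omit set_tracking_uri to fall back to the local ./mlruns store.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-classifier")

# Toy dataset and model purely for illustration.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # experiment tracking
    mlflow.log_metric("accuracy", accuracy)   # model performance metric
    mlflow.sklearn.log_model(model, "model")  # versioned artifact, ready for the model registry
```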

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are looking for an experienced Senior System Engineer specialized in Microsoft Azure to join our innovative team. In this role, you will be responsible for designing, implementing, and maintaining solutions on Azure with an emphasis on scalability, fault tolerance, high availability, and security. The ideal candidate will have 5 to 8 years of direct hands-on experience in Azure Cloud, along with proficiency in DevOps tools, scripting, and infrastructure automation. Responsibilities Design, deploy, and manage scalable, fault-tolerant, and secure Azure infrastructure and platforms Execute CI/CD pipelines with automated building and testing systems Oversee production deployments employing multiple deployment strategies Facilitate Azure infrastructure and platform deployments using IaC tools Automate system configurations utilizing configuration management tools Employ microservices concepts and best practices to facilitate application development Liaise requirements, schedules, and activities with development teams Troubleshoot and resolve technical problems, continuously enhancing system performance Undertake proof-of-concept (POC) studies to validate proposed designs and technologies Effectively learn and adapt to the services used in the current environment Requirements 5 to 8 years of Azure Cloud experience Hands-on experience with DevOps CI/CD tools Proficiency in Linux/Windows administration Advanced scripting capabilities in Python, Bash, Shell, or Unix Experience with YAML scripting and ARM templates Expertise in Terraform modules for infrastructure automation Solid understanding of containerization technologies including Docker and Kubernetes (including AKS) Knowledge of fault tolerance, high availability, and scalability in Azure environments Familiarity with microservices architecture and best practices Experience with configuration management tools B2+ level in English

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic way. The roles in this function cover all primary development activity across all technology functions, ensuring we deliver high-quality code for our applications, products and services, understand customer needs and develop product roadmaps. These roles include, but are not limited to, analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery. Every position has a requirement for building quality into every output. This also includes evaluating new tools, techniques and strategies; automating common tasks; building common utilities to drive organizational efficiency; and bringing a passion for technology and solutions along with thought leadership on future capabilities and opportunities to apply technology in new and innovative ways. Work is generally self-directed rather than prescribed.

Primary Responsibilities
- Manage Azure cloud infrastructure and build resilient, self-scaling systems
- Implement solutions to continuously improve the operational reliability of the cloud infrastructure
- Own the availability, performance, monitoring and infrastructure provisioning for the platform, which comprises cloud infrastructure and on-premises technologies
- Closely partner with Engineering and Technical Support teams to drive resolution of critical issues
- Publish and implement operational standards for all cloud infrastructure and services
- Reduce operations toil by automating repeatable tasks
- Mentor and develop other members in the SRE subject area
- Application deployments using CI/CD tools: code repository, code scanning, artifact repository, compliance scanning, packaging, deployment, and configuration management
- Build operations dashboards leveraging tools like Dynatrace, Splunk or Grafana
- Handle incident, change and problem management
- Help with provisioning of infrastructure using Terraform
- Enhance platform observability dashboards
- Closely partner with development teams and help address platform-related roadblocks
- Conduct post-mortems after production issues; react to production deficiencies by continuously implementing automation, self-healing, and real-time monitoring for production systems
- Work with Docker, Kubernetes, Azure cloud, Prometheus, Grafana, Java, Python and many other modern SaaS technologies
- Participate in projects involving people of many different disciplines: engineering, cloud, networking, CI/CD, project management, monitoring, alerting, etc.
- Stay informed of new technologies and innovate
- Work with less structured, more complex issues
- Serve as a resource to others
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or advanced degree in a related technical field
- 3+ years IT experience
- 3+ years DevOps experience
- 2+ years experience with Infrastructure as Code (Terraform/Ansible/Chef/Puppet)
- 2+ years experience with Docker and container orchestration (Kubernetes/OpenShift)
- 2+ years experience with DevOps and CI/CD tools such as Git, Jenkins
- 2+ years experience with Kafka support
- 2+ years experience with monitoring tools and technologies (Splunk, Dynatrace, New Relic)

Preferred Qualifications
- Infrastructure engineering experience
- Cloud experience (Azure/AWS/GCP)
- Automation experience
- Good knowledge of SRE principles
- Hands-on scripting with one or more of: YAML, JSON, PowerShell, Bash or Python

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
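As a small illustration of the Terraform and scripting requirements above, the following Python sketch wraps terraform plan and summarizes the planned actions from its JSON output. The module directory is hypothetical, and the snippet assumes the Terraform CLI is installed and already authenticated to the target cloud.

```python
import json
import subprocess
from pathlib import Path

def terraform_plan_summary(working_dir: str) -> dict:
    """Run terraform plan, then read the plan back as JSON and count planned actions."""
    workdir = Path(working_dir)
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    subprocess.run(
        ["terraform", "plan", "-input=false", "-out=tfplan"], cwd=workdir, check=True
    )
    show = subprocess.run(
        ["terraform", "show", "-json", "tfplan"],
        cwd=workdir, check=True, capture_output=True, text=True,
    )
    plan = json.loads(show.stdout)

    # Tally create/update/delete actions reported in the plan's resource_changes list.
    counts: dict = {}
    for change in plan.get("resource_changes", []):
        for action in change["change"]["actions"]:
            counts[action] = counts.get(action, 0) + 1
    return counts

if __name__ == "__main__":
    # Hypothetical module directory; point this at a real Terraform configuration.
    print(terraform_plan_summary("./infra/azure-landing-zone"))
```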

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Roles and Responsibilities
- Implement and maintain Continuous Integration and Continuous Deployment pipelines to facilitate seamless code integration and delivery.
- Ensure consistency across development, testing, and production environments.
- Set up and manage monitoring tools to ensure system reliability and performance; analyze logs for troubleshooting and performance tuning.
- Work closely with development, QA, and operations teams to ensure successful delivery and deployment of applications.
- Develop, maintain, and optimize shell scripts for automation of tasks and processes.
- Manage and execute deployment processes for Java-based applications, ensuring high availability and performance.
- Oversee the end-to-end release process, including planning, scheduling, and coordinating releases across multiple environments.
- Apply troubleshooting techniques and fix code bugs.
- Identify and deploy cyber-security measures by continuously performing vulnerability assessment and risk management.
- Incident management and root cause analysis.

Required Skills
- Minimum qualification: Bachelor's degree in Engineering or Computer Applications.
- 3+ years of experience as a DevOps Engineer or in a similar software engineering role.
- Must have experience working with cloud platforms such as AWS, Azure or GCP.
- Experience with instance setup, handling S3 buckets/blob storage, and RDS.
- Proficient in shell scripting (Bash, etc.) with a strong understanding of automation tools and frameworks.
- Proven experience in deploying and managing Java-based applications both manually and automatically; experience with manual deployment is a must.
- Solid understanding of release management processes and best practices.
- Hands-on experience with CI/CD tools (Jenkins, GitLab CI, etc.).
- Version control: proficiency with version control systems (Git, Bitbucket, etc.).
- Strong experience handling Nginx configuration files and using Nginx as a reverse proxy.
- Experience handling SSL certificates and DNS management.
- Strong experience with environment configuration and management.
- Experience handling MySQL/MariaDB queries.
- Good to have: knowledge of handling YAML property files.
- Understanding of containerization (Docker) and orchestration (Kubernetes, EKS) techniques.
- Strong analytical and troubleshooting skills.
- Excellent verbal and written communication skills.

Location: Chennai
Experience: 3 - 8 years
Notice period: Immediate joiner, less than 30 days
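For the Nginx reverse-proxy requirement above, a common guardrail is to validate the configuration before reloading. The Python sketch below shows that pattern; it assumes the nginx binary is on the PATH and that the script runs with sufficient privileges.

```python
import subprocess
import sys

def reload_nginx_if_valid() -> int:
    """Validate the Nginx configuration and reload the service only if the check passes."""
    check = subprocess.run(["nginx", "-t"], capture_output=True, text=True)
    if check.returncode != 0:
        print("Configuration test failed, not reloading:", file=sys.stderr)
        print(check.stderr, file=sys.stderr)
        return check.returncode

    # "nginx -s reload" signals the master process for a zero-downtime reload;
    # on systemd hosts, "systemctl reload nginx" is the common alternative.
    reload_proc = subprocess.run(["nginx", "-s", "reload"])
    return reload_proc.returncode

if __name__ == "__main__":
    sys.exit(reload_nginx_if_valid())
```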

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview Job Title: DevOps Engineer - VP Location: Pune, India Role Description This role is for Senior DevOps Engineer responsible for building tools and automation of various DevOps tools and framework. The candidate should have sound understanding on various Linux platforms including Infrastructure knowledge. Should have expert knowledge is various scripting tools, databases and middleware systems. The candidate should have excellent skills in automation tasks and the ability to setup CI-CD pipelines independently. The candidate should have sound knowledge in build-deployment tools and scripting knowledge. The candidate should have worked hands-on in containerization, virtualization and cloud computing (preferably GCP). Should investigate and resolve technical issues and perform root cause analysis for production and non-production issues. The candidate should have excellent innovation and troubleshooting skills. The candidate is expected to work closely with Team Leads or Software Development Managers and other key stake holders to ensure good quality, maintainable, scalable and high performing software applications are delivered to users. Should be coming from a strong technological background. Should be hands on and be able to work independently requiring minimal technical/tool guidance. Should support day-to-day tasks related to DevOps and platform support. Should be able to technically guide and mentor junior resources in the team. Work closely with team members and vendor team to design and develop software. Should have good communication, nice attitude towards learning new skills and strong positive outlook towards work ethics. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy, Best in class leave policy. Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. 
and above Your Key Responsibilities Hands-on in build, deployment and CI-CD tools and setup of automated pipelines Knowledge of Linux platforms and Cloud computing Very good knowledge in scripting tools such as Shell, Perl/Python Excellent knowledge in Kubernetes, Docker and other containerization platforms Should be able to design, build and automate tools and utilities Ability to build, maintain, manage, monitor environments for delivery Experience in Agile and SDLC processes Work closely with technology teams and key stakeholders Should be able to independently troubleshoot with excellent analytical and design skills Participate in daily stand-up meetings Articulate issues and risks to management in timely manner Should handle weekend production implementation Train other team members to bring them up to speed Analyze, monitor environments and infrastructure for platform stability Good to have knowledge on big-data platforms such as Hadoop Administration Your Skills And Experience Hands-on exposure in majority (if not all) of skills mentioned below: Extensive experience with hands-on skills on most of the DevOps tools/framework below: Version Control System tools - Bitbucket, SVN Continuous Integration tools - TeamCity, Jenkins, Bamboo Continuous Testing tools - Selenium, Cucumber Code Quality and Analysis - SonarQube Configuration Management tools – Ansible, Puppet/Chef Deployment automation tools – uDeploy (IBM UCD) Continuous Monitoring tools – Nagios, Splunk, Geneos Containerization tools - Docker, OpenShift, Kubernetes Build tools - Maven, Gradle, Ant Artifactory Mgmt – Artifactory, Nexus Databases - Oracle, PostgreSQL, MongoDB, HBase, Hive Middleware systems - JBoss, Tomcat, Apache HTTP Server, WebLogic, JMS (MQ/Solace/Tibco) Operating Systems - RHEL, SLES, OEL Cloud Providers – Google Cloud Platform / Azure / AWS (preferably GCP) Schedulers – Control-M, Cron, Quartz/Autosys (Control-M is preferred) Big data platform – Hadoop, Cloudera (good to have) Scripting – Shell/Python, YAML/Groovy Experience in working on Bigdata platform - Hadoop and Cloudera Administration is preferred. Sound working experience of scripting and tooling. Must have hands-on skills to design, build and maintain CI-CD setup to support delivery. Good knowledge in troubleshooting environment and platform related issues. Decent knowledge on Java and PL/SQL basics, Oracle database and SQL commands. Should have hands-on knowledge in deploying upgrades and fixes. Should have handled production environments for deployments/maintenance or administration. Should have Linux server administration experience or a deep understanding of Linux/Unix. Should have worked on big-scale platform migrations of complex integrated systems. Should have expertise in disaster recovery, load-balancers and high-availability cluster concepts. Should have basic understanding on networking and security/encryption principles. Should have performed standard configuration management tasks on various DevOps tools. Should be able to analyze current technology utilized and develop steps and processes to improve the infrastructure estate. Should have worked on ITIL tasks ServiceNow for Release and Change Management. How We’ll Support You Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. 
About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Vadodara, Gujarat

On-site

As a Senior Software Engineer (Java Developer) at our organization, you will play a crucial role in designing, developing, and deploying high-performance Java-based microservices. Your expertise in Core Java, Spring Boot, and Microservices Architecture will be essential in implementing REST APIs following OpenAPI/Swagger standards. Your responsibilities will focus on ensuring the quality, automation, testing, performance optimization, and monitoring of our systems. In terms of design and development, you will be required to adhere to API-first and Cloud-native design principles while driving the adoption of automated unit tests, integration tests, and contract tests. Your role will involve developing and extending automation frameworks for API and integration-level testing, as well as supporting BDD/TDD practices across development teams. Furthermore, you will contribute to performance tuning, scalability, asynchronous processing, and fault tolerance aspects of the system. Your collaboration with DevOps, Product Owners, and QA teams will be crucial for feature delivery. Additionally, mentoring junior developers, conducting code walkthroughs, and leading design discussions will be part of your responsibilities. The ideal candidate should have at least 5 years of hands-on Java development experience and a deep understanding of Microservices design patterns, API Gateways, and service discovery. Exposure to Cloud deployment models like AWS ECS/EKS, Azure AKS, or GCP GKE is preferred. Proficiency with Git, Jenkins, SonarQube, and containerization (Docker/Kubernetes), along with experience working in Agile/Scrum teams, is highly desired. Experience with API security standards (OAuth2, JWT), event-driven architecture using Kafka or RabbitMQ, Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and performance testing tools like JMeter or Gatling would be considered a plus. Your ownership-driven mindset, strong communication skills, and ability to solve technical problems under tight deadlines will be valuable assets in this role. It is essential for every individual working with or on behalf of our organization to prioritize information security. This includes abiding by security policies, ensuring confidentiality and integrity of information, reporting any security violations, breaches, and completing mandatory security trainings as per company guidelines. If you are a passionate and skilled Senior Software Engineer with expertise in Java development and a desire to contribute to scalable backend systems, we encourage you to apply for this role and join our dynamic team.,

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The job is located in Chennai, Tamil Nadu, India with the company Hitachi Energy India Development Centre (IDC). As part of the Engineering & Science profession, the job is full-time and not remote. The primary focus of the India Development Centre is on research and development, with around 500 R&D engineers, specialists, and experts dedicated to creating and sustaining digital solutions, new products, and technology. The centre collaborates with Hitachi Energy's R&D and Research centres across more than 15 locations in 12 countries. The mission of Hitachi Energy is to advance the world's energy system to be more sustainable, flexible, and secure while considering social, environmental, and economic aspects. The company has a strong global presence with installations in over 140 countries. As a potential candidate for this role, your responsibilities include: - Meeting milestones and deadlines while staying on scope - Providing suggestions for improvements and being open to new ideas - Collaborating with a diverse team across different time zones - Enhancing processes for continuous integration, deployment, testing, and release management - Ensuring the highest standards of security - Developing, maintaining, and supporting Azure infrastructure and system software components - Providing guidance to developers on building solutions using Azure technologies - Owning the overall architecture in Azure - Ensuring application performance, uptime, and scalability - Leading CI/CD processes design and implementation - Defining best practices for application deployment and infrastructure maintenance - Monitoring and reporting on compute/storage costs - Managing deployment of a .NET microservices based solution - Upholding Hitachi Energy's core values of safety and integrity Your background should ideally include: - 3+ years of experience in Azure DevOps, CI/CD, configuration management, and test automation - 2+ years of experience in various Azure technologies such as IAC, ARM, YAML, Azure PaaS, Azure Active Directory, Kubernetes, and Application Insight - Proficiency in Bash scripting - Hands-on experience with Azure components and services - Building and maintaining large-scale SaaS solutions - Familiarity with SQL, PostgreSQL, NoSQL, Redis databases - Expertise in infrastructure as code automation and monitoring - Understanding of security concepts and best practices - Experience with deployment tools like Helm charts and docker-compose - Proficiency in at least one programming language (e.g., Python, C#) - Experience with system management in Linux environment - Knowledge of logging & visualization tools like ELK stack, Prometheus, Grafana - Experience in Azure Data Factory, WAF, streaming data, big data/analytics Proficiency in spoken and written English is essential for this role. If you have a disability and require accommodations during the job application process, you can request reasonable accommodations through Hitachi Energy's website by completing a general inquiry form. This assistance is specifically for individuals with disabilities needing accessibility support during the application process.,

Posted 2 weeks ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Noida

Work from Office

Advanced troubleshooting, change management, and automation; Linux; YAML/Helm/Kustomize; maintaining Operators and upgrading OpenShift clusters; working with CI/CD pipelines and DevOps teams; maintaining logging, monitoring, and alerting tools (Prometheus, EFK, Grafana).
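As an illustration of the cluster-maintenance skills listed above, here is a short Python sketch using the official Kubernetes client to flag deployments whose replicas are not fully available. It assumes a kubeconfig with access to the cluster (OpenShift exposes the same APIs); the namespace and output format are illustrative.

```python
from kubernetes import client, config

def report_rollouts(namespace: str = "default") -> None:
    """List deployments in a namespace and flag any whose replicas are not fully available."""
    config.load_kube_config()  # uses the current kubeconfig context (works against OpenShift too)
    apps = client.AppsV1Api()
    for deployment in apps.list_namespaced_deployment(namespace).items:
        desired = deployment.spec.replicas or 0
        available = deployment.status.available_replicas or 0
        state = "OK" if available >= desired else "DEGRADED"
        print(f"{state:9} {deployment.metadata.name}: {available}/{desired} available")

if __name__ == "__main__":
    report_rollouts("default")
```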

Posted 2 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Smarsh is the leading provider of archiving & compliance solutions for companies in regulated and litigious industries. The solutions are delivered using Smarsh product suite that process, control, manage and store a very large variety of electronic communication channels (from e.g. social networks, group chat, instant messaging, email, blogs, wikis, SMS/MMS, Voice etc.) at cloud scale About the team : We are seeking a talented Engineer to join our team, focusing on developing scalable integrations, APIs, and open-source solutions that contribute to our Internal Developer Portal (IDP) ecosystem. As a key team member, you will collaborate with cross-functional teams to design, implement, and maintain APIs and data pipelines that enable seamless data flow into our IDP. If you are passionate about clean code, open-source contributions, and building developer-centric tools, we want to hear from you Key Responsibilities API Development : Design, develop, and maintain robust APIs to push data into the IDP. Ensure high performance, scalability, and security in API implementations. Collaborate with teams to integrate APIs with existing systems. Integration Development : Build and maintain open-source integrations for third-party tools (e.g., monitoring systems, CI/CD pipelines, container registries). Write reusable, testable, and efficient Python code to bridge systems with the IDP. Data Processing and Transformation : Develop data pipelines to process, transform, and push data into the IDP. Implement error handling and logging mechanisms to ensure reliability. Design systems for data parsing and transformation , including robust handling of YAML, JSON , and other serialisation formats to normalise inputs from disparate sources. Open-Source Contribution : Contribute to open-source projects that enhance the IDP ecosystem. Actively participate in the developer community by publishing and maintaining open-source tools. Collaboration and Communication : Work closely with DevOps, Platform Engineering, and Security teams to understand data requirements. Document APIs, integrations, and workflows for internal and external stakeholders. Code Quality and Testing : Write unit and integration tests to ensure code reliability. Perform code reviews and enforce best practices in Python development. Required Experience/Skills Education : Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience) with 4 - 6 years of total experience. Technical Expertise : Proficiency in Python with a focus on building scalable applications. Experience with API frameworks such as FastAPI , Django Rest Framework , or Flask . Knowledge of data serialization formats (e.g., JSON , YAML ). Knowledge of event-driven architecture. Knowledge of queuing system like Kafka, RabbitMQ and SQS. Knowledge of Role-Based Access Control (RBAC) and least-privilege principles to secure all IDP interactions. Integration Experience : Experience building integrations with third-party tools like Jenkins , GitLab , Prometheus , or AWS . Familiarity with APIs for monitoring tools, container registries, and CI/CD systems. DevOps and Cloud : Understanding of Kubernetes , Docker , and cloud platforms (AWS, GCP, Azure). Familiarity with GitOps practices and tools like ArgoCD . Data Processing : Experience with data pipelines and ETL workflows. Knowledge of PostgreSQL , MongoDB , or other relational/non-relational databases. Design systems for data parsing and transformation , including robust handling of YAML and JSON . 
Open Source : Proven experience contributing to or maintaining open-source projects. Familiarity with Git and GitHub workflows. Soft Skills : Strong communication skills and the ability to work in a collaborative environment. Analytical mindset with attention to detail and problem-solving skills. Preferred Qualifications Familiarity with Port or other Internal Developer Portal (IDP) tools. Experience with security practices, including API authentication and data encryption. Understanding of AWS, Kubernetes and DevOps practices. Knowledge of DORA metrics and CI/CD pipeline observability. Exposure to Infrastructure-as-Code tools (e.g., Terraform, Pulumi). Familiarity with testing frameworks like pytest or unittest Smarsh hires lifelong learners with a passion for innovating with purpose, humility and humor. Collaboration is at the heart of everything we do. We work closely with the most popular communications platforms and the world’s leading cloud infrastructure platforms. We use the latest in AI/ML technology to help our customers break new ground at scale. We are a global organization that values diversity, and we believe that providing opportunities for everyone to be their authentic self is key to our success. Smarsh leadership, culture, and commitment to developing our people have all garnered Comparably.com Best Places to Work Awards. Come join us and find out what the best work of your career looks like.
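To make the API-ingestion responsibilities above concrete, here is a minimal FastAPI sketch that accepts both JSON and raw YAML service records and normalises them into a stand-in catalog. The entity fields, routes, and in-memory store are hypothetical, not Smarsh's actual IDP schema.

```python
import yaml
from fastapi import FastAPI, HTTPException, Request
from pydantic import BaseModel

app = FastAPI(title="IDP ingestion API (illustrative)")

class ServiceEntity(BaseModel):
    name: str
    owner: str
    tier: str = "unknown"

# In-memory stand-in for the real catalog backend.
CATALOG: dict[str, ServiceEntity] = {}

@app.post("/entities")
def upsert_entity(entity: ServiceEntity) -> dict:
    """Push a normalised service record into the (stand-in) IDP catalog."""
    CATALOG[entity.name] = entity
    return {"status": "ok", "count": len(CATALOG)}

@app.post("/entities/yaml")
async def upsert_entity_from_yaml(request: Request) -> dict:
    """Accept a raw YAML document, normalise it, and reuse the JSON ingestion path."""
    try:
        data = yaml.safe_load(await request.body())
    except yaml.YAMLError as exc:
        raise HTTPException(status_code=400, detail=f"Invalid YAML: {exc}")
    return upsert_entity(ServiceEntity(**data))
```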

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Network Automation & Orchestration: Develop low-level automation using Ansible, Terraform, Python, and Puppet to configure routers, switches, firewalls (Cisco, Arista, Palo Alto), and enforce policy-driven changes across hybrid environments. Core Network Engineering: Hands-on expertise in Layer 2/3 protocols (MPLS, BGP, OSPF, VRRP, STP, VLANs) and physical/logical infrastructure design for large-scale, multi-region enterprise networks. Infrastructure as Code (IaC): Implement version-controlled, repeatable network provisioning using Terraform modules, YAML-based configs, and Git workflows to manage complete network lifecycles. Cloud Network Provisioning: Automate and manage cloud-native networking (VPCs, subnets, route tables, VPNs, Direct Connect, Transit Gateway, firewalls, NACLs, Security Groups) in AWS, Azure, and GCP. CI/CD for Network Ops: Integrate network changes into CI/CD pipelines (Jenkins, GitLab CI) for test-driven deployments with linting, validation, rollback, and minimal operational downtime. Internet & Perimeter Security: Configure and manage internet-edge security layers (firewall policies, DDoS protection, IPS/IDS, web proxies) and ensure secure ingress/egress traffic flows. Monitoring, Telemetry & Auto-Remediation: Deploy NMS/telemetry tools (SolarWinds, Prometheus, InfluxDB) and write custom scripts for real-time alerting, anomaly detection, and event-driven remediation. Troubleshooting & Packet Analysis: Perform low-level debugging using packet captures (Wireshark/tcpdump), flow telemetry, syslog, and routing table state analysis to resolve incidents in multi-vendor networks. Cross-Functional Collaboration & Documentation: Collaborate with CloudOps, DevOps, Security, and platform teams to align network architecture with application needs. Maintain up-to-date HLD/LLD, runbooks, topology maps, and compliance records. Innovation & Optimization: Evaluate SDN, SASE, Zero Trust, and AI-based networking solutions to continuously improve agility, reliability, and performance of the network ecosystem.
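As a small example of the IaC-style, YAML-driven provisioning described above, the Python sketch below renders per-device CLI snippets from a YAML inventory with Jinja2. The inventory fields and the template are illustrative, not a vendor schema.

```python
import yaml
from jinja2 import Template

# Hypothetical YAML inventory; field names are illustrative only.
INVENTORY = """
devices:
  - hostname: edge-rtr-01
    loopback: 10.255.0.1
    bgp_asn: 65001
  - hostname: edge-rtr-02
    loopback: 10.255.0.2
    bgp_asn: 65001
"""

TEMPLATE = Template(
    "hostname {{ hostname }}\n"
    "interface Loopback0\n"
    " ip address {{ loopback }} 255.255.255.255\n"
    "router bgp {{ bgp_asn }}\n"
    " router-id {{ loopback }}\n"
)

def render_configs() -> dict:
    """Render a per-device CLI snippet from the YAML inventory (version-controllable, repeatable)."""
    inventory = yaml.safe_load(INVENTORY)
    return {d["hostname"]: TEMPLATE.render(**d) for d in inventory["devices"]}

if __name__ == "__main__":
    for host, conf in render_configs().items():
        print(f"### {host}\n{conf}")
```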

Posted 2 weeks ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Pune

Work from Office

Job Summary: We are seeking an experienced DevOps Engineer to join our team, responsible for ensuring the stability and reliability of production applications hosted in the Azure environment. The ideal candidate will have 3 to 5 years of experience in troubleshooting and investigating production issues based on SLAs, with strong knowledge of Azure Functions, Web APIs, and related technologies. A proactive mindset in issue investigation and resolution is crucial. This role involves shift-based work in a 24/7 production support environment.

Key Responsibilities:
- Able to work in shifts.
- Deploy packages to existing environments, i.e., UAT, QA and Production.
- Create and maintain CI/CD pipelines in Azure DevOps.
- Create and maintain Bicep files for deploying Azure resources declaratively.
- Create/update the baseline for production.

Key Skills & Qualifications:
- Bachelor's degree in computer science or computer engineering; Azure Certified DevOps Engineer.
- Experience in application management or Azure operations support.
- Experience in application management or infrastructure operations support.
- GitLab, YAML or equivalent experience.
- Integration of CI/CD tools.
- Excellent communication (verbal and written) skills.
- Extensive knowledge of and experience in major incident management, problem management and change management.
- Ability to lead, influence and coordinate resources to achieve results.
- Autonomous and self-motivated.
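For the Azure DevOps YAML pipeline work above, a lightweight sanity check of pipeline files is a common helper. The Python sketch below loads a pipeline definition with PyYAML and reports missing top-level sections; the required-key list is an assumption about one team's conventions, not an Azure DevOps rule.

```python
import sys
import yaml

# Assumption: the minimal top-level shape our pipelines are expected to have.
REQUIRED_TOP_LEVEL = {"trigger", "pool", "steps"}

def check_pipeline(path: str) -> list:
    """Load an Azure DevOps YAML pipeline and report missing top-level sections."""
    with open(path) as fh:
        pipeline = yaml.safe_load(fh) or {}
    if not isinstance(pipeline, dict):
        return ["pipeline file is not a YAML mapping"]
    # Pipelines may use stages/jobs instead of bare steps, so treat those as satisfying 'steps'.
    present = set(pipeline)
    if {"stages", "jobs"} & present:
        present.add("steps")
    return sorted(REQUIRED_TOP_LEVEL - present)

if __name__ == "__main__":
    problems = check_pipeline(sys.argv[1] if len(sys.argv) > 1 else "azure-pipelines.yml")
    if problems:
        print("Missing sections:", ", ".join(problems))
        sys.exit(1)
    print("Pipeline structure looks OK")
```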

Posted 2 weeks ago

Apply

8.0 - 13.0 years

20 - 30 Lacs

Chennai

Hybrid

Role & responsibilities We are looking for an experienced Azure DevOps Lead to design, implement, and optimize CI/CD pipelines, manage cloud deployments, and oversee Azure Kubernetes infrastructure. The ideal candidate will have strong hands-on experience in Azure Cloud, Azure DevOps, Infrastructure as Code (IaC), and monitoring tools like Dynatrace . Key Responsibilities: 1. DevOps & CI/CD Pipeline Management: Azure DevOps (ADO): Design, implement, and optimize CI/CD pipelines using YAML/YML scripts . Manage release strategies, version control, and deployment automation . Troubleshoot CI/CD pipeline issues and ensure optimal performance. 2. Cloud Infrastructure Management (Azure): Azure Cloud Services: Hands-on experience with Azure Web Apps, Function Apps, Storage Accounts, Azure SQL, and Virtual Machines (VMs) . Expertise in Azure Networking (DNS Zones, Load Balancers, Application Gateway, API Management (APIM)) . Azure Kubernetes Service (AKS): Deploy, manage, and scale applications on Azure Kubernetes . Implement containerized deployments with Docker & Helm Charts . 3. Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, Bicep, or ARM templates . Ensure secure and scalable infrastructure deployment with best practices. 4. Monitoring & Performance Optimization: Configure Dynatrace and Azure Application Insights for real-time monitoring. Set up alerting, logging, and performance optimization for Azure workloads. Troubleshoot cloud and infrastructure issues proactively. 5. Security & Governance: Implement RBAC, Network Security Groups (NSGs), firewall rules, and compliance standards . Ensure DevSecOps practices are integrated into CI/CD pipelines. 6. Cloud Migration & Modernization: Lead cloud migration initiatives from on-premise to Azure . Optimize legacy applications for cloud-native architecture . 7. Leadership & Collaboration: Mentor and guide DevOps engineers in best practices and automation techniques . Work closely with development, QA, and infrastructure teams to ensure seamless deployments. Drive innovation and improvements in the DevOps process
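As an illustration of pipeline automation for the role above, the following Python sketch queues a YAML pipeline run through the Azure DevOps Pipelines REST API ("Runs - Run Pipeline"). The organisation, project, pipeline ID and branch are placeholders, and the api-version should be verified against the current REST API documentation.

```python
import os
import requests

# Organisation, project and pipeline ID are placeholders; the PAT comes from the environment.
ORG = "my-org"
PROJECT = "my-project"
PIPELINE_ID = 42
PAT = os.environ["AZURE_DEVOPS_PAT"]

def queue_pipeline_run(branch: str = "refs/heads/main") -> dict:
    """Queue a YAML pipeline run through the Azure DevOps Pipelines REST API."""
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
        f"{PIPELINE_ID}/runs?api-version=7.1"  # verify the api-version for your organisation
    )
    body = {"resources": {"repositories": {"self": {"refName": branch}}}}
    response = requests.post(url, json=body, auth=("", PAT), timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    run = queue_pipeline_run()
    print("Queued run", run.get("id"), "state:", run.get("state"))
```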

Posted 2 weeks ago

Apply

4.0 - 8.0 years

7 - 9 Lacs

Hyderābād

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism SAP Management Level Senior Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support. *Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. 
To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations Responsibilities: Azure Landing Zone using IaC Azure (Compute, Storage, Networking, BCP, Identity, Security, Automation): good grasp on at least 4/7 would be good to proceed with Terraform (State management knowledge is a must, modules, provisioners, built-in functions, deployment through DevOps tools) Containerization (Docker, K8S/AKS): either of them with questions covering identity, network, security, monitoring, backup along with core concepts and K8S architecture DevOps (ADO, Jenkins, GitHub): include questions on yaml based pipelines, approval gates, credential management, stage-job-steps-task hierarchy, job/task orchestration, agent pools Migrations: Knowledge on migrating planning and assessment would be ideal Experience with different caching architectures Knowledge of security compliance frameworks, such as SOC II, PCI, HIPPA, ISO27001 Knowledge of well-known open source tools for monitoring, trending and configuration management Mandatory skill sets: · Azure Infra Design · CI CD pipeline · Azure Migration · Terraform Preferred skill sets: · Azure Infra Design · CI CD pipeline · Azure Migration · Terraform Years of experience required: 4 to 8 Years Education qualification: BE/B.Tech/MBA/MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Java, Spring Boot Optional Skills Apache Kafka Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 2 weeks ago

Apply

10.0 - 12.0 years

5 - 10 Lacs

Noida

On-site

Senior Manager EXL/SM/1419365 Global TechnologyNoida Posted On 16 Jul 2025 End Date 30 Aug 2025 Required Experience 10 - 12 Years Basic Section Number Of Positions 1 Band C2 Band Name Senior Manager Cost Code G070601 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 2500000.0000 - 3000000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Enabling Sub Group Global Technology Organization Global Technology LOB Global Technology SBU Technology Operations Country India City Noida Center Noida - Centre 59 Skills Skill RISK & COMPLIANCE AUDITS CLOUD SECURITY AI TOOLS CYBER SECURITY ENDPOINT SECURITY Minimum Qualification BCA Certification No data available Job Description Key Responsibilities Assess, design, implement, and govern enterprise-wide cybersecurity and technology risk frameworks , including NIST, Zero Trust Architecture, MITRE ATT&CK , and other global standards. Build and deploy AI/ML and Generative AI-based solutions to automate cyber risk detection, response, control validation, and reporting processes. Utilize Prompt Engineering and Large Language Models (LLMs) such as GPT (OpenAI), Gemini (Google), LLaMA (Meta), Claude (Anthropic), etc., to solve real-world cybersecurity challenges. Apply code, low-code, and no-code approaches for automating and modernizing risk controls and compliance processes. Leverage advanced technologies including Next-Gen SIEM, SOAR, CNAPP, ZTNA, passwordless authentication , EDR/XDR, DLP, Microsegmentation, and multi-cloud native security services . Lead the design and implementation of AI-powered observability platforms to drive real-time telemetry, threat detection, behavioral analytics, and performance insights across infrastructure, applications, and security domains. Familiarity with platforms like Datadog, Dynatrace, New Relic, Splunk, Azure Monitor, Elastic, OpenTelemetry, and Grafana is expected. Collaborate across cross-functional teams to deliver secure-by-design outcomes for digital transformation and modernization programs. Frontend Internal / External audits and First Line Compliance control assurance and ensure key Risks are Self-Identified. Required Skills & Experience 10–12 years of experience in cybersecurity, technology risk, and compliance , with proven delivery in AI-infused environments . Hands-on expertise in Generative AI , ML , LLMs , vector databases , and related toolchains (e.g., LangChain, OpenAI APIs, HuggingFace, Pinecone, Weaviate). Experience with observability, AIOps, and telemetry pipelines using tools like Datadog, Prometheus, Loki, Fluentd, and Elastic Stack . Strong scripting and automation experience (e.g., Python, PowerShell, Bash, YAML) and proficiency in low-code/no-code platforms (e.g., Power Automate, ServiceNow, UiPath). Deep understanding of cloud-native security , DevSecOps , and risk automation across AWS, Azure, and GCP environments. Strong communication, stakeholder engagement, and analytical problem-solving abilities. Preferred Certifications CISSP, CISM, CRISC, CCSP, or equivalent cybersecurity and risk credentials. Certifications in AI/ML , cloud platforms (AWS, Azure, GCP) , are a plus. Mindset & Culture Fit Passion for innovation, automation, and continuous learning in cybersecurity and AI. Ability to collaborate across technology, operations, compliance, and business teams to build future-ready solutions. Self-starter with a bias toward action and measurable impact. 
Workflow Type: L&S-DA-Consulting

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Java + Angular Developer
Experience: 5+ years
Location: Pune

Mandatory Skills

Angular:
Angular 9+
Angular Material, PrimeNG
SonarQube
HTML5, CSS, TypeScript
AG Grid, Tailwind CSS

Java:
Strong development knowledge in Spring Boot microservices
Experience in Domain-Driven Design (DDD), Behavior-Driven Development (BDD), and Test-Driven Development (TDD)
Knowledge of OAuth token-based authentication

DevOps & Cloud:
Unix shell scripting
Deployment experience in the cloud (preferably PCF)
Strong scripting and YAML configuration knowledge (see the sketch below)
Continuous Integration (CI) setup and enhancements
Pipeline setup and integration with other applications and infrastructure

Additional Responsibilities:
Improve monitoring and alert systems
Troubleshoot issues and solve problems
Willingness to own and resolve problems, including POC work
Provide technical guidance in project execution
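Given the emphasis on YAML configuration and pipeline setup, the following is a small illustrative Python check, not tied to any particular CI system, that verifies a pipeline YAML file parses and contains a few expected top-level keys. The default file name pipeline.yml and the required keys are assumptions made for the example.

# Illustrative sketch only: a minimal structural check for a CI pipeline YAML.
# The default file name and the required keys are assumptions for the example,
# not the conventions of any particular CI system.
import sys

import yaml  # PyYAML; install with `pip install pyyaml`

REQUIRED_TOP_LEVEL_KEYS = ("stages", "build", "deploy")


def validate_pipeline(path):
    """Return True if the YAML file parses and has the expected top-level keys."""
    with open(path, "r", encoding="utf-8") as handle:
        config = yaml.safe_load(handle) or {}
    missing = [key for key in REQUIRED_TOP_LEVEL_KEYS if key not in config]
    if missing:
        print(f"{path}: missing top-level keys: {', '.join(missing)}")
        return False
    print(f"{path}: looks structurally complete")
    return True


if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "pipeline.yml"
    sys.exit(0 if validate_pipeline(target) else 1)

A check like this can run as an early CI step, failing fast on malformed configuration before the pipeline itself is exercised.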

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Ghaziabad

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML); see the sketch below.
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).
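The "Advantageous to have" list names the common data representation formats (XML, JSON, YAML). As a rough illustration of moving between them, the Python sketch below renders one made-up service record in all three formats; the record fields and XML element names are invented for the example.

# Illustrative sketch only: the same made-up record rendered as JSON, YAML,
# and XML, the three data-representation formats named in the listing.
import json
import xml.etree.ElementTree as ET

import yaml  # PyYAML; install with `pip install pyyaml`

record = {"service": "edge-proxy", "port": 443, "tls": True}

# JSON
print(json.dumps(record, indent=2))

# YAML
print(yaml.safe_dump(record, sort_keys=False))

# XML (attribute and element names chosen arbitrarily for the example)
root = ET.Element("service", name=record["service"])
ET.SubElement(root, "port").text = str(record["port"])
ET.SubElement(root, "tls").text = str(record["tls"]).lower()
print(ET.tostring(root, encoding="unicode"))

Support engineers routinely do this kind of translation when comparing configuration files and API payloads across systems.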

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Greater Noida

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML).
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Faridabad

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML).
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Gurugram

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML).
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Mandya

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML).
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).

Posted 2 weeks ago

Apply

1.0 - 6.0 years

7 - 11 Lacs

Chittoor

Work from Office

We are seeking a Distributed Cloud Support Engineer-I who is hardworking and committed to customer success. You are comfortable in both Support and Engineering environments, translating technical documentation and conversations into clear, concise directions for customers and partners. You are passionate about helping our customers tackle and solve problems. You will provide support via phone, email, messaging, and web portal. Support requests range in complexity from "how to" questions through involved debugging and forensic efforts when prioritizing operational issues. Visualizing problems remotely is key to success in this role, along with excellent analytic and troubleshooting skills. You will have significant career growth opportunities within a fast-paced SaaS company.

What will you do?
Fix reported issues and advocate for the customer.
Collaborate with sales and engineering teams to support resellers, service providers, and enterprise customers, as well as end users, via telephone, e-mail, Slack, and the web portal.
Reproduce issues and concisely document the solutions provided through technical notes, case studies, and knowledge base articles.
Manage critical issues and provide customer assurance when handling reported issues.
Coordinate fixes with Engineering or Developers when required and relay appropriate information to our partners and customers.
Recommend improvements to supportability, reliability, availability, and serviceability based on lessons learned through issue resolution.

How do you qualify?
1 or more years of experience working in an operations environment.
Background in customer service/support and IT, networking, or IT security incident management.
Experience driving efficiency, handling growth, and delivering results.
Good understanding of IT, network, or IT SOC best practices and a real passion for continuous improvement.
Strong organizational skills and the ability to work well with contacts in various business subject areas.
Conversationally and technically fluent in English, both verbally and in writing.

Advantageous to have:
Strong understanding of networking and Layer 7 protocols.
Familiarity with container technologies (Docker and Kubernetes).
Knowledge of data representation types (XML, JSON, YAML).
Public cloud experience with Amazon Web Services (AWS), Google Cloud Platform (GCP), and/or Microsoft Azure is preferred.
Security product/solution experience (firewalls, WAFs, DDoS mitigation) is preferred.
Strong troubleshooting skills, both independent and collaborative.
Approachable disposition and steadfast in delivering.
Ability to prioritize and multitask when leading sophisticated technical issues.
Proven understanding of routing and switching technologies.
Ability to read different scripting and automation languages (Python, Shell, and Ansible).
Bachelor's degree in a technology-related field or equivalent practical experience.

Physical Demands and Work Environment
This role requires availability outside normal business hours to align with distributed global teams or to respond to critical security events. Some travel may be required (less than 10%).

Posted 2 weeks ago

Apply