
578 YAML Jobs - Page 16

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

10.0 - 20.0 years

0 - 2 Lacs

Gurugram

Hybrid

Source: Naukri

Role & responsibilities - vendors and protocols:
Networking: Cisco, Arista, Aruba, Silver Peak (SD-WAN)
Firewall and SaaS: Palo Alto, Prisma Access
Load balancers and WAFs: F5 BIG-IP, Cloudflare, A10 Networks (optional)
DDoS: Cloudflare and Radware
Network observability: cPacket, Viavi, Wireshark, ThousandEyes, Grafana, Elasticsearch, Telegraf, Logstash
Clouds: AWS, Azure
Wireless: Cisco and Juniper Mist
Networking protocols: BGP, MP-BGP, OSPF, multicast, MLAG, vPC, MSTP, Rapid-PVST+, LACP, mutual route redistribution, VXLAN, EVPN
Programming and automation: Python, JSON, Jinja, Ansible, YAML
Preferred candidate profile: 10+ years of experience.
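This role pairs Ansible, Jinja, and YAML for network automation. As a rough illustration of that combination (not taken from the posting), a minimal Ansible playbook might look like the following; the inventory group, NTP address, and use of the cisco.ios collection are assumptions:

# playbook: push an NTP server to a group of Cisco IOS devices (illustrative only)
- name: Configure NTP on campus switches
  hosts: cisco_switches              # assumed inventory group
  gather_facts: false
  connection: ansible.netcommon.network_cli
  vars:
    ntp_server: 192.0.2.10           # example value only
  tasks:
    - name: Ensure NTP server is configured
      cisco.ios.ios_config:
        lines:
          - "ntp server {{ ntp_server }}"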

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role: Automation Test Engineer (QA)
Experience: 3+ years
Job Type: Full Time / Permanent
Work Location: Hyderabad - Madhapur (on-site)
Work Mode: Work from office
Mode of Interview: Virtual + in-person (3 rounds)
Notice Period: Max 15 days
Shift: UK shift (cab facility + perks)
Note: C# with Selenium is a must

Responsibilities:
• Performing manual and automated testing using various tools and frameworks
• Developing and maintaining test scripts and test data for the regression suite, plus test reports and dashboards
• Identifying, analysing, and reporting software defects, issues, and risks, and contributing to solutions for issues/errors
• Collaborating with cross-functional teams (software developers, product owners, business analysts, functional consultants, project managers, and other stakeholders) to ensure quality throughout the software development lifecycle
• Applying quality engineering principles and methodologies, such as agile and lean
• Providing quality training and coaching to other team members
• Monitoring and analysis: tracking system metrics such as response times and resource usage during tests, and identifying bottlenecks and areas for improvement
• Contributing to the entire software development lifecycle, from concept to deployment
• Being involved in all Agile ceremonies
• Fostering a culture of continuous learning and improvement

Work experience & functional/technical skills required:
• 3-4+ years of experience in software quality engineering or testing
• Solid knowledge of software testing concepts, techniques, and tools
• Understanding of one or more programming languages, such as C#
• Scripting knowledge: Java, JavaScript, Python, etc.
• Familiarity with software development methodologies such as agile, DevOps, Lean Canvas, and Kanban
• Experience with automation testing of APIs
• Experience with JMeter or LoadRunner
• Experience using Cypress for UI automation testing
• Understanding of common data formats (JSON, XML, and YAML)
• Knowledge of database technologies (SQL)
• Strong analytical and problem-solving skills
• Good communication and interpersonal skills
• High attention to detail and accuracy

Nice to have:
• A bachelor's degree in computer science, software engineering, or a related field
• A certification in software testing or quality engineering, such as ISTQB
• Awareness of mobile device testing (Appium) and device testing
• Automation testing using Playwright and Java

Key competencies/core capabilities required for the role: analytical, detail-oriented, customer-focused, professional, adaptable, proactive, with strong communication skills

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Looking for immediate joiners. Experience: 5 to 7 years.

Job Description - major accountabilities for this role:
• Responsible for gathering requirements, conducting infrastructure analysis, and producing robust designs that adhere to approved CLIENT technologies, ensuring the design meets all of CLIENT's technology and security policies.
• Responsible for meeting with peers, engineers, application team(s), and user(s) to confirm that all high-level requirements have been met.
• Responsible for producing deliverables such as Technical Architecture Documents, technical memos, logical flows, and models to keep the users, architects, and engineers up to date and in agreement on the architecture and infrastructure layout of an application, system, or platform.
• Responsible for ensuring that all architectural products, and products with architectural input, are kept current and never allowed to become obsolete.
• Responsible for implementing, managing, and supporting application infrastructure components while leveraging current standards and best practices. In charge of resolving internet architectural and operational problems impacting infrastructure and product availability and performance globally.
• Leads and collaborates on the definition of new standards.
• Responsible for researching and evaluating new technology for possible deployment in CLIENT's internet infrastructure.
• May assume the lead and full accountability for ongoing regional projects as assigned, including responsibility for planning, time and cost control, resource utilization, and implementation.
• May contribute to incident/problem diagnosis and root cause analysis.
• May provide input into performance tuning, capacity planning, and configuration management for CLIENT components.

All about you:
• Advanced knowledge of web application architecture and project delivery
• Advanced knowledge of how the Java Virtual Machine works
• Advanced knowledge of load balancers and web application firewalls
• Advanced knowledge of network and operating system principles
• Extensive knowledge of middleware technologies (web servers, application servers, queue managers, messaging, caching)
• Working knowledge of database technologies
• Working knowledge of cloud technologies
• Solid knowledge of techniques and methodologies for achieving non-functional requirements such as reliability, availability, resilience, performance, and security
• Working knowledge of role-based access control and authentication and authorization mechanisms
• Proficient in root cause analysis and troubleshooting
• Knowledge of Zero Trust Architecture principles and network micro-segmentation
• Experience working with YAML-based deployments

Top 3 skills:
Infrastructure/application architecture
Knowledge of middleware products
Understanding of network solutions and micro-segmentation
Application security background with F5 load balancers, network micro-segmentation, VMS

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Description and Requirements
The resource provides engineering and infrastructure support for .NET-based applications, primarily using IIS and Apache web servers to securely serve web pages, develops automation scripts for repetitive tasks using Ansible, and creates server-management scripts in PowerShell.

Job responsibilities:
Manage, maintain, and migrate IIS and Apache web servers in MetLife infrastructure
Troubleshoot and resolve issues in the Windows server environment
Handle migrations: contact application partners, understand requirements, and build infrastructure
Plan application interconnections, database connectivity, and website domain creation
Install/update required software or drivers on servers
Develop and maintain automation scripts to improve efficiency and reduce human error
Develop and maintain AzDo (Azure DevOps) pipelines using YAML; troubleshoot pipeline failures and mitigate issues
Proficiency in scripting languages such as PowerShell; basic Linux server administration knowledge
Use ServiceNow for change management, incident management, catalog task management, and problem management
Learn new technologies based on demand
Excellent interpersonal skills with the ability to coordinate cross-functionally
Willing to work in rotational shifts
Good communication skills, with the ability to communicate clearly and effectively

Knowledge, Skills and Abilities
Education: Bachelor's degree in computer science, information systems, or a related field
Experience:
7+ years of total experience, with at least 4+ years maintaining and migrating IIS and Apache web servers
Resolving issues in the Windows server environment
Working with application partners to understand requirements and build infrastructure
Installing and updating required software or drivers on servers
Knowledge of Azure Pipelines using YAML
Windows administration, Windows debugging, application debugging, IIS, .NET, share management, batch jobs, certificate management, PowerShell, Active Directory, SSIS, SSRS, SMTP, Elastic, Azure DevOps
Experience creating change tickets and working on tasks in ServiceNow
Good to have: SSAS, Azure Pipelines, Elastic, Ansible
Other Requirements (licenses, certifications, specialized training - if required):

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
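The role asks for AzDo (Azure DevOps) pipelines written in YAML. A minimal sketch of such a pipeline definition is shown below; the trigger branch, agent image, and inline PowerShell step are illustrative assumptions, not details from the posting:

# azure-pipelines.yml (illustrative sketch)
trigger:
  branches:
    include:
      - main
pool:
  vmImage: windows-latest
steps:
  - task: PowerShell@2
    displayName: Run server health check script
    inputs:
      targetType: inline
      script: |
        Write-Host "Checking IIS service status..."
        Get-Service -Name W3SVC | Select-Object Status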

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Description and Requirements
Position Summary: This position is responsible for the design and implementation of application platform solutions, with an initial focus on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet / BAW, WebSphere Application Server (WAS), and technologies from OpenText. While gaining and providing expertise on these key business platforms, the Engineer will identify opportunities for automation and cloud enablement across other technologies within the Platform Engineering portfolio and develop cross-functional expertise.

Job responsibilities:
Provide design and technical support to application developers and operations support staff when required, including promoting the use of best practices, ensuring standardization across applications, and troubleshooting
Design and implement complex integration solutions through collaboration with engineers and application teams across the global enterprise
Promote and utilize automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell
Collaborate with senior engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap
Investigate, recommend, implement, and maintain ECM solutions across multiple technologies
Investigate released fix packs and provide well-documented instructions and script automation to operations for implementation, in collaboration with senior engineers, in support of platform currency
Perform capacity reviews of the current platform
Participate in cross-departmental efforts
Lead initiatives within the community of practice
Willing to work in rotational shifts
Good communication skills, with the ability to communicate clearly and effectively

Knowledge, Skills and Abilities
Education: Bachelor's degree in computer science, information systems, or a related field
Experience:
7+ years of total experience, with at least 4+ years in the design and implementation of application platform solutions on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet / BAW and WebSphere Application Server (WAS)
Promoting and utilizing automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell
Apache / HIS, Linux/Windows OS, communication, JSON/YAML, shell scripting, integration of authentication and authorization methods, web-to-JVM communications, SSL/TLS protocols/cipher suites and certificates/keystores, FileNet/BAW install, configure and administer, Liberty administration, troubleshooting, integration with database technologies, integration with middleware technologies
Good to have: Ansible, Python, OpenShift, AzDO Pipelines
Other Requirements (licenses, certifications, specialized training - if required):

Working relationships:
Internal contacts (and purpose of relationship): MetLife internal partners
External contacts (and purpose of relationship), if applicable: MetLife external partners

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Description and Requirements
Position Summary: This position is responsible for the design and implementation of application platform solutions, with an initial focus on Customer Communication Management (CCM) platforms such as enterprise search and document generation/workflow products, including Quadient, xPression, Documaker, WebSphere Application Server (WAS), and technologies from OpenText. While gaining and providing expertise on these key business platforms, the Engineer will identify opportunities for automation and cloud enablement across other technologies within the Platform Engineering portfolio and develop cross-functional expertise.

Job responsibilities:
Provide design and technical support to application developers and operations support staff when required, including promoting the use of best practices, ensuring standardization across applications, and troubleshooting
Design and implement complex integration solutions through collaboration with engineers and application teams across the global enterprise
Promote and utilize automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell
Collaborate with senior engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap
Investigate, recommend, implement, and maintain CCM solutions across multiple technologies
Investigate released fix packs and provide well-documented instructions and script automation to operations for implementation, in collaboration with senior engineers, in support of platform currency
Perform capacity reviews of the current platform
Participate in cross-departmental efforts
Lead initiatives within the community of practice
Willing to work in rotational shifts
Good communication skills, with the ability to communicate clearly and effectively

Education: Bachelor's degree in computer science, information systems, or a related field
Experience:
7+ years of total experience in designing, developing, testing, and deploying n-tier applications built on Java, Python, WebSphere Application Server, Liberty, Apache Tomcat, etc.
At least 4+ years of experience on Customer Communication Management (CCM) and document generation platforms such as Quadient, xPression, and Documaker
Linux/Windows OS, Apache / HIS, IBM WebSphere Application Server, Liberty, Quadient, xPression, Ansible, shell scripting (Linux, PowerShell), JSON/YAML, Ping, SiteMinder, monitoring & observability (Elastic, AppD, Kibana), troubleshooting, log & performance analysis, OpenShift

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 weeks ago

Apply

1.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

Source: Naukri

Project Role: Technology OpS Support Practitioner
Project Role Description: Own the integrity and governance of systems, including best practices for delivering services. Develop, deploy, and support infrastructures, applications, and technology initiatives from an architectural and operational perspective in conjunction with existing standards and methods of delivery.
Must-have skills: Storage Area Network (SAN) architecture and design
Good-to-have skills: NetApp Storage Area Network (SAN) administration
Minimum 5 year(s) of experience is required. Educational qualification: 15 years of full-time education.

Project Role: Integration Engineer
Project Role Description: Provide consultative business and system integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, and functional/application designs, and ensure technology and business solutions represent business requirements.
Must-have skills:
File: ONTAP / Isilon (must have one file technology)
Block: PowerFlex, SolidFire (rare to find), VMAX, 3PAR, Brocade, Cisco (must have one block technology)
Object: StorageGRID (rare to find), storage fabric

Job requirements:
File storage engineering product experience (e.g., Dell Isilon, NetApp ONTAP, VAST, Lustre)
Datacenter stack experience (storage, compute, networking)
Linux/Unix and Windows operating systems, including NAS protocols CIFS/SMB and NFS
Proven experience in automating manual tasks via code (e.g., Python) or scripts (e.g., Bash, PowerShell)
Experience with programming languages such as Python; also JSON, YAML, etc.
REST API consumption via code or scripts
Ability to lead others and provide subject matter expertise in one or more areas
Excellent presentation skills; work with external vendors on new and existing products
Experience with large enterprise infrastructure design
Knowledge of data storage technologies from NetApp, Dell, or similar companies
Software and systems security

Key responsibilities:
Support-role L2 and L3 tasks
Closing incident tickets and interacting with customers and vendors
Facilitating migrations (file products) and ensuring runbooks are in place

Educational qualification: minimum bachelor's degree; relevant vendor/technology certifications preferred. Qualification: 15 years of full-time education.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

14 - 18 Lacs

Hyderabad

Work from Office

Source: Naukri

Project description: Our client is a global technology change and delivery organization comprising nearly 500 individuals located in Switzerland, Poland, Singapore, and India, providing global records and document processing, archiving, and retrieval solutions to all business divisions, with a focus on supporting Legal, Regulatory, and Operational functions.

Responsibilities:
Design, implement, and manage data solutions on Azure
Develop and maintain data pipelines using Databricks
Ensure efficient data storage and retrieval using Azure Storage and Data Lake
Automate infrastructure and application deployments with Ansible
Write clean, maintainable code in C# core and SQL; optimize code/applications for best performance
Use and promote state-of-the-art technologies, tools, and engineering practices
Collaborate with team members using Git and GitLab for version control and CI/CD
Share and contribute: support and guide less senior team members, contribute to team spirit and dynamic growth, and actively participate in wider engineering group and product-wide activities

Skills (must have):
10+ years of software development experience in building and shipping production-grade software
Degree in Computer Science, Information Technology, or a related field
Proficient in deploying and managing services on Microsoft Azure
Understanding of Azure Storage concepts and best practices
Understanding of Microsoft Fabric concepts and best practices
Experience in designing, implementing, and managing data lakes and Databricks on Azure
Experience using Ansible for automation and configuration management, with proficiency in YAML
Strong programming skills in C# core, SQL, MVC core/Blazor, JavaScript, HTML, and CSS
Proficient in version control using Git, with experience using GitLab for CI/CD pipelines
Strong cross-discipline and cross-group collaboration skills
Passion for delivering a high-quality, delightful user experience
Strong problem-solving, debugging, and troubleshooting skills
Ability to ramp up quickly on new technologies and adopt solutions from within the company or from the open-source community

Nice to have: Experience with an Agile framework
Other languages: English (C2, Proficient)
Seniority: Senior

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Overview
Job Purpose: The Manager, Systems Operations role, specific to the Automation group, is responsible for leading and managing a team that supports and maintains automation tools and solutions. The position also has accountability for the overall operations of various application platforms at ICE as well as internal workload automation tools. It helps set direction and enforce standards, implement solutions, develop best practices, and guide team members on automation efforts. You will collaborate closely with the Operations and SRE teams to implement automation changes, troubleshoot issues, and drive continuous improvement projects. Your expertise will help ensure the stability and performance of our infrastructure, enabling our customers to manage risk and make informed decisions globally.

Responsibilities:
Team leadership: Lead a team of engineers and analysts, providing guidance, mentoring, and performance management; foster a culture of continuous improvement
Process improvement: Develop and refine processes that streamline workflows, reduce bottlenecks, and increase overall velocity
Training and support: Provide training and documentation to other team members on the effective use of existing toolsets and best practices; act as the primary point of contact for staff issues
Collaboration and communication: Work closely with partner teams to understand their pain points and automation goals, and help make those teams more efficient
Continuous improvement: Participate in or lead continuous improvement projects driven by automation
On-call rotation: Participate in an on-call rotation as needed
Additional duties: Perform any other activities as directed by management

Knowledge and experience:
Experience: 5+ years as a people manager or in a team lead role with delegation duties in a technical environment
Strategic thinking: Demonstrated ability to think strategically about business and product goals as well as technical challenges and staff workloads
Software development: Prior experience with software development, infrastructure development, or operations
Scripting languages: Strong proficiency in scripting languages such as Bash, Python, and/or PowerShell
Relational databases: Experience with relational databases
Server administration: Strong proficiency with Linux and Windows Server administration
Automation frameworks: Experience architecting automation frameworks, with proficiency in tools such as Jenkins, Chef, Puppet, Ansible, or similar
Agile methods: Experience with Agile methods (Scrum/Kanban) for organizing project deliverables and tracking progress (Jira)
Version control: Experience with Git and/or code repository services (Bitbucket, GitHub, etc.)
Cloud services: Experience with open-source technologies and cloud services (AWS/Azure)
Monitoring tools: Experience with monitoring and alerting tools (Splunk, Nagios, BigPanda, PagerDuty)
Infrastructure as code: Experience with infrastructure as code (Terraform, CloudFormation)
Container technology: Knowledge of and exposure to container technology and orchestration is a plus
API interaction: Experience interacting with REST APIs (GET/POST requests), webhooks, and API client tools (Postman)
Problem-solving: Excellent problem-solving and troubleshooting skills
Documentation: Process-oriented, with great documentation skills (Confluence)
Data structures: Experience with data structures/formats such as XML, JSON, YAML, and HCL
Business continuity: Experience with automation of business continuity/disaster recovery

Preferred:
Production operations: Experience managing production operations, monitoring, alerting, notifications, etc.
Coding experience: Moderate experience coding in any combination of Perl, Ruby, Bash, PowerShell, Java, and others
Scheduling tools: Experience with Rundeck and/or Cisco Tidal Enterprise Scheduler
Monitoring tools: Experience with BigPanda and PagerDuty
AIOps: Experience with AIOps
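The listing names YAML alongside XML, JSON, and HCL as working data formats. Purely for illustration, the same small record is shown as YAML, with its JSON equivalent noted in a comment; the field names are invented for the example:

# Illustrative only: an alert-routing record expressed as YAML. The equivalent JSON would be
# {"service": "order-api", "severity": "critical", "notify": ["oncall-team", "pagerduty"]}
service: order-api
severity: critical
notify:
  - oncall-team
  - pagerduty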

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Description: We are seeking a DevOps Engineer with a minimum of 5 years of experience who is proficient in Kubernetes, Docker, and other essential DevOps tools. The ideal candidate will have strong expertise in YAML and Infrastructure as Code (IaC) using Terraform, and a solid understanding of the Azure and AWS cloud platforms.

Key responsibilities:
Design, implement, and manage CI/CD pipelines to automate deployment processes
Maintain and optimize Kubernetes clusters and Docker containers
Develop and manage infrastructure as code using Terraform
Collaborate with development teams to ensure seamless integration and deployment of applications
Monitor and troubleshoot system performance, ensuring high availability and reliability
Implement security best practices and ensure compliance with industry standards
Manage cloud infrastructure on Azure and AWS, including resource provisioning, scaling, and monitoring
Write and maintain clear, concise documentation for infrastructure and processes

Required skills:
Proficiency in Kubernetes and Docker
Strong experience with YAML
Expertise in Infrastructure as Code (IaC) using Terraform
Solid knowledge of Azure DevOps and AWS
Experience with CI/CD tools such as Jenkins, GitLab CI, or Azure Pipelines
Familiarity with scripting languages (e.g., Bash, Python)
Strong understanding of networking concepts and security best practices
Excellent problem-solving skills and the ability to work independently
Strong communication and collaboration skills

Preferred qualifications:
Certification in Azure or AWS (e.g., Azure DevOps Engineer Expert, AWS Certified DevOps Engineer)
Experience with monitoring tools such as Prometheus, Grafana, or the ELK stack
Knowledge of configuration management tools such as Ansible or Chef
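Since the role centers on Kubernetes and YAML, a minimal Deployment manifest of the kind such work typically involves is sketched below; the image, replica count, and labels are placeholders rather than details from the posting:

# deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080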

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role: Automation Test Engineer (QA)
Experience: 3+ years
Job Type: Full Time / Permanent
Work Location: Hyderabad - Madhapur (on-site)
Work Mode: Work from office
Mode of Interview: Virtual + in-person (3 rounds)
Notice Period: Max 15 days
Shift: UK shift (cab facility + perks)

Responsibilities:
• Performing manual and automated testing using various tools and frameworks
• Developing and maintaining test scripts and test data for the regression suite, plus test reports and dashboards
• Identifying, analysing, and reporting software defects, issues, and risks, and contributing to solutions for issues/errors
• Collaborating with cross-functional teams (software developers, product owners, business analysts, functional consultants, project managers, and other stakeholders) to ensure quality throughout the software development lifecycle
• Applying quality engineering principles and methodologies, such as agile and lean
• Providing quality training and coaching to other team members
• Monitoring and analysis: tracking system metrics such as response times and resource usage during tests, and identifying bottlenecks and areas for improvement
• Contributing to the entire software development lifecycle, from concept to deployment
• Being involved in all Agile ceremonies
• Fostering a culture of continuous learning and improvement

Work experience & functional/technical skills required:
• 3-4+ years of experience in software quality engineering or testing
• Solid knowledge of software testing concepts, techniques, and tools
• Understanding of one or more programming languages, such as C#
• Scripting knowledge: Java, JavaScript, Python, etc.
• Familiarity with software development methodologies such as agile, DevOps, Lean Canvas, and Kanban
• Experience with automation testing of APIs
• Experience with JMeter or LoadRunner
• Experience using Cypress for UI automation testing
• Understanding of common data formats (JSON, XML, and YAML)
• Knowledge of database technologies (SQL)
• Strong analytical and problem-solving skills
• Good communication and interpersonal skills
• High attention to detail and accuracy

Nice to have:
• A bachelor's degree in computer science, software engineering, or a related field
• A certification in software testing or quality engineering, such as ISTQB
• Awareness of mobile device testing (Appium) and device testing
• Automation testing using Playwright and Java

Key competencies/core capabilities required for the role: analytical, detail-oriented, customer-focused, professional, adaptable, proactive, with strong communication skills

Posted 2 weeks ago

Apply

3.0 years

7 - 8 Lacs

Goa

On-site

Source: Glassdoor

Responsibilities:

Development and deployment:
Programming proficiency: Strong knowledge of Node.js and JavaScript for application development; familiarity with at least one additional language such as Python or Golang is preferred
Application lifecycle management: Develop, test, and deploy applications across multiple platforms while ensuring cross-platform compatibility
Team collaboration: Work closely with development teams to ensure applications are optimized for performance, scalability, and reliability

Server setup and management:
Linux expertise: Proficient in setting up, managing, and maintaining servers on Linux-based systems
Cloud infrastructure: Basic understanding of cloud platforms and infrastructure management, including common tools and best practices

Containerization and orchestration:
Docker expertise: Skilled in creating, deploying, and managing Docker containers efficiently; proficient in using Docker CLI commands and authoring Dockerfiles for custom container images; knowledgeable about Docker networking, port mapping, and volume mapping for data persistence
Kubernetes: Familiar with Kubernetes concepts, including Pods, Services, and Deployments; able to deploy and manage containerized applications in Kubernetes clusters; basic understanding of Helm charts and YAML configuration for Kubernetes resources

Security implementation:
Security best practices: Apply fundamental security principles to enforce secure configurations for servers, applications, and containerized environments
Regular assessments: Conduct regular security checks and assessments to identify and mitigate potential vulnerabilities

Database management:
Database design and maintenance: Skilled in designing, implementing, and maintaining databases to support diverse application needs

Networking:
Core networking concepts: Understand and troubleshoot networking issues related to applications and containerized systems
Collaboration: Work with networking teams to ensure seamless integration and optimized performance of applications

Server architectures:
Architectural design: Assist in designing and implementing robust, scalable, and efficient server architectures
Pattern awareness: Familiarity with modern server architecture patterns, including microservices and serverless computing

Requirements: Work from office; experience: 3 years
Job types: Full-time, permanent
Pay: ₹700,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, health insurance, internet reimbursement, paid sick time, Provident Fund
Schedule: Day shift, Monday to Friday, morning shift
Supplemental pay: Performance bonus, quarterly bonus, yearly bonus
Application questions: City; notice period (in days); is your notice period negotiable?; current CTC (in LPA); experience (in years)
Work location: In person
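The posting highlights Docker networking, port mapping, and volume mapping. A small docker-compose file in YAML illustrates those ideas; the service name, image, and paths are assumptions made for the example:

# docker-compose.yml (illustrative sketch)
services:
  api:
    image: node:20-alpine            # placeholder base image
    working_dir: /app
    volumes:
      - ./src:/app                   # volume mapping for local code
    ports:
      - "3000:3000"                  # host:container port mapping
    command: ["node", "server.js"]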

Posted 2 weeks ago

Apply

2.0 years

0 - 0 Lacs

Cochin

On-site

Source: Glassdoor

SOC ENGINEER (ENGINEER R&D / DEV)

We are looking for a candidate with experience as a DevOps engineer in creating systems software and analyzing data to improve existing systems or drive new innovation, along with developing and maintaining scalable applications and monitoring, troubleshooting, and resolving issues, including deployments in multiple environments. The candidate must be well versed in computer systems and network functions, should be able to work diligently and accurately, and should have strong problem-solving ability in order to fix issues and ensure the client's business functionality.

Requirements:
ELK development experience
Dev or DevOps experience on AWS cloud, containers, and serverless code
Development stack of Wazuh and ELK; implement DevOps best practices
Toolset knowledge required for parser/use-case development and plugin customisation: regex, Python, YAML, XML
Hands-on experience in DevOps
Experience with Linux and monitoring/logging tools such as Splunk; strong scripting skills
Researching and designing new software systems, websites, programs, and applications
Writing and implementing clean, scalable code
Troubleshooting and debugging code
Verifying and deploying software systems
Evaluating user feedback
Recommending and executing program improvements
Maintaining software code and security systems
Knowledge of cloud systems (AWS, Azure)
Excellent communication skills

Good to have:
SOC or security domain experience is desirable
Knowledge of Docker, machine learning, big data, data analysis, and web scraping
Resourcefulness and problem-solving aptitude
Good understanding of SIEM solutions like ELK, Splunk, ArcSight, etc.
Understanding of cloud platforms like Amazon AWS, Microsoft Azure, and Google Cloud
Experience in managing firewall/UTM solutions from Sophos, FortiGate, Palo Alto, Cisco Firepower
Professional certification (e.g., Linux Foundation Certified System Administrator, CompTIA Linux+, RHCSA - Red Hat Certified System Administrator)

Qualification: 2-3 years of experience in product/DevOps/SecOps/development

Skills: Experience in software design and development using API infrastructure; profound knowledge of various scripting languages and of system and server administration; exceptional organizing and time-management skills; very good communication abilities; ELK, Wazuh, Splunk, ArcSight SIEM management skills; reporting

Job types: Full-time, permanent
Pay: ₹25,000.00 - ₹66,000.00 per month
Benefits: Internet reimbursement
Schedule: Day shift
Supplemental pay: Performance bonus
Application questions: Do you have experience in SIEM tools, scripting, backend, or frontend development?
Experience: minimum 1 year (required)
Language: English (required)
Location: Kochi, Kerala (required)
Work location: In person
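The role mentions YAML in the context of parser and log-shipping configuration for an ELK/Wazuh stack. As a rough sketch of that kind of file, a minimal Filebeat input configuration might look like the following; the log path and Elasticsearch host are placeholders, not details from the posting:

# filebeat.yml (illustrative sketch)
filebeat.inputs:
  - type: filestream
    id: auth-logs
    paths:
      - /var/log/auth.log                             # placeholder log path
output.elasticsearch:
  hosts: ["https://elastic.example.local:9200"]       # placeholder host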

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Source: Glassdoor

Our Company: At Teradata, we believe that people thrive when empowered with better information. That's why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers, and our customers' customers, to make better, more confident decisions. The world's top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise.

What You'll Do: As a Senior DevOps Engineer in our Cloud Governance team within Product Engineering, you will play a crucial role in ensuring secure, compliant, and cost-effective cloud operations across the organization. Your responsibilities will include:
Defining and enforcing governance policies for cloud account management, access controls, tagging, and resource usage
Monitoring and optimizing cloud spend by identifying cost-saving opportunities and implementing tools for tracking budgets and usage trends
Building scalable, automated workflows for provisioning, compliance, and operations to reduce manual overhead and enhance efficiency
Developing self-service capabilities that empower engineering teams to work within secure, approved guardrails
Supporting audits and ensuring cloud environments align with internal security and compliance requirements
Acting as a central point of coordination across engineering, security, and compliance teams to align cloud practices with strategic goals

Who You'll Work With: You will collaborate closely with a variety of stakeholders across the organization, including:
Engineering teams, to provide tools and guidance for compliant cloud usage and to perform cost tracking, budgeting, and cloud spend reporting
Information Security and Compliance teams, to enforce access controls and ensure alignment with corporate standards and regulatory requirements
Product Engineering leadership, to drive governance strategy in line with broader organizational goals
You will report to the Senior Manager, Software Engineering.

What Makes You a Qualified Candidate:
5+ years of experience in DevOps, cloud infrastructure, or cloud operations
Graduate or postgraduate degree in Computer Science or Electronics, with knowledge of database concepts and SQL
Strong expertise in managing and governing public cloud environments (AWS, Azure, GCP)
Hands-on experience with Infrastructure as Code tools such as YAML, Terraform, or similar
Proficiency in scripting languages such as Python, PowerShell, Groovy, Shell, or similar
Experience in building CI/CD pipelines and embedding governance into DevOps workflows
Good knowledge of cloud identity and access management (IAM, RBAC)
Hands-on experience with Docker and StackStorm in cloud infrastructure automation and container orchestration
Passion for automation, security, and efficiency in cloud operations
Certifications in AWS, Azure, or GCP
Familiarity with FinOps and enterprise compliance frameworks
Ability to learn new technologies and tools quickly and to leverage that knowledge for results analysis and problem solving
Strong communication and stakeholder management skills

What You'll Bring:
Proactive leadership with ownership: a commitment to driving continuous improvement and operational excellence through strategic decision-making, clear communication, and collaborative problem-solving
Expertise in cloud financial management: a deep understanding of cloud cost optimization and financial management, with a focus on techniques such as right-sizing, resource allocation, and cost tracking, bringing a data-driven approach to managing cloud spend and implementing best practices that drive efficiency without compromising performance
Collaboration with cross-functional teams: the ability to work across engineering, security, and business teams to align governance practices with organizational goals, ensuring seamless cloud adoption and usage
Knowledge of software engineering practices and metrics
Ability to impact and influence without authority

Posted 2 weeks ago

Apply

0 years

4 - 6 Lacs

Hyderābād

On-site

Source: Glassdoor

This position is responsible for supporting a variety of DevOps and operational activities across the worldwide product development department. It works with a variety of technologies and tools used in the development of our hardware and software products. The Principal DevOps engineer will have the opportunity to learn and build tools and processes that support mission-critical infrastructure. This position reports directly to the Manager, DevOps. The DevOps team colleagues for this position are distributed globally, with several team members located in Irvine, CA.

Required skills - responsibilities include:
Support software build and release efforts: create, set up, and maintain build servers; review build results and resolve build problems; plan, manage, and control product releases; validate, archive, and escrow product releases
Maintain and administer DevOps tools, including source control, defect management, project management, and other systems
Develop scripts and programs to automate processes and integrate tools
Resolve help desk requests from worldwide product development staff
Participate in team and process improvement projects
Familiarity with virtualization technologies (VMware, Hyper-V, Docker)
Interact with product development teams to plan and implement tool and build improvements
Research and stay current on SCM issues and technologies
Perform other duties as assigned
While the job description describes what is anticipated as the requirements of the position, the job requirements are subject to change based upon any changing needs and requirements of the business.

Required experience - qualifications:
A bachelor's degree or equivalent experience
7 or more years of experience with DevOps and/or software configuration management tools, concepts, and processes
7 or more years of software development experience
Familiarity with software build processes and related tools (Jenkins, Azure DevOps Services, GitHub, etc.)
Must have hands-on experience with YAML
Exposure to build tools such as MSBuild, NAnt, and Xcode
Experience with common scripting languages, such as Perl, VBScript, PowerShell, Windows batch, and UNIX shell scripting
Experience with Windows, Linux, and Unix operating systems and web servers
Very good written and spoken English
Familiarity with object-oriented concepts and programming in C#, VB.NET, C++, COM, Java, and Visual Basic 6 would be beneficial
Exposure to creating and maintaining vCenter/VMware vSphere 6.5
Familiarity with source control systems such as Perforce, ClearCase, or Subversion
Experience working with developers to resolve development issues related to source control systems
Familiarity with Microsoft SharePoint administration and development would be beneficial
No domestic and/or international travel

Posted 2 weeks ago

Apply

1.0 years

6 - 9 Lacs

Hyderābād

On-site

Source: Glassdoor

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design, develop, and maintain backend services and APIs using Java and Python
Write clean, efficient, and scalable code following best practices
Build microservices architecture and integrate services effectively
Create unit tests and debug issues to ensure application reliability
Deploy and manage containerized applications using Kubernetes
Create and maintain Kubernetes manifests (YAML files) for pods, services, and ingresses
Implement auto-scaling, load balancing, and fault-tolerant systems in Kubernetes clusters
Monitor and optimize Kubernetes clusters using tools like Prometheus and Grafana
Build and maintain CI/CD pipelines using GitHub Actions for automated testing, building, and deployment
Write custom workflows in YAML to streamline development and deployment tasks
Secure pipelines using GitHub Secrets and troubleshoot workflow issues
Collaborate with cross-functional teams, including QA, DevOps, and product managers, to deliver high-quality software
Deploy applications to cloud platforms (AWS, GCP, Azure) and manage Kubernetes integrations
Mentor junior engineers and contribute to improving team processes and technical standards
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
Graduate degree or equivalent experience
1+ years of experience
Skills: Java, Python, API development, REST services, Kubernetes, Docker, Jenkins, GitHub, SQL, MongoDB

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
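Because the role involves writing GitHub Actions workflows in YAML, a minimal CI workflow sketch is shown below; the branch, Java version, and build command are assumptions, not details from the posting:

# .github/workflows/ci.yml (illustrative sketch)
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "17"
      - name: Build and test
        run: ./mvnw -B verify        # placeholder build command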

Posted 2 weeks ago

Apply

3.0 years

2 - 10 Lacs

Gurgaon

On-site

Source: Glassdoor

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Onboard clients via the components of our data engineering pipeline, which consists of UIs, Azure Databricks, Azure Service Bus, Apache Airflow, and various container-based services configured through UIs, SQL, PL/SQL, Python, YAML, Node, and shell, with code managed in GitHub, deployed through Jenkins, and monitored through Prometheus and Grafana
Work as part of our client implementation team to ensure the highest standards of product configuration that meet client requirements
Test and troubleshoot the data pipeline using sample and live client data; utilize Jenkins, Python, Groovy scripts, and Java to automate these tests; must be able to parse logs to determine next actions
Work with product teams to ensure the product is configured appropriately
Utilize dashboards for Kubernetes/OpenShift to diagnose high-level issues and ensure services are healthy
Support implementation immediately after go-live, and work with the O&M team to transition support to that team
Participate in daily Agile meetings
Estimate project deliverables
Configure and test REST APIs and utilize manual tools to interact with APIs
Work with data providers to clarify requirements and remove roadblocks
Drive automation into everyday activities
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
3+ years of experience working with SQL (preferably Oracle PL/SQL and Spark SQL) and data at scale
3+ years of ETL experience ensuring source-to-target data integrity
Familiarity with various file types (delimited text, fixed width, XML, JSON, Parquet)
1+ years of coding experience with one or more of the following languages: Java, C#, Python, or NodeJS, using Git, with practical experience working collaboratively through Git branching strategies
1+ years of experience with Microsoft Azure cloud infrastructure, Databricks, Data Factory, Data Lake, Airflow, and Cosmos DB
1+ years of experience in reading and configuring YAML
1+ years of experience with Service Bus, setting up ingress and egress within a subscription, or relevant Azure cloud services administrative experience
1+ years of experience with unit testing, code quality tools, CI/CD technologies, security, and container technologies
1+ years of Agile development experience and knowledge of Agile ceremonies and practices

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
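The qualifications include reading and configuring YAML for pipeline onboarding. A hypothetical onboarding config gives a feel for that kind of file; every key and value here is invented for illustration and does not reflect the actual product:

# client_onboarding.yml (hypothetical example; all keys are invented)
client_id: acme-health
ingestion:
  source_format: delimited_text      # e.g. delimited text, fixed width, JSON, XML, Parquet
  delimiter: "|"
  schedule: "0 2 * * *"              # daily 02:00 run, cron syntax
targets:
  - databricks_table: bronze.claims_raw
  - service_bus_topic: claims-ingested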

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Job brief: We are looking for a candidate with experience as a DevOps engineer in creating systems software and analyzing data to improve existing systems or drive new innovation, along with developing and maintaining scalable applications and monitoring, troubleshooting, and resolving issues, including deployments in multiple environments. The candidate must be well versed in computer systems and network functions, should be able to work diligently and accurately, and should have strong problem-solving ability in order to fix issues and ensure the client's business functionality.

Main responsibilities:
• Develop research programs incorporating current developments to improve existing products and study the potential of new products.
• Research, design, and evaluate materials, assemblies, processes, and equipment.
• Document all phases of research and development.
• Establish and maintain testing procedures for assessing raw materials, in-process, and finished products.
• Assess the scope of research projects and ensure they are on time with result-oriented outcomes.
• Be present at industry conferences on research topics of interest.
• Understand customer expectations for to-be-manufactured products.
• Identify and evaluate new technologies that help in building better products or services.
• Maintain user guides and technical documentation.
• Create impactful demonstrations to showcase emerging security technologies.
• Design and build services with a focus on business value and usability.
• Assist in keeping the SIEM platform up to date and contribute to security strategies as and when new threats emerge.
• Stay up to date with emerging security threats, including applicable regulatory security requirements.
• Other responsibilities and additional duties as assigned by the security management team or service delivery manager.

Skills - must-haves:
• ELK development experience
• Dev or DevOps experience on AWS cloud, containers, and serverless code
• Development stack of Wazuh and ELK; implement DevOps best practices
• Toolset knowledge required for parser/use-case development and plugin customisation: regex, Python, YAML, XML
• Hands-on experience in DevOps
• Experience with Linux and monitoring/logging tools such as Splunk; strong scripting skills
• Researching and designing new software systems, websites, programs, and applications
• Writing and implementing clean, scalable code
• Troubleshooting and debugging code
• Verifying and deploying software systems
• Evaluating user feedback
• Recommending and executing program improvements
• Maintaining software code and security systems
• Knowledge of cloud systems (AWS, Azure)
• Excellent communication skills

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Requirements Description and Requirements Position Summary The Shared Application Platform Engineering team is to provide the enterprise configuration and support for integration technologies such as IBM WebSphere Application Server (WAS) and WebSphere Liberty and ensure the platform stability and process improvement. Responsibilities include planning, engineering, and implementation of application platform infrastructure to include operational processes and procedures. This position includes evaluation and recommendation of new products and technologiesto the enterprise. Job Responsibilities Design and implement complex integration solutions through collaboration with engineers, application teams and operations team across the global enterprise. Middleware engineers will configure lower environments through automation and hand over to and support the operation’s team implementation in the higher environments Perform installation, configuration, and maintenance-level administrative functions for middleware, including patch installation and capacity management Monitor middleware performance and assist with ensuring middleware security Implement and support configuration management, orchestration, and maintenance automation using tools such as Ansible, CHEF, Azure DevOps, Python, and Unix shell Collaborate with other engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap Investigate, recommend, implement, and maintain middleware solutions using cloud, open source and container technologies Provide technical support to application developers when required. This includes promoting use of best practices, ensuring standardization across applications and trouble shooting Investigation of released fix packs, automation, documentation and handing over to operations in collaboration with Senior Engineers Assist new team team members and create standardized processes to simplify and standardize support engineers' role, and provide guidance to engineers from disciplines such as networking, database management, WebSphere Application Server (WAS) etc. Technical leadership, ability to think strategically and effectively communicate solutions to a variety of stake holders Work in Agile model with the understanding of Agile concepts Able to debug production issues by analyzing the logs directly and using tools like Splunk Strong collaboration with team members Should be able to utilize ServiceNow in Change management, Incident management, Catalog task management, Problem management Participate in cross-departmental efforts Leads initiatives within the community of practice Track project status working with team members and report to leadership Good decision-making skills Take ownership for the deliverables from the entire team Learn new technologies based on demand and help team members by coaching and assisting Willing to work in rotational shifts Good Communication skills with the ability to communicate clearly and effectively Knowledge, Skills And Abilities Education Bachelor's degree in computer science, Information Systems, or related field. 
Experience
10+ years of total experience and at least 7+ years of experience building WebSphere and Liberty applications on both Windows and Linux infrastructure, with the ability to lead projects, including:
• Installation, configuration, and maintenance-level administrative functions for middleware, including patch installation and capacity management.
• Monitoring middleware performance and assisting with ensuring middleware security.
• Resolving application issues, identifying bugs and fixing them.
• Upgrading WAS and Liberty versions.

Technical skills: Apache/IHS, Ansible, Python, Linux/Windows OS, OpenShift, JSON/YAML, shell scripting, Azure DevOps (AzDO) pipelines, Ansible automation, integration of authentication and authorization methods, web-to-JVM communications, SSL/TLS protocols/cipher suites and certificates/keystores, WebSphere ND administration, Liberty administration, troubleshooting, integration with database and middleware technologies, Continuous Integration / Continuous Delivery (CI/CD), and experience creating and working on ServiceNow tasks/tickets.

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
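The listing above calls for Ansible-based automation of routine middleware maintenance; the following is a minimal, hedged sketch of an Ansible playbook in YAML, where the inventory group, package name and service name are placeholders introduced purely for illustration.

```yaml
# Hypothetical playbook: patch and restart a web-tier host group.
- name: Patch and restart web tier
  hosts: webservers            # placeholder inventory group
  become: true
  tasks:
    - name: Ensure the latest httpd package is installed
      ansible.builtin.yum:
        name: httpd            # placeholder package name
        state: latest

    - name: Restart the web server
      ansible.builtin.service:
        name: httpd            # placeholder service name
        state: restarted
```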

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description and Requirements

Position Summary
The resource is responsible for assisting with MetLife Docker container support for application development teams. In this position the resource will support MetLife applications in an operational role, onboarding applications, troubleshooting infrastructure and application container issues, and automating manual build processes using CI/CD pipelines.

Job Responsibilities
• Development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms.
• Experience in workload migration from Docker to the OpenShift platform.
• Manage the container platform ecosystem (installation, upgrade, patching, monitoring).
• Check and apply critical patches in OpenShift/Kubernetes.
• Troubleshoot issues in OpenShift clusters.
• Experience in OpenShift implementation, administration and support.
• Working experience in OpenShift and Docker/K8s.
• Knowledge of CI/CD methodology and tooling (Jenkins, Harness).
• Experience with system configuration tools including Ansible and Chef.
• Cluster maintenance and administration experience on OpenShift and Kubernetes (a minimal manifest sketch follows this listing).
• Strong knowledge and experience in RHEL Linux.
• Manage OpenShift management components and tenants.
• Participate as part of a technical team responsible for the overall support and management of the OpenShift Container Platform.
• Learn new technologies based on demand.
• Willing to work in rotational shifts.
• Good communication skills, with the ability to communicate clearly and effectively.

Education: Bachelor's degree in computer science, Information Systems, or related field.

Experience: 7+ years of total experience and at least 4+ years of experience in development and maintenance in operational condition of OpenShift and Kubernetes orchestration container platforms, including:
• Installation, upgrade, patching and monitoring of the container platform ecosystem.
• Workload migration from Docker to the OpenShift platform.
• Good knowledge of CI/CD methodology and tooling (Jenkins, Harness).
• Linux administration.
• Software-defined networking (fundamentals).
• Container runtimes (Podman/Docker), Kubernetes (OpenShift)/Swarm orchestration, GoLang framework and microservices architecture.
• Knowledge and usage of observability tools (i.e. Elastic, Grafana, Prometheus, OTEL collectors, Splunk).
• Apache administration.
• Automation platforms: specifically Ansible (roles/collections).
• SAFe DevOps Scaled Agile methodology.
• Scripting: Python, Bash.
• Serialization languages: YAML, JSON.
• Knowledge and usage of CI/CD tools (e.g. AzDO, ArgoCD).
• Reliability management / troubleshooting.
• Collaboration and communication skills.
• Continuous Integration / Continuous Delivery (CI/CD).
• Experience in creating change tickets and working on tasks in ServiceNow.
• Java management (JMX) / NodeJS management.

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services.
At MetLife, it's #AllTogetherPossible. Join us!
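The listing above centers on running workloads on OpenShift/Kubernetes. As a point of reference, here is a minimal, hedged sketch of the kind of Kubernetes Deployment manifest such platforms consume; the application name, image reference, port and replica count are illustrative placeholders, not taken from the listing.

```yaml
# Hypothetical Deployment manifest for a containerized application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                 # placeholder application name
  labels:
    app: sample-app
spec:
  replicas: 3                      # desired number of pods
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: registry.example.internal/sample-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080                             # placeholder port
```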

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description and Requirements
The resource provides engineering and infrastructure support for .NET based applications, primarily using IIS and Apache webservers to securely serve web pages, develops automation scripts to automate repetitive tasks using Ansible, and creates scripts for server management using PowerShell.

Job Responsibilities
• Responsible for managing, maintaining and migrating IIS and Apache webservers in MetLife infrastructure.
• Troubleshoot and resolve issues related to the Windows server environment.
• Handle migrations: contact application partners, understand requirements, and build infrastructure.
• Plan application interconnections, database connectivity and website domain creation.
• Install/update required software or drivers on servers.
• Develop and maintain automation scripts to improve efficiency and reduce human errors.
• Develop and maintain AzDO pipelines using YAML, troubleshoot pipeline failures and mitigate issues.
• Proficiency in scripting languages like PowerShell.
• Basic Linux server administration knowledge.
• Utilize ServiceNow for Change management, Incident management, Catalog task management and Problem management.
• Learn new technologies based on demand.
• Excellent interpersonal skills with the ability to coordinate cross-functionally.
• Willing to work in rotational shifts.
• Good communication skills, with the ability to communicate clearly and effectively.

Knowledge, Skills And Abilities

Education
Bachelor's degree in computer science, Information Systems, or related field.

Experience
7+ years of total experience and at least 4+ years of experience in maintaining and migrating IIS and Apache webservers, including:
• Resolving issues related to the Windows server environment.
• Working with application partners to understand requirements and build infrastructure.
• Installing and updating required software or drivers on servers.
• Knowledge of Azure Pipelines using YAML.
• Windows administration and debugging, application debugging, IIS, .NET, share management, batch jobs, certificate management, PowerShell, Active Directory, SSIS, SSRS, SMTP, Elastic, Azure DevOps.
• Experience in creating change tickets and working on tasks in ServiceNow.

Good to have: SSAS, Azure Pipelines, Elastic, Ansible.

Other Requirements (licenses, certifications, specialized training – if required)

About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
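Since the listing above involves building Azure DevOps (AzDO) pipelines in YAML together with PowerShell scripting, here is a minimal, hedged sketch of what such a pipeline definition can look like; the trigger branch, agent image and inline script are illustrative placeholders, not from the listing.

```yaml
# Hypothetical azure-pipelines.yml for a Windows-oriented maintenance task.
trigger:
  - main                        # placeholder branch

pool:
  vmImage: 'windows-latest'     # Microsoft-hosted Windows agent

steps:
  - task: PowerShell@2
    displayName: 'Check IIS service state'
    inputs:
      targetType: 'inline'
      script: |
        # Placeholder script: report the status of the W3SVC (IIS) service
        Get-Service -Name W3SVC | Format-List Name, Status
```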

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About The Role
Grade Level (for internal use): 11

The Team: The Infrastructure team is a global team split across the US, Canada and the UK. The team is responsible for building and maintaining platforms used by Index Management teams to calculate and rebalance our high-profile indices.

The Impact: You will be responsible for the development and expansion of platforms which calculate and rebalance indices for S&P Dow Jones Indices. This will ensure that relevant teams have continuous access to up-to-date benchmarks and indices.

What's In It For You: In this role, you will be a key player in the Infrastructure Engineering team, where you will manage the automation of systems administration in the AWS Cloud environment used for running index applications. You will build solutions to automate resource provisioning and administration of infrastructure in AWS Cloud for our index applications. There will also be a smaller element of L3 support for developers when they have more complex queries to address.

Responsibilities
• Create DevOps pipelines to deliver Infrastructure as Code.
• Build workflows to create immutable infrastructure in AWS.
• Develop automation for provisioning compute instances and storage.
• Build AMI images using Packer.
• Develop Ansible playbooks and automate execution of routine Linux scripts.
• Provision resources in AWS using CloudFormation templates (a minimal sketch appears after this listing).
• Deploy immutable infrastructure in AWS using Terraform.
• Orchestrate container deployment.
• Configure Security Groups, Roles and IAM Policy in AWS.
• Monitor infrastructure and develop utilization reports.
• Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies.
• Design and implement automation tools and frameworks for continuous integration, delivery, and deployment.
• Develop and write scripts for pipeline automation using relevant scripting languages like Groovy and YAML.
• Configure continuous delivery workflows for various environments, e.g. development, staging, production.
• Use Jenkins to create pipelines, which are groups of events or jobs that are interlinked with one another in a sequence.
• Evaluate new AWS services and solutions.
• Integrate application build and deployment scripts with Jenkins.
• Troubleshoot production issues.
• Effectively interact with global customers, business users and IT employees.

Basic Qualifications
• Bachelor's degree in Computer Science, Information Systems or Engineering, an equivalent qualification, or relevant equivalent work experience is preferred.
• RedHat Linux and AWS certifications preferred.
• Strong experience in infrastructure engineering and automation.
• Very good experience in AWS Cloud systems administration.
• Experience in developing Ansible scripts and Jenkins integration.
• Expertise using DevOps tools (Jenkins, Terraform, Packer, Ansible, GitHub, Artifactory).
• Expertise in the different automation tools used to develop CI/CD pipelines.
• Proficiency in Jenkins and Groovy for creating dynamic and responsive CI/CD pipelines.
• Good experience in RedHat Linux scripting.
• First-class communication skills – written, verbal and presenting.

Preferred Qualifications
• Candidates should have a minimum of 10+ years of industry experience in cloud and infrastructure.
• Administer RedHat Linux operating systems.
• Deploy OS patches and perform upgrades.
• Configure filesystems and allocate storage.
• Develop Unix scripts.
• Develop scripts for automation of infrastructure provisioning.
• Monitor infrastructure and develop utilization reports.
• Evaluate new AWS services and solutions.
• Experience working with customers to diagnose a problem and work toward resolution.
• Excellent verbal and written communication skills.
• Understanding of various load balancers in a large data center environment.

About S&P Global Dow Jones Indices
At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction. We're the largest global resource for index-based concepts, data and research, and home to iconic financial market indicators, such as the S&P 500® and the Dow Jones Industrial Average®. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 7.4 trillion in passively managed assets linked to our indices and over USD 11.3 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/spdji.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people.
That's why we provide everything you and your career need to thrive at S&P Global.

Our Benefits Include
• Health & Wellness: Health care coverage designed for the mind and body.
• Flexible Downtime: Generous time off helps keep you energized for your time on.
• Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
• Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
• Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
• Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Inclusive Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 301292
Posted On: 2025-02-26
Location: Mumbai, Maharashtra, India
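As referenced in the Responsibilities above, this role provisions AWS resources with CloudFormation templates, which are commonly written in YAML. Below is a minimal, hedged sketch of such a template; the resource name, instance type and tag value are placeholders introduced for illustration only.

```yaml
# Hypothetical CloudFormation template provisioning a single EC2 instance.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch of an EC2 instance for an index application.

Parameters:
  ImageId:
    Type: AWS::EC2::Image::Id
    Description: AMI to launch (placeholder, e.g. an image built with Packer)

Resources:
  IndexAppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro          # placeholder instance size
      ImageId: !Ref ImageId
      Tags:
        - Key: Name
          Value: index-app-node       # placeholder tag value
```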

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Andhra Pradesh, India

On-site


Experience
• Good experience in API design, development, and implementation.
• 3+ years of experience with cloud platform services (preferably GCP).
• Hands-on experience in designing, implementing, and maintaining APIs that meet the highest standards of performance, security, and scalability.
• Hands-on experience in designing, developing, and implementing microservices architectures and solutions using industry best practices and design patterns.
• Hands-on experience with cloud computing and services.
• Proficiency in programming languages like Java, Python, JavaScript etc.
• Hands-on experience with API gateway and management tools like Apigee, Kong, API Gateway.
• Hands-on experience with integrating APIs with a variety of systems, applications, microservices and infrastructure.
• Deployment experience in a cloud environment (preferably GCP).
• Experience in TDD/DDD and unit testing.
• Hands-on CI/CD experience in automating the build, test, and deployment processes to ensure rapid and reliable delivery of API updates.

Technical Skills
• Programming and languages: Java, GraphQL, SQL; API gateway and management tools: Apigee, API Gateway.
• Database tech: Oracle, Spanner, BigQuery, Cloud Storage.
• Operating systems: Linux.
• Expert in API design principles, specifications and architectural styles like REST, GraphQL, and gRPC; proficiency in API lifecycle management, advanced security measures, and performance optimization.
• Good knowledge of security best practices and compliance awareness.
• Good knowledge of messaging patterns and distributed systems.
• Well-versed with protocols and data formats.
• Strong development knowledge in microservice design, architectural patterns, frameworks and libraries.
• Knowledge of SQL and NoSQL databases, and how to interact with them through APIs.
• Good to have: knowledge of data modeling and database management; ability to design database schemas that efficiently store and retrieve data.
• Scripting and configuration (e.g. YAML) knowledge.
• Strong testing and debugging skills: writing unit tests and familiarity with the tools and techniques needed to fix issues.
• DevOps knowledge: CI/CD practices and tools.
• Familiarity with monitoring and observability platforms for real-time insights into application performance.
• Understanding of version control systems like Git.
• Familiarity with API documentation standards such as OpenAPI.
• Problem-solving skills and the ability to work independently in a fast-paced environment.
• Effective communication: negotiate and communicate effectively with stakeholders to ensure API solutions meet the needs of both technical and non-technical stakeholders.
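The listing above mentions API documentation standards such as OpenAPI, which are typically written in YAML. Here is a minimal, hedged sketch of an OpenAPI 3.0 document; the API title, path and schema fields are illustrative placeholders, not taken from the listing.

```yaml
# Hypothetical OpenAPI 3.0 specification for a single read-only endpoint.
openapi: 3.0.3
info:
  title: Sample Items API          # placeholder API name
  version: 1.0.0
paths:
  /items/{itemId}:
    get:
      summary: Retrieve a single item by its identifier
      parameters:
        - name: itemId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested item
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  name:
                    type: string
        '404':
          description: Item not found
```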

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


• Strong DevOps knowledge, with hands-on experience in CI/CD (any CI/CD tools).
• Extensive experience in Linux system administration, networking and troubleshooting.
• Experience in shell, YAML, JSON and Groovy scripting.
• Strong experience in AWS (EC2, S3, VPC, RDS, IAM, Organizations, Identity Center, etc.).
• Ability to set up CI/CD pipelines using AWS services or other CI/CD tools.
• Experience in configuring and troubleshooting EKS and ECS and deploying applications on an EKS/ECS cluster.
• Strong hands-on knowledge of Terraform and/or AWS CFT; must be able to automate AWS infrastructure provisioning using Terraform/CFT in the most efficient way.
• Experience with CloudWatch / CloudTrail / Prometheus-Grafana for infrastructure and application monitoring.
• Most importantly, great soft skills, critical and analytical thinking in a larger scope, and the ability to quickly and efficiently understand, identify and solve problems.
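Since the listing above asks for the ability to set up CI/CD pipelines with AWS services or other CI/CD tools, here is a minimal, hedged sketch of a GitHub Actions workflow in YAML as one common option; the workflow name, branch and build command are placeholders chosen only for illustration.

```yaml
# Hypothetical GitHub Actions workflow running tests on every push to main.
name: ci
on:
  push:
    branches: [main]            # placeholder branch

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test          # placeholder build/test command
```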

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Organization :- At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward, to progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title :- Associate/Graduate Engineer

Location :- Bangalore

Business & Team :- Group Security Technology's purpose is safeguarding a brighter future for all through innovative technology. We do this by designing, building and running security products across cyber, identity and fraud technology in order to fulfil our objectives:
• Improve the velocity of security outcomes with great customer experiences.
• Innovate, digitise and disrupt the way we deliver security.
• Lead the way in secure cloud and control adoption.
• Assure safe, sound, and secure technology.
Our Edge Security Technology squad builds, runs and enhances the edge security products and services that protect the CBA Group's internet-facing websites, web applications, APIs and perimeter infrastructure from distributed denial of service and web application and API cyberattacks.

Impact & Contribution :- The Sr Platform Engineer will work with their chapter lead, squad lead and squad members to drive product roadmap initiatives, support and accelerate business unit engagements, and contribute to operational enhancement and BAU activities.

Roles & Responsibilities :-
• Write software and tooling that automates the operations of our platforms, infrastructure, environments, and tooling.
• Create a standardised set of tooling for deploying and running applications and setting them up with best practices.
• Maintain the underlying infrastructure to ensure that it is reliable, secure and scalable.
• Make all platforms entirely self-service, secure, and available within minutes without human approval.
• Ensure that our platforms are loved by our software engineers, and continually evolve our platforms to embrace new technology and improve the happiness and efficiency of our software engineers.
• Write software to ensure that all deployment and operations are as automated as possible, using languages such as Python and Go, and automate the implementation of controls into our platforms.
• Participate in cross-group activities to build a culture of one team, bar-raising both our engineering capability and our technology solutions to drive our strategy.

Essential Skills :-
• Experience: 3 to 5 years.
• Experience with a Data Security Posture Management tool, e.g. BigID, Cyera, Varonis.
• Experience as a Software Engineer/DevSecOps/DevOps/System Engineer in a cloud environment.
• Expertise working with AWS infrastructure is essential.
• Experience with container technology: Docker and Kubernetes/EKS.
• Strong experience with scripting/programming in languages such as PowerShell, Java, Bash, Python, YAML etc.
• Experience working with Infrastructure as Code: CloudFormation, Terraform, CDK, AWS CodeBuild/CodePipeline highly regarded.
• Experience working with CI/CD and automation tools like GitHub, GitHub Actions, TeamCity, Jenkins, CI/CD pipelines.
• Experience with logging and monitoring tools like Observe/Splunk, Prometheus, Grafana and PagerDuty.
• A systematic problem-solving approach, proactively continuing to improve current processes and tools.
• Able to communicate ideas clearly and effectively.
Education Qualification :- Bachelor's degree or Master's degree in Engineering in Computer Science/Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 30/05/2025

Posted 2 weeks ago

Apply

Exploring YAML Jobs in India

YAML (YAML Ain't Markup Language) has seen a surge in demand in the job market in India. Organizations are increasingly looking for professionals who are proficient in YAML to manage configuration files, create data structures, and more. If you are a job seeker interested in YAML roles in India, this article provides valuable insights to help you navigate the job market effectively.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their vibrant tech scenes and have a high demand for YAML professionals.

Average Salary Range

The average salary range for YAML professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.

Career Path

In the YAML skill area, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually becoming a Tech Lead. Continuous learning and gaining hands-on experience with YAML will be crucial for career advancement.

Related Skills

Apart from YAML proficiency, other skills that are often expected or helpful alongside YAML include:
  - Proficiency in scripting languages like Python or Ruby
  - Experience with version control systems like Git
  - Knowledge of containerization technologies like Docker
  - Understanding of CI/CD pipelines

Interview Questions

Here are 25 interview questions for YAML roles:
  - What is YAML and what are its advantages? (basic)
  - Explain the difference between YAML and JSON. (basic)
  - How can you include one YAML file in another? (medium)
  - What is a YAML anchor? (medium)
  - How can you create a multi-line string in YAML? (basic)
  - Explain the difference between a sequence and a mapping in YAML. (medium)
  - What is the difference between != and !== in YAML? (advanced)
  - Provide an example of using YAML in a Kubernetes manifest file. (medium)
  - How can you comment in YAML? (basic)
  - What is a YAML alias and how is it used? (medium)
  - Explain how to define a list in YAML. (basic)
  - What is a YAML tag? (medium)
  - How can you handle sensitive data in a YAML file? (medium)
  - Explain the concept of anchors and references in YAML. (medium)
  - How can you represent a null value in YAML? (basic)
  - What is the significance of the --- at the beginning of a YAML file? (basic)
  - How can you represent a boolean value in YAML? (basic)
  - Explain the concept of scalars, sequences, and mappings in YAML. (medium)
  - How can you create a complex data structure in YAML? (medium)
  - What is the difference between << and & in YAML? (advanced)
  - Provide an example of using YAML in an Ansible playbook. (medium)
  - Explain what YAML anchors and aliases are used for. (medium)
  - How can you control the indentation in a YAML file? (basic)
  - What is a YAML directive? (advanced)
  - How can you represent special characters in a YAML file? (medium)
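To make several of the concepts asked about above concrete (the document start marker, anchors and aliases, multi-line strings, and null and boolean scalars), here is a small illustrative snippet; the keys and values are invented purely for demonstration, and the merge key (<<) shown is a widely supported YAML 1.1 convention rather than part of the YAML 1.2 core spec.

```yaml
---                              # document start marker
defaults: &base                  # &base defines an anchor on this mapping
  retries: 3
  verbose: false                 # boolean scalar

production:
  <<: *base                      # merge key: pull in the anchored mapping via its alias
  retries: 5                     # override one inherited value
  endpoint: null                 # null value (can also be written as ~)
  description: |                 # literal block scalar keeps line breaks
    Multi-line description.
    Each line is preserved as written.
  servers:                       # a sequence (list)
    - web-1
    - web-2
```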

Closing Remark

As you prepare for YAML job roles in India, remember to showcase your proficiency in YAML and related skills during interviews. Stay updated with the latest industry trends and continue to enhance your YAML expertise. With the right preparation and confidence, you can excel in the competitive job market for YAML professionals in India. Good luck!
