
681 YAML Jobs - Page 21

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Greater Lucknow Area

On-site


Kyndryl Software Engineering – Chennai, Tamil Nadu, India; Hyderabad, Telangana, India; Bengaluru, Karnataka, India; Gurugram, Haryana, India; Pune, Maharashtra, India; Greater Noida, Uttar Pradesh, India. Posted on May 19, 2025.

Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role: Within our Networking DevOps engineering team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations.

Responsibilities include: Requirement Gathering and Analysis: Collaborate with stakeholders to gather automation requirements, understanding business objectives and network infrastructure needs. Analyse existing network configurations and processes to identify areas for automation and optimization. Analyse existing automation and identify opportunities to reuse/redeploy it with required modifications. End-to-End Automation Development: Design, develop and implement automation solutions for network provisioning, configuration management, monitoring and troubleshooting. Utilize languages and tools such as Ansible, Terraform, Python and PHP to automate network tasks and workflows. Ensure scalability, reliability, and security of automation solutions across diverse network environments. Testing and Bug Fixing: Develop comprehensive test plans and procedures to validate the functionality and performance of automation scripts and frameworks. Identify and troubleshoot issues, conduct root cause analysis and implement corrective actions to resolve bugs and enhance automation stability. Collaborative Development: Work closely with cross-functional teams, including network engineers, software developers, and DevOps teams, to collaborate on automation projects and share best practices. Reverse Engineering and Framework Design: Reverse engineer existing Ansible playbooks, Python scripts and automation frameworks to understand functionality and optimize performance. Design and redesign automation frameworks, ensuring modularity, scalability, and maintainability for future enhancements and updates. Network Design and Lab Deployment: Provide expertise in network design, architecting interconnected network topologies, and optimizing network performance. Set up and maintain network labs for testing and development purposes, deploying lab environments on demand and ensuring their proper maintenance and functionality. Documentation and Knowledge Sharing: Create comprehensive documentation, including design documents, technical specifications, and user guides, to facilitate knowledge sharing and ensure continuity of operations.

Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail.
Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are: You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience: Minimum 5+ years of relevant experience as a Network DevOps SME / Automation Engineer. Hands-on experience in the technologies below: Data Network: Strong experience in configuring, managing, and troubleshooting Cisco, Juniper, HP, and Nokia routers and switches. Hands-on experience with SD-WAN and SDN technologies (e.g., Cisco Viptela, Versa, VMware NSX, Cisco ACI, DNAC, etc.). Network Security: Experience in configuring, managing, and troubleshooting firewalls and load balancers, including Firewalls: Palo Alto, Check Point, Cisco ASA/FTD, Juniper SRX; Load Balancers: F5 LTM/GTM, Citrix NetScaler, A10. Deep understanding of network security principles, firewall policies, NAT, VPN (IPsec/SSL), IDS/IPS. Programming & Automation: Proficiency in Ansible development and testing for network automation. Strong Python or shell scripting skills for automation. Experience with REST APIs, JSON, YAML, Jinja2 templates and GitHub for version control. Cloud & Linux Skills: Hands-on experience with Linux server administration (RHEL, CentOS, Ubuntu). Experience working with cloud platforms such as Azure, AWS, or GCP. DevOps: Basic understanding of CI/CD pipelines, GitOps, and automation tools. Familiarity with Docker, Kubernetes, Jenkins, and Terraform in a DevOps environment. Experience working with Infrastructure as Code (IaC) and configuration management tools such as Ansible. Architecture & Design: Ability to design, deploy, and recommend network setups or labs independently. Strong problem-solving skills in troubleshooting complex network and security issues. Certifications Required: CCNP Security / CCNP Enterprise (Routing & Switching).

Preferred Technical and Professional Experience: Bachelor’s degree and above. Terraform experience is a plus (for infrastructure as code). Experience in Zabbix template development is a plus. Certifications Preferred: CCIE-level working experience (Enterprise, Security, or Data Center), PCNSE (Palo Alto), CCSA (Check Point), Automation & Cloud, Python, Ansible, Terraform.

Being You: Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value.
Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
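
To make the network-automation responsibilities above more concrete, here is a minimal, hedged Ansible playbook sketch of the kind of task such a role typically automates: backing up device configurations. The inventory group, backup directory, and the use of the cisco.ios collection are assumptions for illustration, not details taken from the posting.

```yaml
# Hedged sketch only: back up running configs for a hypothetical inventory group
# "campus_switches". Assumes the cisco.ios collection is installed and device
# credentials are supplied via inventory or a vault (not shown here).
- name: Back up Cisco IOS running configurations
  hosts: campus_switches
  gather_facts: false
  connection: ansible.netcommon.network_cli
  tasks:
    - name: Save a copy of each device's running config to ./backups
      cisco.ios.ios_config:
        backup: yes
        backup_options:
          dir_path: ./backups
```

In practice a playbook like this would typically be driven from a CI pipeline or an automation controller, with per-device credentials injected at runtime.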

Posted 4 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking a highly skilled and experienced Lead System Engineer specializing in Microsoft Azure to join our dynamic team. As a key technical leader, you will drive the design, implementation, and optimization of cloud solutions, ensuring fault tolerance, high availability, scalability, and security. This role requires hands-on expertise in Azure infrastructure, DevOps practices, and automation, combined with a passion for innovation and continuous improvement.

Responsibilities: Design, implement, and maintain Azure infrastructure to ensure fault tolerance, high availability, scalability, and security. Develop and manage CI/CD pipelines with automated build and test systems. Oversee production deployments using multiple deployment strategies. Automate Azure infrastructure and platform deployments using Infrastructure as Code (IaC) tools. Automate system configurations with configuration management tools. Implement microservices architecture concepts and best practices. Coordinate with application development teams to align requirements, schedules, and activities. Conduct proofs of concept (POCs) to validate the feasibility of proposed designs and technologies. Troubleshoot and resolve system issues, proactively addressing challenges with a continuous improvement mindset. Learn and adapt quickly to new services and technologies used in the environment.

Requirements: 8 to 12 years of experience in Azure Cloud environments. Proficiency in DevOps CI/CD tools and practices. Strong experience in Linux and Windows administration. Expertise in scripting languages such as Python, Bash, Shell, and Unix scripting. Knowledge of YAML scripting and ARM templates. Proficiency in Terraform modules for infrastructure automation. Hands-on experience with Azure Kubernetes Service (AKS), Docker, and Kubernetes. Strong understanding of microservices architecture and best practices. Experience in automating infrastructure and platform deployments with IaC tools. Proven ability to design and implement scalable, secure, and highly available solutions on Azure. Excellent problem-solving skills and a proactive approach to challenges. B2+ English proficiency level.
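
Since the role calls for YAML-based CI/CD and IaC automation on Azure, a minimal multi-stage Azure Pipelines sketch is shown below. The stage layout, hosted image, and the Terraform step are assumptions chosen for illustration only.

```yaml
# Illustrative multi-stage Azure Pipelines definition; the hosted image, stage
# layout, and the Terraform commands are assumptions, not details from the posting.
trigger:
  branches:
    include: [ main ]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: echo "run unit tests and produce build artifacts here"
            displayName: Build and test
  - stage: DeployInfra
    dependsOn: Build
    jobs:
      - job: PlanInfrastructure
        steps:
          - script: |
              terraform init
              terraform plan -out=tfplan
            displayName: Plan infrastructure changes (assumes Terraform on the agent)
```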

Posted 4 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


The Role: We are looking for talented DevOps engineers to expand our deployment team in Noida/Delhi. The successful candidate will focus on software deployments from development to production in a reliable and efficient manner.

Key Responsibilities: Within the FX Deployment team, your main role is to work on the automation pipeline for deploying, testing, operating, and monitoring our systems. You will take care of complex software installations and configurations in R&D, Quality Assurance, Demo and client environments. You will collaborate with different teams such as Development, Project Management, Client Support, etc. You will act as a requirement provider to developers of in-house automated tools. You will contribute to the integration of software developed by different teams, and put in place the tools to deploy it on different environments (development, test, demo and client). You will develop, improve, and thoroughly document operational practices and procedures. You will write documentation and train users on software installation.

Required Skills, Experience, and Qualifications: Strong working knowledge of Linux (Red Hat preferred) and Windows operating systems. Experience with AWS (Amazon cloud). Experience with Docker. Comfortable with SQL and XML. Experience with Jenkins. Experience with infrastructure monitoring tools (for example Nagios). Database administration (Oracle 12/18) and/or SQL Server (2017). Experience with Bash scripting. Familiarity with source code management systems (Git and Subversion). Proficiency in high-level scripting and data languages (Python, JSON and/or YAML). Ansible.

Good to have: Configuration management and release management knowledge. Any build tools like Ant or Maven. Hands-on knowledge of CI and CD.

Preferred Skills: Positive and optimistic outlook, solution-oriented attitude. Fluent in English; any other language is an advantage. Very strong problem-solving capabilities and a proven track record in a “problem solving” environment are desirable. Ability to react positively to changes in workload, targets, and plans. Ability to work independently as well as within a team. Keen interest in the worlds of software and technology.

About Us: We’re a diverse group of visionary innovators who provide trading and workflow automation software, high-value analytics, and strategic consulting to corporations, central banks, financial institutions, and governments. Founded in 1999, we’ve achieved tremendous growth by bringing together some of the best and most successful financial technology companies in the world.
Over 2,000 of the world’s leading corporations, including 50% of the Fortune 500 and 30% of the world’s central banks, trust ION solutions to manage their cash, in-house banking, commodity supply chain, trading and risk. Over 800 of the world’s leading banks and broker-dealers use our electronic trading platforms to operate the world’s financial market infrastructure. ION is a rapidly expanding and dynamic group with 13,000 employees and offices in more than 40 cities around the globe. Our ever-expanding global footprint, cutting edge products, and over 40,000 customers worldwide provide an unparalleled career experience for those who share our vision. ION is committed to maintaining a supportive and inclusive environment for people with diverse backgrounds and experiences. We respect the varied identities, abilities, cultures, and traditions of the individuals who comprise our organization and recognize the value that different backgrounds and points of view bring to our business. ION adheres to an equal employment opportunity policy that prohibits discriminatory practices or harassment against applicants or employees based on any legally impermissible factor.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Vadodara, Gujarat, India

On-site


We are in search of an experienced Senior DevOps Engineer with specialized expertise in Kubernetes, GitOps, and cloud services. This individual will play a crucial role in the design and management of advanced CI/CD pipelines, guaranteeing seamless integration and deployment of software artifacts within varied environments in Kubernetes clusters.

Key Responsibilities: Pipeline Construction & Management: Build and maintain efficient build pipelines. Deploy artifacts to Kubernetes with advanced deployment strategies. Docker & Helm Expertise: Develop and manage Docker images and Helm charts. Handle Helm repositories and deploy charts to Kubernetes clusters. GitOps Proficiency: Employ GitOps tools like Argo CD, Argo Events, and Argo Rollouts. Coordinate with development and QA teams in managing GitOps repositories. Kubernetes & Cloud Services: Administer Kubernetes clusters, including knowledge of CSI and CNI drivers and backup/restore solutions. Monitor clusters using New Relic, ensuring reliability and availability. Proficiency in AWS services such as EKS, IAM, VPC, RDS/Aurora, and Load Balancer configurations. Security & Compliance: Uphold security standards within Kubernetes clusters. IaC and Deployment Tracking: Manage Infrastructure as Code (IaC) and oversee deployment tracking, linking with CI/CD pipelines. Collaboration & Coordination: Collaborate with development teams on artifact generation pipelines. Coordinate with QA teams for environment setup (DEV, QA, Staging, UAT, Production). Technical Expertise: Skilled in both on-prem and cloud-managed Kubernetes clusters.

Required Skills & Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field. 5-10 years of experience in DevOps with a focus on Kubernetes and GitOps. In-depth understanding of CI/CD principles, especially GitLab/Jenkins and Argo CD. Advanced skills in AWS cloud services and Kubernetes security practices. Good knowledge of IaC, infrastructure provisioning, and configuration management tools like Ansible and Terraform. Proficient in working with YAML files and shell scripting. Experience in programming (Python or other relevant languages). Strong automation skills with an ability to streamline processes. Excellent problem-solving abilities and teamwork skills.
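
As a hedged illustration of the GitOps workflow this posting describes, below is a minimal Argo CD Application manifest. The repository URL, chart path, and target namespace are placeholders, not details from the job.

```yaml
# Sketch of an Argo CD Application used in a GitOps flow; repository URL, chart
# path, and namespaces are placeholders invented for illustration.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/helm-charts.git
    targetRevision: main
    path: charts/payments-service
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```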

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Integration

Job Description: As a Principal Integration Engineer, you will play a critical role in designing, developing, and maintaining integrations between data sources and sinks. Your expertise will be instrumental in setting up the cloud infrastructure needed for receiving and processing events from various sources, integrating with external APIs for real-time decision-making, and categorizing events for further analysis and storage in a data lake. Additionally, you will also contribute to the development and improvement of DevOps practices that support these integrations.

Responsibilities: Develop and maintain high-performance event-driven applications using languages such as .NET, Python, Go, or Node.js. Design and implement robust event handling mechanisms, including message queues (e.g., Kafka, RabbitMQ), event streams (e.g., Apache Kafka, Azure Event Hubs), and pub/sub systems. Integrate with external APIs and services (e.g., RESTful APIs) to exchange data and trigger actions. Develop and maintain data pipelines for ingesting, transforming, and loading data into our data lake (e.g., Azure Data Lake Storage, AWS S3). Perform data analysis and generate insights from event data. Ensure data quality and integrity throughout the data lifecycle. Enable versioning, logging and monitoring for platform and application observability. Troubleshoot and resolve issues related to data pipelines and Azure DevOps integrations. Collaborate with cross-functional teams (e.g., data engineers, data scientists, product managers) to understand business requirements and translate them into technical solutions. Contribute to the development and improvement of CI/CD pipelines (e.g., using Jenkins, Azure DevOps, GitLab CI). Implement and maintain infrastructure as code (IaC) using tools like Terraform or Ansible. Participate in code reviews and contribute to the improvement of development processes. Stay up-to-date with the latest technologies and best practices in event-driven architectures and data processing. Mentor and guide junior team members in DevOps and CI/CD practices.

Qualifications: Bachelor's degree in Computer Science, Computer Engineering, or a related field. 7+ years of professional experience in software development. Strong experience with at least one programming language such as Java, Python, Go, or Node.js. Proficiency in configuration languages (Terraform, YAML). Experience with message queuing systems (e.g., Kafka, RabbitMQ) or event streaming platforms (e.g., Apache Kafka, Azure Event Hubs). Experience with API integration and RESTful services. Experience with data processing frameworks (e.g., Spark). Experience with cloud platforms (e.g., Azure, AWS, GCP). Experience with containerization technologies (e.g., Docker, Kubernetes). Experience with or knowledge of Azure technologies (e.g., Azure Functions, Azure Event Grid, Azure Service Bus). Experience with or knowledge of data warehousing and data lake technologies (e.g., Azure Data Lake Storage, AWS S3). Familiarity with data quality and validation tools. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Ability to work independently and as part of a team.

Location: IND Hyderabad Raidurg Village B7 South Tower, Serilingampally Mandal Divya Sree Orion. Language Requirements: Time Type: Full time. If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents. R1599147
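
To illustrate the containerized, event-driven side of this role, here is a small Kubernetes Deployment sketch for a hypothetical event-consumer service. The image reference and broker address are assumptions, not values from the posting.

```yaml
# Hedged sketch of a Kubernetes Deployment for a hypothetical event-consumer
# service; the image reference and broker address are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-consumer
  labels:
    app: event-consumer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: event-consumer
  template:
    metadata:
      labels:
        app: event-consumer
    spec:
      containers:
        - name: consumer
          image: example.azurecr.io/event-consumer:1.0.0
          env:
            - name: KAFKA_BOOTSTRAP_SERVERS
              value: kafka.example.internal:9092
```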

Posted 4 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Location – Greater Noida. Mandatory Skills – AWS, Snowflake, Python/PySpark, SQL, ETL, Data Warehousing. Desired Skills – Data Modelling, Dimensional Modelling.

Detailed JD – Must be extremely proficient in Data Warehouse ETL design/architecture and dimensional/relational data modelling. Experience in at least one ETL development project, writing/analyzing complex stored procedures. Should have entry-level/intermediate experience in Python/PySpark – working knowledge of Spark/pandas dataframes, Spark multi-threading, exception handling, familiarity with different boto3 libraries, data transformation and ingestion methods, and the ability to write UDFs. Snowflake – Familiarity with stages and external tables, commands in Snowflake like COPY, unloading data to/from S3, working knowledge of the variant data type, flattening nested structures through SQL, familiarity with marketplace integrations, role-based masking, pipes, data cloning, logs, and user and role management is nice to have. Familiarity with Coalesce/dbt is an added advantage for this job. Collibra integration experience for data quality and governance in ETL pipeline development is nice to have. AWS – Should have hands-on experience with S3, Glue (jobs, triggers, workflows, catalog, connectors, crawlers), CloudWatch, RDS and Secrets Manager. AWS – VPC, IAM, Lambda, SNS, SQS, MWAA is nice to have. Should have hands-on experience with version control tools like GitHub, and working knowledge of configuring and setting up CI/CD pipelines using YAML and pip files. Streaming Services – Familiarity with Confluent Kafka, Spark Streaming, or Kinesis (or equivalent) is nice to have. Data Vault 2.0 (hubs, satellites, links) experience will be a PLUS.

Interpersonal: Highly proficient in Publisher, PowerPoint, SharePoint, Visio, Confluence and Azure DevOps. Working knowledge of best practices in value-driven development (requirements management, prototyping, hypothesis-driven development, usability testing). Good communicator with a problem-solving mindset and focus on process improvement. Consistently demonstrates clear and concise written and verbal communication skills. Good interpersonal skills, ability to interact with senior management. Highly self-motivated with a strong sense of initiative. Excellent multitasking skills and task management strategies. Ability to work well in a team environment, meet deadlines, demonstrate good time management, and multi-task in a fast-paced project environment.
Responsibilities: Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (onshore and offshore). Highly proficient in the architecture and development of an event-driven data warehouse: streaming, batch, data modeling, and storage. Advanced database knowledge: creating/optimizing SQL queries, stored procedures, functions, partitioning data, indexing, and reading execution plans. Skilled in writing and troubleshooting Python/PySpark scripts to generate extracts and cleanse, conform and deliver data for consumption. Expert-level understanding and implementation of ETL architecture: data profiling, process flow, metric logging, and error handling. Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board. Develop and ensure adherence to published system architectural decisions and development standards. Multi-task across several ongoing projects and daily duties of varying priorities as required. Interact with global technical teams to communicate business requirements and collaboratively build data solutions.
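
The posting asks for hands-on experience setting up CI/CD pipelines with YAML; a minimal GitHub Actions sketch for shipping an ETL script to S3 (for example, a Glue job script) is shown below. The IAM role ARN, bucket, region, and file paths are placeholders, not details from the posting.

```yaml
# Illustrative GitHub Actions workflow that ships an ETL script to S3 (for example
# a Glue job script); the role ARN, bucket, region, and paths are placeholders.
name: deploy-etl-script
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # needed for OIDC-based AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/example-deploy-role
          aws-region: ap-south-1
      - name: Upload the Glue job script
        run: aws s3 cp etl/job.py s3://example-etl-artifacts/glue/job.py
```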

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Reference # 312681BR. Job Type: Full Time.

Your role: Are you an analytic thinker? Do you enjoy designing and building cloud platforms and running infrastructure services? Do you want to play a key role in transforming our firm into an agile organization? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative. We’re looking for an Infrastructure Engineer for Database products, aligned to our cloud development and delivery of cloud products on Azure, to: engage in and improve the whole lifecycle of Database Services MS SQL products & tools for the infrastructure services in Azure and on-premises – this includes inception, design, curation and engineering through development, deployment and refinement; apply a broad range of engineering practices, from analyzing requirements and developing new features to automated testing, deployment, and operations; ensure the quality, security, reliability, and compliance of solutions by applying our digital principles and implementing both functional and non-functional requirements; learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind; work in an Agile model with the Scrum method and SRE principles to deliver reliable and efficient platforms.

Your team: In our agile operating model, crews are aligned to larger products and services fulfilling client needs and encompass multiple autonomous pods. You’ll be working in the Hosting Services – Database team in Poland / Pune, India / Navi Mumbai, India (based on your primary work location), focusing on managed database offerings comprising product engineering, curation and deployments for MS SQL databases and other emerging database products & tools on-premises and in Azure. Our products are consumed by business-aligned technology and app-dev teams across business divisions, globally.

Your expertise: Bachelor's or master’s degree or equivalent focusing on engineering; development, curation and deployment of MS SQL database products & tools; proficient with the MS SQL product on Azure Cloud, PowerShell, YAML, ADO pipelines; proficient in C#, T-SQL, PSQL and Bash; proficient with Infrastructure as Code, Git and building CI/CD pipelines in the cloud; experience as an infrastructure engineer focused on database technology, with proficiency in underlying infrastructure and operating systems; experienced with Infrastructure as Code with Terraform / ARM; good experience and exposure to database/infrastructure resiliency, backup and DR designs; confident communicator who can explain technology to non-technical audiences.

About Us: UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us: At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs.
From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Reference # 312672BR. Job Type: Full Time.

Your role: Are you an analytic thinker? Do you enjoy designing and building database products across on-premises and cloud environments and running the infrastructure services? Do you want to play a key role in transforming our firm into an agile organization? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative. We’re looking for an Infrastructure Engineer for Database products, focused on development and engineering of tools for our on-premises and cloud database platforms, to: engage in and improve the whole lifecycle of Database Services Oracle products & tools for the infrastructure services on-premises and in the cloud – this includes inception, design, curation and engineering through development, deployment and refinement; apply a broad range of engineering practices, from analyzing requirements and developing new features to automated testing, deployment, and operations; ensure the quality, security, reliability, and compliance of solutions by applying our digital principles and implementing both functional and non-functional requirements; learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind; work in an Agile model with the Scrum method and SRE principles to deliver reliable and efficient platforms.

Your team: In our agile operating model, crews are aligned to larger products and services fulfilling client needs and encompass multiple autonomous pods. You’ll be working in the Hosting Services – Database Crew in Pune, India, focusing on managed database offerings comprising product engineering, curation and deployments for Oracle databases and tools on-premises and in Azure. Our products are consumed by business-aligned technology and app-dev teams across business divisions, globally.

Your expertise: Bachelor's or master’s degree or equivalent focusing on engineering; development, curation and deployment of Oracle database products & tools; must be an expert in Oracle 19c database administration; cloud experience, specifically on Azure; must be expert in YAML and Bash for automation on top of the Oracle DB platform for housekeeping, compliance and monitoring; proficient with Infrastructure as Code, Git, GitLab Runner and building CI/CD pipelines; experience developing templates using Ansible playbooks; development experience in Python modules; Azure tools like ARM and PIM (Privileged Identity Management).

About Us: UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us: At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone.
We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
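
As a hedged sketch of the YAML/Ansible housekeeping automation mentioned in this posting, the playbook below prunes old Oracle trace files. The host group, diagnostic path, and 14-day retention window are assumptions for illustration only.

```yaml
# Hedged housekeeping sketch: prune old Oracle trace files on database hosts.
# The host group, diagnostic path, and 14-day retention are assumptions.
- name: Oracle diagnostic housekeeping
  hosts: oracle_db_servers
  become: true
  tasks:
    - name: Find trace files older than 14 days
      ansible.builtin.find:
        paths: /u01/app/oracle/diag
        patterns:
          - "*.trc"
          - "*.trm"
        recurse: true
        age: 14d
      register: stale_traces

    - name: Remove the stale trace files
      ansible.builtin.file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ stale_traces.files }}"
```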

Posted 4 weeks ago

Apply

0 - 5 years

0 Lacs

Bengaluru, Karnataka

Work from Office


Thank you for your interest in working for our Company. Recruiting the right talent is crucial to our goals. On April 1, 2024, 3M Healthcare underwent a corporate spin-off leading to the creation of a new company named Solventum. We are still in the process of updating our Careers Page and applicant documents, which currently have 3M branding. Please bear with us. In the interim, our Privacy Policy here: https://www.solventum.com/en-us/home/legal/website-privacy-statement/applicant-privacy/ continues to apply to any personal information you submit, and the 3M-branded positions listed on our Careers Page are for Solventum positions. As it was with 3M, at Solventum all qualified applicants will receive consideration for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Job Description: Microservices and AWS Admin (Solventum). 3M Health Care is now Solventum. At Solventum, we enable better, smarter, safer healthcare to improve lives. As a new company with a long legacy of creating breakthrough solutions for our customers’ toughest challenges, we pioneer game-changing innovations at the intersection of health, material and data science that change patients' lives for the better while enabling healthcare professionals to perform at their best. Because people, and their wellbeing, are at the heart of every scientific advancement we pursue. We partner closely with the brightest minds in healthcare to ensure that every solution we create melds the latest technology with compassion and empathy. Because at Solventum, we never stop solving for you.

The Impact You’ll Make in this Role: As a Microservices and AWS Admin, you will have the opportunity to tap into your curiosity and collaborate with some of the most innovative and diverse people around the world. Here, you will make an impact by being responsible for provisioning production-grade infrastructure using CloudFormation, managing microservices deployments, and ensuring performance, scalability, and reliability in the AWS cloud environment.

Key Responsibilities: Set up and manage production-ready infrastructure for EKS, ECS, MSK and AWS CodePipeline. Manage and operate Kafka and AWS MSK clusters with EKS/ECS. Maintain and support CI/CD pipelines using Jenkins and AWS tools. Build, deploy and troubleshoot Docker-based applications on Kubernetes and ECS. Use CloudFormation for infrastructure automation. Automate tasks using JSON, YAML, shell scripts and Python. Integrate tools like Splunk and SonarQube for monitoring and quality checks.

Your Skills and Expertise: To set you up for success in this role from day one, Solventum requires (at a minimum) the following qualifications. We seek a skilled candidate with 3 to 5 years of expertise in AWS (EKS, ECS, MSK), Kubernetes, Kafka, Docker and CI/CD automation.
Strong experience with AWS services: EKS, ECS, MSK, CodePipeline and CloudFormation (must-have) Hands-on experience with Kafka and AWS MSK (must-have) Good knowledge of Kubernetes (on-prem and AWS) and Docker (must-have) Experience with CI/CD tools: Jenkins, GitHub/Bitbucket, AWS CodePipeline (must-have) Scripting skills in YAML or JSON (must-have) , Python and Shell (add-ons) Familiarity with monitoring tools: Splunk (Prometheus, Grafana as add-ons) Working knowledge of Linux OS, preferably RHEL Strong troubleshooting, communication, and team collaboration skills Additional qualifications that could help you succeed even further in this role include: AWS Certified Solution Architect Certified Kubernetes Administrator (CKA) Solventum is committed to maintaining the highest standards of integrity and professionalism in our recruitment process. Applicants must remain alert to fraudulent job postings and recruitment schemes that falsely claim to represent Solventum and seek to exploit job seekers. Please note that all email communications from Solventum regarding job opportunities with the company will be from an email with a domain of @solventum.com . Be wary of unsolicited emails or messages regarding Solventum job opportunities from emails with other email domains. Please note: your application may not be considered if you do not provide your education and work history, either by: 1) uploading a resume, or 2) entering the information into the application fields directly. Solventum Global Terms of Use and Privacy Statement Carefully read these Terms of Use before using this website. Your access to and use of this website and application for a job at Solventum are conditioned on your acceptance and compliance with these terms. Please access the linked document, select the country where you are applying for employment, and review. Before submitting your application you will be asked to confirm your agreement with the terms.
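
Given the emphasis on CloudFormation for infrastructure automation, here is a minimal CloudFormation YAML sketch that provisions an ECS cluster and a log group. Resource names and the retention period are illustrative assumptions, not values from the posting.

```yaml
# Minimal CloudFormation sketch: an ECS cluster plus a log group. Resource names
# and the retention period are illustrative, not taken from the posting.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example ECS cluster for container workloads
Resources:
  AppCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: example-app-cluster
  AppLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /ecs/example-app
      RetentionInDays: 30
```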

Posted 4 weeks ago

Apply

0 years

0 Lacs

India

On-site


Precisely is the leader in data integrity. We empower businesses to make more confident decisions based on trusted data through a unique combination of software, data enrichment products and strategic services. What does this mean to you? For starters, it means joining a company focused on delivering outstanding innovation and support that helps customers increase revenue, lower costs and reduce risk. In fact, Precisely powers better decisions for more than 12,000 global organizations, including 93 of the Fortune 100. Precisely's 2500 employees are unified by four company core values that are central to who we are and how we operate: Openness, Determination, Individuality, and Collaboration. We are committed to career development for our employees and offer opportunities for growth, learning and building community. With a "work from anywhere" culture, we celebrate diversity in a distributed environment with a presence in 30 countries as well as 20 offices in over 5 continents. Learn more about why it's an exciting time to join Precisely! Overview The Principal Level Technical Consultant will lead a team of cloud-native Professional Services engineers, driving technical excellence and innovative solutions for our clients. This role involves hands-on coding, production deployments in cloud environments, and addressing client requirements with a collaborative mindset. What You Will Do Your day-to-day tasks are likely to involve the following: Leadership (35%) - Team Lead: Lead and mentor a group of cloud-native engineers, fostering a collaborative and innovative environment. Client Interaction: Work directly with clients to understand their requirements, manage their expectations, and maintain regular communication. Follow industry-standard processes for releases. Technical (65%) - Cloud Deployments: Ensure high-quality production deployments in Cloud Services (AWS, Azure, or GCP). Hands-on Coding: Lead coding tasks using Scala, Java, SQL, and Python to develop and deploy solutions. Data Analysis: Utilize strong data analysis skills to identify trends, outliers, correlations, and reconciliation needs across large data sets. Cloud-Native Technologies: Implement and manage cloud-native technologies and platforms such as Spark, Kubernetes, Docker, containerization, microservices, Databricks, and Snowflake. Problem-Solving: Address technical challenges with innovative solutions and a collaborative mindset. Working Hours: Support USA clients with a login time of 4 PM to align with their working hours. What We Are Looking For Minimum 7 years of strong industry experience in Software Development or Professional Services/ Consulting in Cloud Native Technologies. A Bachelor's degree in Computer Engineering is required. A Master's degree is preferred. Debug and correct complex internal and external application problems. Extensive experience in software development with proficiency in writing code in Java, Python, Scala, YAML, AngularJS, JavaScript, Groovy. Proven track record of production deployments in cloud environments (AWS, Azure, GCP) using Cloud-Native technologies like Spark, Kubernetes, Docker, Container, and Microservice. Strong data analysis skills in SQL or NoSQL. Passion for technology with intellectual curiosity, great problem-solving skills, and a commitment to continuous improvement. Ability to learn new technologies and seek challenges. Demonstrated leadership skills with the ability to lead a team of cloud-native engineers. 
Excellent communication and collaboration skills to work effectively with clients and internal teams. Experience in Linux, Databases, Application Deployments, Change Management, Client Coordination, Ticketing Systems. Hands-on experience in different facets of application management. Experience in customer-facing roles is preferred. AWS, Databricks, or Snowflake Certification is good to have. Mandatory night shift (US EST hours) working - Support USA clients with a login time of 4 PM to align with their working hours. The personal data that you provide as a part of this job application will be handled in accordance with relevant laws. For more information about how Precisely handles the personal data of job applicants, please see the Precisely Global Applicant and Candidate Privacy Notice.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site


Strong .NET CI/CD experience with both YAML and Classic Editor pipelines. CI/CD infrastructure management using ARM or Bicep. Strong experience with IaaS and PaaS services. Strong knowledge of App Services (deploying app services, i.e. Function, Web, Logic and API Apps). Strong knowledge of APIM. Knowledge of provisioning Storage Accounts and Key Vault. Experience with monitoring tools: App Insights, Azure Monitor and Log Analytics. Extensive knowledge of RBAC access policy and experience provisioning access to specific resources. Experience with Microsoft PowerShell with a focus on Azure. Experience working in Agile/Scrum. Experience with Cosmos DB is a preferable skill. Strong experience with Azure networking concepts like VPN, private endpoints and custom domains (using ARM or Bicep).
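
To illustrate the "CI/CD infrastructure management using ARM or Bicep" requirement, below is a hedged Azure Pipelines sketch that deploys a Bicep template through the Azure CLI task. The service connection name, resource group, and template path are assumptions chosen for illustration.

```yaml
# Hedged Azure Pipelines sketch deploying a Bicep template via the Azure CLI task;
# the service connection, resource group, and template path are assumptions.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Deploy infrastructure from Bicep
    inputs:
      azureSubscription: example-service-connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az deployment group create \
          --resource-group rg-example-app \
          --template-file infra/main.bicep
```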

Posted 1 month ago

Apply

3 years

0 Lacs

Pune, Maharashtra, India

On-site


We're Hiring: Datadog Specialist. Experience: 3+ years.

Roles & Responsibilities: ✅ Customize and configure the Datadog agent YAML to enable various checks ✅ Build playbooks to automate agent installation & configuration ✅ Work with OpenTelemetry to extract key infrastructure metrics ✅ Modify application code to enable traces and spans ✅ Enable Digital Experience Monitoring for browser and mobile apps ✅ Create and manage API and browser synthetic tests ✅ Handle log ingestion, indexing, parsing, and exploration ✅ Set up pipelines, custom parsers, and archives for logs ✅ Apply Datadog tagging best practices for seamless filtering and grouping ✅ Integrate Datadog with various tools and technologies ✅ Design custom dashboards based on business/application needs ✅ Manage users, roles, and licensing within the platform ✅ Guide L1 & L2 teams in implementing Datadog solutions

Qualifications: 1. Bachelor’s degree in CS, IT, or a related field 2. Hands-on experience in Datadog setup and administration 3. Strong in Linux/Unix systems and scripting (Bash/Python) 4. Solid grasp of cloud components and DevOps tools (Ansible, Jenkins, Chef, etc.) 5. Excellent troubleshooting skills and ability to work across teams 6. Strong communication and collaboration skills

Interested candidates, send your resume to rakshita.prabhu@syngrowconsulting.com
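
As an example of the "Datadog agent YAML" configuration this role involves, here is a minimal HTTP check sketch as it might appear under conf.d/http_check.d/conf.yaml. The endpoint URL and tags are placeholders, not details from the posting.

```yaml
# Illustrative Datadog Agent check configuration, e.g. conf.d/http_check.d/conf.yaml;
# the endpoint URL and tags are placeholders.
init_config:

instances:
  - name: example-api-health
    url: https://api.example.internal/health
    timeout: 5
    tags:
      - env:staging
      - team:platform
```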

Posted 1 month ago

Apply

2 - 3 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Who We Are Brightly, the global leader in intelligent asset management solutions, enables organizations to transform the performance of their assets. Brightly’s sophisticated cloud-based platform leverages more than 20 years of data to deliver predictive insights that help users through the key phases of the entire asset lifecycle. More than 12,000 clients of every size worldwide depend on Brightly’s complete suite of intuitive software – including CMMS, EAM, Strategic Asset Management, IoT Remote Monitoring, Sustainability and Community Engagement. Paired with award-winning training, support and consulting services, Brightly helps light the way to a bright future with smarter assets and sustainable communities. About The Job Brightly Software continues to grow and needs amazing engineers. We are looking for a skilled Senior Software Engineer with a strong background in Geographic Information Systems (GIS) and expertise in Node.js. As an SSE in this role, you will be responsible for designing, developing, and maintaining advanced GIS-based software applications using Node.js to process, analyze, and visualize geospatial data. You will play a key role in building scalable, high-performance systems while collaborating with cross-functional teams to integrate geospatial data into modern web and backend systems. What You’ll Be Doing GIS Software Development: Lead the development of GIS solutions, leveraging Node.js to build APIs and backend services that interact with spatial data. Geospatial Data Integration: Integrate geospatial data sources (e.g., shapefiles, GeoJSON, raster data) into applications, ensuring accurate data processing and management. Node.js Application Development: Design and implement backend solutions using Node.js, focusing on performance, security, and scalability to process large geospatial datasets. API Development: Develop and maintain RESTful APIs that expose geospatial data and GIS services, ensuring smooth integration with other systems and frontend applications. Spatial Data Management: Optimize databases for handling geospatial data, utilizing PostGIS, MongoDB, or other spatial database technologies. Collaboration & Mentorship: Work closely with front-end developers, product managers, and other engineers to ensure GIS features meet business requirements. Mentor junior engineers, providing technical guidance and best practices. Performance Optimization: Continuously monitor and optimize GIS systems for performance, reliability, and data throughput. Documentation & Best Practices: Create and maintain clear technical documentation for geospatial APIs, services, and architectures. Ensure coding standards and best practices are followed. Continuous Learning: Stay up to date with emerging technologies and trends in both GIS and Node.js development to drive innovation in the team. Support our products, identify and fix root causes of production incidents, own troubleshooting and resolution of production issues across teams. Own discovery, solutioning, monitoring, incident resolution – imbibe and socialize DevOps mindset. Own product quality and work to quickly address production defects. Embed a DevOps mentality within the team. Serve as a senior member for your team as needed or special purpose projects per business priority. Identify & own coaching opportunities. Stay current with learning current trends in technology and mentor and guide junior engineers and interns. 
Partner with Tech Leads, architects, engineers, development managers, product managers, and agile coaches across the engineering practice in an agile environment, with scrum implemented at scale globally. Be a driver in continuous improvement processes through metrics and feedback. Welcome change and complexity. Learn quickly and adapt fast. Be a change leader!

What You Need: Bachelor’s or master’s degree in computer science, geospatial sciences, geography, engineering, or a related field. 5+ years of experience in software development, with at least 2-3 years focused on GIS applications. Strong experience with Node.js for backend development. Expertise in GIS technologies (ArcGIS, QGIS, GeoServer, MapServer, etc.) and spatial data formats (GeoJSON, KML, shapefiles). Experience with spatial databases (PostGIS, MongoDB with GeoJSON support, etc.). Solid understanding of RESTful API design and development. Advanced knowledge of Node.js and JavaScript (ES6+). Familiarity with geospatial libraries like GDAL, GeoPandas, Turf.js, or other JavaScript-based spatial libraries. Strong experience with version control systems like Git and agile development practices. Knowledge of cloud platforms (AWS, Azure) and deployment tools (Docker, Kubernetes) is a plus. Strong problem-solving abilities, with a focus on optimizing geospatial data handling and processing. Ability to communicate complex technical concepts to both technical and non-technical team members. A proactive team player who thrives in a collaborative environment. 3+ years’ experience with unit testing, mocking frameworks, and automation frameworks. DevOps mindset – 3+ years’ experience in a CI/CD, SDLC environment; implemented exception handling, logging, monitoring, performance measurement; knowledge of operational metrics. 3+ years’ experience working in agile methodologies (Scrum, Kanban). Strong communication, partnership, teamwork, and influencing skills required.

Technologies: Node.js / NestJS framework. Messaging frameworks (ActiveMQ/Kafka). SQL Server/MySQL/MongoDB or Postgres. JavaScript, jQuery, HTML, CSS. Dockerization and containerization. Reactive programming. Markup languages like XML/JSON/YAML. In-depth knowledge of version control tools like Git/Bitbucket. Expertise in GIS technologies (ArcGIS, QGIS, GeoServer, MapServer, etc.) and spatial data formats (GeoJSON, KML, shapefiles). Experience with spatial databases (PostGIS, MongoDB with GeoJSON support, etc.).

Bonus Points: OpenShift/Kubernetes. Open-source contributions, repositories, personal projects. Participation in communities of interest, meetups. Certifications in technology, agile methodologies. Prior experience in agile implemented at scale across multiple teams globally. JavaScript, jQuery, HTML, CSS.

The Brightly culture: Service. Ingenuity. Integrity. Together. These values are core to who we are and help us make the best decisions, manage change, and provide the foundations for our future. These guiding principles help us innovate, flourish and make a real impact in the businesses and communities we help to thrive. We are committed to the great experiences that nurture our employees and the people we serve while protecting the environments in which we live. Together we are Brightly.
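
A hedged sketch of a local development setup matching the stack above (a Node.js service beside a PostGIS database) is shown below as a docker-compose file. Image tags, ports, and credentials are illustrative assumptions, not details from the posting.

```yaml
# Minimal docker-compose sketch for local GIS development: a Node.js API beside a
# PostGIS database. Image tags, ports, and credentials are illustrative assumptions.
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://gis:gis@postgis:5432/gisdata
    depends_on:
      - postgis
  postgis:
    image: postgis/postgis:16-3.4
    environment:
      POSTGRES_USER: gis
      POSTGRES_PASSWORD: gis
      POSTGRES_DB: gisdata
```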

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Reference # 306675BR. Job Type: Full Time.

Your role: Are you an analytic thinker? Do you enjoy designing and building database products across on-premises and cloud environments and running the infrastructure services? Do you want to play a key role in transforming our firm into an agile organization? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative. We’re looking for an Infrastructure Engineer for Database products, focused on development and engineering of tools for our on-premises and cloud database platforms, to: engage in and improve the whole lifecycle of Database Services Oracle products & tools for the infrastructure services on-premises and in the cloud – this includes inception, design, curation and engineering through development, deployment and refinement; apply a broad range of engineering practices, from analyzing requirements and developing new features to automated testing, deployment, and operations; ensure the quality, security, reliability, and compliance of solutions by applying our digital principles and implementing both functional and non-functional requirements; learn new technologies and practices, reuse strategic platforms and standards, evaluate options, and make decisions with long-term sustainability in mind; work in an Agile model with the Scrum method and SRE principles to deliver reliable and efficient platforms.

Your team: In our agile operating model, crews are aligned to larger products and services fulfilling client needs and encompass multiple autonomous pods. You’ll be working in the Hosting Services – Database Crew in Pune, India, focusing on managed database offerings comprising product engineering, curation and deployments for Oracle databases and tools on-premises and in Azure. Our products are consumed by business-aligned technology and app-dev teams across business divisions, globally.

Your expertise: Bachelor's or master’s degree or equivalent focusing on engineering; development, curation and deployment of Oracle database products & tools; must be an expert in Oracle 19c database administration; cloud experience, specifically on Azure; must be expert in YAML and Bash for automation on top of the Oracle DB platform for housekeeping, compliance and monitoring; proficient with Infrastructure as Code, Git, GitLab Runner and building CI/CD pipelines; experience developing templates using Ansible playbooks; experience in Python modules; Azure tools like ARM and PIM (Privileged Identity Management); experience as an infrastructure engineer focused on database technology, with proficiency in underlying infrastructure and operating systems like Unix RHEL 7/8 and Solaris.

About Us: UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us: At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs.
From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 month ago

Apply

21 - 31 years

50 - 70 Lacs

Bengaluru

Work from Office


What we’re looking for As a member of the infrastructure team at Survey Monkey, you will have a direct impact in designing, engineering and maintaining our Cloud, Messaging and Observability Platform. Solutioning with best practices, deployment processes, architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers. What you’ll be working on Architect, build, and operate AWS environments at scale with well-established industry best practices. Automating infrastructure provisioning, DevOps, and/or continuous integration/delivery. Provide Technical Leadership & Mentorship Mentor and guide senior engineers to build technical expertise and drive a culture of excellence in software development. Foster collaboration within the engineering team, ensuring the adoption of best practices in coding, testing, and deployment. Review code and provide constructive feedback to ensure code quality and adherence to architectural principles. Collaboration & Cross-Functional Leadership Collaborate with cross-functional teams (Product, Security, and other Engineering teams) to drive the roadmap and ensure alignment with business objectives. Provide technical leadership in meetings and discussions, influencing key decisions on architecture, design, and implementation. Innovation & Continuous Improvement Propose, evaluate, and integrate new tools and technologies to improve the performance, security, and scalability of the cloud platform. Drive initiatives for optimizing cloud resource usage and reducing operational costs without compromising performance. Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems. Participate in on-call rotation. Support and partner with other teams on improving our observability systems to monitor site stability and performance We’d love to hear from people with: 12+ years of relevant professional experience with cloud platforms such as AWS, Heroku. Extensive experience leading design sessions and evolving well-architected environments in AWS at scale. Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/Yaml), and helm. Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, or Grafana/Prometheus, ELK (Elasticsearch/Logstash/Kibana). Experience with metrics and logging libraries and aggregators, data analysis and visualization tools – Specifically Splunk and Otel. Experience instrumenting PHP, Python, Java and Node.js applications to send metrics, traces, and logs to third-party Observability tooling. Experience with GitOps and tools like ArgoCD/fluxcd. Interest in Instrumentation and Optimization of Kubernetes Clusters. Ability to listen and partner to understand requirements, troubleshoot problems, or promote the adoption of platforms. Experience with GitHub/GitHub Actions/Jenkins/Gitlab in either a software engineering or DevOps environment. Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium. Preferably experience with secrets management, for example Hashicorp Vault. Preferably experience in an agile environment and JIRA. 
SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI - Hybrid
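For illustration only, here is a minimal sketch of the kind of GitOps configuration mentioned in the listing above: an Argo CD Application manifest written in YAML. The application name, repository URL, path and namespaces are hypothetical placeholders, not details from the job posting.

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: payments-service                 # hypothetical application name
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://example.com/infra/deployments.git   # placeholder repository
      targetRevision: main
      path: apps/payments-service
    destination:
      server: https://kubernetes.default.svc
      namespace: payments
    syncPolicy:
      automated:
        prune: true                        # remove resources deleted from Git
        selfHeal: true                     # revert drift detected in the cluster

With a manifest like this committed to Git, Argo CD keeps the cluster state in sync with the repository, which is the essence of the GitOps workflow the role describes.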

Posted 1 month ago

Apply

21 - 31 years

35 - 42 Lacs

Bengaluru

Work from Office

Naukri logo

What we're looking for
As a member of the Infrastructure team at SurveyMonkey, you will have a direct impact on designing, engineering and maintaining our Cloud, Messaging and Observability Platform. You will apply best practices to our deployment processes and architecture, and support the ongoing operation of our multi-tenant AWS environments. This role presents a prime opportunity for building world-class infrastructure, solving complex problems at scale, learning new technologies and offering mentorship to other engineers.

What you'll be working on
- Architect, build, and operate AWS environments at scale using well-established industry best practices.
- Automate infrastructure provisioning, DevOps, and continuous integration/delivery.
- Support and maintain AWS services, such as EKS, and Heroku.
- Write libraries and APIs that provide a simple, unified interface to other developers when they use our monitoring, logging, and event-processing systems.
- Support and partner with other teams on improving our observability systems to monitor site stability and performance (an illustrative collector configuration follows this listing).
- Work closely with developers in supporting new features and services.
- Work in a highly collaborative team environment.
- Participate in the on-call rotation.

We'd love to hear from people with:
- 8+ years of relevant professional experience with cloud platforms such as AWS or Heroku.
- Extensive experience with Terraform, Docker, Kubernetes, scripting (Bash/Python/YAML), and Helm.
- Experience with Splunk, OpenTelemetry, CloudWatch, or tools like New Relic, Datadog, Grafana/Prometheus, or the ELK stack (Elasticsearch/Logstash/Kibana).
- Experience with metrics and logging libraries and aggregators, data analysis and visualization tools, specifically Splunk and OTel.
- Experience instrumenting PHP, Python, Java and Node.js applications to send metrics, traces, and logs to third-party observability tooling.
- Experience with GitOps and tools like Argo CD or Flux CD.
- Interest in instrumentation and optimization of Kubernetes clusters.
- Ability to listen and partner to understand requirements, troubleshoot problems, and promote the adoption of platforms.
- Experience with GitHub/GitHub Actions/Jenkins/GitLab in either a software engineering or DevOps environment.
- Familiarity with databases and caching technologies, including PostgreSQL, MongoDB, Elasticsearch, Memcached, Redis, Kafka and Debezium.
- Preferably, experience with secrets management, for example HashiCorp Vault.
- Preferably, experience in an agile environment and JIRA.

SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, this opportunity is hybrid and requires you to work from the SurveyMonkey office in Bengaluru 3 days per week. #LI-Hybrid
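As a rough illustration of the observability work described above, the following is a minimal OpenTelemetry Collector configuration sketch in YAML. The pipeline layout is an assumption for illustration; in older Collector releases the debug exporter is named logging.

  receivers:
    otlp:                       # accept traces/metrics over the OTLP protocol
      protocols:
        grpc:
        http:

  processors:
    batch:                      # batch telemetry before export

  exporters:
    debug:                      # print received telemetry to the Collector log (useful for local testing)

  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug]

In practice the debug exporter would be replaced with backends such as Splunk, Datadog or Prometheus-compatible endpoints, matching the tools named in the listing.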

Posted 1 month ago

Apply

4 - 9 years

7 - 11 Lacs

Bengaluru

Work from Office

Naukri logo

Primary Skills
- Azure DevOps: Strong experience with Azure DevOps Services, including Pipelines, Repos, Artifacts, and Boards for CI/CD and project management.
- Terraform: Expertise in Infrastructure as Code (IaC) using Terraform for provisioning and managing Azure cloud resources efficiently.
- YAML: Proficiency in writing YAML-based Azure DevOps pipeline configurations for automated builds, tests, and deployments (a minimal pipeline sketch follows this listing).
- PowerShell & Scripting: Strong scripting skills using PowerShell for automation, configuration management, and system administration in Azure environments.
- CI/CD Pipelines: Hands-on experience in building and maintaining CI/CD pipelines using Azure DevOps Pipelines, GitHub Actions, or Jenkins.
- Azure Cloud Services: Experience with Azure Virtual Machines (VMs), Azure Kubernetes Service (AKS), Azure Functions, App Services, Storage Accounts, and Networking (VNet, Load Balancers).
- Containerization & Kubernetes: Experience in working with Docker containers and deploying applications on AKS.
- Security & Compliance: Understanding of Azure security best practices, IAM roles, Managed Identities, Key Vault, and Azure Policy.
- Monitoring & Logging: Hands-on experience with Azure Monitor, Application Insights, Log Analytics, and alerting mechanisms for cloud resource monitoring.
- Version Control Systems: Experience with Git-based repositories (Azure Repos, GitHub, GitLab) and branching strategies like GitFlow.

Secondary Skills
- ARM Templates & Bicep: Knowledge of Azure Resource Manager (ARM) templates and Bicep as alternatives to Terraform for infrastructure deployment.
- Ansible & Configuration Management: Experience in automating configurations using Ansible, Chef, or Puppet.
- Networking & Hybrid Cloud: Understanding of VPNs, ExpressRoute, Private Link, and hybrid cloud connectivity.
- Azure DevOps Security: Implementing security scanning tools like SonarQube, Snyk, and DevSecOps practices.
- Performance Tuning & Cost Optimization: Analyzing Azure workloads for cost efficiency and optimizing performance.
- Multi-Cloud Experience: Basic understanding of AWS or GCP alongside Azure.
- Agile & Scrum Methodologies: Working knowledge of Agile frameworks, sprint planning, and collaboration tools like Jira.
- Serverless & Event-Driven Architecture: Experience with Azure Functions, Event Grid, and Logic Apps for automation and event handling.
- API Management: Familiarity with Azure API Management (APIM) for securing and managing APIs.
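To make the YAML pipeline skill concrete, here is a minimal azure-pipelines.yml sketch; the build steps are illustrative assumptions and not taken from the job description.

  trigger:
    branches:
      include:
        - main                               # run the pipeline on pushes to main

  pool:
    vmImage: ubuntu-latest                   # Microsoft-hosted build agent

  steps:
    - script: echo "Restoring packages and building..."
      displayName: Build
    - script: echo "Running unit tests..."
      displayName: Test
    - task: PublishBuildArtifacts@1          # built-in Azure DevOps task
      inputs:
        PathtoPublish: $(Build.ArtifactStagingDirectory)
        ArtifactName: drop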

Posted 1 month ago

Apply

8 - 13 years

20 - 30 Lacs

Chennai

Hybrid

Naukri logo

Greetings from Encora Innovation Labs Pvt Ltd, a leading world-class product engineering company! Encora is looking for an Azure DevOps Lead with 8-12 years of experience in Azure DevOps, Terraform, CI/CD, ARM, YAML, Kubernetes and GCP. Please find the detailed job description and company profile below.

Position: Azure DevOps Lead
Experience: 8-12 years
Location: Chennai
Position Type: Full time
Qualification: Any graduate
Work Mode: WFO (Hybrid)

Your day-to-day role:
- Design and lead the implementation of cloud solutions using Microsoft Azure services.
- Participate in solutioning, re-design and architecture efforts to support complex services-integration ecosystems on the Azure cloud.
- Assess and recommend public and hybrid cloud solutions, including Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and Platform-as-a-Service (PaaS).
- Join an existing team of cloud experts and participate in developing Azure infrastructure solutions for a diverse array of existing and new customers.
- Be a key contributor and thought leader in the DevOps space, maintaining and improving current infrastructure and build automation services in Azure DevOps, and continuing to move the needle on CI/CD.
- Evaluate and recommend GCP services and components for various use cases, such as compute, storage, networking, and security.
- Develop cloud architecture blueprints, ensuring alignment with business goals and technical requirements.
- Monitor cloud infrastructure, identify performance bottlenecks, and implement solutions for optimization.
- Lead the planning and implementation of GCP infrastructure, considering best practices for security, performance, and reliability.
- Stay updated with the latest GCP features, trends, and industry best practices.

Requirements:
- Designed, developed and delivered scalable, highly available applications end to end.
- Expertise in Azure/Google Cloud Platform services and components, including Compute Engine, Kubernetes Engine, Storage and big-data components, preferably in GCP.
- Proven experience as an Azure/GCP architect or in a similar role, with a strong background in designing and implementing cloud solutions, preferably in GCP.
- Designed architecture and participated in ARB reviews.
- Strong programming skills in Python, Java, or other languages for creating custom solutions and automation.
- Has performed resource-utilisation monitoring and right-sizing, cost optimisation and performance optimisation exercises.
- Worked with event-driven architecture (such as Kafka), infrastructure auto-scaling and utilisation-based alerting.
- Experience designing and implementing CI/CD pipelines for cloud deployments (a short Cloud Build sketch follows this listing).
- Knowledge of cloud security principles and best practices for securing cloud environments.
- Familiarity with networking concepts and configuration in a cloud environment.
- Knowledge of microservices architecture and serverless computing.
- Infrastructure pricing estimation for new requirements.

We'd love to hear from you if you have:
- Multi-cloud experience (Azure & GCP).
- Set up security and code-quality scans.
- Worked closely with DevOps, SRE and Analytics teams.

Added advantage:
- Experience supporting system rationalization and modernization efforts.
- Experience supporting organization-level architecture development.
- Knowledge of planning, programming, budget, and execution, including acquisition and investment management processes.
- Ability to perform research to develop recommendations.
- Possession of excellent oral and written communication skills.

Communication:
- Facilitates team and stakeholder meetings effectively.
- Resolves and/or escalates issues in a timely fashion.
- Understands how to communicate difficult/sensitive information tactfully.
- Astute cross-cultural awareness and experience working with international teams (especially the US).

You should be speaking to us if:
- You are looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing and running a world-class IT organization.
- You like a job that brings a great deal of autonomy and decision-making latitude.
- You like working in an environment that is young, innovative and well established.
- You like to work in an organization that takes decisions quickly, is non-hierarchical and where you can make an impact.

Why Encora Innovation Labs?
Are you looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing and running a world-class IT product engineering organization? Encora Innovation Labs is a world-class SaaS technology product engineering company focused on transformational outcomes for leading-edge tech companies. Encora partners with fast-growing tech companies who are driving innovation and growth within their industries.

Who We Are: Encora is devoted to making the world a better place for clients, for our communities and for our people.

What We Do: We drive transformational outcomes for clients through our agile methods, micro-industry vertical expertise, and extraordinary people. We provide hi-tech, differentiated services in next-gen software engineering solutions including Big Data, Analytics, Machine Learning, IoT, Embedded, Mobile, AWS/Azure Cloud, UI/UX, and Test Automation to some of the leading technology companies in the world. Encora specializes in Data Governance, Digital Transformation, and Disruptive Technologies, helping clients to capitalize on their potential efficiencies. Encora has been an instrumental partner in the digital transformation journey of clients across a broad spectrum of industries: Health Tech, Fin Tech, Hi-Tech, Security, Digital Payments, Education Publication, Travel, Real Estate, Supply Chain and Logistics, and Emerging Technologies. Encora has successfully developed and delivered more than 2,000 products over the last few years and has led the transformation of a number of digital enterprises. Encora has over 25 offices and innovation centers in 20+ countries worldwide. Our international network ensures that clients receive seamless access to the complete range of our services and the expert knowledge and skills of professionals globally. Encora has global delivery centers and offices in the United States, Costa Rica, Mexico, United Kingdom, India, Malaysia, Singapore, Indonesia, Hong Kong, Philippines, Mauritius, and the Cayman Islands. Encora is certified as a Great Place to Work in India.

Please visit us at Website: encora.com | LinkedIn: EncoraInc | Facebook: @EncoraInc | Instagram: @EncoraInc

If you are looking for a career that challenges you to bring your knowledge and expertise to bear for designing, implementing and running a world-class IT product engineering organization, please share your updated resume at ravi.sankar@encora.com.
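As a rough sketch of a YAML-defined CI/CD pipeline on GCP (referenced in the requirements above), here is a minimal Cloud Build configuration; the image name is a placeholder, not a detail from the listing.

  steps:
    - name: gcr.io/cloud-builders/docker     # Cloud Build's Docker builder
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-app', '.']
    - name: gcr.io/cloud-builders/docker
      args: ['push', 'gcr.io/$PROJECT_ID/sample-app']
  images:
    - gcr.io/$PROJECT_ID/sample-app          # image recorded as a build artifact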

Posted 1 month ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

Lead development using React, TypeScript, Redux, and Webpack for the frontend. Build microservices and APIs using Java (Spring Boot, Vert.x) on the backend. Write YAML-based configuration files and leverage Python/Bash for automation and scripting.

Required candidate profile
Mandatory skills: frontend: React/TypeScript/Webpack/Redux; backend: Java/Spring Boot/Vert.x, plus YAML/Python/Bash. Minimum relevant experience: 5+ years. 5 days working from the office.

Posted 1 month ago

Apply

4 - 8 years

13 - 17 Lacs

Bengaluru

Work from Office

Naukri logo

Mission/Position Headline:
Responsible for the development and on-time delivery of software components in a project, translating software design into code in accordance with the product quality requirements while improving team productivity.

Areas of Responsibility:
- Analyzes requirements, translates them into design and drives estimation of the work product.
- Defines and implements the work breakdown structure for the development.
- Provides inputs for project management and effort tracking.
- Leads implementation and developer testing in the team.
- Supports engineers within the team with technical/technology/requirement/design expertise.
- Performs regular internal technical coordination and reviews with all relevant project stakeholders.
- Tests the work product; investigates and fixes software defects found through test and code review; submits work products for release after integration, ensuring requirements are addressed and deliverables are of high quality.
- Ensures integration and submission of the solution into the software configuration management system within committed delivery timelines.

Desired Experience:
- Proficiency in ASP.NET Core Web API, C#, .NET Core, WPF, Entity Framework, SQL Server 2022.
- Secondary skills: Docker, Kubernetes, Terraform, YAML.
- Strong knowledge of Git or any other equivalent source control.
- UI controls: Telerik.
- Nice to have: Python and knowledge of various CI/CD tools.

Qualification and Experience:
- Bachelor's or Master's degree in Computer Science/Electronics Engineering required, or equivalent.
- 12 to 15 years of experience in the software development lifecycle.

Capabilities:
- Good communication skills; self-motivated, quality- and result-oriented.
- Strong analytical and problem-solving skills.

Posted 1 month ago

Apply

3 - 5 years

15 - 20 Lacs

Pune

Work from Office

Naukri logo

Hello eager tech expert! To create a better future, you need to think outside the box. That's why we at Siemens need innovators who aren't afraid to push boundaries to join our diverse team of tech gurus. Got what it takes? Then help us create lasting, positive impact!

Working for Siemens Financial Services Information Technology (SFS IT), you will work on the continuous enhancement of our Siemens Credit Warehouse solution by translating business requirements into IT solutions and working hand in hand on their implementation with our interdisciplinary and international team of IT experts. The Siemens Credit Warehouse is a business-critical IT application that provides credit rating information and credit limits of our customers to all Siemens entities worldwide.

We are looking for an experienced Release Manager to become part of our Siemens Financial Services Information Technology (SFS IT) Data Management team. You will have a pivotal role in the moderation of all aspects related to release management of our Data Platform, liaising between the different stakeholders that range from senior management to our citizen developer community. Through your strong communication and presentation skills, coupled with your solid technical background and critical thinking, you are able to connect technical topics to a non-technical/management audience, leading the topics under your responsibility towards a positive outcome based on your naturally constructive approach.

You'll break new ground by:
- Leading topics across multiple stakeholders from different units in our organization (IT and Business).
- Actively listening to issues and problems faced by technical and non-technical members.
- Producing outstanding technical articles for documentation purposes.

You're excited to build on your existing expertise, including:
- A university degree in computer science, business information systems or a similar area of knowledge.
- At least 3 to 5 years' experience in a release manager role.
- A strong technical background with a proven track record in:
  - Data engineering and data warehousing, especially with Snowflake and dbt (open source), ideally dbt Cloud, allowing you to champion CI/CD processes (a minimal dbt properties-file sketch follows this listing).
  - End-to-end setup and development of release management processes (CI/CD) and concepts.
  - Azure DevOps (especially CI/CD and project setup optimization), GitHub and GitLab, including Git Bash.
  - Reading YAML code for Azure DevOps pipelines and error handling.
  - Very good programming skills in SQL (especially DDL and DML statements).
  - A generally good understanding of the Azure cloud tech stack (Azure Portal, Logic Apps, Synapse, Blob Containers, Kafka, clusters and streaming).
- A proven track record on AWS is a big plus.
- Experience in Terraform is a big plus.

Create a better #TomorrowWithUs! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Protecting the environment, conserving our natural resources, fostering the health and performance of our people as well as safeguarding their working conditions are core to our social and business commitment at Siemens.

This role is based in Pune/Mumbai. You'll also get to visit other locations in India and beyond, so you'll need to go where this journey takes you. In return, you'll get the chance to work with an international team and on global topics.
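For context on the dbt and YAML skills listed above, here is a minimal sketch of a dbt model properties file (often called schema.yml); the model and column names are hypothetical.

  version: 2

  models:
    - name: credit_limits                    # hypothetical model name
      description: Current credit limits per customer
      columns:
        - name: customer_id
          tests:
            - unique
            - not_null
        - name: credit_limit
          tests:
            - not_null

dbt reads this YAML to document the model and to generate the uniqueness and not-null tests that a release manager would expect to see passing in a CI/CD pipeline.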

Posted 1 month ago

Apply

2 - 6 years

11 - 16 Lacs

Pune

Work from Office

Naukri logo

Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a phenomenal addition to our vibrant team.

Siemens Mobility is an independently managed company of Siemens AG. Its core business includes rail vehicles, rail automation and electrification solutions, turnkey systems, intelligent road traffic technology and related services. The Information Technology (IT) department has global responsibility for the internal IT of Siemens Mobility. Its goal is to provide a robust and efficient IT landscape derived from business and market demands.

Your personality and individuality make the difference. In our team, we increase business performance and point the way into the digital age. Is that exactly your thing? Then live your passion in a cross-location team in which you can actively craft the future of our company. You open up new possibilities for our customers with your competence. Connected with this is an exciting career path that leads you to ever new projects and solutions in the field of IT for Siemens Mobility.

We are looking for a Senior AI Developer.

You'll make a difference by:

Core AI capabilities
- Expertise in text understanding and generation.
- Development of complex, agentic AI services.
- Implementation of semantic search capabilities.
- Development of RAG (Retrieval-Augmented Generation) variants.

Technical skills
- Implementation of Python-based prompt flows.
- LLM-based processing logic.
- JSON/YAML schema development (an illustrative configuration sketch follows this listing).
- Integration with multiple AI frameworks: AWS Bedrock, Azure Document Intelligence, Azure OpenAI, LangChain/LlamaIndex.

Quality & evaluation
- Design and implement AI evaluation pipelines.
- Implement quality metrics: G-Eval, faithfulness, answer correctness, answer relevance.
- Synthetic ground-truth data generation.
- Performance optimization and monitoring.

Use case development
- Text extraction and analysis.
- Document comparison capabilities.
- List generation from documents.
- Template-filling implementations.
- Chat function development.

You'll win us over by:
- Experience level: 6+ years.
- Very good English skills are required.

Join us and be yourself! We value your outstanding identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and build a better tomorrow with us. Make your mark in our exciting world at Siemens.

This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come. We're Siemens: a collection of over 379,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at www.siemens.com/careers and more about mobility at https://new.siemens.com/global/en/products/mobility.html
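Purely as an illustration of the JSON/YAML schema work mentioned above, here is a hypothetical evaluation-pipeline configuration; none of these keys belong to a specific framework, and every name and path is invented for this example.

  # hypothetical evaluation-pipeline config; not a real framework's schema
  evaluation:
    dataset: data/synthetic_ground_truth.jsonl   # placeholder path
    metrics:
      - g_eval
      - faithfulness
      - answer_correctness
      - answer_relevance
    llm:
      provider: azure_openai                     # illustrative value
      model: gpt-4o                              # illustrative value
    report:
      format: json
      output: reports/eval_results.json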

Posted 1 month ago

Apply

6 - 11 years

9 - 19 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Naukri logo

Position: Azure DevOps
Location: PAN India
Experience: 6+ years
Must-have skills: Azure, PowerShell, DevOps, YAML, CI/CD, SQL Server, Automation Testing, Bicep/Terraform

Posted 1 month ago

Apply

7 - 10 years

22 - 30 Lacs

Bengaluru

Remote

Naukri logo

We are conducting a weekend online drive for the role below on 17-May-2025. Available candidates, please respond.

Work Experience: 7-10 years
Work Mode: Remote work; laptop collection from office (travel/accommodation not provided); in-office presence required once every 6 months for 5 days (based on project need).

Job Description: We are hiring experienced .NET Full Stack Developers to join our dynamic team. Candidates must be capable of delivering robust enterprise applications using modern Microsoft technologies. Strong problem-solving and team collaboration skills are essential.

Mandatory Skills:
- C#, .NET Framework & .NET 6+
- SQL Server / T-SQL
- Microservices architecture, RESTful APIs
- ASP.NET MVC, JavaScript, HTML, CSS, XML, JSON
- Vue.js
- Cloud technologies (Azure preferred)
- Agile & Azure DevOps, Git, Unit Testing, TDD
- Strong analytical & communication skills

Preferred Skills:
- Angular, TypeScript, YAML
- gRPC, async/concurrent programming
- PowerShell, batch scripting

Note: Candidates must be available for the online interview drive on 17-May-2025 and should be ready to join immediately or within 30 days.

Posted 1 month ago

Apply

3 - 6 years

5 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
This person will look after the environment management of the Salesforce orgs and will also handle deployments across multiple Salesforce orgs.

Responsibilities
- Oversee Salesforce Data Cloud environments across development, staging, and production.
- Define best practices for environment setup, security, and governance.
- Manage data pipelines, ingestion processes, and harmonization rules for efficient data flow.
- Establish role-based access control (RBAC) to ensure data security and compliance.
- Monitor data processing jobs, ingestion performance, and data harmonization.
- Ensure compliance with GDPR, CCPA, and other data privacy regulations.
- Establish CI/CD pipelines using tools like Azure DevOps (a minimal pipeline sketch follows this listing).
- Implement version control and automated deployment strategies for Data Cloud configurations.
- Define a data refresh strategy for lower environments to maintain consistency.

Qualifications

Mandatory technical skills:
- Extensive experience in setting up, maintaining, and troubleshooting CI/CD pipelines for Salesforce apps.
- Strong knowledge of Azure DevOps tools and pipeline creation, with proficiency in automation scripting (primarily YAML, with additional languages as needed).
- Hands-on experience with SFDX, Azure Repos, and automated release deployments for Salesforce.
- Expertise in implementing Git branching strategies using VS Code integrated with the Salesforce CLI tool.

Mandatory skills:
- Proficiency in Salesforce Data Cloud architecture and best practices.
- Experience with data lake, Snowflake, or cloud-based data storage solutions.
- Familiarity with OAuth, authentication mechanisms, and data security standards.
- Salesforce Data Cloud Consultant Certification.
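As a rough sketch of the YAML pipeline automation described above, the snippet below outlines an Azure DevOps pipeline that validates a Salesforce deployment. The authentication step is omitted, and the exact Salesforce CLI commands and flags vary by CLI version, so treat them as illustrative assumptions.

  trigger:
    branches:
      include:
        - develop

  pool:
    vmImage: ubuntu-latest

  steps:
    - script: npm install --global sfdx-cli            # legacy sfdx CLI; newer setups use the sf CLI
      displayName: Install Salesforce CLI
    - script: |
        # org authentication (for example JWT-based) would happen here; details omitted
        sfdx force:source:deploy --sourcepath force-app --checkonly --targetusername $(SF_ORG_ALIAS)
      displayName: Validate deployment against the target org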

Posted 1 month ago

Apply

Exploring YAML Jobs in India

YAML (YAML Ain't Markup Language) has seen a surge in demand in the job market in India. Organizations are increasingly looking for professionals who are proficient in YAML to manage configuration files, create data structures, and more. If you are a job seeker interested in YAML roles in India, this article provides valuable insights to help you navigate the job market effectively.
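For readers new to the format, here is a small self-contained YAML configuration sketch showing the basic building blocks (mappings, sequences, and scalars); the keys and values are purely illustrative.

  # a hypothetical application configuration
  app:
    name: jobpe-demo              # string scalar
    debug: false                  # boolean scalar
    max_connections: 25           # integer scalar
  regions:                        # a sequence (list)
    - bangalore
    - pune
    - hyderabad
  database:
    host: db.example.com
    port: 5432
    password: null                # explicit null value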

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their vibrant tech scenes and have a high demand for YAML professionals.

Average Salary Range

The average salary range for YAML professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.

Career Path

In the YAML skill area, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually becoming a Tech Lead. Continuous learning and gaining hands-on experience with YAML will be crucial for career advancement.

Related Skills

Apart from YAML proficiency, other skills that are often expected or helpful alongside YAML include (a short Docker Compose sketch follows this list):
- Proficiency in scripting languages like Python or Ruby
- Experience with version control systems like Git
- Knowledge of containerization technologies like Docker
- Understanding of CI/CD pipelines
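Since containerization so often goes hand in hand with YAML, here is a minimal docker-compose.yml sketch; the service layout and image tags are illustrative assumptions.

  services:
    web:
      image: nginx:1.27                     # illustrative image tag
      ports:
        - "8080:80"
      depends_on:
        - api
    api:
      image: example/api:latest             # placeholder image
      environment:
        DATABASE_URL: postgres://db:5432/app
    db:
      image: postgres:16
      volumes:
        - db-data:/var/lib/postgresql/data

  volumes:
    db-data: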

Interview Questions

Here are 25 interview questions for YAML roles (a short snippet illustrating several of the basics follows the list):
- What is YAML and what are its advantages? (basic)
- Explain the difference between YAML and JSON. (basic)
- How can you include one YAML file in another? (medium)
- What is a YAML anchor? (medium)
- How can you create a multi-line string in YAML? (basic)
- Explain the difference between a sequence and a mapping in YAML. (medium)
- What is the difference between != and !== in YAML? (advanced)
- Provide an example of using YAML in a Kubernetes manifest file. (medium)
- How can you comment in YAML? (basic)
- What is a YAML alias and how is it used? (medium)
- Explain how to define a list in YAML. (basic)
- What is a YAML tag? (medium)
- How can you handle sensitive data in a YAML file? (medium)
- Explain the concept of anchors and references in YAML. (medium)
- How can you represent a null value in YAML? (basic)
- What is the significance of the --- at the beginning of a YAML file? (basic)
- How can you represent a boolean value in YAML? (basic)
- Explain the concept of scalars, sequences, and mappings in YAML. (medium)
- How can you create a complex data structure in YAML? (medium)
- What is the difference between << and & in YAML? (advanced)
- Provide an example of using YAML in an Ansible playbook. (medium)
- Explain what YAML anchors and aliases are used for. (medium)
- How can you control the indentation in a YAML file? (basic)
- What is a YAML directive? (advanced)
- How can you represent special characters in a YAML file? (medium)
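To ground several of the basic and medium questions above, here is a compact YAML snippet demonstrating comments, anchors and aliases, multi-line strings, booleans, nulls, and the document-start marker, followed by a second document containing a minimal Kubernetes Pod manifest; all names are illustrative.

  ---                                # document start marker
  defaults: &db_defaults             # &db_defaults defines an anchor
    adapter: postgresql
    pool: 5

  production:
    <<: *db_defaults                 # << merges the anchored mapping; *db_defaults is an alias
    host: prod.example.com
    ssl: true                        # boolean scalar
    replica: null                    # null value (can also be written as ~)

  release_notes: |                   # literal block scalar preserves line breaks
    Added YAML validation.
    Fixed anchor handling.
  ---                                # a second document in the same stream
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-pod                   # illustrative Pod name
  spec:
    containers:
      - name: app
        image: nginx:1.27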

Closing Remark

As you prepare for YAML job roles in India, remember to showcase your proficiency in YAML and related skills during interviews. Stay updated with the latest industry trends and continue to enhance your YAML expertise. With the right preparation and confidence, you can excel in the competitive job market for YAML professionals in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Featured Companies