1328 YAML Jobs - Page 49

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Lead frontend development using React, TypeScript, Redux, and Webpack. Build microservices and APIs using Java (Spring Boot, Vert.x) on the backend. Write YAML-based configuration files and leverage Python/Bash for automation and scripting.
Required Candidate Profile
Mandatory skills: frontend: React, TypeScript, Webpack, Redux; backend: Java, Spring Boot, Vert.x, YAML, Python, Bash. Minimum relevant experience: 5+ years. Five days a week working from the office.
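
Purely as context for the YAML skill named above, here is a minimal sketch of a YAML configuration file showing the basic constructs (mappings, nested mappings, lists, scalars); every key and value is hypothetical:

```yaml
# Hypothetical service configuration; all keys and values are illustrative.
service:
  name: orders-api        # scalar string
  port: 8080              # scalar integer
  debug: false            # scalar boolean

database:
  host: db.internal.example.com
  pool:
    min_connections: 5
    max_connections: 20

# A list of feature flags enabled for this deployment.
features:
  - rate-limiting
  - request-tracing
```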

Posted 2 months ago

Apply

10.0 - 15.0 years

20 - 30 Lacs

Gurugram

Hybrid

Position: Senior Network Engineer
Location: Gurugram, Haryana (direct-hire role)
Skills:
Firewall and SaaS: Palo Alto, Prisma Access
Load balancers and WAFs: F5 BIG-IP, Cloudflare, A10 Networks (optional)
Networking: Cisco, Arista, Aruba Silver Peak (SD-WAN)
DDoS: Cloudflare and Radware
Network observability: cPacket, Viavi, Wireshark, ThousandEyes, Grafana, Elasticsearch, Telegraf, Logstash
Clouds: AWS, Azure
Wireless: Cisco and Juniper Mist
Networking protocols: BGP, MP-BGP, OSPF, Multicast, MLAG, vPC, MSTP, Rapid-PVST+, LACP, mutual route redistribution, VXLAN, EVPN
Programming and automation: Python, JSON, Jinja, Ansible, YAML
Role & responsibilities: 10+ years of technical experience in networking, network security, and upgrades. Working understanding of open-standard networking protocols and the ability to identify and implement them at an enterprise level. Performs complex installations, upgrades, maintenance, and technical duties supporting operations of the internal and external network. Assists in the development and design of network/security policies, standards, guidelines, and procedures relevant to IT infrastructure and architecture. Communicates with the client, the team, and the NOC on a day-to-day basis to ensure quick turnaround times and resolutions and to maintain a robust environment.
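
As context for the Python/Jinja/Ansible/YAML automation stack this role lists, a minimal sketch of a network-automation playbook; the inventory group name and backup path are hypothetical, and it assumes the cisco.ios collection is installed and ansible_network_os is set in the inventory:

```yaml
# Hypothetical playbook: back up running configs from Cisco IOS devices.
- name: Back up network device configurations
  hosts: ios_switches          # inventory group; illustrative name
  gather_facts: false
  connection: ansible.netcommon.network_cli

  tasks:
    - name: Fetch the running configuration
      cisco.ios.ios_config:
        backup: true
        backup_options:
          dir_path: /var/backups/network   # illustrative path
```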

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad

Hybrid

Location: PAN India | Work Mode: Hybrid
Looking for an Azure DevOps developer with strong hands-on experience with Ansible.
Manage CI/CD pipelines using Azure DevOps, Git, and YAML pipelines. Automate build, release, and deployment processes across environments. Configure monitoring, alerting, and rollback strategies. Integrate with Kubernetes, Terraform, and Azure services (App Services, Key Vault, etc.). Collaborate with developers, testers, and cloud architects. Implement DevOps best practices for security and scalability. Troubleshoot pipeline issues and optimize performance.
Note: Looking for immediate joiners, or those able to join within 30 days at most.
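
For context on the "YAML pipelines" this listing refers to, a minimal Azure DevOps pipeline sketch combining a build stage with an Ansible deployment step; the script contents, playbook name, and inventory path are illustrative assumptions:

```yaml
# Hypothetical Azure DevOps pipeline: build, then run an Ansible playbook.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: build
        steps:
          - script: ./build.sh              # placeholder build step
            displayName: Build application

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: deploy
        steps:
          - script: ansible-playbook deploy.yml -i inventories/prod/hosts
            displayName: Run Ansible deployment playbook
```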

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Gurugram

Hybrid

Location: PAN India | Work Mode: Hybrid
Looking for an Azure DevOps developer with strong hands-on experience with Ansible.
Manage CI/CD pipelines using Azure DevOps, Git, and YAML pipelines. Automate build, release, and deployment processes across environments. Configure monitoring, alerting, and rollback strategies. Integrate with Kubernetes, Terraform, and Azure services (App Services, Key Vault, etc.). Collaborate with developers, testers, and cloud architects. Implement DevOps best practices for security and scalability. Troubleshoot pipeline issues and optimize performance.
Note: Looking for immediate joiners, or those able to join within 30 days at most.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Hybrid

Location: PAN India | Work Mode: Hybrid
Looking for an Azure DevOps developer with strong hands-on experience with Ansible.
Manage CI/CD pipelines using Azure DevOps, Git, and YAML pipelines. Automate build, release, and deployment processes across environments. Configure monitoring, alerting, and rollback strategies. Integrate with Kubernetes, Terraform, and Azure services (App Services, Key Vault, etc.). Collaborate with developers, testers, and cloud architects. Implement DevOps best practices for security and scalability. Troubleshoot pipeline issues and optimize performance.
Note: Looking for immediate joiners, or those able to join within 30 days at most.

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Greater Noida

Work from Office

Responsibilities:
* Design, implement, and optimize CI/CD pipelines using Python, Ansible, and YAML.
* Manage AWS infrastructure with Terraform and Jenkins.
* Collaborate on Git repositories hosted on Bitbucket and GitHub.

Posted 2 months ago

Apply

4.0 - 7.0 years

10 - 15 Lacs

Noida

Work from Office

As a Consultant in the Automation domain, you will be responsible for delivering automation use cases enabled by AI and Cloud technologies. In this role, you play a crucial part in building the next-generation autonomous networks. To develop efficient and scalable automation solutions, you will leverage your technical expertise, problem-solving abilities, and domain knowledge to drive innovation and efficiency.
You have: Bachelor's degree in Computer Science, Engineering, or a related field preferred, with 8-10+ years of experience in automation or telecommunications. Understanding of telecom network architecture, including core networks, OSS, and BSS ecosystems, along with industry frameworks like TM Forum Open APIs and eTOM. Practical experience in programming and scripting languages such as Python, Go, Java, or Bash, and automation tools like Terraform, Ansible, and Helm. Hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or ArgoCD, as well as containerization (Docker) and orchestration (Kubernetes, OpenShift).
It would be nice if you also had: Exposure to agile development methodologies and cross-functional collaboration. Experience with real-time monitoring tools (Prometheus, ELK Stack, OpenTelemetry, Grafana) and AI/ML for predictive automation and network optimization. Familiarity with GitOps methodologies and automation best practices for telecom environments.
Your responsibilities: Design, develop, test, and deploy automation scripts using languages such as Python, Go, Bash, or YAML. Automate the provisioning, configuration, and lifecycle management of network and cloud infrastructure. Design and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, or Tekton. Automate continuous integration, testing, deployment, and rollback mechanisms for cloud-native services. Implement real-time monitoring, logging, and tracing using tools such as Prometheus, Grafana, ELK, and OpenTelemetry. Develop AI/ML-driven observability solutions for predictive analytics and proactive fault resolution, integrating AI/ML models to enable predictive scaling. Automate self-healing mechanisms to remediate network and application failures. Collaborate with DevOps and Network Engineers to align automation with business goals.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Bengaluru

Work from Office

- Azure Infra & DevOps: Terraform, YAML, CI/CD (Azure DevOps, GitHub, Bamboo)
- Azure services, Helm, containers
- AWS-to-Azure migration
- Infra deployment, monitoring, troubleshooting
Required Candidate Profile
- Azure Infra, DevOps, Terraform, YAML, CI/CD, and AWS-to-Azure migration.
- Strong in containerized deployments, monitoring tools, and troubleshooting cloud environments.

Posted 2 months ago

Apply

3.0 - 7.0 years

6 - 11 Lacs

Mumbai

Work from Office

As a Consultant, you are responsible for developing application designs and providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: leading the design and construction of new mobile solutions using the latest technologies, always looking to add business value and meet user requirements; striving for continuous improvement by testing the built solution and working under an agile framework; and discovering and implementing the latest technology trends to maximize value and build creative solutions.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 3-5 years of experience designing and implementing automation for application build and deployment. Best practices to follow during Ansible playbook development. Automate the current deployment process using Ansible, e.g., deploying a WAR file (Java application) to Tomcat servers. Configure and manage inventories for different environments. Hands-on experience developing common (custom) roles to achieve specific functionality that can be reused across multiple applications.
Preferred technical and professional experience: Good understanding of the different types of variables. Good knowledge of Jinja templates and the Windows and Microsoft ecosystem (Windows, AD, Azure). Good knowledge of Ansible filters to manipulate variables. Strong knowledge of Unix/Linux. Strong knowledge of scripting tools: Shell, PowerShell, YAML.
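
A minimal sketch of the Ansible deployment this role describes: copying a WAR file to Tomcat hosts taken from a per-environment inventory. The group name, paths, and service name are hypothetical:

```yaml
# Hypothetical playbook: deploy a Java WAR to Tomcat servers.
- name: Deploy application WAR to Tomcat
  hosts: tomcat_servers            # group from an environment-specific inventory
  become: true

  vars:
    war_source: files/app.war              # illustrative artifact path
    tomcat_webapps: /opt/tomcat/webapps    # illustrative install path

  tasks:
    - name: Copy the WAR into the Tomcat webapps directory
      ansible.builtin.copy:
        src: "{{ war_source }}"
        dest: "{{ tomcat_webapps }}/app.war"
        owner: tomcat
        group: tomcat
        mode: "0644"
      notify: Restart Tomcat

  handlers:
    - name: Restart Tomcat
      ansible.builtin.service:
        name: tomcat                # service name varies by installation
        state: restarted
```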

Posted 2 months ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Your Role and Responsibilities: You will be part of a team working on FinOps-related products, with good hands-on experience across multiple cloud service providers and handling petabytes of their data.
Software Development & Cloud Engineering: Develop scalable, high-performance Java applications with a focus on reliability and security. Design and implement cloud-native solutions using AWS services (S3, SQS, DynamoDB, IAM, SNS). Build and manage containerized applications with Kubernetes and Docker. Write clean, maintainable code while following industry best practices.
DevOps & Automation: Automate CI/CD pipelines using GitHub Actions for seamless deployment. Define and manage Infrastructure as Code (IaC) using Terraform and YAML-based configurations. Improve monitoring, logging, and alerting for cloud-based applications.
Additionally: Collaborate with DevOps, security, and product teams to define system architecture. Mentor junior developers, conduct code reviews, and foster a learning-oriented culture. Utilize JIRA for Agile project tracking and sprint planning. Work with AWS, Azure, and GCP Identity and Access Management (IAM) to secure cloud environments.
Required education: Bachelor's Degree. Preferred education: Bachelor's Degree.
Required technical and professional expertise: 5+ years of experience in software development with strong experience in Java. Experience with AWS services (S3, SQS, DynamoDB, IAM, etc.). Hands-on experience with Kubernetes and Docker for container orchestration. Good understanding of cloud security and identity management (AWS, Azure, GCP).
Preferred technical and professional experience: Experience working with Terraform and YAML-based configurations for cloud infrastructure is an added advantage.
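
To illustrate the GitHub Actions CI/CD automation mentioned above, a minimal workflow sketch for a Java service; the Maven wrapper command and Java version are illustrative assumptions:

```yaml
# Hypothetical GitHub Actions workflow: build and test a Java service.
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: "17"
      - name: Build and run tests
        run: ./mvnw -B verify    # assumes a Maven wrapper in the repository
```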

Posted 2 months ago

Apply

6.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Primary Skills:
Proficient in writing, modularizing, and maintaining Terraform configurations for provisioning cloud infrastructure. Experience with Terraform state management (local and remote), workspaces, and backends (e.g., Azure Storage Account). Hands-on with Terraform modules, variables, outputs, and lifecycle rules. Familiarity with Terraform Cloud or Enterprise for collaboration and policy enforcement.
Expertise in designing and implementing CI/CD pipelines using Azure DevOps Pipelines (YAML and Classic). Integration of Terraform into Azure DevOps pipelines for automated infrastructure deployment. Use of pipeline stages, jobs, templates, and environments for structured deployments. Experience with pipeline triggers, approvals, and gated releases.
Strong understanding of core Azure services (e.g., Azure Resource Manager, Virtual Networks, Key Vault, App Services, AKS). Experience deploying and managing Azure resources using Terraform. Familiarity with Azure RBAC, service principals, and managed identities for secure automation.
Proficient in Git-based workflows (feature branching, pull requests, code reviews). Experience integrating Git repositories (Azure Repos, GitHub) with Azure DevOps pipelines. Implementing secure practices in IaC (e.g., secrets management via Azure Key Vault). Familiarity with tools like Sentinel, Checkov, or TFLint for policy-as-code and static analysis.
Secondary Skills:
Scripting with PowerShell, Bash, or Python for automation tasks. Integration with Azure Monitor, Log Analytics, and Application Insights. Basic knowledge of Docker and Kubernetes (especially AKS). Exposure to configuration management tools like Ansible or Chef. Experience working in Agile teams and understanding of DevOps principles. Awareness of cost-effective infrastructure design and Azure pricing models. Relevant certifications such as Azure Administrator Associate, Terraform Associate, or DevOps Engineer Expert.
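
A minimal sketch of integrating Terraform into a multi-stage Azure DevOps YAML pipeline, as this listing describes; backend configuration is assumed to exist elsewhere, and the environment name (where approvals and gates would be attached) is illustrative:

```yaml
# Hypothetical multi-stage pipeline: Terraform plan, then gated apply.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Plan
    jobs:
      - job: plan
        steps:
          - script: |
              terraform init    # backend settings assumed to be configured
              terraform plan
            displayName: Terraform plan

  - stage: Apply
    dependsOn: Plan
    jobs:
      - deployment: apply
        environment: production   # approvals and gates attach to this environment
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self  # deployment jobs do not check out code by default
                - script: |
                    terraform init
                    terraform apply -auto-approve
                  displayName: Terraform apply
```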

Posted 2 months ago

Apply

3.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.
FactSet is currently seeking a Cybersecurity Engineer with a focus on Security Automation and Orchestration to join the expanding Identity and Access Management team. The ideal candidate will have a strong passion for cybersecurity and the ability to collaborate with experts in access control, directory services, and compliance reporting to implement consistent automation tools for managing employee data and access.
Shift timings: Flexi shift.
Job Responsibilities: Collaborate with the Security Governance, Risk & Compliance team to turn compliance requirements into project plans and automation that consistently produce accurate data. Partner with the IAM directory services expert to implement an automation framework for distributed directory services. Assess pre-identified workflow opportunities for automation by evaluating resource optimization, complexity, cost-effectiveness, and feasibility, assisting management in pinpointing quick wins and contributing to their development. Craft technical and process documentation, such as deployment diagrams and account-creation flow charts. Examine account metadata, logging, and code repositories to enhance the accuracy of identity repositories. Generate reports to meet various audit and compliance needs. Assist in quarterly access reviews to ensure adherence to the principle of least privilege. Strengthen security controls and methodologies to meet industry best practices and emerging compliance requirements and to prevent threats. Participate in ongoing process-improvement initiatives to enhance the quality of security service delivery and mitigate risk.
Job Requirements: Bachelor's or Master's degree in Computer Science or equivalent. 3-4 years of development experience and a good understanding of security basics. Experience developing in programming stacks including, but not limited to, SQL, Python, PowerShell, JavaScript, shell scripting, REST APIs, and YAML. Solid understanding of the HTTP protocol and well-versed in REST API development. Prior experience with orchestration/automation solutions would be helpful. Experience with identity repositories such as Active Directory, Okta, Azure AD, and SailPoint, especially with access logs. General understanding of authentication, authorization, role-based access, least privilege, and segregation-of-duties concepts. Experience writing process documentation (e.g., flow charts for account creation). Good problem-solving skills and strong debugging skills. Experience as an administrator of at least one Identity & Access Management system (e.g., Okta, SailPoint, local Active Directory domain, ADFS, Azure AD, AWS IAM, CyberArk, Centrify, BeyondTrust, or other products in those families). Written and oral communication skills and the ability to engage partner teams in driving conversations. Strong team player with the ability to provide on-the-job training and knowledge sharing to other engineers. Self-initiative with strong time management. Must be willing to work a shifting schedule.
Diversity: At FactSet, we celebrate diversity of thought, experience, and perspective. We are committed to disrupting bias and a transparent hiring process. All qualified applicants will be considered for employment regardless of race, color, ancestry, ethnicity, religion, sex, national origin, gender expression, sexual orientation, age, citizenship, marital status, disability, gender identity, family status, or veteran status. FactSet participates in E-Verify.
Return to Work: Returning from a break? We are here to support you! If you have taken time out of the workforce and are looking to return, we encourage you to apply and chat with our recruiters about the available support to help you relaunch your career.
Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

Posted 2 months ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a hands-on MLOps & CI/CD Engineer to join our fast-paced engineering team. This role is ideal for professionals passionate about deploying and maintaining AI/ML pipelines while enabling robust CI/CD practices across engineering projects. You will be responsible for managing the full lifecycle of machine learning models, from experimentation to deployment and monitoring, while also playing a critical role in designing and managing end-to-end CI/CD pipelines for all software components, ensuring high developer productivity and system reliability.
Key Responsibilities
MLOps Responsibilities: Build and maintain scalable MLOps pipelines using tools like MLflow, Azure ML, or Kubeflow. Operationalise AI agents and LLM-based solutions into production-ready workflows. Deploy and maintain Retrieval-Augmented Generation (RAG) pipelines and manage vector databases. Implement model monitoring, drift detection, and retraining workflows. Collaborate with AI leads to integrate ML models via secure, optimised APIs.
CI/CD & DevOps Responsibilities: Design and implement CI/CD pipelines for web applications, APIs, agents, and bots. Automate build, test, and deployment processes using Azure DevOps or GitHub Actions. Manage infrastructure through Infrastructure as Code (IaC) using tools like Terraform, Bicep, or ARM templates. Standardise deployment practices across dev, staging, and production environments. Implement containerisation strategies using Docker and orchestrate using Kubernetes (AKS). Ensure CI/CD processes include automated testing, version control, rollback strategies, and promotion policies. Enhance system observability with robust logging, monitoring, and alerting using Azure Monitor, Prometheus, or Grafana.
Required Qualifications: Minimum 3 years of experience in MLOps, DevOps, or CI/CD-focused roles. Proficiency in Python and YAML scripting for workflow automation. Strong hands-on experience with Azure Cloud (preferred) or AWS/GCP. Familiarity with ML tools such as MLflow, Azure ML, or Kubeflow. Experience in building and maintaining CI/CD pipelines using Azure DevOps or GitHub Actions. Proficient in Docker, Kubernetes (preferably AKS), and secure container practices. Good understanding of RESTful APIs, web services, and microservices architecture. Experience with Git workflows; GitOps is a plus.
Bonus Points: Experience with LangChain, LangGraph, or other agentic AI frameworks. Exposure to CI/CD for front-end frameworks like React or Next.js. Knowledge of security, compliance, and observability tools.

Posted 2 months ago

Apply

3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 800000 - Rs 1500000 (i.e., INR 8-15 LPA)
Min Experience: 3 years
Location: Mumbai, Navi Mumbai, Pune
Job Type: Full-time
Travel: Based on the requirements of assignments/projects.
Notice Period: Immediate joiner or within 30 days.
Requirements
Experience Required: 3-5 years of hands-on experience with Azure DevOps. Strong background in Azure DevOps and its APIs, and Python. Proficiency with CI/CD processes, YAML, Bash, PowerShell, and Terraform. Experience with microservice architecture. Proficiency in Docker and container orchestration tools. Familiarity with Git version control and Linux environments. Nice to have: Knowledge of C#.NET.
Essential Skills: Strong interpersonal and collaborative skills. Analytical mindset with problem-solving abilities. Ability to adapt and thrive in a dynamic work environment. Effective communication skills and a self-driven attitude.
Key Responsibilities: Design, implement, and maintain virtualization and CI/CD infrastructure. Participate in project delivery, ensuring high-quality and reliable solutions. Support, maintain, and document DevOps functionality for modules or entire projects. Collaborate with teams to achieve technical, functional, and organizational goals. Perform necessary analysis, coding, testing, documentation, and deployment activities. Monitor infrastructure using frameworks like Prometheus, Grafana, and Azure Monitor. Implement cloud networking and security best practices, including identity management (e.g., Entra ID, SSO, OAuth2). Apply SRE principles, including SLAs, SLOs, and error budgets. Contribute to cost management and optimization in cloud environments. Develop strong scripting solutions using Python, Bash, and PowerShell. Work effectively in agile, cross-functional teams. Drive cost optimization strategies through rightsizing, auto-scaling, and resource governance. Integrate DevSecOps principles into deployment pipelines (e.g., SonarCloud, Mend, Invicti). Support multi-tenant, multi-environment strategies across staging, pre-production, and production. Enhance cloud platform automation and improve service resilience.
Required Skills: DevSecOps, Azure DevOps, CI/CD Pipeline

Posted 2 months ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About The Role
Grade Level (for internal use): 11
The Team: The Infrastructure team is a global team split across the US, Canada, and the UK. The team is responsible for building and maintaining the platforms used by Index Management teams to calculate and rebalance our high-profile indices.
The Impact: You will be responsible for the development and expansion of the platforms that calculate and rebalance indices for S&P Dow Jones Indices, ensuring that relevant teams have continuous access to up-to-date benchmarks and indices.
What's In It For You: In this role, you will be a key player in the Infrastructure Engineering team, where you will manage the automation of systems administration in the AWS Cloud environment used for running index applications. You will build solutions to automate resource provisioning and administration of infrastructure in AWS Cloud for our index applications. There will also be a smaller element of L3 support for developers when they have more complex queries to address.
Responsibilities: Create DevOps pipelines to deliver Infrastructure as Code. Build workflows to create immutable infrastructure in AWS. Develop automation for provisioning compute instances and storage. Build AMI images using Packer. Develop Ansible playbooks and automate execution of routine Linux scripts. Provision resources in AWS using CloudFormation templates. Deploy immutable infrastructure in AWS using Terraform. Orchestrate container deployment. Configure Security Groups, Roles, and IAM Policies in AWS. Monitor infrastructure and develop utilization reports. Implement and maintain version control systems, configuration management tools, and other DevOps-related technologies. Design and implement automation tools and frameworks for continuous integration, delivery, and deployment. Develop and write scripts for pipeline automation using relevant scripting languages like Groovy and YAML. Configure continuous delivery workflows for various environments, e.g., development, staging, and production. Use Jenkins to create pipelines: groups of events or jobs that are interlinked with one another in a sequence. Evaluate new AWS services and solutions. Integrate application build and deployment scripts with Jenkins. Troubleshoot production issues. Effectively interact with global customers, business users, and IT employees.
Basic Qualifications: Bachelor's degree in Computer Science, Information Systems, or Engineering, an equivalent qualification, or relevant equivalent work experience. RedHat Linux and AWS certifications preferred. Strong experience in infrastructure engineering and automation. Very good experience in AWS Cloud systems administration. Experience in developing Ansible scripts and Jenkins integration. Expertise using DevOps tools (Jenkins, Terraform, Packer, Ansible, GitHub, Artifactory). Expertise in the different automation tools used to develop CI/CD pipelines. Proficiency in Jenkins and Groovy for creating dynamic and responsive CI/CD pipelines. Good experience in RedHat Linux scripting. First-class communication skills: written, verbal, and presenting.
Preferred Qualifications: Candidates should have at least 10 years of industry experience in cloud and infrastructure. Administer RedHat Linux operating systems. Deploy OS patches and perform upgrades. Configure filesystems and allocate storage. Develop Unix scripts. Develop scripts for automation of infrastructure provisioning. Monitor infrastructure and develop utilization reports. Evaluate new AWS services and solutions. Experience working with customers to diagnose a problem and work toward resolution. Excellent verbal and written communication skills. Understanding of various load balancers in a large data center environment.
About S&P Global Dow Jones Indices: At S&P Dow Jones Indices, we provide iconic and innovative index solutions backed by unparalleled expertise across the asset-class spectrum. By bringing transparency to the global capital markets, we empower investors everywhere to make decisions with conviction. We're the largest global resource for index-based concepts, data, and research, and home to iconic financial market indicators, such as the S&P 500® and the Dow Jones Industrial Average®. More assets are invested in products based upon our indices than any other index provider in the world. With over USD 7.4 trillion in passively managed assets linked to our indices and over USD 11.3 trillion benchmarked to our indices, our solutions are widely considered indispensable in tracking market performance, evaluating portfolios, and developing investment strategies. S&P Dow Jones Indices is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics, and workflow solutions in the global capital, commodity, and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/spdji.
What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include: Health & Wellness: health care coverage designed for the mind and body. Flexible Downtime: generous time off helps keep you energized for your time on. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you; S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries
Inclusive Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 301292
Posted On: 2025-02-26
Location: Mumbai, Maharashtra, India
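
As an illustration of the "CloudFormation templates" responsibility above, a minimal template sketch provisioning a single resource; the logical ID and tag values are hypothetical:

```yaml
# Hypothetical CloudFormation template: provision a versioned S3 bucket.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example of provisioning storage as code.

Resources:
  ArtifactBucket:                  # logical ID; illustrative
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
      Tags:
        - Key: team
          Value: index-infrastructure   # illustrative tag

Outputs:
  BucketName:
    Value: !Ref ArtifactBucket
```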

Posted 2 months ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Pune, Mumbai (All Areas)

Hybrid

Job Opening: DevOps Engineer (Azure)
Experience: 5-8 Years
Location: Hybrid / Mumbai or Pune
Employment Type: Full-Time
About the Role: We are looking for a skilled DevOps Engineer to join our team and take charge of designing, implementing, and maintaining Azure cloud infrastructure for our new engineering infrastructure platform. You will work closely with international development and operations teams to drive seamless CI/CD integration and delivery of engineering applications. This role demands a strong background in Infrastructure as Code (IaC) and software engineering, with a focus on automation, scalability, and reliability. You will also be responsible for user training and ongoing system support.
Key Responsibilities: Design, implement, and manage Azure cloud infrastructure and managed instances. Develop and maintain Infrastructure as Code (IaC) using Terraform. Build and integrate CI/CD pipelines in collaboration with development teams. Automate deployment processes for improved system reliability and scalability. Monitor, troubleshoot, and resolve system performance and security issues. Implement best practices for branch strategy, configuration management, and version control. Ensure smooth and efficient software delivery across cross-functional teams. Architect and maintain microservices and Docker containerized applications. Use Azure CLI/PowerShell, Bicep, and YAML for cloud provisioning and scripting.
Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 5-8 years of experience in a DevOps Engineer or similar role. Expertise in Azure cloud infrastructure, especially Azure Container Services. Proficiency in Terraform, Azure CLI/PowerShell, Bicep, and YAML. Proven experience with microservices architecture and Docker containers. Mandatory: strong development experience in C#. Preferred: experience with JavaScript. Familiarity with Azure DevOps tools. Strong analytical, problem-solving, and communication skills.

Posted 2 months ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Roles & Responsibilities
GitHub Actions & CI/CD Workflows (Primary Focus): Design, develop, and maintain scalable CI/CD pipelines using GitHub Actions. Create reusable and modular workflow templates using composite actions and reusable workflows (see the sketch below). Manage and optimize GitHub self-hosted runners, including autoscaling and hardening. Monitor and enhance CI/CD performance with caching, parallelism, and proper dependency management. Review and analyze existing Azure DevOps pipeline templates. Migrate Azure DevOps YAML pipelines to GitHub Actions, adapting tasks to equivalent GitHub workflows.
Azure Kubernetes Service (AKS): Deploy and manage containerized workloads on AKS. Implement cluster- and pod-level autoscaling, ensuring performance and cost-efficiency. Ensure high availability, security, and networking configurations for AKS clusters. Automate infrastructure provisioning using Terraform or other IaC tools.
Azure DevOps: Design and build scalable YAML-based Azure DevOps pipelines. Maintain and support Azure Pipelines for legacy or hybrid CI/CD environments.
ArgoCD & GitOps: Implement and manage GitOps workflows using ArgoCD. Configure and manage ArgoCD applications to sync AKS deployments from Git repositories. Enforce secure, auditable, and automated deployment strategies via GitOps.
Collaboration & Best Practices: Collaborate with developers and platform engineers to integrate DevOps best practices across teams. Document workflow standards, pipeline configurations, infrastructure setup, and runbooks. Promote observability, automation, and DevSecOps principles throughout the lifecycle.
Must-Have Skills: 8+ years of overall IT experience, with at least 5+ years in DevOps roles. 3+ years of hands-on experience with GitHub Actions (including reusable workflows, composite actions, and self-hosted runners). 2+ years of experience with AKS, including autoscaling, networking, and security. Strong proficiency in CI/CD pipeline design and automation. Experience with ArgoCD and GitOps workflows. Hands-on with Terraform, ARM, or Bicep for IaC. Working knowledge of Azure DevOps pipelines and YAML configurations. Proficient in Docker, Bash, and at least one scripting language (Python preferred). Experience managing secure and auditable deployments in enterprise environments.
Good-to-Have Skills: Exposure to monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with service meshes like Istio or Linkerd. Experience with secrets management (e.g., HashiCorp Vault, Azure Key Vault). Understanding of RBAC, OIDC, and SSO integrations in Kubernetes environments. Knowledge of Helm and custom chart development. Certifications in Azure, Kubernetes, or DevOps practices.
Skills: GitHub Actions & CI/CD, Azure Kubernetes Service, ArgoCD & GitOps, DevOps
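
A minimal sketch of the reusable-workflow pattern this role centers on: a shared workflow exposed via workflow_call and a caller that invokes it. The repository, file, and input names are all hypothetical (the two YAML documents below would live in separate files):

```yaml
# Hypothetical reusable workflow (.github/workflows/build.yml in a shared repo).
name: reusable-build

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh "${{ inputs.environment }}"   # illustrative build step

---
# Hypothetical caller workflow in a consuming repository.
name: deploy

on:
  push:
    branches: [main]

jobs:
  call-build:
    uses: my-org/shared-workflows/.github/workflows/build.yml@main  # illustrative ref
    with:
      environment: production
```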

Posted 2 months ago

Apply

4.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Vadodara)
Placement Type: Full-time Permanent Position
(Note: This is a requirement for one of Uplers' clients - Jeavio)
What do you need for this opportunity?
Must-have skills: Azure (Microsoft Azure), Kubernetes, PowerShell scripting, Bash shell scripting
Jeavio is looking for:
Job Description
Position Overview: The DevOps Engineer will lead the design, implementation, and maintenance of robust CI/CD pipelines, cloud infrastructure (AWS/Azure/GCP), containerized applications, and infrastructure automation solutions. The role requires advanced technical expertise in DevOps tools and scripting, infrastructure as code (Terraform), monitoring systems, and cloud optimization, with a strong focus on performance, reliability, and team collaboration.
Key Responsibilities: Lead the development and management of CI/CD pipelines to automate build, test, and release processes. Design, implement, and manage cloud infrastructure using Infrastructure as Code (Terraform). Oversee containerization and orchestration efforts using Docker and Kubernetes. Manage and deploy Java-based applications in production environments. Implement system monitoring and alerting to ensure high system availability and reliability. Collaborate with development, QA, and SecOps teams to establish seamless workflows. Drive automation of manual processes to increase engineering productivity. Optimize cloud resource usage to reduce operational costs. Ensure compliance with internal security and regulatory standards. Continuously evaluate and integrate new technologies to improve DevOps practices.
Requirements
Required Skills: Proficient with Git and CLI-based version control workflows. Proficient in scripting languages (Bash, PowerShell) and YAML. Programming experience in Python, JavaScript/TypeScript, or C#. Strong expertise with cloud platforms (AWS, Azure, or GCP). Deep understanding of CI/CD tools (Azure Pipelines, GitHub Actions, Jenkins). Proficient with Docker and container orchestration via Kubernetes. Hands-on experience with Terraform and configuration management tools (Ansible, Chef, Puppet).
Nice-to-Have Skills: Basic DBA skills are a plus, though not required. Familiarity with Azure Boards, Jira, or GitLab Issues for work management. Experience working in Linux-based environments. Proven experience optimizing cloud costs and infrastructure performance.
Key Behaviors: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work independently and lead cross-functional initiatives. Adaptable to change and passionate about continuous improvement.
Required Qualifications: Bachelor's degree in Computer Science or a related field (or 5+ years of relevant IT experience in lieu of a degree). (Sr. DevOps Engineer) 7+ years of experience in IT, with significant experience as a DevOps engineer or manager. Strong leadership skills with a track record of leading cross-functional DevOps initiatives.
How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Azure DevOps CI/CD Engineer
Job Summary: We are looking for an experienced Azure DevOps CI/CD Engineer with expertise in YAML-based pipelines, Azure Functions, Cypress automation tests, APIM Terraform pipelines, application deployment, and CI/CD automation. The ideal candidate will design and manage CI/CD pipelines, ensuring smooth deployments and automation across cloud services.
Key Responsibilities: Develop and manage Azure DevOps CI/CD pipelines using YAML. Automate deployments for Azure Functions, App Services, and AKS. Integrate Cypress automation tests into CI/CD workflows. Implement APIM Terraform pipelines for API Management automation. Secure and optimize DevOps processes with Azure Key Vault and RBAC. Monitor pipelines and troubleshoot failures to ensure high availability. Automate builds and releases. Deploy applications to Azure App Services, AKS, and virtual machines. Manage web.config, appsettings.json, and other configuration files during deployments. Set up unit-testing frameworks (e.g., xUnit, NUnit, MSTest) in Azure DevOps pipelines. Handle NuGet package management and dependency resolution in CI/CD pipelines. Write YAML pipelines to automate .NET build, test, and deployment. Configure Application Insights for performance monitoring of applications. Implement logging strategies using Serilog, NLog, or built-in .NET logging frameworks. Troubleshoot issues using Azure Monitor and Log Analytics. Deploy .NET applications using Terraform, Bicep, or ARM templates. Handle authentication and authorization via Azure AD, Managed Identity, or OAuth in .NET applications. Securely store connection strings, API keys, and secrets using Azure Key Vault. Implement role-based access control (RBAC) for APIs and applications. Deploy and manage .NET Web APIs, gRPC services, and microservices. Manage the API lifecycle using Azure API Management (APIM). Use Docker and Kubernetes to containerize and orchestrate .NET services. Work with message queues (Azure Service Bus, RabbitMQ, Kafka) in applications. Proficiency in CI/CD pipeline configurations for frontend applications. Exposure to containerized deployments using Docker and Kubernetes. Ability to configure and manage build artifacts efficiently in Azure Pipelines. Experience with Azure Static Web Apps, Blob Storage, App Services, or AKS for deployments. Experience with frontend testing frameworks (Jest, Karma, Cypress, Playwright) would be an added advantage.
Required Skills: Strong experience in Azure DevOps Pipelines (YAML-based). Hands-on expertise in Azure Functions and serverless architectures. Experience with Terraform for infrastructure automation. Proficiency in Cypress test automation integration. Knowledge of Azure API Management (APIM) Terraform automation. Scripting skills in PowerShell, Bash, or Python. Expertise in Azure Monitor, Log Analytics, and Application Insights. Strong understanding of application deployment and CI/CD best practices.
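
To illustrate integrating Cypress tests into an Azure DevOps YAML pipeline, as described above, a minimal job sketch; it assumes a Node.js project with Cypress declared as a dev dependency:

```yaml
# Hypothetical pipeline job: install dependencies and run Cypress tests.
jobs:
  - job: e2e_tests
    pool:
      vmImage: ubuntu-latest
    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: "20.x"        # illustrative Node version
      - script: npm ci
        displayName: Install dependencies
      - script: npx cypress run      # headless run; Cypress config assumed in repo
        displayName: Run Cypress tests
```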

Posted 2 months ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Decision Science Practitioner Analyst, S&C GN
Management Level: Senior Analyst
Location: Bangalore/Kolkata
Must-have skills: Collibra Data Quality (data profiling, anomaly detection, reconciliation, data validation), Python, SQL
Good-to-have skills: PySpark, Kubernetes, Docker, Git
Job Summary: We are seeking a highly skilled and motivated Data Science cum Data Engineer Senior Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise and client delivery management to execute cutting-edge projects in data science and data engineering.
Key Responsibilities
Data Science and Engineering: Implement and manage end-to-end Data Quality frameworks using Collibra Data Quality (CDQ). This includes requirement gathering from the client, code development in SQL, unit testing, client demos, user acceptance testing, documentation, etc. Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases. Develop data validation, profiling, anomaly detection, and reconciliation processes. Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion. Deploy and manage solutions on Kubernetes workloads for scalable execution. Maintain comprehensive technical documentation of Data Quality processes and implemented solutions. Work in an Agile environment, leveraging Jira for sprint planning and task management. Troubleshoot data quality issues and collaborate with engineering teams for resolution. Provide insights for continuous improvement in data governance and quality processes. Build and manage robust data pipelines using PySpark and Python to read from and write to databases such as Vertica and PostgreSQL. Optimize and maintain existing pipelines for performance and reliability. Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality. Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation. Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations (see the sketch below). Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions.
Collaboration and Training: Communicate effectively with stakeholders to align technical implementations with business objectives. Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules.
Version Control and Documentation: Use Git for version control to manage code and collaborate effectively. Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing.
Database and Data Model Optimization: Design and optimize data models for efficient storage and retrieval.
Required Qualifications
Experience: 4+ years in data science
Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
Industry Knowledge: Experience in Consumer Tech, Enterprise Tech, or Semiconductors preferred, but not mandatory
Technical Skills
Programming: Proficiency in Python and SQL for data analysis and transformation. Tools: Hands-on experience with Collibra Data Quality (CDQ) or similar data quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama, etc.). Experience working with Kubernetes workloads. Experience with Agile methodologies and task tracking using Jira.
Preferred Skills: Strong analytical and problem-solving skills with a results-oriented mindset. Good communication, stakeholder management, and requirement-gathering capabilities.
Additional Information: The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients. This position is based at our Bengaluru (preferred) and Kolkata offices.
About Our Company | Accenture
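
As a sketch of the Kubernetes YAML work referenced above, a minimal Deployment manifest; the service name, image, and resource figures are hypothetical:

```yaml
# Hypothetical Kubernetes Deployment for a data-quality service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dq-service            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dq-service
  template:
    metadata:
      labels:
        app: dq-service
    spec:
      containers:
        - name: dq-service
          image: registry.example.com/dq-service:1.0.0   # illustrative image
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              memory: 1Gi
```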

Posted 2 months ago

Apply

5.0 - 31.0 years

1 - 1 Lacs

Bengaluru/Bangalore

Remote

JD: Terraform and YAML deployments on Azure. Good troubleshooting skills. Hands-on with Azure service deployments (infra + apps). Migration experience from AWS to Azure. Helm chart deployment. Argo CD, Bamboo, Tekton.
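
To illustrate the Argo CD piece of this stack, a minimal Application manifest sketch; the repository URL, chart path, and namespaces are hypothetical:

```yaml
# Hypothetical Argo CD Application: sync a Helm chart from Git into a cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                     # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/deployments.git   # illustrative repo
    targetRevision: main
    path: charts/my-app            # Helm chart directory; illustrative
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```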

Posted 2 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Ab Initio Data Engineer
We are looking for an Ab Initio Data Engineer able to design and build Ab Initio-based applications across the Data Integration, Governance & Quality domains for Compliance Risk programs. The individual will work with Technical Leads, Senior Solution Engineers, and prospective Application Managers to build applications, roll out and support production environments leveraging the Ab Initio tech stack, and ensure the overall success of their programs. The programs are high-visibility, fast-paced key initiatives, which generally aim to acquire and curate data and metadata across internal and external sources, provide analytical insights, and integrate with other Citi systems.
Technical Stack:
Ab Initio 4.0.x software suite: Co>Op, GDE, EME, BRE, Conduct>It, Express>It, Metadata>Hub, Query>It, Control>Center, Easy>Graph
Big Data: Cloudera Hadoop, Hive, Yarn
Databases: Oracle 11G/12C, Teradata, MongoDB, Snowflake
Others: JIRA, ServiceNow, Linux, SQL Developer, AutoSys, and Microsoft Office
Responsibilities: Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, and integrate with the portfolio of Ab Initio software. Build web-service and RESTful graphs and create RAML or Swagger documentation. Complete understanding and analytical ability regarding the Metadata Hub metamodel. Strong hands-on multifile-system-level programming, debugging, and optimization skills. Hands-on experience developing complex ETL applications. Good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyze data issues. Strong UNIX shell/Perl scripting. Build graphs interfacing with heterogeneous data sources: Oracle, Snowflake, Hadoop, Hive, AWS S3. Build application configurations for Express>It frameworks: Acquire>It, Spec-To-Graph, Data Quality Assessment. Build automation pipelines for Continuous Integration & Delivery (CI-CD), leveraging the Testing Framework and JUnit modules and integrating with Jenkins, JIRA, and/or ServiceNow. Build Query>It data sources for cataloguing data from different sources. Parse XML, JSON, and YAML documents, including hierarchical models. Build and implement data acquisition and transformation/curation requirements in a data lake or warehouse environment, and demonstrate experience in leveraging various Ab Initio components. Build AutoSys or Control Center jobs and schedules for process orchestration. Build BRE rulesets for reformat, rollup, and validation use cases. Build SQL scripts on the database, perform performance tuning and relational model analysis, and perform data migrations. Identify performance bottlenecks in graphs and optimize them. Ensure the Ab Initio code base is appropriately engineered to maintain current functionality and development, adhering to performance optimization, interoperability standards and requirements, and compliance with client IT governance policies. Build regression test cases and functional test cases and write user manuals for various projects. Conduct bug fixing, code reviews, and unit, functional, and integration testing. Participate in the agile development process, and document and communicate issues and bugs relative to data standards. Pair up with other data engineers to develop analytic applications leveraging Big Data technologies: Hadoop, NoSQL, and in-memory data grids. Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment. Perform other duties and/or special projects as assigned.
Qualifications: Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, or Econometrics) and a minimum of 5 years of experience. Minimum 5 years of extensive experience in the design, build, and deployment of Ab Initio-based applications. Expertise in handling complex large-scale Data Lake and Warehouse environments. Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.
Education: Bachelor's degree/University degree or equivalent experience.
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.

Posted 2 months ago

Apply

6.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

We are seeking an experienced IT Monitoring & Observability Tools Engineer with expertise in managing multiple ITOM tools like - Operations Bridge Manager, Network Node Manager & SiteScope, Cloud technologies, integration of different ITOM tools, automation processes as well as a strong understanding of scripting and infrastructure-as-code tools. The ideal candidate will bring a strong blend of technical expertise and leadership skills, with the ability to collaborate effectively with global teams. What You'll Do: Coordinating OpenText ITOM Tools like Operations Bridge Manager, Network Node Manager & Sitescope. Develop Shell scripts in Bash & PowerShell to complement the ITOM tools. Coordinate OpenText Network Automation tool - config & vulnerability management. Responding to and analyzing issues that affect system performance to minimize downtime. OpenText Network Operations Management SaaS platform. Knowledge of SQL query to build Network Performance reports in NOM SaaS platform. System Integrator - Integration of different ITOM, ITSM, APM & Observability tools. Crafting and updating documentation for common issues and processes to ensure process improvements. Supporting occasional after-hours maintenance and willing to work in second shift (rotational basis). Learning and researching the various observability tools and integrations. What We're Looking For: 5 – 6 years’ experience in a similar position Experience administrating Operations Bridge Manager, Network Node Manager, Network Automation. Experience in different Operations Bridge Management Packs - System Infra, Cloud, Docker, Kubernetes, Database. Strong Shell scripting skills – Bash, PowerShell. Knowledge in Container technologies - Kubernetes & docker, JSON, XML & YAML. Experience running systems using AWS & Terraform. Knowledge of CI/CD pipelines using GitLab for application and infrastructure deployments. Knowledge of Python, SQL would be an asset. AWS certifications would be an asset. Strong problem solving, critical thinking, and analytical skills. Experience in Zabbix, Grafana, Nagios, Splunk, Datadog or similar monitoring technologies is a plus.. Develop solutions to integrate different tools and automate tasks. Education : Bachelor’s degree or equivalent experience in computer science, IT, or a related field. Expertise in prioritising multiple ITOM tools like - Operations Bridge Manager, Network Node Manager & Sitescope, Cloud technologies, integration of different ITOM tools, automation processes as well as a solid grasp of scripting and infrastructure-as-code tools. The ideal candidate will bring a strong blend of technical expertise and leadership skills, with the ability to collaborate optimally with global teams. Employee Benefit: Annual monetary bonus. An opportunity to become a Nasdaq shareholder Employee Stock Purchase Program Nasdaq stocks with a discount Health Insurance Program Flexible working schedule and hybrid way of work Flex day program (up to 6 paid days off a year) in addition to standard vacations and holidays Internal mentorship program – get a mentor or become one Wide selection of online learning resources, e.g., Udemy Come as you are Nasdaq is an equal opportunity employer. We positively encourage applications from suitably qualified and eligible candidates regardless of age, color, disability, national origin, ancestry, race, religion, gender, sexual orientation, gender identity and/or expression, veteran status, genetic information or any other status protected by applicable law. 
Nasdaq is a leading global provider of trading, clearing, exchange technology, listing, information, and public company services. As the creator of the world's first electronic stock market, its technology powers more than 100 marketplaces in 50 countries. Nasdaq is home to over 4,000 total listings with a market value of approximately $12 trillion. To learn more about our business, visit business.nasdaq.com, and check out more about Life at Nasdaq.

Come as You Are
Nasdaq is an equal opportunity employer. We positively encourage applications from suitably qualified and eligible candidates regardless of age, color, disability, national origin, ancestry, race, religion, gender, sexual orientation, gender identity and/or expression, veteran status, genetic information, or any other status protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
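The listing above leans on glue scripting (Bash & PowerShell, with Python as an asset) around monitoring platforms. As a rough illustration of that kind of work - not any OpenText API, and with a hypothetical inventory.yaml file and made-up host fields - here is a minimal Python sketch that reads a YAML host inventory and reports TCP reachability:

```python
#!/usr/bin/env python3
"""Illustrative only: a minimal reachability probe of the kind of glue
scripting this role describes. The inventory file name and host fields
are hypothetical; real OpenText ITOM integrations would go through the
vendor's own agents and APIs."""
import socket

import yaml  # PyYAML: pip install pyyaml


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False


def main() -> None:
    # Hypothetical YAML inventory, e.g.:
    # hosts:
    #   - {name: web01, address: 10.0.0.10, port: 443}
    with open("inventory.yaml") as f:
        inventory = yaml.safe_load(f)

    for host in (inventory or {}).get("hosts", []):
        status = "UP" if is_reachable(host["address"], host.get("port", 22)) else "DOWN"
        print(f'{host["name"]}: {status}')


if __name__ == "__main__":
    main()
```

In practice, results like these would feed a monitoring stack such as SiteScope or NOM rather than being printed to stdout; the sketch only shows the general shape of the scripting involved.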

Posted 2 months ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Detailed Description
6+ years' experience with full stack software development.
6+ years' experience working with CI/CD tools such as TFS, Azure DevOps (ADO), and Jenkins, including creating and troubleshooting build/release pipelines.
3+ years' experience working with Git-based source control repositories such as GitHub, Azure DevOps, Bitbucket, etc.
2+ years' experience writing and maintaining YAML and Bash/PowerShell scripts (see the sketch after this listing).
2+ years' experience working with cloud service providers.
Understanding of SCA, SAST & DAST (knowledge of the listed tools will be an added advantage: SonarQube, Fortify on Demand, Netsparker, JFrog Xray, Ansible, Terraform, etc.).
Understand customer requirements and analyze the gaps between the existing architecture and those requirements.
Prototype the proposed solutions; document and report to leadership and engineering teams to give clarity on the proposal.
Demonstrate thought leadership across multiple channels for various products and be a trusted advisor.

Required Skills
6+ years' experience with frontend frameworks like React, Angular, or Vue.js.
6+ years' experience with backend technologies: .NET Core, ASP.NET, Node.js, Express, Django, or Flask.
Working knowledge of databases (SQL and NoSQL).
SME in build & release, automation, CaaS, and various other technical capabilities.
Develop and execute automated builds and releases; benchmark against industry standards and processes to ensure efficient, high-quality delivery and deployments.
Working knowledge of version control tools like Git.
Strong knowledge of HTML, CSS, and JavaScript.
Experienced in MS Windows and Linux OS.
Excellent troubleshooting skills and the ability to learn new technologies and adapt to them.
Provide governance and oversight to guide implementation towards solution delivery.
Guide and mentor the team to successfully deliver projects.
Excellent communication and collaboration skills.
Contribute to and review deployment plans; schedule the installation of new modules, upgrades, and fixes to the production environment.
Lead analytical and design activities with approved/validated technologies, in line with organizational standards and vetted approaches and methods.
Align IT security and IT operations teams to ensure that all processes, including DevOps processes, can operate safely and securely.
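Since the role above emphasizes writing and maintaining pipeline YAML, a small sketch may help. The check below is a deliberately simplified structural lint for Azure-DevOps-style pipeline files: the required-key list (trigger, pool, plus a steps/jobs/stages section) is an assumption for illustration, not the real ADO schema, which the service itself validates.

```python
#!/usr/bin/env python3
"""Illustrative sketch: a structural sanity check for Azure-DevOps-style
pipeline YAML. The key list is a simplified assumption, not the real
ADO schema."""
import sys

import yaml  # PyYAML: pip install pyyaml

REQUIRED_TOP_LEVEL_KEYS = {"trigger", "pool"}  # simplified assumption


def check_pipeline(path: str) -> list[str]:
    """Return a list of human-readable problems found in the file."""
    with open(path) as f:
        try:
            doc = yaml.safe_load(f)
        except yaml.YAMLError as exc:
            return [f"YAML parse error: {exc}"]

    if not isinstance(doc, dict):
        return ["top level is not a mapping"]

    problems = [f"missing top-level key: {key}"
                for key in sorted(REQUIRED_TOP_LEVEL_KEYS - doc.keys())]

    # A pipeline needs either inline steps or a jobs/stages hierarchy.
    if not any(k in doc for k in ("steps", "jobs", "stages")):
        problems.append("no steps, jobs, or stages defined")
    return problems


if __name__ == "__main__":
    # Usage: python check_pipeline.py azure-pipelines.yml ...
    for path in sys.argv[1:]:
        problems = check_pipeline(path)
        if problems:
            for p in problems:
                print(f"{path}: {p}")
        else:
            print(f"{path}: OK")
```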

Posted 2 months ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Position
The Global Capability Center (GCC) - IT Foundation Platform (ITFP) Network Product Line (NPL) is responsible for supporting the Business Network, ensuring cost-competitive, reliable, and secure operation of Chevron's network environment globally while also enabling digital capabilities. Products managed include all Business Network infrastructure products and services globally, including Software Defined Networking, Intent Based Networking, Internet First, Wireless, Telephony, Extranet, WAN, Data Center, Security Services, and Life Cycle Management. This lead role carries an expectation of 10-15 years of relevant experience and will provide mentorship to junior members of the team. The NPL Automation & Monitoring team drives continuous innovation to improve network asset configuration and reliability. We are seeking a dynamic team player with a sharp focus on system reliability for our Senior Site Reliability Engineer position to help us achieve our goal of higher returns and lower carbon.

Key Responsibilities
Participate in reviews of current network process, change, and build procedures; translate them into network automation projects.
Work with Agile team members to design and implement features in support of established security and acceptance criteria.
Develop and document standards and provide training to others.
Research network automation industry trends and automation tools.
Apply knowledge of source code management systems, version control tools, and web service development.
Build and maintain CI/CD pipelines to ensure code quality and maintainability.
Align product features and roadmap to NPL strategic themes.
Participate in Agile concepts and activities such as daily stand-up meetings, task tracking boards, design and code reviews, automated testing, and continuous integration and deployment.
Work with leaders across the product line and business units to understand business strategies and shape technology roadmaps to support those strategies.
Partner with business units to ensure solutions operate at scale without issue, and create visualizations of data collected from networking devices for quick interpretation and notification.
Provide clear technical direction, prioritization, and delivery excellence for network activities, ensuring team members deliver against priorities while eliminating roadblocks and technical debt.
Identify, analyze, and resolve vulnerabilities as well as deployment and operational issues.
Ensure NPL meets Chevron security, architecture, and best-practice guardrails and policies.

Required Qualifications
Bachelor's or master's degree in Computer Science, Computer Engineering, Information Technologies, or Management Information Systems.
Site Reliability Engineering fundamentals: leverage SRE principles and best practices - SLOs, SLIs, SLAs, error budgets, eliminating toil via automation, observability and monitoring, emergency response (triage, postmortem, retrospective), and demand forecasting and capacity planning - to deliver results.
Cloud fundamentals.
Network fundamentals: in-depth knowledge of networking protocols and the TCP/IP stack.
Automation and programmability: proficient in Python, Ansible, YAML, and asynchronous programming (a hedged Python sketch follows this listing).
System and network monitoring.
Structured technical problem solving and debugging: thorough understanding of access control lists, address translation, tunneling, and standard routing protocols.
Critical thinking, self-motivation, excellent communication skills, and a demonstrated ability to work both independently and as part of a team.
Able to conceive and develop a presentation for a peer group that is logical, well-written, and concise.
Basic network routing and switching experience; familiarity with network switches, load balancers, and firewalls.
Network automation/orchestration experience.
Experience using RESTful APIs to integrate various technologies.
10+ years' experience as a network automation engineer, with 5+ years of automation programming experience (10-15 years of overall experience).

Preferred Qualifications
Experience working in a Linux environment, with a working knowledge of basic Linux commands and utilities.
Desire to develop new ideas while following best practices for design and coding.
Microsoft Azure Fundamentals (AZ-900) and Designing and Implementing Microsoft DevOps Solutions (AZ-400) certifications.
Cisco Certified Network Associate (CCNA), Cisco Certified Network Professional (CCNP), and/or DevNet certifications.

Chevron ENGINE supports global operations and business requirements across the world. Accordingly, employees' work hours are aligned to support business requirements. The standard work week is Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm.

Chevron participates in E-Verify in certain locations as required by law.
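The qualifications above call for Python automation against RESTful APIs. The sketch below shows the general shape of such a task, assuming a hypothetical controller endpoint (netcontroller.example.com), a NET_API_TOKEN environment variable, and a made-up response schema with hostname and os_version fields; it is not any particular vendor's API and uses only the widely available requests library:

```python
#!/usr/bin/env python3
"""Illustrative sketch of REST-driven network automation. The endpoint,
token variable, response shape, and 'golden' version are hypothetical
placeholders, not any specific vendor's API."""
import os

import requests  # pip install requests

BASE_URL = "https://netcontroller.example.com/api/v1"  # hypothetical
TOKEN = os.environ.get("NET_API_TOKEN", "")            # hypothetical env var


def fetch_devices() -> list[dict]:
    """Fetch the device inventory from the (hypothetical) controller API."""
    resp = requests.get(
        f"{BASE_URL}/devices",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly on HTTP errors
    return resp.json()


def main() -> None:
    # Flag devices whose software version drifts from the assumed standard.
    expected_version = "17.9.4"  # hypothetical golden version
    for device in fetch_devices():
        if device.get("os_version") != expected_version:
            print(f'{device.get("hostname", "?")}: drift '
                  f'({device.get("os_version")} != {expected_version})')


if __name__ == "__main__":
    main()
```

A drift report like this is a typical first rung on the automation ladder described above; remediation would normally be handed off to a configuration tool such as Ansible rather than done ad hoc.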

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies