
1321 YAML Jobs - Page 24

JobPe aggregates listings so they are easy to find and compare, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Date Opened: 30/06/2025 | Job Type: Full time | Work Experience: 5+ years | Industry: IT Services | Salary: 40L | City: Bangalore North | Province: Karnataka | Country: India | Postal Code: 560002

Job Description

About the Role: We are seeking a Senior DevOps Engineer to lead the migration of multiple applications and services into a new AWS environment. This role requires a strategic thinker with hands-on technical expertise, a deep understanding of DevOps best practices, and the ability to guide and mentor other engineers. You will work closely with architects and technical leads to design, plan, and execute cloud-native solutions with a strong emphasis on automation, scalability, security, and performance.

Key Responsibilities:
- Take full ownership of the migration process to AWS, including planning and execution.
- Work closely with architects to define the best approach for migrating applications into Amazon EKS.
- Mentor and guide a team of DevOps Engineers, assigning tasks and ensuring quality execution.
- Design and implement CI/CD pipelines using Jenkins, with an emphasis on security, maintainability, and scalability.
- Integrate static and dynamic code analysis tools (e.g., SonarQube) into the CI/CD process.
- Manage secure access to AWS services using IAM roles, least-privilege principles, and container-based identity (e.g., workload identity).
- Create and manage Helm charts for Kubernetes deployments across multiple environments.
- Conduct data migrations between S3 buckets, PostgreSQL databases, and other data stores, ensuring data integrity and minimal downtime.
- Troubleshoot and resolve infrastructure and deployment issues, both in local containers and Kubernetes clusters.
Required Skills & Expertise:
- CI/CD & DevOps Tools: Jenkins pipelines (DSL), SonarQube, Nexus or Artifactory; shell scripting, Python (with YAML/JSON handling); Git and version-control best practices.
- Containers & Kubernetes: Docker (multi-stage builds, non-root containers, troubleshooting); Kubernetes (services, ingress, service accounts, RBAC, DNS, Helm).
- Cloud Infrastructure (AWS): EC2, EKS, S3, IAM, Secrets Manager, Route 53, WAF, KMS, RDS, VPC, Load Balancers; experience with IAM roles, workload identities, and secure AWS access patterns; network fundamentals: subnets, security groups, NAT, TLS/SSL, CA certificates, DNS routing.
- Databases: PostgreSQL: pg_dump/pg_restore, user management, RDS troubleshooting.
- Web & Security Concepts: NGINX, web servers, reverse proxies, path-based/host-based routing; session handling, load balancing (stateful vs stateless); security best practices, OWASP Top 10, WAF (configuration/training), network-level security, RBAC, IAM policies.

Candidate Expectations: The ideal candidate should be able to:
- Explain best practices around CI/CD pipeline design and secure AWS integrations.
- Demonstrate complex scripting solutions and data-processing tasks in Bash and Python.
- Describe container lifecycle, troubleshooting steps, and security-hardening practices.
- Detail Kubernetes architecture, Helm chart design, and access-control configurations.
- Show a deep understanding of AWS IAM, networking, service integrations, and cost-conscious design.
- Discuss TLS certificate lifecycle, trusted CA usage, and implementation in cloud-native environments.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certifications.
- Experience in FinTech, SaaS, or other regulated industries.
- Knowledge of cost-optimization strategies in cloud environments.
- Familiarity with Agile/Scrum methodologies.
- Certifications or experience with ITIL or ISO 20000 frameworks are advantageous.
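The posting's emphasis on IAM roles and least-privilege access can be illustrated with a small sketch. Everything here (the helper name, the bucket name) is hypothetical; it simply builds the JSON policy document shape that AWS IAM expects for read-only access to a single S3 bucket:

```python
import json

def s3_read_only_policy(bucket: str) -> dict:
    """Build a least-privilege IAM policy allowing read-only access
    to one S3 bucket. A sketch; the bucket name is hypothetical."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                # Bucket-level actions target the bucket ARN itself.
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "GetObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                # Object-level actions target objects under the bucket.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }

print(json.dumps(s3_read_only_policy("example-app-artifacts"), indent=2))
```

In practice such a document would be attached to an IAM role assumed by the workload (e.g., an EKS service account via workload identity) rather than to long-lived user credentials.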

Posted 1 month ago


13.0 years

18 - 20 Lacs

Mumbai Metropolitan Region

On-site

Role Overview
We are looking for a highly skilled and experienced DevOps L3 Engineer who will be responsible for designing, implementing, and maintaining robust DevOps pipelines, cloud infrastructure, and application deployment processes. The ideal candidate will have strong expertise in AWS services, Infrastructure as Code, CI/CD pipelines, and monitoring tools, along with a solid background in Linux system administration and scripting.

Key Responsibilities
- Design, implement, and manage CI/CD pipelines using AWS native tools or other CI/CD platforms.
- Administer and troubleshoot Linux-based systems and network configurations.
- Automate infrastructure provisioning using Terraform and/or AWS CloudFormation templates.
- Deploy and manage containerized applications using Amazon EKS and ECS.
- Collaborate with development teams to streamline deployment and monitoring processes.
- Monitor infrastructure and applications using tools like CloudWatch, CloudTrail, Prometheus, and Grafana.
- Ensure security best practices are followed across cloud infrastructure and deployments.
- Troubleshoot and resolve complex technical issues in a production environment.

Required Skills & Experience
- 10–13 years of overall experience in DevOps roles.
- Hands-on experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, AWS CodePipeline).
- Strong knowledge of Linux system administration, networking, and performance tuning.
- Proficient in Shell, YAML, JSON, and Groovy scripting.
- In-depth experience with AWS services: EC2, S3, VPC, RDS, IAM, Organizations, Identity Center, etc.
- Proven experience in deploying applications on EKS and ECS.
- Hands-on experience in Infrastructure as Code using Terraform and/or AWS CloudFormation.
- Experience with monitoring tools: CloudWatch, CloudTrail, Prometheus, Grafana.
- Ability to independently handle complex troubleshooting and infrastructure setups.

Preferred Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Certifications in AWS (e.g., AWS Certified DevOps Engineer, AWS Solutions Architect) are a plus.
- Excellent problem-solving, analytical thinking, and communication skills.

Skills: CloudFormation, shell scripting, CloudWatch, scripting, Groovy, Linux, EKS, CloudTrail, AWS, containerized applications, Infrastructure as Code, Grafana, Terraform, JSON, monitoring tools, CI/CD pipelines, AWS services, YAML, AWS CloudFormation, DevOps, ECS, Linux system administration, Prometheus, CI/CD
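As a minimal illustration of the Infrastructure-as-Code requirement above, the sketch below assembles a CloudFormation template (in its JSON form) for a single versioned S3 bucket; the logical resource name is illustrative, and in practice the template would live in version control and be deployed through a pipeline:

```python
import json

def minimal_cfn_template(bucket_logical_id: str = "ArtifactBucket") -> dict:
    """A minimal CloudFormation template declaring one S3 bucket with
    versioning enabled. The logical ID is illustrative only."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Minimal IaC sketch: one versioned S3 bucket.",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Versioning protects against accidental overwrites/deletes.
                    "VersioningConfiguration": {"Status": "Enabled"}
                },
            }
        },
    }

print(json.dumps(minimal_cfn_template(), indent=2))
```

The same resource could equally be expressed in Terraform HCL; the point is only that the infrastructure definition is declarative, reviewable text rather than console clicks.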

Posted 1 month ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At Sogeti, we believe the best is inside every one of us. Whether you are early in your career or at the top of your game, we’ll encourage you to fulfill your potential to be better. Through our shared passion for technology, our entrepreneurial culture, and our focus on continuous learning, we’ll provide everything you need to do your best work and become the best you can be.

Job Description
- Design knowledge for L3/datacom features
- Good understanding of datacom protocol standards
- Very strong C programming
- Working knowledge of protocols: IS-IS, BGP, BFD, MPLS
- Working experience with Linux kernel forwarding
- Project experience with NETCONF
- Contribution to the design and review process

Skills (competencies)
.NET Core, .NET Framework 6, .NET MVC, ActiveMQ, ADO.NET, Advanced C#, Advanced JavaScript (ES2015), Agile (Software Development Framework), Android, Angular, Ansible, API design, API Gateway, API integration, ASP.NET, ASP.NET Core, ASP.NET Core Web API, ASP.NET MVC 5, Asymmetric Encryption, Attentiveness, AWS Compute and PaaS, AWS DevOps, AWS Lambda, Azure Boards, Azure Compute, Azure DevOps, Azure Integration Services, Azure Repos, Azure SDK, Azure Security, Azure Storage, Blazor, C#, C/C++, Caching, Cloud Computing, Cloud Migration, Cloud Storage, Cloud Strategy, Collaboration, Compression, Containerization, Continuous Integration and Continuous Delivery (CI/CD), Core Java, Critical Thinking, CSS3, Data formats (JSON, XML, YAML), DevOps, Docker, Entity Framework, Entity Framework Core, Git, GitHub, Gradle, Groovy, Hashing, Hibernate, HTML5, HTTP and verbs, Hybris, IDE, Java Web Services, JavaScript, Jenkins, JMeter, JMS (Java Messaging Service), jQuery, JSP, JUnit, Kafka, Kubernetes, Learning Mindset, Linux, Logic Apps, Maven, Message-Oriented Middleware, Microcontrollers, Microservices, Microsoft SQL Server, Mockito, Monitoring and Optimizing Azure Solutions, MuleSoft, Multi-Cloud, MVC Core, Node.js, NoSQL, NUnit Testing, OWASP, Problem Solving, Profiling, React, REST API, REST Web Services, RTOS, Ruby on Rails, Serial Communication, Service Registry, Servlets, SOA (Service-Oriented Architecture), Software Design Patterns, Software Testing, Source Control, Spring Core, Spring Data, Spring MVC, Stakeholder Management, Struts, Symmetric Encryption, System Design, Terraform, Time Management, Tuning, Unit Testing, Verbal Communication, Verification and Validation, Vue, WCF, Web API, Written Communication

Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation.

Posted 1 month ago


1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

BuyStars is seeking a results-oriented DevOps Engineer to join our rockstar team. The ideal candidate is a passionate builder of great products/frameworks, with excellent leadership qualities. If you thrive in a fast-paced environment, can create an environment for others to thrive too, and are enthusiastic about all aspects of the business and product development, BuyStars is the place for you!

Responsibilities
- Providing 24/7 infra support for hosting the workloads of the engineering teams, while building processes and documenting tribal knowledge.
- Managing application deployment: automating and improving development and release processes.
- Implementing best practices for cloud infrastructure and configuration management, and continuously automating toil and repetitive work.
- Owning and onboarding the infrastructure of new applications for production readiness using CloudFormation (YAML) and Terraform scripts.
- Integrating testing environments (integration + performance) into the release pipeline so developers can deploy and test releases seamlessly.
- Creating/maintaining observability and log management with Prometheus/New Relic/ELK.
- Identifying observability gaps in applications and infrastructure, and working with stakeholders to fix them.
- Working with CI/CD tools to continuously improve the build and deployment pipeline: EKS/EBS, CodePipeline/Jenkins/Bitbucket, and Maven.
- Taking part in cost-optimisation initiatives by staying up to date and experimenting with the latest developments from cloud service providers.

Requirements
- 1–4 years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Cloud experience: AWS and Google Cloud.
- Experience writing deployment automation in YAML (AWS CloudFormation) and/or Terraform.
- Experience working with monitoring and log-management tools like AWS CloudWatch/Prometheus/New Relic/ELK.
- Experience setting up environments for testing in Staging and UAT, and integrating performance-testing frameworks like JMeter into the deployment pipeline.
- Shell/Bash/Python scripting.
- Experience with Docker.

Good-to-Have Requirements
- Experience with build and packaging tools for the following languages: Java + Spring (Maven), Node.js (npm), Golang (go build).
- Experience with SonarQube and integrating code-quality checks into the release pipeline.
- Experience with Kubernetes / EKS.

This job was posted by Sriram Krishnamoorthy from BuyStars.
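The observability work described above (Prometheus, log management) often starts with the Prometheus exposition format. Below is a simplified, hypothetical parser for a single sample line; it ignores escaping, timestamps and comments, so treat it as a sketch of the format rather than a conformant implementation:

```python
def parse_metric_line(line: str):
    """Parse one Prometheus exposition-format sample line, e.g.
    'http_requests_total{method="post",code="200"} 1027'.
    Returns (name, labels_dict, value). Simplified: no escaping,
    timestamps, or comment (#) handling."""
    line = line.strip()
    labels = {}
    if "{" in line:
        name, rest = line.split("{", 1)
        label_part, value_part = rest.rsplit("}", 1)
        for pair in label_part.split(","):
            key, val = pair.split("=", 1)
            labels[key.strip()] = val.strip().strip('"')
    else:
        name, value_part = line.split(None, 1)
    return name.strip(), labels, float(value_part)

name, labels, value = parse_metric_line(
    'http_requests_total{method="post",code="200"} 1027'
)
print(name, labels, value)
```

Real scrapers (the Prometheus server, client libraries) implement the full grammar; a hand-rolled parser like this is only useful for quick log spelunking or tests.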

Posted 1 month ago


2.0 years

3 - 5 Lacs

India

On-site

AWS Cloud Engineer with 2 years of extensive experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS, with exposure to AWS deployment and management services. Monitoring deployments across environments, debugging deployment issues and resolving them in a timely manner to reduce downtime. Experience with AWS Cloud and DevOps tools.

- Experienced working with AWS infrastructure and its services: IAM, VPC, EC2, EBS, S3, ALB, NACLs, Security Groups, Auto Scaling, RDS, SNS, EFS, CloudWatch, CloudFront.
- Good hands-on experience with IaC tools: Terraform, CloudFormation; creating infrastructure using Terraform.
- Good experience with source-code management tools (Git, GitHub) and source-control concepts such as branches and merges; responsible for branching, merging, and resolving conflicts in Git; responsible for designing and deploying best SCM processes and procedures.
- Good experience automating CI/CD pipelines using Jenkins; administration of Jenkins servers, including setup, parameterized builds, and deployment automation; creating Jenkins jobs, installing plug-ins, setting up distributed builds, and other administration activities; set up CI/CD pipelines in Jenkins and scheduled jobs; established a complete Jenkins CI/CD pipeline and the full build-and-delivery workflow, from pulling source code from the Git repository to deploying the end product into a Kubernetes cluster.
- Good hands-on experience with configuration-management tools like Ansible.
- Experience creating custom Docker images using a Dockerfile and pushing images to Docker Hub; creating containers in Docker and pulling images for deployment; experience managing microservice applications using Docker and Kubernetes.
- Setting up Kubernetes clusters using EKS and kubeadm; writing manifest files to create Deployments and Services for microservice applications; creating deployments with YAML code; creating networks, nodes, and pods; using Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
- Configuring Persistent Volumes (PVs) and PVCs for persistent database environments; managed Deployment, ReplicaSet, StatefulSet, and autoscaling for Kubernetes clusters.
- Good experience with ELK for log aggregation and log monitoring; experience with monitoring tools like Prometheus and Grafana.
- Implemented, maintained, and monitored alarms and notifications for AWS services using CloudWatch and SNS; configured CloudWatch alarm rules for operational and performance metrics of AWS resources and applications; initiated CloudWatch alarms for server performance (CPU utilization, disk usage, etc.) to take recommended actions; monitored access logs and error logs in AWS CloudWatch; created SNS notifications for multiple services in AWS.
- Experienced in deploying and monitoring applications on various platforms and setting up lifecycle policies to back up data from AWS S3; experience with S3 versioning, server access logging, and lifecycle policies on S3 buckets; creating/managing buckets on S3 and assigning access permissions; moving EC2 logs into S3.
- Provisioned AWS resources using the AWS Management Console and Command Line Interface (CLI).
- Planned, built, and configured network infrastructure within a VPC and its components; set up and managed VPCs and subnets, making connections between different availability zones; responsible for implementing and supporting cloud-based infrastructure and its solutions.
- Launching and configuring EC2 instances using AMIs (Linux); building and releasing Amazon Linux EC2 instances for development and production environments; creating/managing instance images/snapshots and managing volumes; increasing EBS volume storage capacity using AWS EBS volume features.
- Created IAM users and policies for application access; creating and managing user accounts and groups, assigning roles and policies using IAM; creating and maintaining user accounts, groups, and permissions; assigning access permissions for files and directories to users and groups.
- Installing and configuring the Apache web server on Windows and Linux; software installation, troubleshooting, and updates.
- Configuring EFS for EC2 instances; creating and configuring Elastic Load Balancers to distribute traffic; creating and attaching Elastic IPs to EC2 instances.
- Involved in designing and developing with Amazon EC2, Amazon S3, Amazon RDS, Lambda, and other services; deployments using Jenkins through CI/CD pipelines; involved in writing Dockerfiles to build customized Docker images for creating containers and pushing them to Docker Hub.
- Experience with AWS Cloud services: IAM, S3, VPC, EC2, CloudWatch, CloudFront, CloudTrail, Route 53, EFS, AWS Auto Scaling, EBS, SNS, SES, SQS, KMS, RDS, Security Groups, Lambda, ECS, EKS, Tag Editor, and more.
- Managing client infrastructure on both Windows and Linux; creating files and directories; creating users and groups; installing and managing web servers; installing packages using YUM (HTTP, HTTPS); monitoring system performance (disk and CPU utilization).

Technical Skills
- Operating Systems: Linux, CentOS, Ubuntu, Windows.
- AWS: EC2, VPC, S3, EBS, IAM, Load Balancing, Auto Scaling, CloudFormation, CloudWatch, CloudFront, SNS, EFS, Route 53.
- DevOps Tools: Git, Ansible, Chef, Docker, Jenkins, Kubernetes, Terraform.
- Scripting Languages: Shell, Python.
- Monitoring Tools: CloudWatch, Grafana, Prometheus.
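The manifest-writing work described above ("writing manifest files to create Deployments and Services") can be sketched in Python: the function below assembles a minimal Kubernetes Deployment as a plain dict (the app name, image, and port are illustrative; real manifests would typically be authored in YAML):

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a dict.
    Names, image, and port are illustrative placeholders."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name,
                         "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

print(json.dumps(deployment_manifest("web", "nginx:1.27"), indent=2))
```

JSON is valid input to `kubectl apply -f -`, since YAML is a superset of JSON, so a generated dict like this can be piped straight to the cluster.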
Job Types: Full-time, Permanent, Fresher
Pay: ₹345,405.87 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Morning shift, Rotational shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Speak with the employer: +91 8668118196

Posted 1 month ago


8.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

About Zscaler
Serving thousands of enterprise customers around the world, including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world’s largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

Responsibilities
We are looking for an experienced Manager, Software Development Engineering skilled in the DevOps space.
You would be reporting to the Director of Software Engineering, and your responsibilities will include:
- Leading the DevSecOps team: overseeing operations for multiple development teams, taking on hands-on responsibilities, and guiding a team of engineers by distributing tasks, tracking progress, and facilitating their success.
- Spearheading the development of internal product tools using AI/ML, MLOps, and code-LLM functionalities.
- Owning the end-to-end process for open-source license governance and vulnerability assessment in software applications developed by several other teams, using tools like Black Duck or Snyk integrated into the CI pipelines.
- Managing the infrastructure: responsible for administration and maintenance of source-control management systems (such as GitLab, GitHub, Bitbucket), Nexus, CI/CD systems, artefact-management repositories, test beds, etc.
- Integrating SAST, DAST, or IaC tools into the CI/CD pipelines.

What We're Looking For (Minimum Qualifications)
- Bachelor's Degree in Engineering, CS, MIS, or a related field, along with 8+ years of hands-on experience in app development, build and release management, and setting up CI/CD pipelines.
- 2 years of team-lead experience managing the various tools required in the software lifecycle.
- Scripting in shell, Python, or Groovy, or programming knowledge of Java/C/C++, with Unix/Linux systems expertise.
- Experience in domains like Application Security, API Security, DevSecOps, DevOps, and AI/ML is preferred.
- Excellent leadership skills, with a track record of managing high-performing teams and strong presentation and communication skills.
- Good understanding of the principles and best practices of Software Configuration Management (SCM) in Agile, Scrum, and Waterfall methodologies.

What Will Make You Stand Out (Preferred Qualifications)
- Experience writing and developing YAML-based CI/CD pipelines using GitLab or GitHub, and knowledge of build tools like Makefiles/Gradle/npm/Maven.
- Experience with networking, load balancers, firewalls, and web security.
- Experience with AI and ML tools in day-to-day DevSecOps activities.

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Benefits
Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency
Zscaler complies with all applicable federal, state, and local pay transparency rules.
Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
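The license-governance and vulnerability-assessment workflow this role owns boils down to cross-referencing a project's pinned dependencies against published advisories. The toy sketch below invents its own data shapes; it is not Black Duck's or Snyk's API, just the underlying idea a CI step would automate:

```python
def flag_vulnerable(dependencies, advisories):
    """Cross-reference pinned dependencies against a toy advisory list.
    dependencies: {package_name: pinned_version}
    advisories: list of {"id", "package", "affected_versions"} dicts.
    Returns (package, version, advisory_id) tuples for every match.
    The data shapes are invented for illustration, not any vendor's API."""
    flagged = []
    for name, version in dependencies.items():
        for adv in advisories:
            if adv["package"] == name and version in adv["affected_versions"]:
                flagged.append((name, version, adv["id"]))
    return flagged

deps = {"libfoo": "1.2.0", "libbar": "2.0.1"}
advisories = [
    {"id": "ADV-0001", "package": "libfoo",
     "affected_versions": {"1.2.0", "1.2.1"}},
]
print(flag_vulnerable(deps, advisories))
```

In a real pipeline this check would run on every merge request, fail the build when `flagged` is non-empty, and pull its advisory data from a curated feed rather than an inline list.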

Posted 1 month ago


2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in. To start a career that is out of the ordinary, please apply...

Job Details

Role Overview
As a Senior DevOps Engineer for the Kantar Insights team, you will be tasked with supporting a custom application development team that codes and deploys mostly to Microsoft Azure environments, as well as minimal AWS environments. You will work as a member of the DevOps team, which is responsible for all custom application deployments as well as the supporting cloud infrastructure. The platforms supported today are built with Microsoft .NET and MSSQL backends. There are also some large data warehouses and large data-flow management in scope. The DevOps team works closely with application developers, database admins, QA testers and project managers to support and provide guidance for all custom development deployed to a multi-tier application environment (Dev/Test/UAT/Prod).

Daily tasks will include, but are not always limited to, the following:
- Azure DevOps support for code repositories and YAML CI/CD pipelines using shared libraries/templating.
- Deploying infrastructure both in Bicep and manually will be ongoing as we move toward a fully automated infrastructure-as-code model; completion is expected by the end of 2025.
- Working with tools such as Visual Studio, VS Code or other clients to manage all versioning and repositories.
- Monitoring application health and availability with ongoing visibility using Azure dashboards, monitors and alerting tools.
- Converting or migrating legacy application resources to modern cloud or platform services will be ongoing while we move away from IaaS (VMs and containers) to serverless PaaS; for example, moving applications from virtual-machine console apps to Azure App Services.
- Applying and maintaining Kantar’s security, platform and budgeting standards across all areas of the process, from application conception through production run.
- Maintenance of the cloud virtual-machine inventory, including managing patching, updates and recovery.
- Occasional Windows Domain Controller and DNS tasks will need to be handled within the platform environments.
- Troubleshooting all aspects of application or infrastructure CI/CD, including build, deploy or runtime errors.
- Continuous optimization of processes using automation and scripting with YAML, PowerShell and Bicep.
- Conducting testing of new solutions and collaborating with leaders on results to make informed technical decisions.
- Staggered off-hours monitoring and escalation support will be needed for 24/7/365 application SLAs.
- Continuous cost and systems-footprint reduction efforts are expected while modernizing all platforms and applications.

Key Responsibilities
The Senior DevOps Engineer must balance several roles in the application development process, including application code packaging and deployments, management and automated deployment of cloud resources, and application availability and performance monitoring. They must also possess the communication and management skills needed to work within a custom application development team. They will need to collaborate with product development, data services, QA testing and architecture teams throughout the development lifecycle. Data handling, privacy and compliance, as well as overall application security standards, are woven into all processes and will be a constant factor in all decision making.

Capabilities And Experience
- Must possess a strong understanding of both Azure IaaS and PaaS solutions and their use cases.
- Hands-on experience in the packaging, building, automating, managing, and releasing of code from one environment to another, supporting a multi-tier application environment.
- Expert-level experience (2+ years) with Infrastructure as Code (IaC) using Terraform or Bicep through Azure DevOps pipelines is a requirement; Bicep is preferred.
- Experience managing IaaS in the cloud is required. Both Windows and Linux VMs are deployed, with a focus on Windows 2016-2022 imaging.
- Must have strong scripting and automation experience with PowerShell, Terraform, Ansible or similar technologies.
- Ability to configure and navigate code-management tools such as VS Code, as well as maintaining versioning and protection.
- Network connectivity troubleshooting using TCP ping and basic URL testing; internal and external TCP/IP connectivity is in scope.
- Experience with disaster recovery and high availability within client/partner-facing environments.
- Ability to diagnose issues at all layers (full stack), including infrastructure, application or deployment errors, capacity- or performance-related issues, and connectivity.
- Strong documentation and presentation skills; must communicate with all directly supported teams with clarity to drive operational excellence throughout the development lifecycle.
- Plan, coordinate and execute environmental changes, and set reasonable expectations while delegating tasks and managing projects through completion.
- Diagramming and documentation for all work will be required.
- Availability for on-call monitoring and incident escalation for areas of ownership.

Why join Kantar?
We shape the brands of tomorrow by better understanding people everywhere. By understanding people, we can understand what drives their decisions, actions, and aspirations on a global scale. And by amplifying our in-depth expertise in human understanding alongside ground-breaking technology, we can help brands find concrete insights that will help them succeed in our fast-paced, ever-shifting world. And because we know people, we like to make sure our own people are being looked after as well. Equality of opportunity for everyone is our highest priority, and we support our colleagues to work in a way that supports their health and wellbeing.
While we encourage teams to spend part of their working week in the office, we understand no one size fits all; our approach is flexible to ensure everybody feels included, accepted, and that we can win together. We’re dedicated to creating an inclusive culture and value the diversity of our people, clients, suppliers and communities, and we encourage applications from all backgrounds and sections of society. Even if you feel like you’re not an exact match, we’d love to receive your application and talk to you about this job or others at Kantar.

Location: Bangalore, Prestige Technology Park, India

Kantar Rewards Statement
At Kantar we have an integrated way of rewarding our people based around a simple, clear and consistent set of principles. Our approach helps to ensure we are market competitive and also supports a pay-for-performance culture, where your reward and career progression opportunities are linked to what you deliver. We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in. Apply for a career that’s out of the ordinary and join us. We want to create equality of opportunity in a fair and supportive working environment where people feel included, accepted and are allowed to flourish in a space where their mental health and wellbeing are taken into consideration. We want to create a more diverse community to expand our talent pool, be locally representative, drive diversity of thinking and deliver better commercial outcomes. Kantar is the world’s leading data, insights and consulting company. We understand more about how people think, feel, shop, share, vote and view than anyone else. Combining our expertise in human understanding with advanced technologies, Kantar’s 30,000 people help the world’s leading organisations succeed and grow.
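The "TCP ping" connectivity troubleshooting listed in the capabilities above can be approximated with a few lines of Python; the throwaway listener exists only to make the example self-contained, and the function name is an invented placeholder:

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    the timeout; a minimal stand-in for 'TCP ping' troubleshooting."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener on localhost:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # port 0 = pick any free port
listener.listen(1)
port = listener.getsockname()[1]
print(tcp_check("127.0.0.1", port))  # something is listening
listener.close()
```

This only verifies the TCP handshake; the "basic URL testing" the posting also mentions would layer an HTTP request (and TLS validation) on top of it.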

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Senior Data Scientist Location: Bangalore Reporting to: Senior Manager Analytics Purpose of the role Contributing to the Data Science efforts of AB InBevʼs global commercial analytics capability of Sales & Distribution Analytics. Work with internal stakeholders at ABI to understand their business problems, translate those problems into statistical problems like survey design, experiment design, optimization, forecasting, etc., which can best address those business problems, and work with statistical experts to develop robust models and generate business insights. Key tasks & accountabilities Understand the business problem and translate that to an analytical problem; participate in the solution design process. Storyboarding and presenting the insights to stakeholders & senior leadership. Independently lead project delivery. Working with the Analytics Manager to create the project plan and design the analytics roadmap. End-to-end development and deployment of machine learning or deep learning models.
Ability to communicate findings clearly to both technical and business stakeholders. Should be able to quantify the impact and continuously implement improvements. Document every aspect of the project in standard ways. Summarize insights and recommendations to be presented back to the business. Use innovative methods to continuously improve the quality of statistical models. Qualifications, Experience, Skills Level of educational attainment required: Bachelor’s or master’s degree in Engineering, Statistics, Applied Statistics, Economics, Econometrics, Operations Research or any other quantitative discipline. Previous work experience required: 6+ years in a data science role, preferably in the CPG domain. Expert-level proficiency in Python (knowledge of classes and decorators; has written end-to-end ML, software, or data pipelines in Python). Experience working with SQL (knowledge of data warehouses, different databases, and RDBMS fundamentals). Experience working with Azure (ADLS, Databricks, Azure SQL or Postgres, App Services and related). Well versed in implementing machine learning and deep learning algorithms. Exposure to working with complex datasets and ML/DL libraries like scikit-learn, TensorFlow, Keras, PyTorch etc. Capable of building insightful visualizations in Python. Good to have – category management, optimization techniques, knowledge of HTML, CSS, JS, YAML/Docker, and experience with Dash/Flask/Django or any web application framework. Technical Skills Required Hands-on experience in data manipulation using Excel, Python, SQL. Expert-level proficiency in Python (knowledge of writing end-to-end ML or data pipelines in Python). Proficient in applying ML concepts and optimization techniques to solve end-to-end business problems. Familiarity with the Azure tech stack, Databricks, and MLflow in any cloud platform. Other Skills Required Demonstrated leadership skills. Passion for solving problems using data.
Detail-oriented, analytical, and inquisitive. Ability to work independently and with others. Takes responsibility and makes effective decisions. Problem-solving. Planned and organized. And above all of this, an undying love for beer! We dream big to create a future with more cheers.
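As a minimal illustration of the fit/predict pattern behind the "end-to-end ML pipelines in Python" requirement above, here is a stdlib-only one-feature least-squares sketch. The function names and toy data are invented for illustration; a real pipeline would use scikit-learn or similar.

```python
def fit_ols(xs, ys):
    """Return (slope, intercept) minimising squared error for one feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, xs):
    slope, intercept = model
    return [slope * x + intercept for x in xs]

# Toy data lying exactly on y = 2x, so the fit recovers slope 2, intercept 0.
model = fit_ols([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(model, [5]))  # [10.0]
```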

Posted 1 month ago

Apply

10.0 years

33 Lacs

Bengaluru

On-site

Job Title: DevOps Integration Engineer Location: Malaysia Experience Required: 10+ Years Employment Type: Full-Time Annual Package: ₹33.81 Lakhs per annum (INR)* About the Role We are looking for an experienced and highly capable DevOps Integration Engineer with 10+ years of professional experience in enterprise systems integration and DevOps practices. You will act as the subject matter expert (SME) for designing, developing, deploying, and supporting cloud-based integration solutions, primarily using the Microsoft Azure technology stack. This role is critical in leading the integration platform modernization and aligning technology with business goals. Key Responsibilities: Lead the end-to-end design and implementation of integration workflows, APIs, and messaging systems on Azure. Define integration patterns, enforce development standards, and streamline deployment processes. Collaborate with cross-functional teams including architects, developers, QA, and PMs for successful solution delivery. Monitor production integrations, troubleshoot issues, and ensure high availability and compliance with SLAs. Maintain technical documentation including architecture diagrams, deployment models, and interface specifications. Play a key role in migrating from legacy platforms to modern integration technologies. Ensure all integration processes comply with information security and regulatory requirements. 
Technical Skills Required: Agile tools: Azure DevOps, Jira, Asana Version control: GitHub, Azure DevOps, TFS Programming: C# (.NET Framework 4.8 / .NET Core 6) Node.js, JavaScript, PowerShell Microsoft SQL Server Azure Integration Services: Logic Apps API Management Service Bus Functions Data Factory Event Grid CI/CD: Azure DevOps Pipelines, NPM, Visual Studio Test Automation: SonarQube, Selenium, Katalon Preferred: Scripting/Other Tools: GoLang, Python, Bash, YAML, JSON IaC Tools: Terraform, Bicep, ARM Templates Deployment Automation: Octopus Deploy, Jenkins, LaunchDarkly, Ansible Cloud: AWS, Mulesoft, Boomi Containers: Docker, Kubernetes Candidate Profile: Bachelor’s or Master’s degree in Software Engineering, Computer Science, or a related field. 10+ years of experience in software development, systems integration, or DevOps engineering. Strong analytical and problem-solving skills with the ability to work independently. Excellent communication and stakeholder engagement skills. Deep understanding of software delivery pipelines, cloud-native development, and enterprise architecture. Knowledge of Agile methodologies and ITIL processes. Job Type: Full-time Pay: From ₹3,381,000.00 per year Benefits: Food provided Work Location: In person Speak with the employer +91 6364835344
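Integrations like those described above must tolerate transient failures; Azure Service Bus and Logic Apps provide retry policies built in, but the underlying retry-with-exponential-backoff pattern can be sketched in plain Python. The `flaky` function and attempt counts here are illustrative assumptions.

```python
import time

def with_retries(func, attempts=3, base_delay=0.01):
    """Call func, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # exhausted all attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky downstream call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok, after two retried failures
```

In production this wrapper would also cap total elapsed time and distinguish retryable from non-retryable errors, which the platform-native retry policies do for you.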

Posted 1 month ago

Apply

7.0 years

15 Lacs

Coimbatore

On-site

Job Title: Senior DevOps Engineer Location: Coimbatore Job Type: Full-time Experience: 7+ Years About the Client Our client is a product-based technology firm focused on delivering secure, scalable, and enterprise-grade digital transformation solutions. With a strong emphasis on engineering excellence, they foster a collaborative and agile environment to drive innovation across industries. Role Overview We are looking for a Senior DevOps Engineer to lead DevOps strategy, automation, and infrastructure management. The ideal candidate will have deep expertise in Azure, CI/CD pipelines, Docker, Kubernetes, and Terraform, with a passion for optimizing cloud-based environments and streamlining engineering workflows. Key Responsibilities Design, build, and maintain CI/CD pipelines (Azure Pipelines, GitHub Actions, Jenkins) Manage infrastructure as code using Terraform Oversee containerization and orchestration with Docker and Kubernetes Deploy and maintain Java-based applications in production Implement monitoring, logging, and alerting tools Collaborate with development, QA, and security teams Automate routine tasks to enhance productivity Optimize cloud usage and reduce infrastructure costs Ensure compliance with security and regulatory standards Evaluate and integrate DevOps tools to improve delivery and reliability Required Skills Azure (must-have); AWS or GCP (nice-to-have) Proficient with Git, Git-based workflows, and command-line tools Scripting experience in Bash and PowerShell; YAML proficiency Programming in Python, JavaScript/TypeScript, or C# Strong experience with Docker and Kubernetes Hands-on with Terraform; experience with Ansible, Chef, or Puppet CI/CD tools: Azure DevOps, GitHub Actions, or Jenkins Nice-to-Have Skills Basic DBA/database administration Familiarity with Azure Boards, Jira, or GitLab Issues Experience with Linux environments Cloud cost optimization and performance tuning Key Behaviours Strong analytical and problem-solving skills Excellent communication and collaboration across teams Proven leadership in driving DevOps initiatives Highly adaptable and committed to continuous improvement Qualifications Bachelor’s degree in Computer Science or related field (or 5+ years of relevant experience in lieu of a degree) 7+ years of total IT experience with significant DevOps exposure Demonstrated leadership in DevOps strategy and implementation Interested? Send your resume to: jobs@prognova.co Job Types: Full-time, Permanent Pay: ₹1,500,000.00 per year Schedule: Monday to Friday Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Experience: DevOps: 7 years (Required) Azure: 7 years (Required) Docker: 7 years (Required) Work Location: In person Application Deadline: 15/07/2025

Posted 1 month ago

Apply

0 years

5 - 9 Lacs

Noida

On-site

Posted On: 27 Jun 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description Key Tasks & Responsibilities Development of highly performant public-facing REST APIs and associated system integrations in an Azure-hosted environment. Documentation of APIs conforming to the OpenAPI 3.x specification. Participating in code reviews, design workshops, story/ticket elaboration, etc.
Review existing legacy implementations and input from the architecture team, and aid in designing and building appropriate solutions on the new platform following best practices. Ensure that the new platform is developed, tested and hosted in the most secure, scalable manner. Aid in the automation of testing and deployments of all deliverables. Required Skills C#, Microsoft SQL Server or Azure SQL, Azure Cosmos DB, Azure Service Bus, Azure Function Apps, Auth0, WebSockets Strong development experience in C# and .NET Core technologies built up across a range of different projects Experience of developing APIs which conform as much as possible to REST principles in terms of resources, sub-resources, responses and error handling Experience of API design and documentation using OpenAPI 3.x / YAML / Swagger Experience of development, deployment and support within an Azure environment, with an understanding of security and authorisation concepts Performance tuning of APIs and Azure Functions Ability and willingness to learn quickly and adapt to a fast-changing environment, with a strong interest in continuous improvement and delivery. Strong problem-solving skills and a good understanding of best practices and the importance of test automation processes. Some familiarity with AWS, and especially Elasticsearch, would be beneficial but not mandatory. Education and Professional Membership Educated to degree level or equivalent, preferably in Computer Science or a related subject Azure certifications an advantage Mandatory Competencies Fundamental Technical Skills - C# .NET Desktop - .Net Core Cloud - Azure Cloud - AWS Beh - Communication and collaboration Database - SQL Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth.
From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
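The OpenAPI 3.x documentation skill in the listing above amounts to producing a spec document shaped like the following. This is a minimal hand-built sketch with an invented Orders API; real specs are usually authored in YAML and validated with tooling such as Swagger Editor.

```python
import json

# Minimal OpenAPI 3.0 document as a Python dict. The title, path, and
# parameter names are illustrative assumptions, not a real interface.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "parameters": [{
                    "name": "orderId", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The order"},
                    "404": {"description": "Order not found"},
                },
            }
        }
    },
}
print(json.dumps(spec, indent=2))
```

Note that `openapi`, `info` (with `title` and `version`), and `paths` are the required top-level members of an OpenAPI 3.0 document; everything else hangs off them.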

Posted 1 month ago

Apply

0.5 years

0 Lacs

New Delhi, Delhi, India

On-site

At AlgoSec, What you do matters! Over 2,200 of the world’s leading organizations trust AlgoSec to help secure their most critical workloads across public cloud, private cloud, containers, and on-premises networks. Join our global team, securing application connectivity, anywhere. AlgoSec is looking for talented and motivated students/graduates to join our team and take part in developing test automation with cutting-edge technologies. Location: Gurugram, India Direct employment Responsibilities E2E testing, including designing tests and then automating them. Develop and maintain UI & API automation tests in a CI/CD environment. Writing and executing automated tests based on the specified environment. Support, maintain, and enhance all test case automation related activities during iterative development and regression testing. Review user stories and functional requirements. Assist with manual testing; execute manual test cases and scripts for products under development using test management/tracking tools. Technical Requirements Computer Science student or equivalent degree student, GPA 8.5 and above (maximum 0.5 years of studies remaining). Knowledge or relevant experience with programming languages such as C#, C++, and Java. Strong understanding of OOP, TDD, SW architecture designs and patterns. Strong troubleshooting and problem-solving skills with high attention to detail. Able to work independently, self-motivated, detail-oriented and organized. Knowledge of web technologies including HTML, YAML, JSON – an advantage. Experience with Selenium – an advantage. Experience with Git – an advantage. Knowledge and experience in testing methodologies – an advantage. Soft Skills Requirements Multitasking and problem-solving abilities, context switching and "out-of-the-box" thinking abilities. Team player, pleasant and with a high level of integrity. Very organized, thorough, and devoted. Bright, fast learner, independent.
Excellent written and spoken communication skills in English. AlgoSec is an Equal Opportunity Employer (EEO), committed to creating a friendly, diverse and inclusive company culture.
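The test-design-then-automate workflow described above can be sketched with Python's built-in `unittest`; the function under test and its behaviour are invented for illustration (AlgoSec's actual stack centres on C#/Selenium).

```python
import unittest

def normalize_username(raw):
    """Trim surrounding whitespace and lowercase a username."""
    return raw.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_idempotent(self):
        self.assertEqual(normalize_username("bob"), "bob")

# Run the suite programmatically (unittest.main() would sys.exit()).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeUsernameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The same structure carries over to UI automation: a Selenium-backed page object replaces the plain function, while the test cases and runner stay the same.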

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Job Title - Decision Science Practitioner Analyst S&C GN Management Level: Senior Analyst Location: Bangalore/Kolkata Must have skills: Collibra Data Quality - data profiling, anomaly detection, reconciliation, data validation, Python, SQL Good to have skills: PySpark, Kubernetes, Docker, Git Job Summary: We are seeking a highly skilled and motivated Data Science cum Data Engineer Senior Analyst to lead innovative projects and drive impactful solutions in domains such as Consumer Tech, Enterprise Tech, and Semiconductors. This role combines hands-on technical expertise and client delivery management to execute cutting-edge projects in data science and data engineering. Key Responsibilities Data Science and Engineering Implement and manage end-to-end Data Quality frameworks using Collibra Data Quality (CDQ). This includes requirement gathering from the client, code development in SQL, unit testing, client demos, user acceptance testing, documentation etc. Work extensively with business users, data analysts, and other stakeholders to understand data quality requirements and business use cases. Develop data validation, profiling, anomaly detection, and reconciliation processes. Write SQL queries for simple to complex data quality checks, and Python and PySpark scripts to support data transformation and data ingestion. Deploy and manage solutions on Kubernetes workloads for scalable execution. Maintain comprehensive technical documentation of Data Quality processes and implemented solutions. Work in an Agile environment, leveraging Jira for sprint planning and task management. Troubleshoot data quality issues and collaborate with engineering teams for resolution. Provide insights for continuous improvement in data governance and quality processes. Build and manage robust data pipelines using PySpark and Python to read and write from databases such as Vertica and PostgreSQL.
Optimize and maintain existing pipelines for performance and reliability. Build custom solutions using Python, including FastAPI applications and plugins for Collibra Data Quality. Oversee the infrastructure of the Collibra application in a Kubernetes environment, perform upgrades when required, and troubleshoot and resolve any Kubernetes issues that may affect the application's operation. Deploy and manage solutions and optimize resources for deployments in Kubernetes, including writing YAML files and managing configurations. Build and deploy Docker images for various use cases, ensuring efficient and reusable solutions. Collaboration and Training Communicate effectively with stakeholders to align technical implementations with business objectives. Provide training and guidance to stakeholders on Collibra Data Quality usage and help them build and implement data quality rules. Version Control and Documentation Use Git for version control to manage code and collaborate effectively. Document all implementations, including data quality workflows, data pipelines, and deployment processes, ensuring easy reference and knowledge sharing. Database and Data Model Optimization Design and optimize data models for efficient storage and retrieval. Required Qualifications Experience: 4+ years in data science Education: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field Industry Knowledge: Experience in Consumer Tech, Enterprise Tech, or Semiconductors preferred but not mandatory Technical Skills Programming: Proficiency in Python and SQL for data analysis and transformation. Tools: Hands-on experience with Collibra Data Quality (CDQ) or similar Data Quality tools (e.g., Informatica DQ, Talend, Great Expectations, Ataccama, etc.). Experience working with Kubernetes workloads. Experience with Agile methodologies and task tracking using Jira. Preferred Skills Strong analytical and problem-solving skills with a results-oriented mindset.
Good communication, stakeholder management and requirement gathering capabilities. Additional Information: The ideal candidate will possess a strong educational background in a quantitative discipline and experience working with Hi-Tech clients. This position is based at our Bengaluru (preferred) and Kolkata offices. About Our Company | Accenture Experience: 4+ years in data science Educational Qualification: B.Tech or M.Tech in Computer Science, Statistics, Applied Mathematics, or a related field
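The profiling and validation checks described in the listing above (null-rate and range rules) reduce to simple logic. Collibra DQ expresses these as SQL rules over warehouse tables; the stdlib-only sketch below just illustrates the idea, and the function name, thresholds, and sample values are assumptions.

```python
def profile_column(values, min_val=None, max_val=None, max_null_rate=0.1):
    """Profile a column: null rate, null-rate check, and out-of-range values."""
    n = len(values)
    nulls = sum(1 for v in values if v is None)
    null_rate = nulls / n if n else 0.0
    out_of_range = [
        v for v in values
        if v is not None
        and ((min_val is not None and v < min_val)
             or (max_val is not None and v > max_val))
    ]
    return {
        "null_rate": null_rate,
        "null_check_passed": null_rate <= max_null_rate,
        "out_of_range": out_of_range,
    }

# One null out of five values (20% > the 10% default threshold) and one
# value outside the [0, 100] range.
report = profile_column([10, None, 25, 300, 40], min_val=0, max_val=100)
print(report)
```

A reconciliation check follows the same shape: compute an aggregate (row count, sum) on source and target, then compare within a tolerance.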

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: DevOps Engineer Experience: 5 to 8 Years Location: Pune Job Description: We are looking for a skilled DevOps Engineer with 5 to 12 years of experience to join our dynamic team. The ideal candidate will have a strong background in DevOps practices, CI/CD pipeline creation, and experience with GCP services. You will play a crucial role in ensuring smooth development, deployment, and integration processes. Key Responsibilities: CI/CD Pipeline Creation: Design, implement, and manage CI/CD pipelines using GitHub, ensuring seamless integration and delivery of software. Version Control: Manage and maintain code repositories using GitHub, ensuring best practices for version control and collaboration. Infrastructure as Code: Write and maintain infrastructure as code (IaC) using Terraform/YAML, ensuring consistent and reliable deployment processes. GCP Services Management: Utilize Google Cloud Platform (GCP) services to build, deploy, and scale applications. Manage and optimize cloud resources to ensure cost-effective operations. Automation & Monitoring: Implement automation scripts and monitoring tools to enhance the efficiency, reliability, and performance of our systems. Collaboration: Work closely with development, QA, and operations teams to ensure smooth workflows and resolve issues efficiently. Security & Compliance: Ensure that all systems and processes comply with security and regulatory standards. Required Skills: DevOps Practices: Strong understanding of DevOps principles, including continuous integration, continuous delivery, and continuous deployment. GitHub: Extensive experience with GitHub for version control, collaboration, and pipeline integration. CI/CD: Hands-on experience in creating and managing CI/CD pipelines. GCP Services: Solid experience with GCP services, including compute, storage, networking, and security. Preferred Qualifications: GCP Certification: Google Cloud Platform certification is highly desirable and will be an added advantage.
Scripting Languages: Proficiency in scripting languages such as Python, Bash, or similar. Monitoring Tools: Experience with monitoring and logging tools like Prometheus, Grafana, or Stackdriver. Educational Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field.
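The GitHub-based pipeline-as-code responsibility above boils down to authoring workflow definitions. As a sketch, the snippet below builds a minimal GitHub Actions workflow as a Python dict and emits it as JSON, which any YAML 1.2 parser also accepts since JSON is a subset of YAML; the job name and step commands are illustrative assumptions.

```python
import json

workflow = {
    "name": "ci",
    "on": {"push": {"branches": ["main"]}},
    "jobs": {
        "build": {
            "runs-on": "ubuntu-latest",
            "steps": [
                {"uses": "actions/checkout@v4"},
                {"run": "terraform init && terraform validate"},
            ],
        }
    },
}
# Emitting as JSON keeps the structure machine-checkable before commit.
print(json.dumps(workflow, indent=2))
```

In practice the file lives at `.github/workflows/ci.yml` and is written as YAML directly; generating it programmatically is mainly useful when many similar pipelines must stay in sync.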

Posted 1 month ago

Apply

4.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Josh Software is relentlessly focused on discovering, developing and delivering innovative solutions that connect our customers to the people they serve through the advanced use of technology. With our reach, range and resources, we provide our customers a universal platform for driving their vision into their markets using consistent and reliable technology solutions. Josh Software has more than 15 years’ experience and operates in the key strategic geographies in the USA, Australia, Europe, SE Asia and India. Skills (Must have): Strong in Core Java, familiar with the latest concepts, e.g. Java 17 Familiar with Spring Boot, JSON, XML, YAML configs etc. Sound knowledge of REST APIs with respect to security, performance, logging and error handling Hands-on with tools like Postman, Swagger, ELK (logs), AWS CloudWatch etc. Familiar with development processes and network concepts Knowledge of AWS Cloud (good to have) Should have a good understanding of databases and should be able to write basic SQL queries Skills (Desirable): Knowledge / Experience in Docker and containers AWS or any cloud concepts Fluency in written and spoken English Qualification: M.C.A, B.Sc/M.Sc in Computers, B.E/B.Tech in Computer Science, Engineering, or a related field. Additional Information: We offer a competitive salary and excellent benefits that are above industry standard. Do check our impressive growth rate on and ratings on Pls submit your resume in this standard 1-page or 2-page Please hear from our employees on
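The "basic SQL queries" expectation in the listing above can be sketched with Python's stdlib `sqlite3` module against an in-memory database; the table, columns, and rows are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "PAID", 120.0), (2, "PAID", 80.0), (3, "OPEN", 50.0)],
)

# A typical basic query: aggregate total amount per status.
rows = conn.execute(
    "SELECT status, SUM(amount) FROM orders GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # [('OPEN', 50.0), ('PAID', 200.0)]
```

The same `GROUP BY`/aggregate shape carries over unchanged to the server databases a Java/Spring service would actually query.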

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At Broadridge, we've built a culture where the highest goal is to empower others to accomplish more. If you’re passionate about developing your career, while helping others along the way, come join the Broadridge team. Responsibilities of the Role: Open position for an SRE engineer in a reputed financial services firm. The main goals of SRE engineers are to create scalable and highly reliable software systems. Drive the Site Reliability Engineering agenda to improve availability, reliability, and performance of services. Experience in triaging incidents, driving them to resolution, and RCA Build new systems/applications as per requirements. Coordinate build activities with external teams. Identify tasks which can be automated and reduce manual work for the team. Work hands-on on break/fix across Windows, AIX, and Linux platforms, as well as interaction with external parties for critical issues or hardware/software support tickets. Work on strategic projects related to DR testing, automated failover, next-gen monitoring, and system readiness. Ensure our systems are healthy, monitored, automated, fault-tolerant and designed to scale. Directly interact with business unit executives and project management teams to communicate operational status, key project status, and business value of services produced. Provide clear communication/escalation/follow-up and closure to business-impacting events involving multiple Broadridge teams and external partners. Required Skills and Qualifications: 5-8 years of experience in a service delivery role doing systems administration of Red Hat Linux systems in a large-scale production environment Knowledgeable in middleware administration of Apache, Tomcat, IHS, IIS and WebSphere Working knowledge of network load balancing. Thorough knowledge of good server and system administration principles. Monitoring tools like SCOM, BSM, Datadog, Splunk Strong functional knowledge of core protocols such as TCP/IP, SSH, SMTP, LDAP, NFS and DNS.
Knowledge of ITIL processes, change management, and incident management. Experience running disaster recovery tests and application failovers. Understanding of SSL certificates and chain certificates Technology Skills: Must have Middleware knowledge - IIS, IHS, WebSphere, Tomcat (administration and configuration) Red Hat Linux administration (RHEL 6, 7 or 8) IT Operations, ITIL, Remedy Monitoring tools Good to have Windows Server administration AIX administration Networking knowledge of DNS, IP, NFS, load balancers, certificate management Scripting skills (shell, YAML) and automation tools like Ansible. Knowledge of AWS and/or Azure
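The incident-triage side of the SRE role above often starts with grouping errors by service out of raw log streams. This stdlib-only sketch assumes a simple `"<timestamp> <LEVEL> <service>: <message>"` line format, which is an illustrative convention, not Broadridge's actual log schema.

```python
from collections import Counter

def error_counts(log_lines):
    """Count ERROR lines per service, assuming '<ts> <LEVEL> <service>: <msg>'."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=3)
        if len(parts) >= 3 and parts[1] == "ERROR":
            counts[parts[2].rstrip(":")] += 1
    return counts

logs = [
    "2025-06-30T10:00:01 ERROR tomcat: connection refused",
    "2025-06-30T10:00:02 INFO ihs: request served",
    "2025-06-30T10:00:03 ERROR tomcat: connection refused",
]
counts = error_counts(logs)
print(counts)  # Counter({'tomcat': 2})
```

Tools like Splunk or Datadog perform the equivalent grouping with a saved query; a sketch like this is the fallback when all you have is a shell on the box.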

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Job Title: Azure DevOps Engineer Location: Remote Experience Required: 6+ Years Employment Type: Full-Time Job Summary: We are seeking a skilled Azure DevOps Engineer with 6+ years of experience in designing and implementing CI/CD pipelines, managing cloud infrastructure, and automating deployment processes. The ideal candidate will have a deep understanding of Azure services, DevOps practices, and infrastructure-as-code tools. Key Responsibilities: Design, build, and maintain secure and scalable CI/CD pipelines using Azure DevOps Manage infrastructure and deployments using tools like ARM templates, Bicep, Terraform, or Azure CLI Automate configuration and application deployments across environments Monitor system performance, availability, and security in Azure environments Collaborate with development, QA, and operations teams to ensure smooth delivery of applications Implement best practices for source control, branching strategies, and release management Manage build and release artifacts, approvals, and deployment strategies Troubleshoot and resolve infrastructure and deployment issues in a timely manner Required Skills: 6+ years of experience in DevOps or infrastructure automation roles Expertise in Azure services including Azure App Services, Azure Functions, AKS, Azure Storage, and Azure Key Vault Strong hands-on experience with Azure DevOps (Pipelines, Repos, Artifacts, Boards) Proficiency in scripting with PowerShell, Bash, or Python Experience with Infrastructure as Code tools (ARM, Bicep, Terraform) Knowledge of containerization (Docker) and orchestration (Kubernetes/AKS) Familiarity with Git, YAML, and pipeline-as-code practices Understanding of monitoring tools such as Azure Monitor, Log Analytics, and Application Insights

Posted 1 month ago

Apply

0.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

Job Title: Senior DevOps Engineer Location: Coimbatore Job Type: Full-time Experience: 7+ Years About the Client Our client is a product-based technology firm focused on delivering secure, scalable, and enterprise-grade digital transformation solutions. With a strong emphasis on engineering excellence, they foster a collaborative and agile environment to drive innovation across industries. Role Overview We are looking for a Senior DevOps Engineer to lead DevOps strategy, automation, and infrastructure management. The ideal candidate will have deep expertise in Azure, CI/CD pipelines, Docker, Kubernetes, and Terraform, with a passion for optimizing cloud-based environments and streamlining engineering workflows. Key Responsibilities Design, build, and maintain CI/CD pipelines (Azure Pipelines, GitHub Actions, Jenkins) Manage infrastructure as code using Terraform Oversee containerization and orchestration with Docker and Kubernetes Deploy and maintain Java-based applications in production Implement monitoring, logging, and alerting tools Collaborate with development, QA, and security teams Automate routine tasks to enhance productivity Optimize cloud usage and reduce infrastructure costs Ensure compliance with security and regulatory standards Evaluate and integrate DevOps tools to improve delivery and reliability Required Skills Azure (must-have); AWS or GCP (nice-to-have) Proficient with Git, Git-based workflows, and command-line tools Scripting experience in Bash and PowerShell; YAML proficiency Programming in Python, JavaScript/TypeScript, or C# Strong experience with Docker and Kubernetes Hands-on with Terraform; experience with Ansible, Chef, or Puppet CI/CD tools: Azure DevOps, GitHub Actions, or Jenkins Nice-to-Have Skills Basic DBA/database administration Familiarity with Azure Boards, Jira, or GitLab Issues Experience with Linux environments Cloud cost optimization and performance tuning Key Behaviours Strong analytical and problem-solving skills Excellent communication and collaboration across teams Proven leadership in driving DevOps initiatives Highly adaptable and committed to continuous improvement Qualifications Bachelor’s degree in Computer Science or related field (or 5+ years of relevant experience in lieu of a degree) 7+ years of total IT experience with significant DevOps exposure Demonstrated leadership in DevOps strategy and implementation Interested? Send your resume to: jobs@prognova.co Job Types: Full-time, Permanent Pay: ₹1,500,000.00 per year Schedule: Monday to Friday Ability to commute/relocate: Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Experience: DevOps: 7 years (Required) Azure: 7 years (Required) Docker: 7 years (Required) Work Location: In person Application Deadline: 15/07/2025

Posted 1 month ago


12.0 - 15.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Mission/Position Headline: Responsible for the development and on-time delivery of software components in a project, translating software design into code in accordance with product quality requirements while improving team productivity.

Areas of Responsibility:
Analyzes requirements, translates them into design, and drives estimation of the work product.
Defines and implements the work breakdown structure for the development.
Provides inputs for project management and effort tracking.
Leads implementation and developer testing in the team.
Supports engineers within the team with technical/technology/requirement/design expertise.
Performs regular internal technical coordination and reviews with all relevant project stakeholders.
Tests the work product; investigates and fixes software defects found through test and code review; submits work products for release after integration, ensuring requirements are addressed and deliverables are of high quality.
Ensures integration and submission of the solution into the software configuration management system within committed delivery timelines.

Desired Experience:
Proficiency in ASP.NET Core Web API, C#, .NET Core, WPF, Entity Framework, SQL Server 2022
Secondary skills: Docker, Kubernetes, Terraform, YAML
Strong knowledge of Git or an equivalent source control system
UI controls: Telerik
Nice to have: Python, knowledge of various CI/CD tools

Qualification and Experience
Bachelor's or Master's degree in Computer Science/Electronics Engineering (or equivalent) required
12 to 15 years of experience in the software development lifecycle

Capabilities
Good communication skills; self-motivated, quality- and result-oriented
Strong analytical and problem-solving skills
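The estimation responsibility described in this posting is often approached with a three-point (PERT) estimate over the work breakdown structure. A minimal sketch in Python; the task names and day figures are invented for illustration, not from the posting:

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (PERT) estimate: weighted mean favoring the most likely value."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical work-breakdown items with (optimistic, most likely, pessimistic) days.
tasks = {
    "api-endpoint": (2, 3, 6),
    "entity-model": (1, 2, 4),
    "integration-tests": (2, 4, 8),
}

# Sum the per-task estimates to get a total effort figure for tracking.
total = sum(pert_estimate(o, m, p) for o, m, p in tasks.values())
print(round(total, 2))  # ≈ 9.83 estimated days
```

The 1:4:1 weighting is the standard PERT beta-distribution approximation; real effort tracking would also carry a variance term per task.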

Posted 1 month ago


12.0 - 15.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Mission/Position Headline: Responsible for the development and on-time delivery of software components in a project, translating software design into code in accordance with product quality requirements while improving team productivity.

Areas of Responsibility:
Analyzes requirements, translates them into design, and drives estimation of the work product.
Defines and implements the work breakdown structure for the development.
Provides inputs for project management and effort tracking.
Leads implementation and developer testing in the team.
Supports engineers within the team with technical/technology/requirement/design expertise.
Performs regular internal technical coordination and reviews with all relevant project stakeholders.
Tests the work product; investigates and fixes software defects found through test and code review; submits work products for release after integration, ensuring requirements are addressed and deliverables are of high quality.
Ensures integration and submission of the solution into the software configuration management system within committed delivery timelines.

Desired Experience:
Proficiency in JavaScript, HTML, CSS, Angular/React/Vue/Ember, SQL/Postgres, GraphQL (Apollo)
Secondary skills: Docker, Kubernetes, Terraform, YAML
Strong knowledge of Git or an equivalent source control system
Nice to have: Python, knowledge of various CI/CD tools

Qualification and Experience
Bachelor's or Master's degree in Computer Science/Electronics Engineering (or equivalent) required
12 to 15 years of experience in the software development lifecycle

Capabilities
Good communication skills; self-motivated, quality- and result-oriented
Strong analytical and problem-solving skills

Posted 1 month ago


6.0 - 11.0 years

11 - 16 Lacs

Pune

Work from Office

Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We're looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a phenomenal addition to our vibrant team. Siemens Mobility is an independently managed company of Siemens AG. Its core business includes rail vehicles, rail automation and electrification solutions, turnkey systems, intelligent road traffic technology and related services. The Information Technology (IT) department has the global responsibility for the internal IT of Siemens Mobility. Its goal is to provide a robust and efficient IT landscape derived from business and market demands. Your personality and individuality make the difference. In our team, we increase business performance and point the way into the digital age. Is that exactly your thing? Then live your passion in a cross-location team in which you can actively craft the future of our company. You open up new possibilities for our customers with your competence. Connected with this is an exciting career path that leads you to ever new projects and solutions in the field of IT for Siemens Mobility.
We are looking for a Senior AI Developer.

You'll make a difference by:

Core AI Capabilities
Expertise in text understanding and generation
Development of complex, agentic AI services
Implementation of semantic search capabilities
Development of RAG (Retrieval-Augmented Generation) variants

Technical Skills
Python-based prompt flow implementation
LLM-based processing logic
JSON/YAML schema development
Integration with multiple AI frameworks: AWS Bedrock, Azure Document Intelligence, Azure OpenAI, LangChain/LlamaIndex

Quality & Evaluation
Design and implement AI evaluation pipelines
Implement quality metrics: G-Eval, faithfulness, answer correctness, answer relevance
Synthetic ground-truth data generation
Performance optimization and monitoring

Use Case Development
Text extraction and analysis
Document comparison capabilities
List generation from documents
Template-filling implementations
Chat function development

You'll win us over by:
Experience level: 6+ years
Very good English skills are required.

Join us and be yourself! We value your outstanding identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and build a better tomorrow with us. Make your mark in our exciting world at Siemens. This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come. Find out more about Siemens careers at & more about mobility at https://new.siemens.com/global/en/products/mobility.html
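The evaluation metrics this posting lists (answer relevance, faithfulness, etc.) are typically computed by LLM judges such as G-Eval; as a toy illustration only, here is the shape of such a metric using a simple token-overlap proxy, which is an assumption of this sketch and not any framework's actual implementation:

```python
def token_overlap_relevance(question: str, answer: str) -> float:
    """Toy answer-relevance score: fraction of question tokens that appear in the answer.

    Real evaluation pipelines (G-Eval-style LLM judges, embedding similarity)
    are far more sophisticated; this only illustrates the function's shape:
    (question, answer) -> score in [0, 1].
    """
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    if not q_tokens:
        return 0.0
    return len(q_tokens & a_tokens) / len(q_tokens)

score = token_overlap_relevance(
    "what is retrieval augmented generation",
    "retrieval augmented generation combines search with an LLM",
)
print(round(score, 2))  # 0.6 — three of five question tokens appear in the answer
```

An evaluation pipeline would run such a scorer over a dataset of (question, answer, ground truth) triples and aggregate the scores per metric.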

Posted 1 month ago


3.0 - 5.0 years

15 - 20 Lacs

Pune

Work from Office

Hello eager tech expert! To create a better future, you need to think outside the box. That's why we at Siemens need innovators who aren't afraid to push boundaries to join our diverse team of tech gurus. Got what it takes? Then help us create lasting, positive impact! Working for Siemens Financial Services Information Technology (SFS IT), you will work on the continuous enhancement of our Siemens Credit Warehouse solution by translating business requirements into IT solutions and working hand in hand on the implementation of these with our interdisciplinary and international team of IT experts. The Siemens Credit Warehouse is a business-critical IT application that provides credit rating information and credit limits of our customers to all Siemens entities worldwide. We are looking for an experienced Release Manager to become part of our Siemens Financial Services Information Technology (SFS IT) Data Management team. You will have a pivotal role in the moderation of all aspects related to release management of our Data Platform, liaising between the different stakeholders, who range from senior management to our citizen-developer community. Through your strong communication and presentation skills, coupled with your solid technical background and critical thinking, you're able to convey technical topics to a non-technical/management audience, leading the topics under your responsibility towards a positive outcome based on your naturally constructive approach. You'll break new ground by: Leading topics across multiple stakeholders from different units in our organization (IT and Business). Actively listening to issues and problems faced by technical and non-technical members. Producing outstanding technical articles for documentation purposes. You're excited to build on your existing expertise, including: University degree in computer science, business information systems or a similar area of knowledge. At least 3 to 5 years' experience in a release manager role.
Strong technical background with a proven track record in:
Data engineering and data warehousing, especially with Snowflake and open-source dbt (ideally dbt Cloud), enabling you to champion CI/CD processes.
End-to-end setup and development of release management processes (CI/CD) and concepts.
Azure DevOps (especially CI/CD and project setup optimization), GitHub and GitLab, including Git Bash.
Reading YAML code for Azure DevOps pipelines and error handling.
Very good programming skills in SQL (especially DDL and DML statements).
Good general understanding of the Azure Cloud tech stack (Azure Portal, Logic Apps, Synapse, Blob Containers, Kafka, clusters and streaming).
A proven track record on AWS is a big plus. Experience in Terraform is a big plus.

Create a better #TomorrowWithUs! Protecting the environment, conserving our natural resources, fostering the health and performance of our people as well as safeguarding their working conditions are core to our social and business commitment at Siemens. This role is based in Pune/Mumbai. You'll also get to visit other locations in India and beyond, so you'll need to go where this journey takes you. In return, you'll get the chance to work with an international team on global topics.
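The pipeline-reading skill mentioned in this posting can be sketched without any YAML library: since YAML 1.2 accepts all JSON, a minimal azure-pipelines-style definition can be modeled as a plain dict and sanity-checked in Python. The lint rules below are a simplified assumption for illustration, not Microsoft's actual schema validation:

```python
import json

# A minimal Azure DevOps-style pipeline, modeled as a dict.
# Field names follow the azure-pipelines.yml layout; the steps are hypothetical.
pipeline = {
    "trigger": ["main"],
    "pool": {"vmImage": "ubuntu-latest"},
    "steps": [
        {"script": "dbt deps", "displayName": "Install dbt packages"},
        {"script": "dbt build", "displayName": "Build and test models"},
    ],
}

def lint_pipeline(p: dict) -> list[str]:
    """Return a list of problems found in a pipeline definition (toy checks)."""
    problems = []
    if "steps" not in p and "stages" not in p:
        problems.append("pipeline defines neither 'steps' nor 'stages'")
    for i, step in enumerate(p.get("steps", [])):
        if "script" not in step and "task" not in step:
            problems.append(f"step {i} has neither 'script' nor 'task'")
    return problems

print(lint_pipeline(pipeline))         # [] — no problems found
print(json.dumps(pipeline, indent=2))  # JSON form, which is also valid YAML 1.2
```

In practice a release manager would run the real pipeline against Azure DevOps' own validation; checks like these are only useful as a fast pre-commit guard.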

Posted 1 month ago


0 years

0 Lacs

Greater Chennai Area

On-site

Overview
DevOps; JSON & YAML serialised data structures; Ansible; NETCONF & YANG; Jinja2 templating; Python (basic); Linux (basic); Docker (basic); version control with git (GitHub); test automation (GitHub Actions); Atlassian Jira & Confluence. The responsibilities and requirements for the role mirror this skill set.
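The serialised-data and templating skills above can be illustrated in a few lines of Python. Stdlib `json` and `string.Template` stand in here for a full YAML parser and Jinja2, and the inventory values are made up for the example:

```python
import json
from string import Template

# Hypothetical network-inventory record, as it might arrive serialised from an API.
raw = '{"host": "edge-router-01", "vlan": 120, "mtu": 9000}'
inventory = json.loads(raw)                            # deserialise JSON -> dict
assert json.loads(json.dumps(inventory)) == inventory  # lossless round trip

# Jinja2-style variable substitution, using stdlib string.Template as a stand-in
# ($name placeholders instead of Jinja2's {{ name }}).
config = Template("interface $host: vlan $vlan, mtu $mtu")
rendered = config.substitute(inventory)
print(rendered)  # interface edge-router-01: vlan 120, mtu 9000
```

Ansible uses exactly this pattern at scale: structured host data (YAML/JSON) fed through Jinja2 templates to render per-device configuration.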

Posted 1 month ago


10.0 - 15.0 years

6 - 10 Lacs

Bengaluru

Work from Office

OSS Inventory Management, Service Orchestration, Network Transmission Technologies, TM Forum APIs, YANG, YAML.

Create solution stories as per the requirements specified in the BRD that deliver the specified business requirements. Produce a modular and flexible E2E design that meets the business requirements. The Solution Designer role is responsible for the overall integrity of the solution, so it requires knowledge of the existing system set, Telco systems strategy and any applicable IT standards. Proven experience in Service Fulfillment domains in Telco, including Service Orchestration, Service Design and Physical/Logical Inventory Management (COTS products or homegrown tools). Well versed, with prior design and operational experience, in any or all of these BT components in Network OSS: SRIMS, BerT, PACS and NGAE. Collaborate with product owners, business analysts, systems designers and platform architects to ensure that every system impacted by the design clearly understands the broader business objectives, and ensure documentation sign-off of system designs. Be responsible for Rapid Impact Assessments (RIA/IA) and generating estimates (VROM/ROM). Collaborate with the Enterprise Architect, ensuring that solution architectures are in alignment with the overall EA and technology strategy. Uphold the architectural governance principles and good implementation design for architectural deliverables. Effectively manage risks and issues associated with solution designs. Use tools and techniques that transform E2E design. Understanding of generic inventory management protocols and interfaces such as YAML and YANG. Ability to understand and define REST/SOAP-based web services. Familiar with TM Forum standards and APIs.

Posted 1 month ago


8.0 - 10.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Key Responsibilities : Automation Collaborate to use and create as required automation for the installation, configuration, and maintenance of IBM Z/OS hardware components, including servers, storage systems, networking equipment and appropriate operating systems and hypervisors. System Management Oversee the installation, configuration, and maintenance of IBM Z/OS hardware components, including servers, storage systems, networking equipment and appropriate operating systems and hypervisors. Performance Monitoring Regularly monitor system performance and resource utilization, identifying and resolving performance bottlenecks Troubleshooting Diagnose and resolve hardware-related issues, coordinating with vendors as necessary for repairs and parts replacement. Security Administration Ensure compliance with security policies, manage user access controls, and implement necessary security measures to safeguard data. Capacity Planning Analyze capacity trends and forecast future hardware requirements to ensure system scalability and optimization . Backup and Recovery Implement robust backup and recovery strategies to protect data and minimize downtime during failures . Documentation Maintain detailed documentation of system configurations, processes, and procedures for training and compliance purposes. Collaboration Work closely with software engineers, network administrators, and IT support teams to integrate hardware systems with software applications and maintain overall system health. Updates and Upgrades Plan and execute hardware upgrades and software patches in coordination with maintenance windows and organizational protocols. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 8-10 years of experience working with IBM Mainframe systems as a hardware and OS administrator /System Programmer. Proven experience with IBM Z/OS systems, mainframe architecture and hardware management. 
Industry experience working with the IBM Z Hardware Management Console (HMC) and Dynamic Partition Manager (DPM). Experience with Python, YAML, Ansible, Terraform. Real-world experience in defining Z Sysplexes, Coupling Structures, etc. Strong troubleshooting and problem-solving skills. Knowledge of system security best practices and compliance standards. Familiarity with backup solutions and disaster recovery planning. Excellent communication and interpersonal skills. Preferred technical and professional experience: Experience with IBM hardware components, such as IBM Z mainframes, storage systems, tape systems, crypto cards, FICON channels, network adapters and all other hardware components. Certifications related to IBM z/OS or mainframe systems.
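The capacity-planning responsibility in this posting usually starts with a simple trend fit over historical utilization. A minimal least-squares extrapolation in Python; the monthly utilization figures are invented for illustration:

```python
def linear_forecast(values: list[float], steps_ahead: int) -> float:
    """Fit y = a + b*x by least squares over equally spaced samples,
    then extrapolate steps_ahead beyond the last sample."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a + b * (n - 1 + steps_ahead)

# Hypothetical monthly CPU utilization (%) for a mainframe LPAR.
utilization = [52.0, 55.0, 58.0, 61.0, 64.0]
print(linear_forecast(utilization, 3))  # 73.0 — projected utilization three months out
```

A real capacity plan would use RMF/SMF data and account for seasonality, but a linear fit like this is often the first pass for deciding when a hardware upgrade is needed.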

Posted 1 month ago
