Home
Jobs

1094 Puppet Jobs - Page 22

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Description

Role Description
Our Tech and Product team is tasked with innovating and maintaining a massive distributed systems engineering platform that ships hundreds of features to production for tens of millions of users across all industries every day. Our users count on our platform to be highly reliable, lightning fast, supremely secure, and to preserve all of their customizations and integrations every time we ship. Our platform is deeply customizable to meet the differing demands of our vast user base, creating an exciting environment filled with complex challenges for our hundreds of agile engineering teams every day.

Required Skills and Experience
Salesforce is looking for Site Reliability Engineers to build and manage a multi-substrate Kubernetes and microservices platform which powers Core CRM and a growing set of applications across Salesforce. This platform provides the ability to develop and deploy microservices quickly and efficiently, accelerating their path to production.

In this role, you are responsible for the high availability of a large fleet of clusters running technologies such as Kubernetes, software load balancers, and service mesh. You will:
- Gain valuable experience troubleshooting real production issues, expanding your knowledge of the Kubernetes ecosystem's services and internals.
- Contribute code wherever possible to drive improvement.
- Drive automation efforts in Python/Golang/Terraform/Spinnaker/Puppet/Jenkins to eliminate manual day-to-day operational work.
- Help improve the visibility of the platform by implementing the necessary monitoring and metrics.
- Implement self-healing mechanisms that proactively fix issues and reduce manual labor.
- Improve your communication and collaboration skills while working with other infrastructure teams across Salesforce.
- Interact with a highly innovative and creative team of developers and architects.
- Evaluate new technologies to solve problems as needed.

You are the ideal candidate if you have a passion for live site service ownership, have demonstrated a strong ability to manage large distributed systems, and are comfortable troubleshooting complex production issues that span multiple disciplines. You bring a solid understanding of how infrastructure software components work, are able to automate tasks using a modern high-level language, and have good written and spoken communication skills.

Required Skills:
- Experience operating large-scale distributed systems, especially in cloud environments.
- Excellent troubleshooting skills and the ability to learn new technologies in complex distributed systems.
- Strong working experience with Linux systems administration and good knowledge of Linux internals.
- Good experience in a scripting/programming language such as Python or Go.
- Basic knowledge of networking protocols and components: TCP/IP stack, switches, routers, load balancers.
- Experience with Puppet, Chef, Ansible, or other DevOps tools.
- Experience with monitoring tools such as Nagios, Grafana, or Zabbix.
- Experience with Kubernetes, Docker, or service mesh.
- Experience with AWS, Terraform, Spinnaker.
- A continuous learner and critical thinker.
- A team player with great communication skills.

Areas you may work on include highly scalable, highly performant distributed systems with highly available and durable data storage, ensuring high availability of the stack above, including databases. A thorough understanding of distributed systems, systems programming, and working with system resources is required. Practical knowledge of clustering solutions, hands-on experience deploying code in public cloud environments, working knowledge of Kubernetes, and experience with the data APIs of the major public cloud vendors are highly desired skills.
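The self-healing mechanisms mentioned above can be sketched in miniature. This is an illustrative Python sketch only, not Salesforce's actual tooling; the class, service name, and restart threshold are hypothetical:

```python
from collections import deque  # not required below, but handy for windowed probes

# Hypothetical self-healing policy: restart a service after N consecutive
# failed health probes, mirroring the kind of automation an SRE builds to
# reduce manual toil.
class SelfHealer:
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.actions = []  # audit log of remediation actions taken

    def record_probe(self, service: str, healthy: bool) -> None:
        if healthy:
            self.consecutive_failures = 0
            return
        self.consecutive_failures += 1
        if self.consecutive_failures >= self.failure_threshold:
            # In a real system this would call the orchestrator's restart API.
            self.actions.append(f"restart {service}")
            self.consecutive_failures = 0

healer = SelfHealer(failure_threshold=3)
for ok in [True, False, False, False, True]:
    healer.record_probe("checkout-svc", ok)
print(healer.actions)  # one restart recorded after three consecutive failures
```

In production the probe would be an HTTP health check and the remediation an orchestrator API call; the reset after acting prevents restart storms.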
Benefits & Perks
- Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more.
- World-class enablement and on-demand training with Trailhead.com.
- Exposure to executive thought leaders and regular 1:1 coaching with leadership.
- Volunteer opportunities and participation in our 1:1:1 model for giving back to the community.
For more details, visit https://www.salesforcebenefits.com/

Posted 2 weeks ago

Apply

4.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Entity: Technology
Job Family Group: IT&S Group

Job Description:
Work location: Pune
Experience: 4-7 years (excluding internship), including 2-3 years of required experience in Azure.

You will work with a multi-disciplinary squad, engaging enterprise platform teams, data platform teams, vendors, and third-party resources in the resilient and optimal operation of one or more business-critical platforms.

Let me tell you about the role
As a Site Reliability Engineer, you will be responsible for building, maintaining, and operating the software solutions, infrastructure, and services that power our technology platforms. In this role, you will work with a team of engineers to ensure that our digital solutions are highly available, scalable, and secure, and you will be responsible for automating routine tasks, improving the solutions' performance, and providing technical support to other teams.

What you will deliver
- Ensure the reliability, performance, and scalability of large-scale, cloud-based applications and infrastructure.
- Create automated solutions to improve operational aspects of the site.
- Ensure that applications and websites run smoothly and efficiently.
- Detect issues and automatically manage failures to keep systems up and running.
- Work with software developers, engineers, and operations teams to improve system performance.
- Analyse incidents to prevent future disruptions.

What you will need to be successful (experience and qualifications)

Technical Skills
- A bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
- Relevant certifications (e.g., Azure cloud engineering, fundamentals, DevOps, or architect certifications) can be helpful.
- Knowledge of networking concepts, protocols, and tools; willingness to learn new technologies and adapt to changing environments.
- Skilled in managing configuration, deployments, and observability; handling and resolving incidents, including root cause analysis; and managing and operating complex systems for scalability, availability, and performance.
- Proficient communication and collaboration skills to work effectively with development and operations teams.

Software Skills
- Skilled in languages like Python, Go, Java, or Ruby, with scripting skills in Bash or PowerShell.
- Skilled in software engineering practices across the full SDLC, including coding standards, code reviews, source control management, continuous deployment (e.g., Jenkins, GitLab CI, or CircleCI), testing, and operations.
- Skilled in building complex software systems end-to-end that have been delivered and operated in production; should understand security and privacy best practices as well as how to properly monitor, log, and alarm production systems.

Infrastructure Skills
- Strong knowledge of Linux/Unix systems, including system configuration, networking, and debugging.
- Expert in building and scaling infrastructure services using Microsoft Azure.
- Skilled with infrastructure tools like Ansible, Puppet, Chef, or Terraform for infrastructure as code, monitoring tools (e.g., Prometheus, Grafana), and logging systems (e.g., the ELK stack).
- Skilled in using core cloud application infrastructure services, including identity platforms, networking, storage, databases, containers, and serverless.
- Good knowledge of databases, such as relational, graph, document, and key-value stores, including performance tuning and improvement.

Skills That Set You Apart
- A passion for mentoring and coaching engineers in both technical and soft skills.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate.
We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Additional Information
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter, as flexible working arrangements may be considered.

Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status, or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us.
If you are selected for a position, and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
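Reliability goals like the ones this role describes are commonly expressed as SLOs with error budgets. A minimal sketch of the arithmetic, with an illustrative 99.9% availability target (not bp's actual figures):

```python
# Illustrative SLO/error-budget arithmetic, the kind of calculation an SRE
# uses to decide whether a service can absorb more risky changes.
def availability(total_requests: int, failed_requests: int) -> float:
    """Fraction of requests served successfully."""
    return (total_requests - failed_requests) / total_requests

def error_budget_remaining(total_requests: int, failed_requests: int,
                           slo: float = 0.999) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    allowed_failures = total_requests * (1 - slo)
    return 1 - failed_requests / allowed_failures

# 1,000,000 requests under a 99.9% SLO allow roughly 1,000 failures.
print(availability(1_000_000, 400))            # 0.9996
print(error_budget_remaining(1_000_000, 400))  # roughly 0.6 of the budget left
```

When the remaining budget approaches zero, teams typically slow feature rollouts and prioritize reliability work.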

Posted 2 weeks ago

Apply

Experience not specified

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
- Package models quickly and ensure high quality at every step using model profiling and validation tools.
- Optimize model training and deployment pipelines, build CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
- Use advanced data-drift analysis to improve model performance over time.
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
- Use the advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization. Promote, share, and discover models, environments, components, and datasets across teams. Reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline, while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction. Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
- Strong on ML pipelines and a modern tech stack.
- Proven experience with MLOps on Azure, MLflow, etc.
- Experience with scripting and coding using Python.
- Working experience with container technologies (Docker, Kubernetes).
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
- Experience with relational databases (e.g., MS SQL Server) and NoSQL databases (e.g., MongoDB).
- Python and strong math skills (e.g., statistics).
- Problem-solving aptitude and excellent communication and presentation skills.
- Automating and streamlining infrastructure, build, test, and deployment processes.
- Monitoring and troubleshooting production issues and providing support to development and operations teams.
- Managing and maintaining tools and infrastructure for continuous integration and delivery.
- Managing and maintaining source control systems and branching strategies.
- Strong knowledge of Linux/Unix administration.
- Experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of networking, security, and storage.
- Understanding and practice of Agile methodologies.
- Proficiency and experience working within the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).
- Above-average verbal, written, and presentation skills.
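The data-drift analysis mentioned in the purpose section can be illustrated with a toy check. This sketch flags drift when the mean of live serving data moves too many training standard deviations from the training mean; real drift monitors (e.g., in Azure ML) use richer statistics, and the threshold here is arbitrary:

```python
import statistics

# Toy drift check: flag a feature as drifted when the mean of fresh serving
# data moves more than `threshold` training standard deviations away from
# the training mean.
def has_drifted(train: list, live: list, threshold: float = 2.0) -> bool:
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    return abs(statistics.mean(live) - mu) > threshold * sigma

train = [10.0, 11.0, 9.5, 10.5, 10.0]
print(has_drifted(train, [10.2, 9.9, 10.4]))   # similar distribution: False
print(has_drifted(train, [14.0, 15.0, 14.5]))  # mean shifted far away: True
```

A drifted feature would typically trigger the retraining pipeline the posting describes.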

Posted 2 weeks ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Mumbai

Work from Office

Source: Naukri

Responsibilities:
- Design, develop, and maintain core system features, services, and engines.
- Collaborate with a cross-functional team of backend, mobile application, AI, signal processing, and robotics engineers, plus design, content, and linguistics teams, to realize the requirements of a conversational social robotics platform; this includes investigating design approaches, prototyping new technology, and evaluating technical feasibility.
- Ensure the developed backend infrastructure is optimized for scale and responsiveness.
- Ensure best practices in design, development, security, monitoring, logging, and DevOps are adhered to throughout the project.
- Introduce new ideas, products, and features by keeping track of the latest developments and industry trends.
- Operate in an Agile/Scrum environment to deliver high-quality software against aggressive schedules.

Requirements
- Proficiency in the distributed application development lifecycle (concepts of authentication/authorization, security, session management, load balancing, API gateways) and in programming techniques and tools (application of tested, proven development paradigms).
- Proficiency working on Linux-based operating systems.
- Proficiency in at least one server-side programming language, such as Java; additional languages like Python and PHP are a plus.
- Proficiency in at least one server-side framework, such as Servlets, Spring, or Java Spark.
- Proficiency with ORM/data access frameworks like Hibernate or JPA, with Spring or other server-side frameworks.
- Proficiency in at least one data serialization framework: Apache Thrift, Protocol Buffers, Apache Avro, Gson, Jackson, etc.
- Proficiency in at least one inter-process communication approach: WebSockets, RPC, message queues, custom HTTP libraries/frameworks (KryoNet, RxJava), etc.
- Proficiency in multithreaded programming and concurrency concepts (threads, thread pools, futures, asynchronous programming).
- Experience defining system architectures and exploring technical feasibility trade-offs (architecture, design patterns, reliability, and scaling).
- Experience developing cloud software services and an understanding of design for scalability, performance, and reliability.
- Good understanding of networking and communication protocols, and proficiency in identifying CPU, memory, and I/O bottlenecks and solving for read- and write-heavy workloads.
- Proficiency in the concepts of monolithic and microservice architectural paradigms.
- Proficiency working on at least one cloud hosting platform, such as Amazon AWS, Google Cloud, or Azure.
- Proficiency in at least one SQL, NoSQL, or graph database, such as MySQL, MongoDB, or OrientDB.
- Proficiency in at least one testing framework or tool: JMeter, Locust, Taurus.

Added pluses:
- Proficiency in an RPC communication framework: Apache Thrift, gRPC.
- Proficiency in asynchronous libraries (RxJava) or frameworks (Akka, Play, Vert.x).
- Proficiency in functional programming languages (Scala).
- Proficiency working with NoSQL/graph databases.
- Proficient understanding of code versioning tools, such as Git.
- Working knowledge of tools for server and application metrics logging and monitoring: Monit, ELK, Graylog.
- Working knowledge of DevOps configuration management utilities like Ansible, Salt, or Puppet.
- Working knowledge of containerization technologies like Docker or LXD.
- Working knowledge of a container orchestration platform like Kubernetes.
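The thread-pool and futures concepts in the requirements can be shown in a few lines. Although this posting leans toward Java, the sketch below uses Python's concurrent.futures for brevity; the same pattern maps to Java's ExecutorService and CompletableFuture. The task itself is a made-up stand-in:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for blocking I/O such as an RPC or database call.
def fetch(item: int) -> int:
    return item * item

# Submit work to a bounded pool and collect results as futures complete,
# rather than blocking on each call in sequence.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, n) for n in range(5)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16]
```

Bounding `max_workers` is what keeps a thread pool from exhausting system resources under load, one of the concurrency concerns the posting alludes to.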

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Designation: MLOps Engineer
Location: Kochi, India
Experience: 5-8 years
Qualification: B.Tech / MCA / BCA
Timings: 10 AM to 7 PM (IST)
Work Mode: Hybrid

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
- Package models quickly and ensure high quality at every step using model profiling and validation tools.
- Optimize model training and deployment pipelines, build CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
- Use advanced data-drift analysis to improve model performance over time.
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
- Use the advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization. Promote, share, and discover models, environments, components, and datasets across teams. Reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline, while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction. Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills
- Strong on ML pipelines and a modern tech stack.
- Proven experience with MLOps on Azure, MLflow, etc.
- Experience with scripting and coding using Python and shell scripts.
- Working experience with container technologies (Docker, Kubernetes).
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
- Experience in SQL and Python, and strong math skills (e.g., statistics).
- Problem-solving aptitude and excellent communication and presentation skills.
- Automating and streamlining infrastructure, build, test, and deployment processes.
- Monitoring and troubleshooting production issues and providing support to development and operations teams.
- Managing and maintaining tools and infrastructure for continuous integration and delivery.
- Managing and maintaining source control systems and branching strategies.
- Strong skills in scripting languages like Python, Bash, or PowerShell.
- Strong knowledge of Linux/Unix administration.
- Experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of networking, security, and storage.
- Understanding and practice of Agile methodologies.
- Proficiency and experience working within the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).

Required:
- Proficiency and experience working with relational databases and SQL scripting (MS SQL Server).
- Above-average verbal, written, and presentation skills.
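The experiment tracking this role centers on (runs, parameters, metrics, lineage) can be sketched with a toy in-memory store. MLflow and Azure ML do this durably and at scale; the class and method names below are illustrative, not MLflow's API:

```python
import time

# A toy, in-memory stand-in for centralized experiment tracking: every run
# keeps its parameters and metrics, which is what makes models auditable.
class ExperimentStore:
    def __init__(self):
        self.runs = []

    def log_run(self, params: dict, metrics: dict) -> int:
        run_id = len(self.runs)
        self.runs.append({"id": run_id, "params": params,
                          "metrics": metrics, "ts": time.time()})
        return run_id

    def best_run(self, metric: str) -> dict:
        # Lineage query: which configuration produced the best score?
        return max(self.runs, key=lambda r: r["metrics"][metric])

store = ExperimentStore()
store.log_run({"lr": 0.1, "depth": 4}, {"accuracy": 0.91})
store.log_run({"lr": 0.01, "depth": 8}, {"accuracy": 0.94})
print(store.best_run("accuracy")["params"])  # {'lr': 0.01, 'depth': 8}
```

Because every run is recorded with its inputs, the winning model's training configuration can always be reconstructed, which is the auditability the posting asks for.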

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications, and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources, and exceptional customer service, all backed by TELUS, our multi-billion-dollar telecommunications parent.

Required Skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior data engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- 4+ years of data engineering experience.
- 2 years of data solution architecture and design experience.
- GCP Certified Data Engineer (preferred).
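The DAG-based workflow automation mentioned above (Control-M, Airflow, Prefect) boils down to running tasks in dependency order. A minimal sketch using only the standard library; the task names and pipeline shape are invented for illustration:

```python
from graphlib import TopologicalSorter

# Toy DAG runner: tasks declare their upstream dependencies, and the runner
# executes them in an order where every dependency runs first (the core idea
# behind orchestrators like Airflow and Prefect).
def run_pipeline(dag: dict, tasks: dict) -> list:
    executed = []
    for name in TopologicalSorter(dag).static_order():
        tasks[name]()          # in Airflow this would be an operator's execute()
        executed.append(name)
    return executed

log = []
tasks = {name: (lambda n=name: log.append(n))
         for name in ["extract", "transform", "load"]}
# "transform" depends on "extract"; "load" depends on "transform".
dag = {"transform": {"extract"}, "load": {"transform"}}

print(run_pipeline(dag, tasks))  # ['extract', 'transform', 'load']
```

Real orchestrators add scheduling, retries, and backfills on top of this ordering, but the dependency graph is the contract the engineer writes.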

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Key Responsibilities:
- Develop and maintain automation scripts using Bash, Python, or Shell for provisioning, deployment, and monitoring tasks.
- Manage cloud infrastructure on AWS, Azure, or GCP, ensuring scalability, security, and performance.
- Implement and maintain orchestration tools like Kubernetes, Docker Swarm, or Ansible.
- Build and optimize CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or equivalent.
- Monitor system performance and availability; troubleshoot infrastructure issues on Linux/UNIX servers.
- Work closely with development and QA teams to streamline application deployment.
- Manage and respond to ticketing systems such as JIRA, ServiceNow, or Zendesk.
- Ensure system reliability, uptime, and recovery by implementing robust automation and backup strategies.
- Apply best practices in infrastructure security, secrets management, and access control.

Qualifications & Experience:
- 1-3 years of hands-on experience in a DevOps role.
- Proficiency in scripting languages: Bash, Python, or Shell.
- Strong experience with Linux/UNIX system administration.
- Hands-on experience with at least one major cloud provider: AWS, Azure, or GCP.
- Proficiency in containerization and orchestration tools: Docker, Kubernetes, Helm.
- Experience building and maintaining CI/CD pipelines.
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef.
- Exposure to ticketing platforms like JIRA or ServiceNow.
- Experience with infrastructure monitoring tools (e.g., Prometheus, Grafana, the ELK stack, Datadog).

If this sounds like you, please share your resume along with CTC details and notice period at pawan.shukla@dotpe.in to discuss further.
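The monitoring scripts this role calls for often reduce to comparing sampled metrics against alert thresholds. A minimal sketch; the metric names and threshold values are invented, and a real setup would use Prometheus alerting rules or similar:

```python
# Toy monitoring check: compare sampled metrics against alert thresholds
# and report any breaches, the core loop of a simple alerting script.
THRESHOLDS = {"cpu_percent": 90.0, "disk_percent": 85.0, "mem_percent": 95.0}

def check_alerts(samples: dict) -> list:
    """Return a sorted list of metrics whose samples breach their threshold."""
    return sorted(metric for metric, value in samples.items()
                  if value > THRESHOLDS.get(metric, float("inf")))

print(check_alerts({"cpu_percent": 97.2, "disk_percent": 40.0,
                    "mem_percent": 99.1}))
# ['cpu_percent', 'mem_percent']
```

In practice the breach list would be sent to a pager or ticketing system (the JIRA/ServiceNow integration the posting mentions) rather than printed.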

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications, and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources, and exceptional customer service, all backed by TELUS, our multi-billion-dollar telecommunications parent.

Required Skills:
- Minimum 6 years of experience in the architecture, design, and building of data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Lead agile development "scrums" and solution reviews.
- Mentor junior data engineering specialists.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Demonstrate expertise in SQL and database proficiency in various data engineering tasks.
- Automate complex data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop and manage Unix scripts for data engineering tasks.
- Intermediate proficiency in infrastructure-as-code tools like Terraform, Puppet, and Ansible to automate infrastructure deployment.
- Proficiency in data modeling to support analytics and business intelligence.
- Working knowledge of MLOps to integrate machine learning workflows with data pipelines.
- Extensive expertise in GCP technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, Dataproc (good to have), and Bigtable.
- Advanced proficiency in programming languages (Python).

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- Analytics certification in BI or AI/ML.
- 6+ years of data engineering experience.
- 4 years of data platform solution architecture and design experience.
- GCP Certified Data Engineer (preferred).
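The infrastructure-as-code tools listed above (Terraform, Puppet, Ansible) all share one idea: declare the desired state, diff it against the current state, and apply only the difference, idempotently. A toy sketch of that plan/apply model; the resource names are invented for the example:

```python
# Desired-state planning, the core of infrastructure-as-code tools: compute
# which resources to create, update, or delete so that `current` converges
# to `desired`. Applying the same plan twice changes nothing (idempotence).
def plan(current: dict, desired: dict) -> dict:
    create = {k: v for k, v in desired.items() if k not in current}
    update = {k: v for k, v in desired.items()
              if k in current and current[k] != v}
    delete = [k for k in current if k not in desired]
    return {"create": create, "update": update, "delete": sorted(delete)}

current = {"bucket-logs": {"region": "asia-south1"},
           "vm-etl": {"machine_type": "e2-small"}}
desired = {"bucket-logs": {"region": "asia-south1"},
           "vm-etl": {"machine_type": "e2-medium"},
           "topic-events": {"partitions": 3}}

print(plan(current, desired))
# creates topic-events, resizes vm-etl, deletes nothing
```

Terraform's `plan`/`apply` cycle is this computation performed against real provider APIs, with the state file standing in for `current`.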

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Our tools are highly complex in-house data coding applications and services built on AWS, Python/Spark, C#, and Postgres; these data coding/ETL applications are used internally. As part of this team, you will have the opportunity to work in a young, multicultural, high-performance environment, with the possibility of working with other teams in the Nielsen Media business space.

The DevOps Engineer is ultimately responsible for delivering technical solutions, from a project's onboarding until post-launch support, including development, testing, and user acceptance. You are expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a DevOps Engineer, you will play a pivotal role in bridging the gap between development and operations, focusing on automating and streamlining our processes to ensure a robust and efficient software delivery pipeline. Your responsibilities will encompass infrastructure management, continuous integration/delivery (CI/CD) implementation, and collaboration with development and operations teams.
Responsibilities
- Infrastructure as Code (IaC): implement and manage infrastructure as code using tools such as Terraform or CloudFormation; ensure consistent and repeatable provisioning of infrastructure resources.
- CI/CD pipeline development: design, implement, and maintain CI/CD pipelines for automated build, test, and deployment processes; integrate CI/CD tools with version control systems and artifact repositories.
- Containerization and orchestration: use containerization technologies like Docker to package applications and services; implement and manage container orchestration tools such as Kubernetes for scalable, resilient deployments.
- Automation scripting: develop automation scripts in scripting languages (e.g., Bash, Python) to streamline operational tasks; implement automated monitoring and alerting solutions.
- Configuration management: implement and manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure consistency across environments; enforce configuration standards and best practices.
- Collaboration with development and operations: collaborate with development teams to understand application requirements and optimize deployment processes; work closely with operations teams to ensure a smooth transition of applications into production.
- Security and compliance: implement security best practices for infrastructure and application deployments; ensure compliance with industry standards and regulations.
- Monitoring and logging: set up monitoring tools to track system performance and identify issues proactively; implement centralized logging solutions for effective troubleshooting.
- Cloud services management: manage and optimize cloud infrastructure resources on platforms such as AWS, Azure, or Google Cloud; monitor cloud costs and implement cost-saving strategies.
- Disaster recovery planning: develop and maintain disaster recovery plans and procedures; conduct periodic tests to validate their effectiveness.
Key Skills
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Proven experience in a DevOps role (minimum 5+ years) with a focus on automation and infrastructure management
- Proficiency in scripting languages (e.g., Bash, Python)
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI)
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes)
- Familiarity with infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of configuration management tools (e.g., Ansible, Puppet, Chef)
- Experience with Amazon Web Services (AWS)
- Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions
- Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply
- Uses team collaboration to create innovative solutions efficiently
Other desirable skills
- Certifications in relevant areas (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator)
- Experience with serverless computing
- Knowledge of networking principles and security best practices
- Familiarity with logging and monitoring tools (e.g., Prometheus, Grafana)
- Understanding of agile development methodologies
Please be aware that job seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
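The IaC responsibility described above can be made concrete with a minimal Terraform sketch. Everything here (provider version, region, placeholder AMI ID, and resource names) is an illustrative assumption, not something taken from the posting:

```hcl
# Illustrative Terraform sketch: one EC2 instance provisioned repeatably
# from code. The AMI ID and names below are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "ci_runner" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"

  tags = {
    Name        = "ci-runner"
    ManagedBy   = "terraform"
    Environment = "dev"
  }
}
```

Running `terraform plan` and then `terraform apply` against a file like this is what gives the "consistent and repeatable provisioning" the posting asks for.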
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.

Posted 2 weeks ago

10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Summary
We are seeking a highly skilled Infrastructure Engineer with deep expertise in Linux and Windows administration to design, implement, and support mission-critical infrastructure for Milliman's financial solutions. In this role, you will focus on ensuring optimal health, performance, reliability, and scalability of Linux and Windows-based systems, while also contributing to broader infrastructure projects as needed. This role will collaborate with US-based Finance IT, Operations, and Infrastructure teams, champion best practices, and foster an inclusive environment that values diverse perspectives and continuous learning. This role will report to the Manager, Infrastructure & Operations – India.
Primary Duties & Responsibilities
Linux Systems Administration
- Architect, install, configure, and maintain enterprise-grade Linux systems (e.g., Red Hat Enterprise Linux, Ubuntu) across development, testing, and production environments.
- Develop and implement monitoring, troubleshooting, and patch management procedures to maintain system stability, performance, and security.
- Apply security hardening standards and compliance requirements to Linux environments.
- Implement robust backup, recovery, and disaster recovery strategies to ensure business continuity.
Infrastructure Support
- Collaborate with an Infrastructure Architect and cross-functional teams to align solutions with business objectives, compliance standards, and best practices.
- Participate in capacity planning, resource optimization, and infrastructure expansion to support on-premises and/or cloud environments.
- Recommend new tools, technologies, or processes for continuous improvement, staying current with emerging trends in Linux and Windows technologies.
Security & Compliance
- Enforce security best practices and access controls for on-premises and cloud-based Windows and Linux environments.
- Partner with security and compliance teams to address vulnerability assessments and ensure alignment with regulatory standards.
- Embed security considerations into infrastructure design and database configurations.
Monitoring & Incident Response
- Develop key performance indicators (KPIs) and performance metrics to proactively monitor Linux systems and databases.
- Lead root-cause analysis for complex issues and implement preventive measures to reduce downtime.
- Coordinate with cross-functional teams for escalations and incident resolution.
Collaboration & Leadership
- Mentor junior engineers, providing training and guidance on Linux best practices.
- Champion inclusive decision-making and encourage varied perspectives to drive better outcomes.
- Work effectively within a ticketing system, documenting issues, changes, and solutions for transparency.
Additional Technical Contributions (as Needed)
- Contribute to broader infrastructure projects, including networking, storage, virtualization, and cloud migrations, where Linux and Windows expertise is critical.
- Use automation tools (Terraform, Ansible, Puppet, Chef) to streamline deployments and configuration management.
- Assist with infrastructure planning, budgeting, and aligning resources to business needs.
Required Qualifications
- BSc in Computer Science, BTech CSE, or Engineering, or a related field (or equivalent work experience).
- 10+ years of hands-on experience in Linux systems administration (e.g., Red Hat, Ubuntu), including system design, automation, and patch management.
- Proficiency in scripting (Bash, Python, PowerShell) for automation and maintenance.
- Demonstrated ability to design and maintain mission-critical, high-availability Linux and Windows environments.
- Strong understanding of security best practices and regulatory compliance requirements.
- Excellent problem-solving, interpersonal, and communication skills, with a demonstrated ability to work effectively in diverse, cross-functional teams.
Preferred
- Experience with cloud platforms (AWS, Azure, or GCP) and Infrastructure as Code (Terraform, CloudFormation).
- Certifications in Linux (e.g., RHCE), Oracle (e.g., OCP), or cloud platforms.
- Experience with virtualization (VMware), containerization (Docker, Kubernetes), and storage area networks (SANs).
- Working knowledge of networking (TCP/IP, DNS, load balancing, firewalls) for broader infrastructure context.
- Track record of mentoring or leadership in a technology team environment, ideally applying agile methodologies.
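The patch-management and automation duties above might look like the following minimal Ansible sketch. The host group name, package manager, and reboot policy are illustrative assumptions, not details from the posting:

```yaml
# Hypothetical patching playbook: update all packages on a RHEL-family
# fleet and reboot only when the update run actually changed something.
- name: Patch Linux fleet
  hosts: linux_servers
  become: true
  tasks:
    - name: Apply all available updates
      ansible.builtin.dnf:
        name: "*"
        state: latest
      register: patch_result

    - name: Reboot if updates were applied
      ansible.builtin.reboot:
        reboot_timeout: 600
      when: patch_result is changed
```

A real rollout would add pre-patch snapshots, serial batching (`serial:`), and post-patch health checks, but this captures the monitoring/patching loop the role describes.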

Posted 2 weeks ago

0 years

0 Lacs

Greater Kolkata Area

Remote


Senior PostgreSQL DBC | India | IST | Remote | Work from Home
Available Shifts: PST, 10 PM - 6 AM IST
Why Pythian? At Pythian, we are experts in strategic database and analytics services, driving digital transformation and operational excellence. Pythian, a multinational company, was founded in 1997 and started by ensuring the reliability and performance of mission-critical databases. We quickly earned a reputation for solving tough data challenges. We were there when the industry moved from on-premises to cloud environments, and as enterprises sought more from their data, we expanded our competencies to include advanced analytics. Today, we empower organizations to embrace transformation and leverage advanced technologies, including AI, to stay competitive. We deliver innovative solutions that meet each client's data goals and have built strong partnerships with Google Cloud, AWS, Microsoft, Oracle, SAP, and Snowflake. The powerful combination of our extensive expertise in data and cloud and our ability to keep on top of the latest bleeding-edge technologies makes us the perfect partner to help mid- and large-sized businesses transform to stay ahead in today's rapidly changing digital economy. Why you? Are you a Senior PostgreSQL consultant who lives in India (any location)? Are you community minded? Do you blog or contribute to the open-source community? Are you inspired by ever-shifting challenges, constant growth, and collaboration with a team of peers who push you constantly to up your game? At Pythian, we are actively shaping what it means to be an open-source database engineer and administrator, and we want you to be a part of the world's top team of MongoDB, Cassandra, and MySQL professionals. If this is you, and you wonder what it would be like to work at Pythian, reach out to us and find out! Intrigued to see what life is like at Pythian? Check out #pythianlife on LinkedIn and follow @loveyourdata on Instagram! Not the right job for you?
Check out what other great jobs Pythian has open around the world! Pythian Careers
What will you be doing? As a Senior PostgreSQL Consultant (DBC) you will work as part of Pythian's open source team and provide complete support for all aspects of database and application infrastructure to a variety of our customers. Our collaborative environment means everyone works together to solve complex puzzles and develop innovative solutions for our customers. You'll work closely with customer teams to understand their needs, in both a project-based and long-term support capacity. You'll create and document database standards, and create optimized queries, indexes, and data structures. You'll monitor and support database environments and serve as an escalation point for complex troubleshooting and interactive production support. You'll use database vendor-provided tools and Pythian-developed accelerators to performance-tune various database systems, specific queries, and application scenarios; diagnose and address database performance issues using performance monitors and various tuning techniques; and identify areas of opportunity and recommend appropriate improvements. Cross-functional training in NoSQL, Site Reliability Engineering, and DevOps methodologies is encouraged. When you're not fixing things, you'll be authoring new blog posts on interesting topics for our open-source community to digest, creating new articles in our customer-facing knowledge base for frequently seen issues, and hosting webinars, among other things like participating in conferences and meetups promoting Pythian to the open-source community.
What do we need from you? While we understand you might not have everything on the list, to be the successful candidate for the PostgreSQL & MySQL job you are likely to have skills such as:
Knowledge and experience in installing, configuring, and upgrading PostgreSQL & MySQL databases and tools relevant to PostgreSQL administration.
- Experience administering PostgreSQL & MySQL in virtualized and cloud environments, especially AWS, GCP, or Azure.
- Experience with scripting (Bash/Python) and software development (C++, Java, Go).
- Experience with automation technologies such as Ansible, Terraform, Puppet, Chef, or Salt.
- Previous remote working experience is a plus.
- Debugging skills and the ability to troubleshoot methodically, identifying and applying fixes for known errors, and, when necessary, the capacity to think outside the box to resolve complex issues.
- Very good documentation skills.
Nice to haves include:
- Understanding of current IT service standards such as ITIL.
- Contributions to open-source projects relevant to PostgreSQL, MySQL, or other database or infrastructure software.
What do you get in return?
- Love your career: competitive total rewards and salary package. Blog during work hours; take a day off and volunteer for your favorite charity.
- Love your work/life balance: work remotely from your home, with no daily travel requirement to an office! All you need is a stable internet connection.
- Love your coworkers: collaborate with some of the best and brightest in the industry!
- Love your development: hone your skills or learn new ones with our substantial training allowance; participate in professional development days, attend training, become certified, whatever you like!
- Love your workspace: we give you all the equipment you need to work from home, including a laptop with your choice of OS, and an annual budget to personalize your work environment!
- Love yourself: Pythian cares about the health and well-being of our team. You will have an annual wellness budget to make yourself a priority (use it on gym memberships, massages, fitness, and more). Additionally, you will receive a generous amount of paid vacation and sick days, as well as a day off to volunteer for your favorite charity.
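The query- and index-tuning work described in this role can be illustrated with a small self-contained sketch. SQLite stands in for PostgreSQL here purely so the example runs anywhere (on PostgreSQL you would use `EXPLAIN ANALYZE` instead of `EXPLAIN QUERY PLAN`), and the `orders` table is hypothetical:

```python
import sqlite3

# Build a throwaway table with enough rows that a plan difference is visible.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return the query plan as a single string (detail column of each row)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan: no usable index yet
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index search via idx_orders_customer

print("before:", before)
print("after: ", after)
```

The same before/after plan comparison is the day-to-day loop of index tuning: measure the plan, add or adjust an index, and confirm the planner actually uses it.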
Disclaimer: The successful applicant will need to fulfill the requirements necessary to obtain a background check. Accommodations are available upon request for candidates taking part in any aspect of the selection process.

Posted 2 weeks ago

7.0 - 12.0 years

18 - 30 Lacs

South Goa, Pune

Hybrid


We are looking for a DevOps leader with deep experience in the Python build/CI/CD ecosystem for an exciting, cutting-edge stealth startup in Silicon Valley.
Responsibilities:
- Design and implement complex CI/CD pipelines in Python, leveraging cutting-edge Python packaging, dependency management, and CI/CD practices
- Optimize the speed and reliability of builds
- Define test automation tools, architecture, and integration with CI/CD platforms, and drive test automation implementation in Python
- Implement configuration management to set standards and best practices
- Manage and optimize cloud infrastructure resources: GCP, AWS, or Azure
- Collaborate with development teams to understand application requirements and optimize deployment processes
- Work closely with operations teams to ensure a smooth transition of applications into production
- Develop and maintain documentation for system configurations, processes, and procedures
Eligibility:
- 5-12 years of experience in DevOps, with a minimum of 2-5 years on the Python build ecosystem
- Python packaging, distribution, concurrent builds, dependencies, environments, test framework integrations, and linting: pip, Poetry, uv, flit
- CI/CD: pylint, coverage.py, cProfile, Python scripting, Docker, k8s, IaC (Terraform, Ansible, Puppet, Helm)
- Platforms: TeamCity (preferred), Jenkins, GitHub Actions, CircleCI, or Travis CI
- Test automation: pytest, unittest, integration tests, Playwright (preferred)
- Cloud platforms: AWS, Azure, or GCP, and platform-specific CI/CD services and tools
- Familiarity with logging and monitoring tools (e.g., Prometheus, Grafana)
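The posting names cProfile among its CI/CD tooling; a minimal sketch of profiling a build step looks like this. The `naive_checksum` function is a hypothetical stand-in for whatever hot path a real pipeline would measure:

```python
import cProfile
import io
import pstats

def naive_checksum(data):
    # Deliberately simple loop so the profiler has something to attribute.
    total = 0
    for byte in data:
        total = (total + byte) % 65536
    return total

profiler = cProfile.Profile()
profiler.enable()
result = naive_checksum(bytes(range(256)) * 1000)
profiler.disable()

# Render only the rows matching our function, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(
    "naive_checksum"
)
report = stream.getvalue()
print(f"checksum={result}")
```

In a pipeline, the same pattern (profile, dump `pstats`, fail or alert on regressions) is one way build speed gets guarded over time.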

Posted 2 weeks ago

7.0 years

0 Lacs

India

On-site


Hyderabad Begumpet, Telangana | 2025-06-04
Experience: 7+ Years
Shift Type: Night shift
Qualification: Graduation
Location: Hyderabad Begumpet, Telangana
Mode of Operation: Work From Office
Number of Openings: 3
Job Description: We are looking for a skilled Sr. Linux Administrator to manage and maintain our Linux-based systems, ensuring optimal performance, security, and reliability. The ideal candidate will have expertise in Linux environments, virtualization, and server management, along with strong troubleshooting and team management skills.
Roles & Responsibilities:
- Server Management: hands-on experience with cPanel, Apache, Tomcat, and mail servers (Zimbra, Postfix, MS Exchange).
- Virtualization: knowledge of ESXi/Hyper-V/Citrix/Proxmox, experience with both physical and virtual servers, and creating and managing hardware and software RAID.
- Operating Systems: proficiency in CentOS, Fedora, and RHEL.
- Databases: familiarity with MySQL and PostgreSQL.
- Scripting: ability to write and manage scripts (e.g., Bash, Batch).
- Troubleshooting: strong skills in resolving Level 3 technical issues.
- Monitoring & Backup: experience with monitoring tools (Nagios, Zabbix, Zenoss, Cacti) and backup strategies.
- Networking: understanding of DHCP, DNS, FTP, NFS, Samba, and content filtering (Squid proxy).
- Security: implementation of network security policies using iptables/firewalld and access lists.
- User Management: experience creating user accounts and setting password policies.
- Automation: knowledge of automated Linux server installations.
- Soft Skills: excellent communication and team leadership abilities.
Required Key Skills:
- VMware / OpenStack / XenServer / Hyper-V / Proxmox
- Google Cloud / AWS / Azure
- Zabbix / Nagios
- Zimbra / Exim / Exchange
- Ansible / Puppet
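The monitoring side of this role (Nagios, Zabbix, and the like) ultimately comes down to small checks such as the sketch below: a disk-usage probe in the Nagios OK/CRITICAL style. The 90% threshold is an assumption; in practice it would come from the monitoring system's configuration:

```python
import shutil

# Hypothetical threshold; a real deployment reads this from the monitoring
# system's service definition rather than hard-coding it.
THRESHOLD_PERCENT = 90

def disk_usage_percent(path):
    """Return the percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

pct = disk_usage_percent("/")
status = "CRITICAL" if pct >= THRESHOLD_PERCENT else "OK"
print(f"DISK {status} - / at {pct:.1f}% of capacity")
```

Wired into Nagios or Zabbix as an external check, the exit status and one-line output of a script like this are what drive alerting.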

Posted 2 weeks ago

0 years

2 - 6 Lacs

Hyderābād

On-site


Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Consultant - Linux Administrator. A Linux Server Engineer patches, maintains, and troubleshoots server issues, remediates vulnerabilities, and resolves server-related queries. They support IT infrastructure services across all Linux server versions.
Responsibilities
- Applying OS patches and upgrades regularly across various Linux platforms.
- Automating pre- and post-patching tasks using scripts and tools.
- Managing patching processes with automation tools like Ansible.
- Troubleshooting repository and system issues related to patching.
- Monitoring system performance and addressing vulnerabilities.
- Documenting patch management strategies and compliance metrics.
Qualifications we seek in you!
Minimum Qualifications/Skills
- Bachelor's degree or equivalent work experience
- Soft skills: excellent problem-solving and analytical skills, strong communication and collaboration abilities, and the ability to work independently and as part of a team
Preferred Qualifications/Skills
- Intermediate level of knowledge/responsibility with Red Hat/CentOS
- Expert-level knowledge of UNIX/Linux (L3)
- Expert-level knowledge of automation/configuration management tools such as Ansible and Puppet, and of Python
- Excellent troubleshooting and analytical skills
- Experience with Linux servers in virtualized environments
- Performance troubleshooting and capacity planning
- Troubleshooting, installation, maintenance, and tuning of Linux/UNIX OS
- Strong knowledge of server hardware and software
- Preparing, installing, and implementing Linux/UNIX operating software and associated components
- Experience with user administration using LDAP
- VMware administration: virtualization techniques or any other virtualization technology
- Experience with system security, patching, and upgrades
- Strong knowledge of host-based SAN migrations and good experience with SAN LUN scanning and mapping
- Proficiency with LVM and Linux file systems
- Ability to tune kernel/system parameters
- Knowledge of Oracle ASM and SAN storage
- Experience troubleshooting TCP/IP, routing, DNS, and NFS
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jun 4, 2025, 7:02:42 AM
Unposting Date: Ongoing
Master Skills List: Consulting
Job Category: Full Time
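The "automating pre- and post-patching tasks" responsibility can be sketched with a small Python snapshot script: capture system facts before patching, take a second snapshot afterwards, and diff the two. The field names here are illustrative, not from any Genpact tooling:

```python
import datetime
import json
import platform

def take_snapshot():
    """Capture basic system facts worth comparing across a patch window."""
    return {
        "host": platform.node(),
        "kernel": platform.release(),
        "os": platform.platform(),
        "taken_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# Pre-patch snapshot; a post-patch task would call take_snapshot() again
# and compare the "kernel" and "os" fields to confirm the upgrade landed.
before_patch = take_snapshot()
print(json.dumps(before_patch, indent=2))
```

Writing the snapshot to disk as JSON makes the post-patch comparison trivial and leaves an audit trail for the compliance metrics the role mentions.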

Posted 2 weeks ago

5.0 - 7.0 years

6 - 7 Lacs

Thiruvananthapuram

On-site


5 - 7 Years | 1 Opening | Trivandrum
Role description
Job Role Proficiency: DevOps Engineer (Lead I / Lead II Level)
Role Summary
Acts under the guidance of a Lead II/Architect to understand customer requirements and translate them into design and implementation of new DevOps (CI/CD) components. Capable of managing and guiding at least one Agile team.
Key Outcomes
- Interpret DevOps tool/feature/component designs to develop and support them per specifications.
- Adapt existing DevOps solutions or create new ones for evolving project contexts.
- Code, debug, test, document, and report on development/support stages and issues.
- Select optimal technical options: reuse, improve, or reconfigure components.
- Optimize DevOps tools and processes for cost, efficiency, and quality.
- Validate deliverables with user representatives; support integration and commissioning.
- Troubleshoot complex, novel issues beyond standard operating procedures.
- Design, install, configure, and troubleshoot CI/CD pipelines and environments.
- Automate infrastructure provisioning across cloud and on-premise systems.
- Mentor DevOps engineers and assist in supporting existing components.
- Collaborate with Agile teams across diverse environments.
- Drive automation to enable cost savings and process improvements.
- Participate in code reviews and mentor junior (A1/A2 level) resources.
Measures of Outcomes
- Quality of deliverables
- Error/completion rates across SDLC/PDLC stages
- Number of reusable components developed
- Number of certifications obtained (domain, technology, or product-specific)
- SLA compliance for onboarding and support
- Ticket resolution metrics
Outputs Expected
- Automated components: automate installation/configuration tasks for software/tools (cloud and on-premises); automate build and deployment processes for applications.
- Configured components: configure CI/CD pipelines for application development and support teams.
- Scripts: develop and maintain scripts (PowerShell, Shell, Python) for automation tasks.
- User onboarding: onboard and extend tools to support new application development/support teams.
- Mentoring: guide and mentor peers and junior team members (A1, A2 levels).
- Stakeholder management: assist the team in preparing and presenting status updates to management.
- Training & SOPs: create training plans and SOPs to support DevOps activities and onboarding processes.
- Process efficiency: measure, analyze, and improve process efficiency and effectiveness.
Skill Examples
- CI/CD tools: Jenkins, Bamboo, Azure DevOps, GitHub Actions
- Configuration & automation: Ansible, Puppet, Chef, PowerShell, Terraform, DSC
- Scripting: Python, Shell, Groovy, Perl, PowerShell
- Code quality tools: SonarQube, Cobertura, Clover
- Testing tools integration: Selenium, JUnit, NUnit
- Version control: Git, Bitbucket, GitHub, ClearCase
- Build tools: Maven, Ant
- Artifact repositories: Nexus, Artifactory
- Monitoring & dashboards: ELK Stack, Splunk
- Containerization: Docker, Kubernetes, Helm
- Cloud platforms: AWS, Azure, GCP
- Infrastructure as code: Terraform, ARM Templates
- Migration: on-premises to cloud
- Jira administration and Git/Bitbucket management
- Debugging skills: strong in C#, .NET stack
Knowledge Examples
- Installation, configuration, build, and deployment workflows and tools
- IaaS on AWS, Azure, and GCP with respective tooling
- Full SDLC and application lifecycle
- Quality assurance and automation strategies
- Multi-tool stack proficiency
- Build branching and merging strategies
- Containerization and orchestration concepts
- Security policies and DevSecOps tools
- Agile methodologies and practices
Additional Comments
- Strong expertise in AWS (EC2, S3, RDS, IAM, etc.)
- Extensive hands-on experience with Kubernetes and Docker, including Helm
- Skilled in Terraform and infrastructure scripting using Python
- Proven ability to work independently with minimal guidance
- Excellent communication and interpersonal skills
- Strong ownership and problem-solving abilities
Skills: Kubernetes, DevOps, AWS, GitHub
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
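The CI/CD skill examples in this role (Jenkins, Maven, Helm) can be sketched as a minimal declarative Jenkinsfile. The stage names, shell commands, chart path, and namespace are illustrative assumptions, not details from the posting:

```groovy
// Hypothetical declarative pipeline: build, test, then deploy on main.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                // Helm upgrade is idempotent: install if absent, upgrade if present.
                sh 'helm upgrade --install myapp ./chart --namespace staging'
            }
        }
    }
}
```

A real pipeline would add artifact publishing (Nexus/Artifactory) and quality gates (SonarQube), both of which appear in the skill list above.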

Posted 2 weeks ago

10.0 years

5 - 8 Lacs

Gurgaon

On-site


Company Description
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!
Job Description
REQUIREMENTS:
- Total experience of 10+ years.
- Extensive experience in back-end development using Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture.
- Hands-on experience with REST APIs, caching systems (e.g., Redis), etc.
- Proficiency in Service-Oriented Architecture (SOA) and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST).
- Hands-on experience with multithreading and cloud development.
- Strong working experience in data structures and algorithms, unit testing, and object-oriented programming (OOP) principles.
- Familiarity with secure coding practices and vulnerability assessment tools like OWASP, Snyk, etc.
- Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL.
- Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef.
- Hands-on experience with cloud technologies such as AWS/Azure.
- Strong understanding of UML and design patterns.
- Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues.
- Strong problem-solving skills and a passion for continuous improvement.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:
- Writing and reviewing great quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities
- Being able to lead/support UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team in troubleshooting and resolving complex bugs
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements
Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 2 weeks ago

11.0 years

0 Lacs

Gurgaon

On-site


Company Description
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!
Job Description
REQUIREMENTS:
- Total experience of 11+ years.
- Strong working experience with architecture and development in Java 8 or higher.
- Hands-on working experience with Spring Boot, Spring Cloud, and related frameworks.
- Strong command of object-oriented programming and design patterns (creational, structural, behavioral).
- Extensive experience building and maintaining microservices architectures in cloud or hybrid environments.
- Hands-on experience with REST APIs, caching systems (e.g., Redis), etc.
- Hands-on experience with multithreading and cloud development.
- In-depth understanding of Apache Kafka, including Kafka Streams, Kafka Connect, and Kafka clients in Java.
- Experience with SQL and NoSQL databases such as MySQL, PostgreSQL, and MongoDB.
- Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef.
- Proven expertise in CI/CD pipelines using Azure DevOps, Jenkins, or GitLab CI/CD.
- Proficiency in build automation tools like Maven, Ant, and Gradle.
- Hands-on experience with cloud technologies such as AWS/Azure/GCP.
- Working knowledge of Snowflake or equivalent cloud data platforms.
- Understanding of predictive analytics and basic ML/NLP workflows.
- Strong understanding of UML and design patterns.
- Strong problem-solving skills and a passion for continuous improvement.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:
- Writing and reviewing great quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities
- Being able to lead/support UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team in troubleshooting and resolving complex bugs
- Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements
Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 2 weeks ago


5.0 years

4 - 7 Lacs

Gurgaon

On-site


Company Description
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000 experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!

Job Description

REQUIREMENTS:
  • Total experience of 5+ years.
  • Extensive experience in back-end development using Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture.
  • Experience with messaging systems like Kafka.
  • Hands-on experience with REST APIs, caching systems (e.g., Redis), etc.
  • Proficiency in Service-Oriented Architecture (SOA) and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST).
  • Hands-on experience with multithreading and cloud development.
  • Strong working experience in data structures and algorithms, unit testing, and Object-Oriented Programming (OOP) principles.
  • Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL.
  • Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef.
  • Proficiency in build automation tools like Maven, Ant, and Gradle.
  • Hands-on experience with cloud technologies such as AWS or Azure.
  • Strong understanding of UML and design patterns.
  • Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues.
  • Strong problem-solving skills and a passion for continuous improvement.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:
  • Writing and reviewing great quality code.
  • Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project.
  • Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
  • Determining and implementing design methodologies and tool sets.
  • Enabling application development by coordinating requirements, schedules, and activities.
  • Leading/supporting UAT and production rollouts.
  • Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it.
  • Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
  • Giving constructive feedback to team members and setting clear expectations.
  • Helping the team in troubleshooting and resolving complex bugs.
  • Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken.
  • Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 2 weeks ago


7.0 years

0 Lacs

Gurgaon

On-site


Key Responsibilities
  • Automate deployments using custom templates and modules for customer environments on AWS.
  • Architect AWS environment best practices and deployment methodologies.
  • Create automation tools and processes to improve day-to-day functions.
  • Educate customers on AWS and Rackspace best practices and architecture.
  • Ensure the control, integrity, and accessibility of the cloud environment for the enterprise.
  • Lead workload/workforce management and optimization tasks.
  • Mentor and assist Rackers across the Cloud function.
  • Quality-check the development of technical training for all Rackers supporting Rackspace-supported cloud products.
  • Provide the technical expertise underpinning communications targeting a range of stakeholders, from individual contributors to leaders across the business.
  • Collaborate with Account Managers and Business Development Consultants to build strong customer relationships.

Technical Expertise
  • Experienced in solutioning and implementation of greenfield projects leveraging IaaS and PaaS for the primary site and DR.
  • Near-expert knowledge of AWS products and services: compute, storage, security, networking, etc.
  • Proficient in at least one of the following: Python, Linux, shell scripting.
  • Proficient with Git and Git workflows.
  • Excellent working knowledge of Windows or Linux operating systems, including experience supporting and troubleshooting issues and performance.
  • Highly skilled in Terraform/IaC, including CI/CD practices.
  • Working knowledge of Kubernetes.
  • Experience in designing, building, implementing, analysing, migrating, and troubleshooting highly available systems.
  • Knowledge of at least one configuration management system such as Chef, Ansible, or Puppet.
  • Understanding of services and protocols, and of the configuration, management, and troubleshooting of hosting environments, including web servers, databases, caching, and database services.
  • Knowledge of the application of current and emerging network software, hardware technology, and protocols.

Skills
  • Passionate about technology, with a desire to constantly expand technical knowledge.
  • Detail-oriented in documenting information and able to own customer issues through resolution.
  • Able to handle multiple tasks and prioritize work under pressure.
  • Sound problem-solving skills coupled with a desire to take on responsibility.
  • Strong written and verbal communication skills, both highly technical and non-technical.
  • Ability to communicate technical issues to technical and non-technical audiences.

Education
Required: Bachelor's degree in Computer Science or an equivalent degree.

Certifications
Requires all three Associate-level AWS certificates, or a Professional-level certificate.

Experience
7+ years of total IT experience.

Posted 2 weeks ago


6.0 years

12 - 18 Lacs

Chennai

On-site


Role: Mainframe Production Support
Location: Pune or Chennai (hybrid role)
Duration: Long-term contract
Notice Period: Immediate to 15 days

Job Summary
  • Must have strong knowledge of JCL, the different ways to pass data from JCL, and JCL optimizations.
  • Must have strong knowledge of utilities and datasets.
  • Must have strong knowledge of STEPLIB, JCLLIB, instream PROCs, and cataloged PROCs.
  • Strong knowledge of the COBOL language.
  • Strong knowledge of DB2; should be proficient in writing stored procedures and CICS programs.
  • Strong problem-solving skills: log analysis, debugging, and troubleshooting.
  • Strong knowledge of the different codes, such as SQL codes, abend codes, and deadlocks.
  • Good to have: CI/CD (Jenkins) and Chef/Puppet knowledge.
  • Good to have: Splunk or Dynatrace knowledge.
  • Should have production support experience with ITSM knowledge.
  • Good and confident communication skills.

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 6 months
Pay: ₹1,283,011.65 - ₹1,848,702.28 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday, morning shift
Supplemental Pay: Performance bonus, yearly bonus
Experience:
  • JCL: 6 years (required)
  • COBOL: 7 years (required)
  • DB2: 4 years (required)
  • Utilities: 4 years (required)
Work Location: In person

Posted 2 weeks ago


10.0 years

5 - 8 Lacs

Noida

On-site


Company Description
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
  • Total experience of 10+ years.
  • Extensive experience in back-end development using Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and microservices architecture.
  • Hands-on experience with REST APIs, caching systems (e.g., Redis), etc.
  • Proficiency in Service-Oriented Architecture (SOA) and web services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST).
  • Hands-on experience with multithreading and cloud development.
  • Strong working experience in data structures and algorithms, unit testing, and Object-Oriented Programming (OOP) principles.
  • Familiarity with secure coding practices and vulnerability assessment tools like OWASP, Snyk, etc.
  • Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL.
  • Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef.
  • Hands-on experience with cloud technologies such as AWS or Azure.
  • Strong understanding of UML and design patterns.
  • Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues.
  • Strong problem-solving skills and a passion for continuous improvement.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:
  • Writing and reviewing great quality code.
  • Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project.
  • Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
  • Determining and implementing design methodologies and tool sets.
  • Enabling application development by coordinating requirements, schedules, and activities.
  • Leading/supporting UAT and production rollouts.
  • Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it.
  • Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement.
  • Giving constructive feedback to team members and setting clear expectations.
  • Helping the team in troubleshooting and resolving complex bugs.
  • Coming up with solutions to any issue raised during code/design review and being able to justify the decision taken.
  • Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 2 weeks ago


3.0 - 5.0 years

4 - 6 Lacs

Noida

On-site


About the Role:
Grade Level (for internal use): 09

The Role: Platform Engineer

Department Overview
PVR DevOps is a global team that provides specialized technical builds across a suite of products. DevOps members work closely with the Development, Testing, and Client Services teams to build and develop applications using the latest technologies, ensuring the highest availability and resilience of all services. Our work helps ensure that PVR continues to provide high-quality service and maintain client satisfaction.

Position Summary
S&P Global is seeking a highly motivated engineer to join our PVR DevOps team in Noida. DevOps is a rapidly growing team at the heart of ensuring the availability and correct operation of our valuations, market, and trade data applications. The team prides itself on its flexibility and technical diversity in maintaining service availability and contributing improvements through design and development.

Duties & Accountabilities
The role of Principal DevOps Engineer is primarily focused on building functional systems that improve our customer experience. Responsibilities include:
  • Creating infrastructure and environments to support our platforms and applications, using Terraform and related technologies to ensure all our environments are controlled and consistent.
  • Implementing DevOps technologies and processes, e.g. containerisation, CI/CD, infrastructure as code, metrics, monitoring, etc.
  • Automating always.
  • Supporting, monitoring, maintaining, and improving our infrastructure and the live running of our applications.
  • Maintaining the health of cloud accounts for security, cost, and best practices.
  • Providing assistance to other functional areas such as development, test, and client services.
Knowledge, Skills & Experience
  • A strong background of at least 3 to 5 years of experience in Linux/Unix administration in IaaS/PaaS/SaaS models.
  • Deployment, maintenance, and support of enterprise applications on AWS, including (but not limited to) Route 53, ELB, VPC, EC2, S3, ECS, and SQS.
  • Good understanding of Terraform and similar 'infrastructure as code' technologies.
  • Strong experience with SQL and NoSQL databases such as MySQL, PostgreSQL, DB2, MongoDB, and DynamoDB.
  • Experience with automation/configuration management using toolsets such as Chef, Puppet, or equivalent.
  • Experience with enterprise systems deployed as microservices through code pipelines utilizing containerization (Docker).
  • Working knowledge of, and the ability to write, scripts in languages including Bash and Python, and an ability to understand Java, JavaScript, and PHP.

Personal Competencies

Personal Impact
  • A confident individual, able to represent the team at various levels.
  • Strong analytical and problem-solving skills.
  • Demonstrated ability to work independently with minimal supervision.
  • Highly organised, with very good attention to detail.
  • Takes ownership of issues and drives through to resolution.
  • Flexible and willing to adapt to changing situations in a fast-moving environment.

Communication
  • Demonstrates a global mindset, respects cultural differences, and is open to new ideas and approaches.
  • Able to build relationships with all teams, identifying and focusing on their needs.
  • Ability to communicate effectively at both business and technical levels is essential.
  • Experience working in a global team.

Teamwork
  • An effective team player and strong collaborator across technology and all relevant areas of the business.
  • Enthusiastic, with a drive to succeed.
  • Thrives in a pressurized environment with a "can do" attitude.
  • Must be able to work under own initiative.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing the energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
  • Health & Wellness: Health care coverage designed for the mind and body.
  • Flexible Downtime: Generous time off helps keep you energized for your time on.
  • Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
  • Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
  • Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
  • Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 309235
Posted On: 2025-06-04
Location: Noida, Uttar Pradesh, India

Posted 2 weeks ago


5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Location: Hyderabad, India

Who are we?
At Celigo, we are pioneering the future of application integration with novel new strategies, cutting-edge technologies, and of course a diehard team that will go to any length to make your most complicated integrations just work. Our core mission at Celigo is simple: to enable independent best-of-breed applications to work together as one. We believe that every independent department and every business end user should always have choices when it comes to picking software, and that integration challenges should never stand in the way.

Your Role
Celigo powers integration for a large number of customers across the globe through our integrator.io iPaaS platform. Several million transactions flow through this platform every day to several destinations. We own mission-critical, large-scale services in the area of iPaaS for our customers, and the challenges include developing and owning highly scalable services in our production cloud environments. We are looking for passionate DevOps engineers who love to run and manage modern, large-scale services in production on the cloud, in line with Celigo's DevOps charter: to enable the Development and QA organisations to deliver high-quality products faster, safer, and in compliance with the norms set for our company, using automation, tooling, and processes. The DevOps organisation should keep customers as its primary focus and work towards a continuous value delivery pipeline. Our criterion for a successful DevOps engineer is one who is excited to work with complex production systems; highly collaborative, innovative, energetic, and passionate about our product and our customers, with strong technical skills.

Who are we looking for?
  • Master's/Bachelor's degree in Computer Science/Engineering, Software Engineering, or an equivalent discipline.
  • 5-7 years of total experience in software product development organization(s), with at least 4 years of experience in DevOps.
  • Hands-on experience owning and operating mission-critical, large-scale product operations such as provisioning, deployment, upgrades, patching, and incidents in production on the cloud.
  • Should ensure high availability and scalability of our production software, working with engineering wherever required.
  • Must have working knowledge of public cloud services, preferably AWS.
  • Proficient and competent skills in Infrastructure as Code (IaC) such as Terraform, coding/scripting such as Python/Bash, and DVCS such as Git.
  • Good working knowledge of configuration management and automation tools like Chef, Ansible, and Puppet.
  • Basic understanding of security compliance standards and regulations (e.g., SOC 2, HIPAA, GDPR).
  • Subject matter expertise in designing, delivering, and maintaining CI/CD pipeline(s), with automation using tools like Jenkins.
  • Strong problem-solving, troubleshooting, and analytical skills demonstrated in past projects.
  • Developer experience and mindset.
  • Experience working in an Agile development environment.
  • Experience with enterprise monitoring systems is highly desirable.
  • Excellent communication skills, both verbal and written.

What would you do, if hired?
  • Build and deploy the integrator.io platform on production, staging, and other cloud environments as required.
  • Use automation tools wherever possible to automate manual steps, including but not limited to provisioning of infrastructure, deployment of code, and monitoring/notifications.
  • Research, deploy, and administer various tools like Splunk and Kafka that the integrator.io platform uses.
  • Work closely with the Security & Compliance team to help with data points needed for various audits; also work closely with the rest of the engineering team to triage and fix any security-related gaps identified through programs like HackerOne.
  • Design and build the CI/CD pipeline for our integrator.io platform, working with the senior architects and engineers in the team.
  • Continuously raise the standard of engineering excellence by implementing DevOps best practices.

The best candidate?
  • Is passionate about building a world-class DevOps organization.
  • Has demonstrated a strong ability to manage large distributed systems in production on the cloud.
  • Brings a solid understanding of how infrastructure software components work.
  • Has experience working in globally distributed teams.
  • Is passionate about learning new techniques in complex distributed systems and excited about debugging difficult problems on the cloud.
  • Has a demonstrated ability to automate tasks using a high-level language.
  • Enjoys a fast-paced environment, working with a highly talented team, and shifting priorities.
  • Learns quickly; must know when to listen and when to take charge.

Why you'll love it here
  • Everything Integrated. We are solving a really hard problem that affects almost every business on the planet: integrating cloud apps.
  • Automation Nation. We're the only iPaaS to automate business processes across multiple cloud applications using a single prebuilt integration.
  • Celigo Values. Celigo's guiding principles and beliefs help shape our mission and the work environment we want to foster and reinforce as we scale.
  • Take a Stand. We're a company that stands for something. Celigo's Taking a Stand initiative has the goal of promoting diversity, equity, and inclusion.
  • Work. Life. Balanced. Starting your first year, we offer 3 weeks of vacation, plus holidays to recharge and spend time with family and friends.
  • Perks. We offer a strong benefits package, a tech stipend, pre-tax commuter expense reimbursement, recognition opportunities, and many other cool perks.

Celigo is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements.

Posted 2 weeks ago


5.0 years

0 Lacs

Indore

On-site


Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose

Required Skills:
  • 5+ years of experience in system administration, application development, infrastructure development, or related areas.
  • 5+ years of experience programming in languages like JavaScript, Python, PHP, Go, Java, or Ruby.
  • 3+ years of experience reading, understanding, and writing code in the same.
  • 3+ years of mastery of infrastructure automation technologies (like Terraform, CodeDeploy, Puppet, Ansible, Chef).
  • 3+ years of expertise in container/container-fleet-orchestration technologies (like Kubernetes, OpenShift, AKS, EKS, Docker, Vagrant, etcd, ZooKeeper).
  • 5+ years of cloud and container-native Linux administration/build/management skills.

Key Responsibilities:
  • Hands-on design, analysis, development, and troubleshooting of highly distributed, large-scale production systems and event-driven, cloud-based services.
  • Primarily Linux administration, managing a fleet of Linux and Windows VMs as part of the application solutions.
  • Involvement in pull requests for site reliability goals.
  • Advocating IaC (Infrastructure as Code) and CaC (Configuration as Code) practices within Honeywell HCE.
  • Ownership of reliability, uptime, system security, cost, operations, capacity, and performance analysis.
  • Monitoring and reporting on service level objectives for given application services.
  • Working with the business, technology teams, and product owners to establish key service level indicators.
  • Ensuring the repeatability, traceability, and transparency of our infrastructure automation.
  • Supporting on-call rotations for operational duties that have not been addressed with automation.
  • Supporting healthy software development practices, including complying with the chosen software development methodology (Agile, or alternatives) and building standards for code reviews, work packaging, etc.
  • Creating and maintaining monitoring technologies and processes that improve visibility into our applications' performance and business metrics and keep operational workload in check.
  • Partnering with security engineers and developing plans and automation to aggressively and safely respond to new risks and vulnerabilities.
  • Developing, communicating, collaborating on, and monitoring standard processes to promote the long-term health and sustainability of operational development tasks.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 weeks ago


0 years

0 Lacs

India

On-site


JD - Key Responsibilities:
  • Design, implement, and maintain scalable CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning using Infrastructure as Code (IaC) tools such as ARM Templates, Bicep, or Terraform.
  • Manage and monitor Azure cloud infrastructure, including compute, storage, and networking resources.
  • Ensure secure, reliable, and efficient deployment practices across environments.
  • Integrate testing, security scanning, and code quality tools into the DevOps lifecycle.
  • Collaborate with developers, QA, and IT teams to resolve build and deployment issues.
  • Implement and manage configuration management tools like Ansible, Chef, or Puppet.
  • Maintain documentation related to system configuration, processes, and deployment guides.
  • Monitor system performance and provide production support as needed.

Required Skills:
  • Hands-on experience with Azure DevOps services (Repos, Pipelines, Artifacts, Boards).
  • Strong understanding of CI/CD principles and experience in setting up automated pipelines.
  • Experience with Azure cloud services (VMs, App Services, AKS, Azure Functions, etc.).
  • Proficiency in scripting using PowerShell, Bash, or similar.
  • Experience with Infrastructure as Code (IaC) tools: ARM Templates, Terraform, Bicep.
  • Knowledge of containerization tools like Docker and orchestration using Kubernetes (AKS preferred).
  • Familiarity with source control systems like Git.
  • Understanding of DevSecOps principles and integrating security into CI/CD pipelines.
  • Excellent troubleshooting and problem-solving skills.

Preferred Qualifications:
  • Azure certifications such as AZ-400 (DevOps Engineer Expert) or AZ-104.
  • Experience with monitoring tools like Azure Monitor, Log Analytics, or App Insights.
  • Familiarity with Agile and Scrum methodologies.

Posted 2 weeks ago


Exploring Puppet Jobs in India

The demand for professionals skilled in Puppet configuration management software is on the rise in India. Puppet is widely used in the IT industry for automating infrastructure management tasks, making it an essential skill for job seekers in the technology sector.
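To give a concrete sense of what this automation looks like, here is a minimal, generic Puppet manifest (a sketch, not tied to any particular posting; the `ntp` package/service names and the module file source are illustrative assumptions for a typical Linux host) that keeps a package installed, its config file managed, and its service running:

```puppet
# Illustrative sketch: declare the desired state of the ntp service.
# Puppet converges the node to this state on every run (idempotent).
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  # Assumes an ntp module ships this file; path is hypothetical.
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
}

service { 'ntp':
  ensure    => running,
  enable    => true,
  # Restart the service whenever the config file changes.
  subscribe => File['/etc/ntp.conf'],
}
```

The `require`/`subscribe` metaparameters express ordering and change notification between resources, which is the core idea behind Puppet's declarative model.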

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT industries and have a high demand for Puppet professionals.

Average Salary Range

The average salary range for Puppet professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-15 lakhs per annum.

Career Path

In the field of Puppet, a typical career path may involve starting as a Junior Puppet Developer, advancing to a Senior Puppet Developer, and eventually becoming a Puppet Tech Lead. With experience and expertise, professionals can also explore roles such as Puppet Architect or Puppet Consultant.

Related Skills

In addition to Puppet expertise, professionals in this field are often expected to have knowledge of related tools and technologies such as Ansible, Chef, Docker, Kubernetes, and scripting languages like Python or Ruby.

Interview Questions

  • What is Puppet and how does it differ from other configuration management tools? (basic)
  • Explain the Puppet architecture and its components. (medium)
  • How do you handle dependencies in Puppet manifests? (medium)
  • What are Puppet facts and how are they useful in Puppet manifests? (basic)
  • Explain the role of Puppet modules in Puppet configuration management. (medium)
  • How do you enforce idempotency in Puppet manifests? (advanced)
  • Can you explain the difference between puppet apply and puppet agent? (basic)
  • How do you manage secrets or sensitive data in Puppet manifests? (medium)
  • What is Hiera and how is it used in Puppet for data separation? (medium)
  • How do you test Puppet manifests before applying them to production environments? (medium)
  • Explain the Puppet Forge and its importance in the Puppet ecosystem. (basic)
  • How do you handle errors or failures in Puppet runs? (medium)
  • Can you explain the purpose of resource collectors in Puppet manifests? (advanced)
  • How do you monitor Puppet infrastructure for performance and reliability? (medium)
  • What are Puppet environments and how are they used in Puppet deployments? (basic)
  • Explain how Puppet manages package installations across different operating systems. (medium)
  • How do you troubleshoot Puppet agent connectivity issues? (medium)
  • What are some best practices for Puppet module development? (medium)
  • How do you handle Puppet code deployments across multiple nodes? (medium)
  • Explain how Puppet manages file resources and permissions. (medium)
  • How do you integrate Puppet with version control systems like Git? (medium)
  • What are Puppet reports and how do you use them for auditing Puppet runs? (medium)
  • Can you explain the differences between Puppet standalone and Puppet client-server setups? (advanced)
  • How do you handle Puppet upgrades and migrations in a production environment? (advanced)
  • Explain how Puppet manages service resources and ensures service availability. (medium)
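Several of the questions above (idempotency, facts, file and service resources, resource dependencies) come together in even a small manifest. Here is a minimal sketch, assuming a generic web-server module; the class name, resource titles, and the httpd default are illustrative, not from any particular module:

```puppet
# init.pp — a minimal, idempotent class; names are illustrative assumptions
class webserver (
  String $package_name = 'httpd',
) {
  # Package resource: 'ensure => installed' is idempotent — Puppet
  # acts only when the package is actually absent
  package { $package_name:
    ensure => installed,
  }

  # File resource: content interpolates a built-in fact, and
  # 'require' declares the dependency on the package explicitly
  file { '/etc/motd':
    ensure  => file,
    owner   => 'root',
    mode    => '0644',
    content => "Managed by Puppet on ${facts['os']['name']}\n",
    require => Package[$package_name],
  }

  # Service resource: keep it running, enable at boot, and restart
  # whenever the package resource changes
  service { $package_name:
    ensure    => running,
    enable    => true,
    subscribe => Package[$package_name],
  }
}
```

A class like this can be applied locally with puppet apply (standalone mode) or pulled by puppet agent from a Puppet server; puppet parser validate and rspec-puppet are common ways to test manifests before they reach production.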

Conclusion

As the demand for Puppet professionals continues to grow in India, job seekers can enhance their career prospects by acquiring proficiency in Puppet and related technologies. By preparing effectively and showcasing their skills confidently during job interviews, individuals can secure rewarding opportunities in the dynamic field of Puppet configuration management. Good luck on your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
