3 - 7 years
7 - 12 Lacs
Mohali
Work from Office
Role & responsibilities

Job Summary:
We are seeking a highly motivated and experienced DevOps and CloudOps Engineer to join our dynamic team. You will be instrumental in building, automating, and maintaining our cloud infrastructure and deployment pipelines across multiple cloud platforms (AWS, Azure, GCP) while ensuring the reliability, scalability, security, and performance of our applications and services. Your expertise in Kubernetes, automation tools, and cloud best practices will be crucial in driving our DevOps culture and improving our software delivery lifecycle.

Responsibilities:

Cloud Infrastructure Management:
Design, deploy, and manage scalable and secure cloud infrastructure on AWS, Azure, and/or GCP, leveraging Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation.
Implement and manage hybrid and multi-cloud environments.
Ensure high availability, fault tolerance, and disaster recovery capabilities of cloud resources.
Manage and troubleshoot operating systems (Linux and Windows) and underlying infrastructure components.
Optimize cloud resource utilization and cost efficiency.
Implement and maintain network configurations within cloud environments (e.g., VPCs, subnets, load balancers, security groups).

Containerization and Orchestration:
Manage and scale containerized applications using Docker and Kubernetes.
Design and implement Kubernetes clusters, including setup, configuration, and maintenance.
Develop and manage Helm charts for application deployment and management.
Implement service mesh technologies (e.g., Istio, Linkerd) for enhanced observability and control.

CI/CD Pipeline Automation:
Design, build, and maintain robust and automated CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, or Azure DevOps.
Integrate automated testing (unit, integration, end-to-end) into the CI/CD pipeline.
Implement automated deployment strategies (e.g., blue/green, canary deployments).
Manage and optimize build and artifact management processes.

Automation and Scripting:
Develop and maintain automation scripts using languages such as Python, Bash, or PowerShell to automate repetitive tasks and infrastructure provisioning (a minimal scripting sketch follows at the end of this listing).
Implement configuration management using tools like Ansible, Chef, or Puppet.
Automate monitoring, alerting, and logging processes.

Monitoring and Logging:
Implement and manage comprehensive monitoring and logging solutions using tools like Prometheus, Grafana, the ELK stack (Elasticsearch, Logstash, Kibana), Splunk, or cloud-native monitoring services.
Set up alerts and notifications for critical system events and performance thresholds.
Perform root cause analysis of incidents and implement preventative measures.

Security and Compliance:
Implement and enforce security best practices in cloud infrastructure and deployment pipelines.
Integrate security scanning tools into the CI/CD pipeline.
Ensure compliance with relevant industry standards and regulations.
Manage secrets and configurations securely.

Collaboration and Communication:
Work closely with development, QA, and operations teams to ensure smooth and efficient software delivery.
Communicate effectively with technical and non-technical stakeholders.
Participate in agile ceremonies and contribute to a DevOps culture.
Provide technical guidance and mentorship to junior team members.

Troubleshooting and Support:
Troubleshoot and resolve issues related to cloud infrastructure, deployments, and application performance.
Participate in on-call rotations as needed.
Develop and maintain documentation for infrastructure, processes, and procedures.
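A minimal sketch of the kind of automation scripting described above, assuming an AWS environment and the boto3 SDK; the region and tag name are hypothetical:

```python
"""Report running EC2 instances that are missing a cost-allocation tag.
Illustrative sketch only; region and tag name are hypothetical."""
import boto3

def find_untagged_instances(region="ap-south-1", required_tag="CostCenter"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                # Instances with no tags at all return no "Tags" key
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"Missing tag: {instance_id}")
```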
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview
Viraaj HR Solutions is a leading provider of innovative human resource services that cater to the diverse needs of companies across India. Our mission is to empower organizations by offering tailored recruitment solutions that align with their strategic business goals. We value integrity, collaboration, and excellence, creating a dynamic and supportive work environment where our team members can thrive. As we expand our offerings, we are seeking a dedicated Site Reliability Engineer to join our team and help us drive operational excellence.

Role Responsibilities
Monitor the performance and availability of critical applications and services (see the availability-probe sketch after this listing).
Implement and manage infrastructure as code to promote efficiency.
Develop automation scripts to streamline operational processes.
Conduct regular capacity planning and performance analysis.
Respond to incidents and troubleshoot issues in a timely manner.
Collaborate with development teams for continuous integration and deployment practices.
Establish and maintain monitoring, alerting, and logging solutions.
Participate in on-call duty rotations to ensure high availability.
Build and manage load balancing and failover solutions.
Conduct root cause analysis for production incidents.
Document solutions and create knowledge base articles.
Evaluate and recommend tools and technologies for improving reliability.
Work with security teams to ensure infrastructure security compliance.
Engage in performance testing and tuning of applications.
Provide training and mentorship to junior engineers.

Qualifications
Bachelor's degree in Computer Science or related field.
3+ years of experience in site reliability engineering or DevOps.
Strong understanding of cloud infrastructure (AWS, Azure, GCP).
Experience with automation tools (Ansible, Puppet, Chef).
Familiarity with monitoring tools (Prometheus, Grafana, Nagios).
Proficient in scripting languages (Python, Bash, Ruby).
Knowledge of containerization (Docker, Kubernetes).
Experience with incident management and resolution.
Understanding of networking concepts and security best practices.
Ability to work well in a fast-paced, collaborative environment.
Strong analytical and problem-solving skills.
Excellent communication and documentation skills.
Ability to manage multiple priorities and meet deadlines.
Experience in performance tuning and optimization.
Willingness to participate in on-call support as needed.
Continuous learning mindset with a passion for technology.

Skills: cloud infrastructure (AWS, Azure, GCP), performance tuning, networking concepts, monitoring tools (Prometheus, Grafana, Nagios), automation tools (Ansible, Puppet, Chef), incident management, DevOps, load balancing, network security, scripting languages (Python, Bash, Ruby), containerization (Docker, Kubernetes), site reliability engineering, security best practices
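A minimal sketch of the kind of availability monitoring described above, assuming the requests library; the endpoint URLs are placeholders:

```python
"""Probe a few service health endpoints and report status and latency.
Illustrative sketch only; URLs are hypothetical placeholders."""
import time
import requests

ENDPOINTS = {
    "web": "https://example.internal/healthz",
    "api": "https://api.example.internal/ping",
}

def check_endpoints(timeout_s=5):
    results = {}
    for name, url in ENDPOINTS.items():
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=timeout_s)
            latency_ms = (time.monotonic() - start) * 1000
            results[name] = (resp.status_code == 200, round(latency_ms, 1))
        except requests.RequestException:
            results[name] = (False, None)
    return results

if __name__ == "__main__":
    for name, (healthy, latency) in check_endpoints().items():
        print(f"{name}: {'UP' if healthy else 'DOWN'} latency={latency} ms")
```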
Posted 1 month ago
2 - 5 years
2 - 5 Lacs
Bengaluru
Work from Office
Databricks Engineer
Full-time | Department: Digital, Data and Cloud

Company Description
Version 1 has celebrated over 26 years in Technology Services and continues to be trusted by global brands to deliver solutions that drive customer success. Version 1 has several strategic technology partners including Microsoft, AWS, Oracle, Red Hat, OutSystems and Snowflake. We're also an award-winning employer, reflecting how employees are at the heart of Version 1. We've been awarded Innovation Partner of the Year Winner 2023 (Oracle EMEA Partner Awards), Global Microsoft Modernising Applications Partner of the Year Award 2023, AWS Collaboration Partner of the Year - EMEA 2023, and Best Workplaces for Women by Great Place To Work in UK and Ireland 2023.
As a consultancy and service provider, Version 1 is a digital-first environment and we do things differently. We're focused on our core values; using these we've seen significant growth across our practices, and our Digital, Data and Cloud team is preparing for the next phase of expansion. This creates new opportunities for driven and skilled individuals to join one of the fastest-growing consultancies globally.

About The Role
This is an exciting opportunity for an experienced developer of large-scale data solutions. You will join a team delivering a transformative cloud-hosted data platform for a key Version 1 customer. The ideal candidate will have a proven track record as a senior, self-starting data engineer in implementing data ingestion and transformation pipelines for large-scale organisations. We are seeking someone with deep technical skills in a variety of technologies, specifically Spark performance tuning/optimisation and Databricks, to play an important role in developing and delivering early proofs of concept and production implementation. You will ideally have experience in building solutions using a variety of open source tools and Microsoft Azure services, and a proven track record in delivering high-quality work to tight deadlines.

Your main responsibilities will be:
Designing and implementing highly performant, metadata-driven data ingestion and transformation pipelines from multiple sources using Databricks and Spark (a minimal ingestion sketch follows at the end of this listing).
Streaming and batch processing in Databricks.
Spark performance tuning/optimisation.
Providing technical guidance for complex geospatial problems and Spark dataframes.
Developing scalable and re-usable frameworks for ingestion and transformation of large data sets.
Data quality system and process design and implementation.
Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times.
Working with other members of the project team to support delivery of additional project components (reporting tools, API interfaces, search).
Evaluating the performance and applicability of multiple tools against customer requirements.
Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.

Qualifications
Direct experience of building data pipelines using Azure Data Factory and Databricks.
Experience required is 6 to 8 years.
Building data integration with Python.
Databricks Engineer certification.
Microsoft Azure Data Engineer certification.
Hands-on experience designing and delivering solutions using the Azure Data Analytics platform.
Experience building data warehouse solutions using ETL / ELT tools like Informatica and Talend.
Comprehensive understanding of data management best practices, including demonstrated experience with data profiling, sourcing, and cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching.

Nice to have
Experience working in a DevOps environment with tools such as Microsoft Visual Studio Team Services, Chef, Puppet or Terraform.
Experience working with structured and unstructured data, including imaging and geospatial data.
Experience with open source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J).
Experience with Azure Event Hub, IoT Hub, Apache Kafka, or NiFi for use with streaming / event-based data.

Additional Information
At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We also offer a range of tech-related benefits, including an innovative Tech Scheme to help keep our team members up-to-date with the latest technology. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
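A minimal sketch of a metadata-driven ingestion step of the kind described above, assuming a Databricks/PySpark environment with Delta Lake; the source path, table name and keys are hypothetical:

```python
"""One metadata-driven ingestion step: read a raw source, deduplicate, land to a Delta table.
Illustrative sketch only; paths, table names and keys are hypothetical."""
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks a session already exists

# Hypothetical metadata describing one source -> target mapping
pipeline_config = {
    "source_path": "abfss://raw@storageaccount.dfs.core.windows.net/sales/",
    "source_format": "parquet",
    "target_table": "bronze.sales",
    "dedupe_keys": ["order_id"],
}

def ingest(config):
    df = (spark.read
          .format(config["source_format"])
          .load(config["source_path"]))
    # Basic data-quality step: drop duplicate business keys before landing
    df = df.dropDuplicates(config["dedupe_keys"])
    (df.write
       .format("delta")
       .mode("append")
       .saveAsTable(config["target_table"]))

if __name__ == "__main__":
    ingest(pipeline_config)
```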
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!

CAMS QGS Job Description
As a Software Engineer you will be working on the Centralized Appliance Management Service (CAMS). It allows us to optimize existing Qualys products and create an innovative way of delivering them to customers. This opening is your chance to create a significant impact on product improvement and delivery options.

Responsibilities:
Design, develop and deliver Linux services and automation behaviors using Python (shell scripts appreciated); a minimal sketch follows at the end of this listing.
Conceive and deliver new features and improvements in a fast-paced environment as part of a growing engineering team.
Develop capacity and monitoring plans for the services you write.
Collaborate across the company to define, design, build and improve various products.

Qualifications
Experience in Linux system-oriented software development using C/C++ (Makefile, RPMBuild, Docker, Kubernetes/Swarm).
Experience in developing micro-services for private and public clouds.
Hands-on experience with DevOps tools like Puppet and/or Ansible is appreciated.
Good knowledge of networking and Linux system services (systemd, etcd).
Understanding of HTTP (0.9/1.0, HTTPS, TLS/SSL, certificates); understanding of HTTP proxy/reverse proxy architecture and behavior appreciated.
Ability to think out of the box and zeal to continuously improve design and implementation.
Excellent communicator and team player.
BS/MS in Computer Science or related field.

Preferred Skills
Knowledge of Linux, Kubernetes, Docker, Swarm.
Knowledge of Kafka, Cassandra, Elasticsearch, Python, Bash scripting.
Good understanding of how distributed systems work.
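A minimal sketch of a Linux automation behaviour in Python of the kind described above, assuming systemd is present on the host; the unit names are hypothetical:

```python
"""Poll the health of a few systemd units from Python.
Illustrative sketch only; unit names are hypothetical and systemctl must be available."""
import subprocess
import time

UNITS = ["nginx.service", "appliance-agent.service"]  # hypothetical unit names

def unit_is_active(unit: str) -> bool:
    # `systemctl is-active --quiet` exits 0 only when the unit is active
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", unit],
        check=False,
    )
    return result.returncode == 0

def watch(interval_s: int = 30):
    while True:
        for unit in UNITS:
            state = "active" if unit_is_active(unit) else "inactive/failed"
            print(f"{unit}: {state}")
        time.sleep(interval_s)

if __name__ == "__main__":
    watch()
```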
Posted 1 month ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
" Strong working experience on AWS services – EKS, ECS, EC2, RDS, S3, Cloudwatch, Secrets Manager, Elasticsearch Service, Kinesis, VPC, Route 53, Direct Connect, etc. Strong working experience on AWS services – Developer Service –Code Commit , Code Build ,Code Pipeline, Code Deploy,CodeStar. Good Knowledge on the Integration of AWS Developer services with Devops Native Tools. Good Knowledge on the Server-less Service Lambda ,Function to integrate with AWS service and Also to trigger Cloud Native Tools. Good Understanding on the Concepts of HA ,DA ,Scalability and Elasticity on the AWS Services Should have experience with either Elasticsearch Should have experience with Docker and Kubernetes. Should have knowledge in automation tools like ansible, chef, puppet, terraform, cloudformation, etc. Ability to work on the agile model." Show more Show less
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role Description
Location: All UST Locations
Experience Range: 5-8 years

Responsibilities

Infrastructure as Code & Cloud Automation
Design and implement Infrastructure as Code (IaC) using Terraform, Ansible, or equivalent for both Azure and on-prem environments.
Automate provisioning and configuration management for Azure PaaS services (App Services, AKS, Storage, Key Vault, etc.).
Manage hybrid cloud deployments, ensuring seamless integration between Azure and on-prem alternatives.

CI/CD Pipeline Development (Without Azure DevOps)
Develop and maintain CI/CD pipelines using GitHub Actions or Jenkins.
Automate containerized application deployment using Docker and Kubernetes (AKS).
Implement canary deployments, blue-green deployments, and rollback strategies for production releases.

Cloud Security & Secrets Management
Implement role-based access control (RBAC) and IAM policies across cloud and on-prem environments.
Secure API and infrastructure secrets using HashiCorp Vault (instead of Azure Key Vault); see the Vault sketch after this listing.

Monitoring, Logging & Observability
Set up observability frameworks using Prometheus, Grafana, and the ELK Stack (Elasticsearch, Kibana, Logstash).
Implement centralized logging and monitoring across cloud and on-prem environments.

Must Have Skills & Experience

Cloud & DevOps
Azure PaaS Services: App Services, AKS, Azure Functions, Blob Storage, Redis Cache
Kubernetes & Containerization: Hands-on experience with AKS, Kubernetes, Docker
CI/CD Tools: Experience with GitHub Actions, Jenkins
Infrastructure as Code (IaC): Proficiency in Terraform

Security & Compliance
IAM & RBAC: Experience with Active Directory, Keycloak, LDAP
Secrets Management: Expertise in HashiCorp Vault or Azure Key Vault
Cloud Security Best Practices: API security, network security, encryption

Networking & Hybrid Cloud
Azure Networking: Knowledge of VNets, Private Endpoints, Load Balancers, API Gateway, Nginx
Hybrid Cloud Connectivity: Experience with VPN Gateway, Private Peering

Monitoring & Performance Optimization
Observability tools: Prometheus, Grafana, ELK Stack, Azure Monitor & App Insights
Logging & Monitoring: Experience with Elasticsearch, Logstash, OpenTelemetry, Log Analytics

Good To Have Skills & Experience
Experience with additional IaC tools (Ansible, Chef, Puppet)
Experience with additional container orchestration platforms (OpenShift, Docker Swarm)
Knowledge of advanced Azure services (e.g., Azure Logic Apps, Azure Event Grid)
Familiarity with cloud-native monitoring solutions (e.g., CloudWatch, Datadog)
Experience in implementing and managing multi-cloud environments

Key Personal Attributes
Strong problem-solving abilities
Ability to work in a fast-paced and dynamic environment
Excellent communication skills and ability to collaborate with cross-functional teams
Proactive and self-motivated, with a strong sense of ownership and accountability

Skills: Azure, Scripting, CI/CD
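A minimal sketch of reading an application secret from HashiCorp Vault, as referenced above, assuming the hvac Python client and a KV v2 secrets engine; the mount point, secret path and environment variables are hypothetical:

```python
"""Fetch an application secret from HashiCorp Vault (KV v2).
Illustrative sketch only; mount point, path and env vars are hypothetical."""
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
    token=os.environ["VAULT_TOKEN"],
)

def get_db_password() -> str:
    secret = client.secrets.kv.v2.read_secret_version(
        mount_point="kv",
        path="payments/db",
    )
    # KV v2 nests the key/value pairs under data.data
    return secret["data"]["data"]["password"]

if __name__ == "__main__":
    assert client.is_authenticated(), "Vault token is not valid"
    print("Fetched secret of length", len(get_db_password()))
```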
Posted 1 month ago
8 - 10 years
0 Lacs
Hyderabad, Telangana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Manager

Job Description & Summary
At PwC, our people in integration and platform architecture focus on designing and implementing seamless integration solutions and robust platform architectures for clients. They enable efficient data flow and optimise technology infrastructure for enhanced business performance.
In enterprise architecture at PwC, you will focus on designing and implementing architectural solutions that align with the organisation's overall strategy and goals. Your work will involve understanding business products, business strategies and customer usage of products. You will be responsible for defining architectural principles, analysing business and technology landscapes and translating content / developing frameworks to guide technology decisions and investments. Working in this area, you will have a familiarity with business strategy, processes and experience in business solutions which enable an organisation's technology infrastructure. You will help to confirm that technology infrastructure is optimised, scalable, and aligned with business needs, enabling efficient data flow, interoperability, and agility. Through your work, you will communicate a deep understanding of the business and a broad knowledge of architecture and applications.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary:
A career within Data and Analytics services will provide you with the opportunity to help organizations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
Certifications
  Minimum: Google Professional Cloud Architect
  Alternate: Google Professional Cloud Network Engineer, Google Professional Cloud Security Engineer
10+ years direct experience working in IT infrastructure.
5+ years in a customer-facing role working with enterprise clients.
Experience in managing large-scale Windows/Linux environments.
Experience in understanding a complex customer's existing software workloads and the ability to define a technical migration roadmap to the Cloud.
Experience in Identity and Access Management, networking, storage, and compute infrastructure (servers, databases, firewalls, load balancers) for architecting, implementing and maintaining cloud solutions in virtualized environments.
Experience in architecting and developing software or infrastructure for scalable and secure distributed systems.
Experience in advanced areas of networking, including Linux, software-defined networking, network virtualization, open protocols, application acceleration and load balancing, DNS, virtual private networks and their application to PaaS and IaaS technologies.
Experience with relational databases, NoSQL databases and/or Big Data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source).
Experience with application development concepts and technologies (e.g. CI/CD, Java, Python, Chef, Puppet, Ansible).
Experience automating infrastructure provisioning, DevOps, and/or continuous integration/delivery.
Understanding of open source server software (such as NGINX, RabbitMQ, Redis, Elasticsearch).
Expert knowledge of containerization and container orchestration technologies such as Google Kubernetes Engine (GKE).
Customer-facing migration experience, including service discovery, assessment, planning, execution, and operations.
Demonstrated excellent communication, presentation, and problem-solving skills.
Experience in project governance and enterprise customer management.
Willingness to travel around 30%-40%.

Mandatory skill sets: Cloud Architecture
Preferred skill sets: Cloud Architecture
Years of experience required: 8-10 years
Education qualification: BE/BTech/Post Graduate (MBA, MCA & M.Tech)

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Cloud Architectures
Optional Skills: Accepting Feedback, Active Listening, Agile Methodology, Analytical Thinking, Business Architecture, Business Continuity, Business Process Modeling, Business Process Workflow, Coaching and Feedback, Communication, Creativity, Embracing Change, Emotional Regulation, Empathy, Enterprise Application Integration, Enterprise Architecture, Enterprise Integration, Enterprise Service Bus (ESB), Inclusion, Intellectual Curiosity, IT Service Management (ITSM), Learning Agility, Operational Excellence, Optimism {+ 11 more}

Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Work Location: Hyderabad
Years of Experience: 5-8 years

Required Skills
Technical SME able to quickly grasp the environment's architecture/layout to enable quick issue identification and troubleshooting.
Data-focused, in the sense of being able to understand data models, service-level dependencies, and data flows between services.
Solid programming experience with Java 8 or above, Spring Core, Spring MVC, Spring Cloud, RESTful web services.
Experience with ORM tools like Hibernate.
Experience in design and implementation of applications based on microservice architecture patterns with Spring Boot and Docker containers.
Experience in implementing and deploying Docker containerized solutions in a Kubernetes environment hosted on Azure/AWS/Google Cloud.
IDEs: Eclipse/IntelliJ/Spring Tool Suite.
Knowledge of and hands-on proficiency with PostgreSQL and Azure SQL: handling complex, nested, and recursive queries and stored procedures.
Proficient understanding of code versioning tools such as Git or SVN.
Good understanding of DevOps CI/CD pipelines and toolchain (ADO, Jenkins, etc.).
An Agile mindset with experience working in an Agile environment using ADO, JIRA, etc.
Ability to collaborate with multiple tech and functional teams spread across the globe.
ITSM (Incident, Change, Problem Management).
Lead technical collaborations, problem solving and RCA.
Hands-on exposure to release / change / deployment implementations on cloud.
Good mentoring skills, technical documentation, knowledge base creation and maintenance.

Good To Have Skills
Experience with AngularJS, preferably Angular 6+ using TypeScript.
Knowledge of and hands-on experience with NoSQL DBs (MongoDB).
Orchestration, automation and configuration management using Chef/Ansible/Puppet.
Experience with Kafka and Spark.
Experience with NodeJS or React JS.
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Global Cloud Senior Administrator

The Global Cloud Senior Administrator is responsible for the day-to-day oversight of the Baker & Taylor Microsoft Azure cloud environments. This includes provisioning, configuring, monitoring, and maintaining various Azure services and resources to ensure the smooth operation of cloud-based solutions. The role requires deep expertise in Azure cloud security, scalability, cost optimization, and managing user access to protect data and comply with regulations. Key responsibilities include managing storage, implementing security controls, monitoring resource usage, maintaining user accounts, and ensuring data backups are properly configured.

Key Roles and Responsibilities:
Resource Management: Provisioning, managing, and monitoring Azure resources, including virtual machines, storage accounts, VNETs, and databases.
Storage Implementation: Implementing and managing storage solutions, including data redundancy, archiving, and backup strategies.
PaaS Experience: Working with Web Apps, Functions, API Management (APIM), and Secure Endpoints.
Security Management: Enforcing NSG security policies, managing user access controls, and monitoring for security threats.
Monitoring and Logging: Using Azure Monitor, analyzing logs, and setting up alerts for critical events.
Cost Optimization: Reviewing and optimizing cloud resource usage to reduce costs.
User Role Management: Assigning Azure roles and managing user permissions within Identity and Access Management (IAM).
Networking Configuration: Designing and managing virtual networks, subnets, and security groups.
Backup and Recovery: Creating backup strategies, testing recovery, and ensuring data availability. (Veeam experience is a plus.)
Compliance Management: Ensuring compliance with relevant regulations and standards.
Automation: Writing scripts with Azure PowerShell or Azure CLI to automate tasks (an illustrative sketch follows below).
EOL Planning: Tracking resource life cycles to manage replacements before end-of-life.
Troubleshooting and Optimization: Diagnosing and resolving cloud infrastructure issues; performance tuning.

Required Technical Skills:
Expertise in Azure CLI and PowerShell
Experience with Azure Web Apps, Functions, APIM, Secure Endpoints
Expertise in Azure RBAC and Key Vault
Deep understanding of Azure services and architecture
Solid cloud networking knowledge (UDRs, Peering, Load Balancers)
Strong understanding of security best practices and identity management
Familiarity with Azure Monitor and cloud logging tools
Understanding of Infrastructure as Code (IaC) principles

Qualifications:
Education: Bachelor's degree in Computer Science, IT, or a related field
Experience: Typically 7+ years in cloud infrastructure roles

Preferred Cloud Platforms and Tools:
Strong experience with Microsoft Azure
Experience with AWS or Google Cloud is beneficial
Certifications preferred: Microsoft Azure Solutions Architect Expert, AWS Certified Solutions Architect
Proficiency with tools such as: Terraform, Kubernetes, Docker
Cloud-native services (e.g., EC2, S3, Lambda, Azure Functions, GKE)
Networking (VNETs, DNS, VPNs, Load Balancers)

DevOps and Automation:
Experience with CI/CD pipelines
Infrastructure automation and configuration tools: Ansible, Chef, Puppet
Scripting experience in Bash, Python, etc.
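A minimal sketch of the kind of task automation described above, using the Azure SDK for Python as an alternative to Azure CLI/PowerShell; the subscription ID variable and tag name are hypothetical:

```python
"""List Azure resource groups missing a cost-allocation tag.
Illustrative sketch only; subscription ID env var and tag name are hypothetical."""
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
client = ResourceManagementClient(credential, subscription_id)

def resource_groups_missing_tag(tag_name="cost-centre"):
    missing = []
    for rg in client.resource_groups.list():
        tags = rg.tags or {}          # untagged groups return None for .tags
        if tag_name not in tags:
            missing.append(rg.name)
    return missing

if __name__ == "__main__":
    for name in resource_groups_missing_tag():
        print("Untagged resource group:", name)
```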
Additional Skills:
Strong customer support experience in a cloud operations role
Excellent communication skills, able to explain technical issues to non-technical stakeholders
Incident management experience (outages, escalations)
Problem-solving and analytical skills
Adaptability to new cloud technologies and environments
Strong collaboration skills across teams
Experience with cloud security tools
Knowledge of ITIL or similar IT service management frameworks
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview
Viraaj HR Solutions is a leading provider of innovative human resource services that cater to the diverse needs of companies across India. Our mission is to empower organizations by offering tailored recruitment solutions that align with their strategic business goals. We value integrity, collaboration, and excellence, creating a dynamic and supportive work environment where our team members can thrive. As we expand our offerings, we are seeking a dedicated Site Reliability Engineer to join our team and help us drive operational excellence.

Role Responsibilities
Monitor the performance and availability of critical applications and services.
Implement and manage infrastructure as code to promote efficiency.
Develop automation scripts to streamline operational processes.
Conduct regular capacity planning and performance analysis.
Respond to incidents and troubleshoot issues in a timely manner.
Collaborate with development teams for continuous integration and deployment practices.
Establish and maintain monitoring, alerting, and logging solutions.
Participate in on-call duty rotations to ensure high availability.
Build and manage load balancing and failover solutions.
Conduct root cause analysis for production incidents.
Document solutions and create knowledge base articles.
Evaluate and recommend tools and technologies for improving reliability.
Work with security teams to ensure infrastructure security compliance.
Engage in performance testing and tuning of applications.
Provide training and mentorship to junior engineers.

Qualifications
Bachelor's degree in Computer Science or related field.
3+ years of experience in site reliability engineering or DevOps.
Strong understanding of cloud infrastructure (AWS, Azure, GCP).
Experience with automation tools (Ansible, Puppet, Chef).
Familiarity with monitoring tools (Prometheus, Grafana, Nagios).
Proficient in scripting languages (Python, Bash, Ruby).
Knowledge of containerization (Docker, Kubernetes).
Experience with incident management and resolution.
Understanding of networking concepts and security best practices.
Ability to work well in a fast-paced, collaborative environment.
Strong analytical and problem-solving skills.
Excellent communication and documentation skills.
Ability to manage multiple priorities and meet deadlines.
Experience in performance tuning and optimization.
Willingness to participate in on-call support as needed.
Continuous learning mindset with a passion for technology.

Skills: cloud infrastructure (AWS, Azure, GCP), performance tuning, networking concepts, monitoring tools (Prometheus, Grafana, Nagios), automation tools (Ansible, Puppet, Chef), incident management, DevOps, load balancing, network security, scripting languages (Python, Bash, Ruby), containerization (Docker, Kubernetes), site reliability engineering, security best practices
Posted 1 month ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
" Strong working experience on AWS services – EKS, ECS, EC2, RDS, S3, Cloudwatch, Secrets Manager, Elasticsearch Service, Kinesis, VPC, Route 53, Direct Connect, etc. Strong working experience on AWS services – Developer Service –Code Commit , Code Build ,Code Pipeline, Code Deploy,CodeStar. Good Knowledge on the Integration of AWS Developer services with Devops Native Tools. Good Knowledge on the Server-less Service Lambda ,Function to integrate with AWS service and Also to trigger Cloud Native Tools. Good Understanding on the Concepts of HA ,DA ,Scalability and Elasticity on the AWS Services Should have experience with either Elasticsearch Should have experience with Docker and Kubernetes. Should have knowledge in automation tools like ansible, chef, puppet, terraform, cloudformation, etc. Ability to work on the agile model." Show more Show less
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's:
One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: The Cloud Engineer II (DevSecOps)

Who we're looking for:
This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald's works! We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.
The Cloud Engineer II (DevSecOps) reports to the Director of Enterprise DevSecOps Platform and is responsible for supporting, migrating, automating and optimizing the software development and deployment process. The Cloud DevSecOps Engineer will work closely with software developers, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services across the entire McDonald's environment.

In this role, you will:
Participate in the management, design, and solutioning of the software development and deployment process.
Provide direction and guidance to vendors partnering on DevSecOps tools standardization and engineering support.
Build reusable pipeline templates for automated deployment of cloud infrastructure and code (see the workflow-dispatch sketch after this listing).
Research, analyze, design, develop and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevSecOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent customer service and world-class service.
Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but intuitively raise complex or consequential issues to senior management.
Work well in an agile environment.

Qualifications:
3+ years of hands-on DevOps pipeline experience automating, building and deploying microservice applications, APIs and non-container artifacts (preferred).
3+ years with GitHub Actions, ArgoCD, Helm charts, Harness and SonarQube (preferred).
3+ years of hands-on experience with CI/CD technologies including microservices, Terraform and pipeline creation/management (e.g., GitHub, Artifactory/JFrog, Harness, etc.) (preferred).
3+ years of experience with cloud technologies, including extensive hands-on work with IaaS and PaaS offerings in GCP or AWS.
More than 3 years of experience developing application build and deployment pipelines for .NET, Java, and Python applications is good to have.
Hands-on experience in managing Kubernetes clusters.
Experience with observability tools like Datadog, New Relic and the open source (O11y) observability ecosystem (Prometheus, Grafana, Jaeger) (preferred).
4+ years of Information Technology experience.
Bachelor's degree in information technology, a related field, or relevant experience.
3+ years of application development using agile methodology.
Advanced knowledge of application, data, and infrastructure architecture disciplines, plus experience with Kubernetes and the AWS platform.
Hands-on knowledge of a broad range of end-to-end DevOps technologies.
Ability to design, develop and implement scalable, elastic microservice-based platforms.
Ability to help and guide the team in resolving technical issues through debugging, research, and investigation.
Automate iOS infrastructure: utilize infrastructure-as-code principles to simplify the deployment and maintenance of build environments using tools like Terraform, GitHub Actions, Chef, Ansible, and Puppet for configuration management.
Enhance developer efficiency: develop and maintain tools that boost the productivity of iOS developers, including build automation, testing, and release management tools.
Automate Android infrastructure: utilize infrastructure-as-code principles to simplify the deployment and maintenance of build environments using tools like Terraform, GitHub Actions, Chef, Ansible, and Puppet for configuration management.
Enhance developer efficiency: develop and maintain tools that boost the productivity of Android developers, including build automation, testing, and release management tools.
Good to have experience working with code quality, SAST and DAST tools like SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk.
Experience using container-based technologies.
Any AWS certification and Agile certification, preferably Scaled Agile.
Good knowledge of IaaS and PaaS offerings in AWS, Azure and GCP.
Good knowledge of Infrastructure-as-Code and associated technologies (e.g. repos, pipelines, Terraform, etc.).
Previous DevSecOps and automation experience.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
Strong desire to automate everything you touch.
Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Knowledge of AWS and Azure and a willingness to upskill as the company's adoption grows (preferred).
Experience with the Software Development Life Cycle (SDLC) (preferred).
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g. repos, pipelines, Terraform, etc.) (preferred).
Advanced knowledge of the AWS platform, preferably 3+ years of AWS/Kubernetes experience (preferred).

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid
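A minimal sketch of scripting against a reusable pipeline of the kind described above, assuming GitHub Actions and a workflow file that accepts a workflow_dispatch input; the owner, repository, workflow name and input are hypothetical:

```python
"""Trigger a reusable GitHub Actions deployment workflow via the REST API.
Illustrative sketch only; owner, repo, workflow file and input names are hypothetical."""
import os
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO = "example-org", "payments-service"   # hypothetical
WORKFLOW_FILE = "deploy.yml"                       # hypothetical file in .github/workflows/

def dispatch_deploy(environment: str, ref: str = "main") -> None:
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW_FILE}/dispatches"
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"ref": ref, "inputs": {"environment": environment}},
        timeout=30,
    )
    resp.raise_for_status()   # GitHub returns 204 No Content on success

if __name__ == "__main__":
    dispatch_deploy("staging")
```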
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Technical Expertise Required

Primary Skills Required
Strong knowledge of Linux/Unix systems and command line tools.
Proficiency in scripting languages such as Python, Shell, or Perl.
Experience with configuration management tools like Ansible, Puppet, or Chef.
Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
Understanding of networking principles and protocols (TCP/IP, HTTP, DNS, etc.).
Knowledge of containerization technologies (Docker, Kubernetes) and orchestration tools.
Expertise in monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or Splunk.

Optional, but good to know
Experience with Citrix technologies such as XenApp, XenDesktop, and NetScaler; support the administration and engineering of the Citrix environment; work with Citrix Provisioning Server, SQL Database, and Citrix License Server.
Experienced knowledge of virtualization technologies such as VMware or Hyper-V.
Strong problem-solving and troubleshooting skills, with the ability to analyze and resolve complex technical issues.
Excellent communication and collaboration skills to work effectively with cross-functional teams.
Strong attention to detail and ability to work in a fast-paced, dynamic environment.
Terraform basic syntax and GitLab CI/CD configuration (pipelines, jobs).
Cloud resource provisioning and configuration through CLI/API.
Understanding of how to run basic queries in logging tools for general questions.
Operating system (Linux) configuration, package management, startup and troubleshooting.
Block and object storage configuration.
Networking: VPCs, proxies and CDNs.

Secondary Skills Required for the Role
Bachelor's degree in computer science, engineering, or a related field.
Proven experience as a Site Reliability Engineer or a similar role.
Solid understanding of software development methodologies and DevOps principles.
Experience with agile and iterative development processes.
Certification in relevant technologies or frameworks is a plus (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator).
Familiarity with continuous integration/continuous deployment (CI/CD) pipelines.
Experience with source control systems such as Git or SVN.
Knowledge of security best practices and experience implementing security measures in a production environment.
Ability to work independently and handle multiple projects and priorities simultaneously.
Strong analytical and problem-solving skills, with a focus on continuous improvement and automation.

Role & Responsibilities of the Profile
Design and implement highly available and scalable systems, ensuring the reliability and performance of the company's website or application.
Collaborate with cross-functional teams to define and establish service level objectives (SLOs) and service level agreements (SLAs) for critical systems.
Monitor systems and applications, proactively identifying and resolving any performance bottlenecks or availability issues.
Develop and maintain monitoring tools, alerts, and dashboards to provide visibility into system health and performance (see the exporter sketch after this listing).
Conduct post-incident analyses to identify root causes and implement preventive measures to avoid future incidents.
Automate repetitive tasks and processes to improve efficiency and reduce manual intervention.
Create and maintain documentation for system architecture, configuration, and troubleshooting procedures.
Perform capacity planning and resource allocation to ensure optimal system performance and scalability.
Collaborate with development teams to implement and deploy new features and enhancements, ensuring they meet reliability and performance standards.
Stay up to date with industry best practices, new technologies, and emerging trends in site reliability engineering.

Objectives of this role
Run the production environment by monitoring availability and taking a holistic view of system health.
Build software and systems to manage platform infrastructure and applications.
Improve reliability, quality, and time-to-market of our suite of software solutions.
Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
Provide primary operational support and engineering for multiple large-scale distributed software applications.
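A minimal sketch of a custom monitoring exporter of the kind described above, assuming the prometheus_client and requests libraries; the probed URL and metric names are hypothetical:

```python
"""Expose custom availability/latency metrics for Prometheus to scrape.
Illustrative sketch only; the health URL and metric names are hypothetical."""
import time
import requests
from prometheus_client import Gauge, start_http_server

APP_UP = Gauge("app_up", "1 if the application health endpoint responds with 200")
APP_LATENCY = Gauge("app_health_latency_seconds", "Latency of the health endpoint probe")
HEALTH_URL = "https://app.example.internal/healthz"   # hypothetical

def probe():
    start = time.monotonic()
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        APP_UP.set(1 if resp.status_code == 200 else 0)
    except requests.RequestException:
        APP_UP.set(0)
    APP_LATENCY.set(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(9100)          # Prometheus scrapes http://host:9100/metrics
    while True:
        probe()
        time.sleep(15)
```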
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
L2 Infrastructure Engineer – Linux Patching (3 positions)
Shift: 24/7 shift
Work location: Pune

Extensive experience managing RedHat, SLES, CentOS, and Ubuntu OS in enterprise environments.
Strong hands-on experience in patching.
Proficiency in coordinating patch deployments and troubleshooting vulnerabilities on Linux servers.
Experience with Ansible AWX for configuration management tasks.
Experience with other automation tools like Ansible, Chef, or Puppet.
Typically has 5-8 years of IT and business/industry work experience as a Linux support engineer or in a similar role, with hands-on experience managing cloud infrastructure on the Azure platform.
Previous experience working in a large-scale, enterprise environment providing operational support by building, configuring, and managing virtual servers.
Excellent verbal and written communications.
Ability to track and administer several tasks concurrently.
Strong change management and ITIL background.
Relevant certifications in Linux administration are preferred.

Good to have: experience working within an enterprise ticketing system (incidents, requests, change control), preferably ServiceNow.

Job Description:
Experienced Linux server patching engineer with a strong background in managing and maintaining RedHat, SLES, CentOS, and Ubuntu operating systems. The ideal candidate will be responsible for coordinating, deploying, and troubleshooting patches and vulnerabilities on Linux servers while ensuring adherence to best practices in vulnerability management.

Key Responsibilities:
Coordinate, deploy, and troubleshoot patches and vulnerabilities on Linux servers running RedHat, SLES, CentOS, and Ubuntu operating systems.
Possess a good understanding of vulnerability management practices and familiarity with CVE scoring to prioritize patch deployments effectively (see the prioritisation sketch after this listing).
Utilize Python scripting skills for automation tasks related to patch management processes.
Employ Ansible AWX for efficient configuration management and automation of patching procedures.
Provide suggestions for scanning improvements based on analysis of vulnerability scanner data.
Collaborate with cross-functional teams to ensure successful patch deployments without disrupting business operations.
Lead fellow engineers in troubleshooting patching failures and provide technical guidance.
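A minimal sketch of CVE-score-based patch prioritisation of the kind described above, assuming the vulnerability scanner can export JSON; the field names are hypothetical:

```python
"""Bucket scanner findings by CVSS score so critical patches are scheduled first.
Illustrative sketch only; the JSON field names are hypothetical."""
import json

CVSS_CRITICAL = 9.0
CVSS_HIGH = 7.0

def prioritise(findings_path: str):
    with open(findings_path) as fh:
        findings = json.load(fh)
    buckets = {"critical": [], "high": [], "other": []}
    for f in findings:
        score = float(f.get("cvss_score", 0))
        item = (f["hostname"], f["package"], f["cve_id"], score)
        if score >= CVSS_CRITICAL:
            buckets["critical"].append(item)
        elif score >= CVSS_HIGH:
            buckets["high"].append(item)
        else:
            buckets["other"].append(item)
    return buckets

if __name__ == "__main__":
    for severity, items in prioritise("scanner_export.json").items():
        print(severity, len(items), "findings")
```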
Posted 1 month ago
7 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
👋🏼We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
Experience: 7+ years
Extensive experience with the Azure cloud platform
Good experience in maintaining cost-efficient, scalable cloud environments for the organization, involving best practices for monitoring and cloud governance
Experience with CI tools like Jenkins and building end-to-end CI/CD pipelines for projects
Experience with various build tools like Maven/Ant/Gradle
Rich experience with container frameworks like Docker, Kubernetes or cloud-native container services
Good experience in Infrastructure as Code (IaC) using tools like Terraform
Good experience with any one of the following CM tools: Ansible, Chef, Saltstack, Puppet
Good experience with monitoring tools like Prometheus & Grafana, Nagios/DataDog/Zabbix and logging tools like Splunk/Logstash
Good experience in scripting and automation using languages like Bash/Shell, Python, PowerShell, Groovy, Perl
Configure and manage data sources like MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, PostgreSQL, Neo4J, etc.
Good experience managing version control tools like Git, SVN/Bitbucket
Good problem-solving ability, strong written and verbal communication skills

RESPONSIBILITIES:
Understanding the client's business use cases and technical requirements and being able to convert them into a technical design which elegantly meets the requirements.
Mapping decisions with requirements and being able to translate the same to developers.
Identifying different solutions and being able to narrow down the best option that meets the client's requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation.
Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it.
Understanding and relating technology integration scenarios and applying these learnings in projects.
Resolving issues that are raised during code review, through exhaustive systematic analysis of the root cause, and being able to justify the decision taken.
Carrying out POCs to make sure that suggested design/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in computer science, Information Technology, or a related field.
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview
Viraaj HR Solutions is a leading provider of innovative human resource services that cater to the diverse needs of companies across India. Our mission is to empower organizations by offering tailored recruitment solutions that align with their strategic business goals. We value integrity, collaboration, and excellence, creating a dynamic and supportive work environment where our team members can thrive. As we expand our offerings, we are seeking a dedicated Site Reliability Engineer to join our team and help us drive operational excellence.

Role Responsibilities
Monitor the performance and availability of critical applications and services.
Implement and manage infrastructure as code to promote efficiency.
Develop automation scripts to streamline operational processes.
Conduct regular capacity planning and performance analysis.
Respond to incidents and troubleshoot issues in a timely manner.
Collaborate with development teams for continuous integration and deployment practices.
Establish and maintain monitoring, alerting, and logging solutions.
Participate in on-call duty rotations to ensure high availability.
Build and manage load balancing and failover solutions.
Conduct root cause analysis for production incidents.
Document solutions and create knowledge base articles.
Evaluate and recommend tools and technologies for improving reliability.
Work with security teams to ensure infrastructure security compliance.
Engage in performance testing and tuning of applications.
Provide training and mentorship to junior engineers.

Qualifications
Bachelor's degree in Computer Science or related field.
3+ years of experience in site reliability engineering or DevOps.
Strong understanding of cloud infrastructure (AWS, Azure, GCP).
Experience with automation tools (Ansible, Puppet, Chef).
Familiarity with monitoring tools (Prometheus, Grafana, Nagios).
Proficient in scripting languages (Python, Bash, Ruby).
Knowledge of containerization (Docker, Kubernetes).
Experience with incident management and resolution.
Understanding of networking concepts and security best practices.
Ability to work well in a fast-paced, collaborative environment.
Strong analytical and problem-solving skills.
Excellent communication and documentation skills.
Ability to manage multiple priorities and meet deadlines.
Experience in performance tuning and optimization.
Willingness to participate in on-call support as needed.
Continuous learning mindset with a passion for technology.

Skills: cloud infrastructure (AWS, Azure, GCP), performance tuning, networking concepts, monitoring tools (Prometheus, Grafana, Nagios), automation tools (Ansible, Puppet, Chef), incident management, DevOps, load balancing, network security, scripting languages (Python, Bash, Ruby), containerization (Docker, Kubernetes), site reliability engineering, security best practices
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company: OpsTree Solutions
Website: Visit Website
Business Type: Small/Medium Business
Company Type: Product & Service
Business Model: B2B
Funding Stage: Pre-seed
Industry: IT Services and IT Consulting
Salary Range: ₹ 11-18 Lacs PA

Job Description

About the Role
We're looking for a talented and motivated DevOps Engineer with 3 to 6 years of hands-on experience to join our dynamic team. In this role, you'll play a key part in bridging development and operations, ensuring smooth deployments, high system reliability, and continuous improvement of our delivery pipelines.

Key Responsibilities
Deploy product updates and fixes with minimal downtime.
Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, Docker, and Kubernetes.
Automate infrastructure and operational tasks using Terraform, Ansible, and scripting (Shell/Python).
Monitor system health and troubleshoot production issues to ensure high availability and performance.
Collaborate with software developers and IT staff to optimize and scale delivery workflows.
Maintain secure, reliable, and scalable infrastructure environments.
Document systems, configurations, processes, and standard operating procedures.

Required Skills & Qualifications
Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
Expertise with containerization and orchestration tools: Docker and Kubernetes.
Proficiency in Infrastructure as Code tools (Terraform) and configuration management (Ansible, Chef, or Puppet).
Strong scripting capabilities in Python, Shell, or similar.
Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI) and version control systems like Git.
Solid knowledge of Linux/Unix system administration, networking concepts, and monitoring tools.
Excellent analytical and troubleshooting skills with a team-oriented approach.

If you're passionate about DevOps and have the skills and experience to excel in this role, we'd love to hear from you! Join us in shaping the future of our infrastructure and operations. 🌟 Apply now and be part of our dynamic team!
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know our Team:
In Agoda's Back End Engineering department, we build the scalable, fault-tolerant systems and APIs that host our core business logic. Our systems cover all major areas of our business: inventory and pricing, product information, customer data, communications, partner data, booking systems, payments, and more. These mission-critical systems change frequently with dozens of releases per day, so we must employ state-of-the-art CI/CD and testing techniques in order to make sure everything works without any downtime. We also ensure that our systems are self-healing, responding gracefully to extreme loads or unexpected input. In order to accomplish this, we use state-of-the-art languages like Scala and Go, data technologies like Kafka and Aerospike, and agile development practices. Most importantly though, we hire great people from all around the world and empower them to be successful. Whether it's building new projects like Flights and Packages or reimagining our existing business, you'll make a big impact as part of the Back End Engineering team.

The Opportunity:
Agoda is looking for developers to work on mission-critical systems that deal with the design and development of APIs that serve millions of user search requests a day.

In this Role, you'll get to
Lead development of features, experiments, technical projects and complex systems.
Be a technical architect, mentor, and driver towards the right technology.
Continue to evolve our architecture and build better software.
Be a major contributor to our agile and scrum practices.
Get involved with software engineering and collaborate with server, other client, and infrastructure technical team members to build the best solution.
Constantly look for ways to improve our products, code-base and development practices.
Write great code and help others write great code.
Drive technical decisions in the organization.

What You'll Need To Succeed
7+ years' experience under your belt developing performance-critical applications that run in a production environment using Scala, Java or C#.
Experience in leading projects, initiatives and/or teams, with full ownership of the systems involved.
Data platforms like SQL, Cassandra or Hadoop; you understand that different applications have different data requirements.
Good understanding of algorithms and data structures.
Strong coding ability.
Passion for the craft of software development and constant work to improve your knowledge and skills.
Excellent verbal and written English communication skills.

It's Great If You Have
Experience with Scrum/Agile development methodologies.
Experience building large-scale distributed products.
Core engineering infrastructure tools like Git for source control, TeamCity for continuous integration and Puppet for deployment.
Hands-on experience working with technologies like queueing systems (Kafka, RabbitMQ, ActiveMQ, MSMQ), Spark, Hadoop, NoSQL (Cassandra, MongoDB), the Play framework and the Akka library.

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics.
We will keep your application on file so that we can consider you for future vacancies and you can always ask to have your details removed from the file. For more details please read our privacy policy.
To all recruitment agencies: Agoda does not accept third party resumes. Please do not send resumes to our jobs alias, Agoda employees or any other organization location. Agoda is not responsible for any fees related to unsolicited resumes.
Posted 1 month ago
3 - 7 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the role:
Responsible for guiding teams towards continuous integration and continuous deployment; the candidate must therefore have extensive knowledge of automation tools and their application, such as Gradle, Git, Jenkins, Bamboo, Docker, Kubernetes, Puppet Enterprise, Nagios, Chef, and Ansible.

Expectations/Requirements
* 3 to 7 years of experience.
* Linux OS/application installation and configuration.
* Shell/Bash/Python scripting.
* Experience in building, deploying and operating infrastructure and applications.
* Cloud experience: AWS (Azure/Google Cloud).
* CI/CD tools: Kubernetes, Jenkins/Bamboo/GitLab, Chef/Puppet/Ansible, Maven/Nexus.
* Ability to oversee and mentor junior software developers, as well as report to management.
* Ability to ensure smooth software deployment by writing script updates and running diagnostics.

Superpowers/Skills that will help you succeed in this role
* High level of drive, initiative and self-motivation
* Ability to take internal and external stakeholders along
* Understanding of technology and user experience
* Love for simplifying
* Growth mindset
* Willingness to experiment and improve continuously

Why join us
A collaborative, output-driven program that brings cohesiveness across businesses through technology
Improve the average revenue per user by increasing cross-sell opportunities
Solid 360-degree feedback from your peer teams on your support of their goals
Respect that is earned, not demanded, from your peers and manager.

Compensation:
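The last two requirement bullets (script updates and deployment diagnostics) typically amount to small scripts that run on the target hosts. A minimal, illustrative Python sketch, assuming a Linux host with systemd; the service names and disk threshold are hypothetical, not part of the posting:

```python
#!/usr/bin/env python3
"""Illustrative post-deployment diagnostics for a Linux host (sketch only)."""
import shutil
import subprocess
import sys

# Hypothetical services and disk threshold; adjust for a real environment.
SERVICES = ["nginx", "myapp"]
MIN_FREE_DISK_GB = 5

def service_active(name: str) -> bool:
    """Use systemctl to check whether a systemd unit is active."""
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", name], check=False
    )
    return result.returncode == 0

def free_disk_gb(path: str = "/") -> float:
    """Free space on the given filesystem, in GiB."""
    return shutil.disk_usage(path).free / (1024 ** 3)

def main() -> int:
    problems = []
    for svc in SERVICES:
        if not service_active(svc):
            problems.append(f"service {svc} is not active")
    if free_disk_gb() < MIN_FREE_DISK_GB:
        problems.append(f"less than {MIN_FREE_DISK_GB} GiB free on /")
    if problems:
        print("DIAGNOSTICS FAILED:\n  " + "\n  ".join(problems), file=sys.stderr)
        return 1
    print("Diagnostics passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```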
Posted 1 month ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Requisition Number: 100676
Cloud Infrastructure Engineer II
Location: Bangalore (5 days work from office)
Shifts: 24x7 rotational

Insight at a Glance
14,000+ engaged teammates globally, with operations in 25 countries across the globe.
Received 35+ industry and partner awards in the past year.
$9.2 billion in revenue.
#20 on Fortune's World's Best Workplaces™ list.
#14 on Forbes World's Best Employers in IT – 2023.
#23 on Forbes Best Employers for Women in IT – 2023.
$1.4M+ total charitable contributions in 2023 by Insight globally.

Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organizations through complex digital decisions.

About The Role
We are looking for a Cloud and On-Prem Security Engineer with expertise in managing vulnerabilities, hardening servers, and ensuring the security of both cloud and on-premises environments. The ideal candidate should have hands-on experience with Orca Security for cloud security and Qualys for on-prem vulnerability management. Additionally, they should be proficient in patching using Puppet (cloud) and SCCM/MECM (on-prem), as well as server hardening across Windows and Linux environments.

As a Cloud Infra Engineer II, you will get to:
Vulnerability Management:
Manage and remediate vulnerabilities in Azure Cloud using Orca Security.
Perform on-prem vulnerability assessments and patching using Qualys.
Server Hardening & Security Compliance:
Implement security best practices for Windows Server (2016, 2019, 2022) and Linux (CentOS, RedHat, Ubuntu).
Ensure compliance with security standards and policies for both cloud and on-prem servers.
Patch Management:
Conduct monthly patching of Windows and Linux servers using Puppet for cloud-based patching and SCCM/MECM for on-prem patching.
Cloud & On-Prem Infrastructure Security:
Secure and manage Azure cloud resources.
Manage on-prem virtualization using Hypervisor and Failover Clustering.

Be Ambitious: This opportunity is not just about what you do today but also about where you can go tomorrow. As a Cloud Infra Engineer II, you are positioned for swift advancement within our organization through a structured career path. When you bring your hunger, heart, and harmony to Insight, your potential will be met with continuous opportunities to upskill, earn promotions, and elevate your career.

We are looking for a Cloud Infra Engineer II with:
4+ years of experience in cloud and on-prem security.
Strong understanding of server security hardening and vulnerability remediation.
Experience with compliance frameworks such as ISO 27001, NIST, CIS benchmarks, PCI-DSS, and OWASP security principles, ensuring adherence to industry security standards and best practices.
Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience with security tools and platforms: Orca Security (cloud), Qualys (on-prem), Puppet (cloud patching), SCCM/MECM (on-prem patching).
Operating systems: Windows Server 2016, 2019, 2022 or Linux (CentOS, RedHat, Ubuntu).
Infrastructure and cloud expertise: Azure Cloud security and administration; on-prem Hypervisor and Failover Cluster management (good to have).

What you can expect
We're legendary for taking care of you and your family and for helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include:
Freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
Medical Insurance
Health Benefits
Professional Development: Learning Platform and Certificate Reimbursement
Shift Allowance

The position described above provides a summary of some of the job duties required and what it would be like to work at Insight.

Internal Teammate Application Guidelines
Meet the minimum qualifications and requirements of the position;
Have completed twelve (12) months of service in their current position;
Not be under a disciplinary evaluation or suspension period;
Have satisfactory performance in their current position;
Have their current manager/supervisor recommendation.

Do you know someone who would make a great Insight teammate? Referrals are the best way to build quality teams – and a great way for you to earn a little extra cash. Contact Insight to find out how you can refer someone to this job.

Insight is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, sexual orientation or any other characteristic protected by law.
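Monthly patching at this scale usually starts with an inventory of what is pending on each host. As a rough sketch only (the use of `apt` is an assumption for a Debian/Ubuntu node and is not one of the platforms named above; RedHat/CentOS hosts would use `yum`/`dnf` instead), a small Python helper that reports pending upgrades might look like this:

```python
#!/usr/bin/env python3
"""Report packages with pending upgrades on a Debian/Ubuntu host (sketch only)."""
import subprocess
import sys

def pending_upgrades() -> list[str]:
    # `apt list --upgradable` prints one line per package with a newer version.
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.splitlines()
    # The first line is the "Listing..." header; the rest are package entries.
    return [line.split("/")[0] for line in lines[1:] if "/" in line]

def main() -> int:
    packages = pending_upgrades()
    if packages:
        print(f"{len(packages)} packages pending upgrade:")
        for name in packages:
            print(f"  {name}")
        return 1  # non-zero so a scheduled report job can flag the host
    print("Host is up to date")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In practice the report from each host would feed the Puppet or SCCM/MECM patching wave rather than apply anything itself.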
Posted 1 month ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know our Team:
In Agoda's Back End Engineering department, we build the scalable, fault-tolerant systems and APIs that host our core business logic. Our systems cover all major areas of our business: inventory and pricing, product information, customer data, communications, partner data, booking systems, payments, and more. These mission-critical systems change frequently with dozens of releases per day, so we must employ state-of-the-art CI/CD and testing techniques to make sure everything works without any downtime. We also ensure that our systems are self-healing, responding gracefully to extreme loads or unexpected input. To accomplish this, we use state-of-the-art languages like Scala and Go, data technologies like Kafka and Aerospike, and agile development practices. Most importantly, we hire great people from all around the world and empower them to be successful. Whether it's building new projects like Flights and Packages or reimagining our existing business, you'll make a big impact as part of the Back End Engineering team.

The Opportunity:
Agoda is looking for developers to work on mission-critical systems involving the design and development of APIs that serve millions of user search requests a day.

In this Role, you'll get to:
Lead development of features, experiments, technical projects and complex systems
Be a technical architect, mentor, and driver towards the right technology
Continue to evolve our architecture and build better software
Be a major contributor to our agile and scrum practices
Get involved with software engineering and collaborate with server, other client, and infrastructure technical team members to build the best solution
Constantly look for ways to improve our products, code-base and development practices
Write great code and help others write great code
Drive technical decisions in the organization

What You'll Need To Succeed:
7+ years' experience developing performance-critical applications that run in a production environment using Scala, Java or C#
Experience in leading projects, initiatives and/or teams, with full ownership of the systems involved
Experience with data platforms like SQL, Cassandra or Hadoop, and an understanding that different applications have different data requirements
Good understanding of algorithms and data structures
Strong coding ability
Passion for the craft of software development and constant work to improve your knowledge and skills
Excellent verbal and written English communication skills

It's Great If You Have:
Experience with Scrum/Agile development methodologies
Experience building large-scale distributed products
Familiarity with core engineering infrastructure tools like Git for source control, TeamCity for Continuous Integration and Puppet for deployment
Hands-on experience with technologies such as queueing systems (Kafka, RabbitMQ, ActiveMQ, MSMQ), Spark, Hadoop, NoSQL (Cassandra, MongoDB), the Play framework, and the Akka library

#india #newdelhi #Bangalore #Bengaluru #Pune #Hyderabad #Chennai #Kolkata #Lucknow #IT #ENG #4 #Mumbai #Delhi #Noida

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics.

We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

To all recruitment agencies: Agoda does not accept third-party resumes. Please do not send resumes to our jobs alias, Agoda employees or any other organization location. Agoda is not responsible for any fees related to unsolicited resumes.
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary
Track: Operational Support
Role: SRE
Count: 2
Skills: AWS, SignalFx, Splunk, PagerDuty, xMonitor, CloudWatch, DynamoDB, GitLab, SNS/SQS, Jenkins, Puppet
Location: Pune/Hyderabad
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
NVIDIA is looking for a world-class engineer to join its multifaceted and fast-paced Infrastructure, Planning and Processes organization, where you will work as a Senior DevOps and SRE Engineer. The position is part of a fast-paced crew that develops and maintains sophisticated build and test environments for a multitude of hardware platforms, both NVIDIA GPUs and Tegra processors, along with various operating systems (Windows/Linux/Android). The team works with various other business units within NVIDIA Software, such as Graphics Processors, Mobile Processors, Deep Learning, Artificial Intelligence, Robotics and Driverless Cars, to cater to their infrastructure and systems needs.

What You'll Be Doing
End-to-end implementation of the Kubernetes architecture: design, deployment, hardening, networking, sizing and scaling (a minimal illustrative sketch follows this listing).
Implementing high-availability clusters and disaster recovery solutions.
Strong system administration using configuration-as-code and infrastructure-as-code with tools such as Ansible, Puppet, Chef and Terraform.
Design and implement logging and monitoring solutions to gain more insight into application and system health.
Implement critical metrics using various analytics methods and dashboards.
Craft and develop tools needed for automating workflows.
Reuse AI techniques to extract useful signals about machines and jobs from the data generated.
Take part in prototyping, crafting and developing cloud infrastructure for NVIDIA.
Participate in on-call support and critical-issue coverage as an SRE engineer.

What We Need To See
Solid programming background in Python/Go and/or similar scripting languages.
Excellent debugging, problem-solving and analytical skills.
Strong understanding of the architectural requirements and development processes involved in building reliable, robust, scalable data products and pipelines.
Proficiency in configuration management and IaC tools like Ansible, Puppet, Chef and Terraform.
Strong background with GitLab, Jenkins, Flux, ArgoCD and/or other tools to build secure CI/CD systems.
Strong expertise in Kubernetes architecture, networking, RBAC, and persistent storage solutions like Trident, Ceph, EBS, Longhorn, etc.
Proficiency in secret management tools like HashiCorp Vault, AWS Secrets Manager, etc.
Proficiency in data analytics/visualization and monitoring tools like Kibana, Grafana, Splunk, Zabbix, Prometheus and/or similar systems.
5+ years of proven experience.
Bachelor's or master's degree in Computer Science, Software Engineering, or equivalent experience.

Ways To Stand Out From The Crowd
Thrives in a multi-tasking environment with constantly evolving priorities.
Prior experience with a large-scale operations team.
Experience with using and improving data centers.
Expertise with Windows server infrastructure.
Outstanding interpersonal skills and communication with all levels of management.
Ability to break complex problems into simple sub-problems and then reuse available solutions to implement most of them.
Ability to design simple systems that work efficiently without needing much support.
Ability to leverage AI/ML to proactively detect and resolve incidents, automate alert triaging and log analysis, and automate repetitive workflows.

With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you.

JR1997450
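As referenced in the Kubernetes responsibilities above, here is a minimal sketch of the kind of cluster health check such a role automates. It assumes the official `kubernetes` Python client is installed and a kubeconfig with read access is available; it is purely illustrative and not an NVIDIA tool.

```python
#!/usr/bin/env python3
"""List pods that are not running or not fully ready (illustrative sketch only)."""
from kubernetes import client, config  # pip install kubernetes (assumed)

def unhealthy_pods():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        statuses = pod.status.container_statuses or []
        all_ready = all(cs.ready for cs in statuses)
        # Flag pods that are neither Running/Succeeded, or Running but not fully ready.
        if phase not in ("Running", "Succeeded") or (phase == "Running" and not all_ready):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```

A check like this would normally feed an alerting pipeline (Prometheus, Grafana, or a pager integration) rather than be run by hand.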
Posted 1 month ago
0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
We are searching for a skilled and experienced DevOps Engineer to join our growing team. In this role, you will play a pivotal role in bridging the gap between development and operations, ensuring a smooth and efficient software delivery lifecycle. You will be responsible for automating processes, building and maintaining infrastructure, and collaborating with developers to deploy code changes seamlessly.

Responsibilities:
Design, implement, and maintain CI/CD pipelines to automate software delivery processes
Utilize Infrastructure-as-Code (IaC) tools like Terraform or Ansible to provision and manage infrastructure
Configure and manage cloud platforms (AWS, Azure, GCP)
Implement and maintain configuration management tools (Chef, Puppet, Ansible)
Automate infrastructure deployments and configurations
Monitor system performance and troubleshoot issues
Collaborate with developers and operations teams to ensure smooth deployments
Stay up-to-date on the latest DevOps tools and technologies
Identify opportunities for process improvement and automation

Qualifications:
4+ years of experience as a DevOps Engineer or in a related role
Proven experience with CI/CD tools (Jenkins, GitLab CI/CD, etc.)
Solid understanding of Infrastructure-as-Code (IaC) principles
Experience with cloud platforms (AWS, Azure, GCP)
Experience with configuration management tools (Chef, Puppet, Ansible)
Strong scripting skills (Bash, Python, etc.)
Experience with containerization technologies (Docker, Kubernetes) (must)
Excellent communication and collaboration skills
Ability to work independently and as part of a team
Problem-solving and analytical skills

Preferred Qualifications:
Certifications: Relevant certifications from AWS, Azure, GCP, Kubernetes Administrator, or similar.
Agile Methodologies: Familiarity with Agile/Scrum methodologies.
Networking: Understanding of networking concepts and technologies.
Database Management: Experience with database management and optimization.

Job Location: GIFT City, Gandhinagar
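The IaC responsibilities above are commonly wired into CI so that infrastructure changes are surfaced for review before they are applied. A minimal, hypothetical Python sketch (assuming the Terraform CLI is installed and the working directory holds the configuration; `-detailed-exitcode` makes `terraform plan` return 2 when changes are pending):

```python
#!/usr/bin/env python3
"""Gate a CI job on `terraform plan` results (illustrative sketch only)."""
import subprocess
import sys

def terraform_plan(workdir: str = ".") -> int:
    """Run `terraform plan`; exit code 0 = no changes, 2 = changes pending, 1 = error."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
        check=False,
    )
    return result.returncode

def main() -> int:
    code = terraform_plan()
    if code == 0:
        print("No infrastructure changes pending.")
        return 0
    if code == 2:
        print("Changes detected; review required before `terraform apply`.")
        return 0  # pass the stage, but surface the plan for approval
    print("terraform plan failed.", file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(main())
```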
Posted 1 month ago
0 years
0 Lacs
Thane, Maharashtra, India
On-site
Education: B.E. Computer Science/IT degree (or any other engineering discipline)
Work timings: 2:00 PM to 11:00 PM IST

Position Requirements:
3+ years' experience in supporting web-based applications for large-scale and high-traffic websites
4+ years' experience working with UNIX/Linux or Windows servers
3+ years' experience working with containerization technologies such as Docker and Kubernetes
Certified Kubernetes Administrator (CKA) preferred
Experience in troubleshooting and performance tuning of distributed systems and web applications
Fundamental understanding of TCP/IP, DNS, HTTP and load balancing concepts
Experience working with load balancers such as F5's BIG-IP
Experience with configuration management tools like Chef, Puppet, Ansible or SaltStack
Experience in scripting languages like Ruby, Python, Bash or PowerShell
Experience working with DevOps open-source tools such as ArgoCD, Elastic (ELK Stack), HashiCorp tools (Terraform, Consul, Vault or Nomad), RabbitMQ, SOLR, Redis and Kafka
Fundamental understanding of CI/CD, with experience in creating CI/CD pipelines with tools such as GitLab, Jenkins, CircleCI
Ability to communicate high-level as well as detailed technical concepts and implementations with peers and colleagues

Role & Responsibilities:
Manage and improve Infrastructure as Code tools and automated provisioning workflows
Administer on-premise Kubernetes and private cloud platforms
Participate in a fast-paced, agile environment
Develop your abilities via provided training opportunities
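Supporting high-traffic sites behind a load balancer usually involves quick checks of each pool member. A minimal illustrative Python sketch (the backend addresses, port, and health-check path are hypothetical):

```python
#!/usr/bin/env python3
"""Check TCP and HTTP health of pool members behind a load balancer (sketch only)."""
import socket
import urllib.request

# Hypothetical backend pool members and health-check path.
POOL = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
PORT = 8080
PATH = "/healthz"

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_ok(host: str, port: int, path: str, timeout: float = 3.0) -> bool:
    """Return True if the backend answers the health path with HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}{path}", timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:
        return False

if __name__ == "__main__":
    for host in POOL:
        tcp = tcp_open(host, PORT)
        http = http_ok(host, PORT, PATH) if tcp else False
        print(f"{host}:{PORT} tcp={'up' if tcp else 'down'} http={'ok' if http else 'fail'}")
```

The same idea underlies the health monitors configured on the load balancer itself; a script like this is mainly useful for ad-hoc troubleshooting from a jump host.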
Posted 1 month ago
The demand for professionals skilled in Puppet configuration management software is on the rise in India. Puppet is widely used in the IT industry for automating infrastructure management tasks, making it an essential skill for job seekers in the technology sector.
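For readers new to the tool, the kind of task Puppet automates can also be triggered and checked from a small script. A purely illustrative Python sketch, assuming the Puppet agent CLI is installed on the node (`--test --noop` runs the agent once in dry-run mode, and `--detailed-exitcodes` distinguishes "in sync" from "drift detected"):

```python
#!/usr/bin/env python3
"""Run a dry-run Puppet agent pass and report whether changes are pending (sketch)."""
import subprocess
import sys

def puppet_noop_run() -> int:
    """Dry-run the Puppet agent.

    With --detailed-exitcodes, exit code 0 means the node matches its catalog,
    2 means changes would have been applied, and other codes indicate failures.
    """
    result = subprocess.run(
        ["puppet", "agent", "--test", "--noop", "--detailed-exitcodes"],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    code = puppet_noop_run()
    if code == 0:
        print("Node is in sync with its catalog.")
    elif code == 2:
        print("Drift detected: changes would be applied on a real run.")
    else:
        print(f"Puppet run failed (exit code {code}).", file=sys.stderr)
    sys.exit(0 if code in (0, 2) else 1)
```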
India's major IT hubs are known for their thriving technology industries and have a high demand for Puppet professionals.
The average salary range for Puppet professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can command salaries ranging from INR 10-15 lakhs per annum.
In the field of Puppet, a typical career path may involve starting as a Junior Puppet Developer, advancing to a Senior Puppet Developer, and eventually becoming a Puppet Tech Lead. With experience and expertise, professionals can also explore roles such as Puppet Architect or Puppet Consultant.
In addition to Puppet expertise, professionals in this field are often expected to have knowledge of related tools and technologies such as Ansible, Chef, Docker, Kubernetes, and scripting languages like Python or Ruby.
As the demand for Puppet professionals continues to grow in India, job seekers can enhance their career prospects by acquiring proficiency in Puppet and related technologies. By preparing effectively and showcasing their skills confidently during job interviews, individuals can secure rewarding opportunities in the dynamic field of Puppet configuration management. Good luck on your job search!