0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us: People Tech Group is a leading provider of Enterprise Solutions, Digital Transformation, Data Intelligence, and Modern Operations services. Founded in 2006 in Redmond, Washington, USA, the company has since expanded to India, where we operate out of Hyderabad, Bangalore, Pune, and Chennai with an overall strength of 1,500+ employees. We have a presence across four countries: the US, Canada, India, and Costa Rica. Recently, People Tech Group was acquired by Quest Global, one of the world's largest engineering solutions providers, with 20,000+ employees, 70+ global delivery centers, and headquarters in Singapore. Going forward, we are all part of Quest Global.

Position: DevOps Engineer
Company: People Tech Group
Experience: 5 yrs
Location: Bengaluru

Key Responsibilities:
- Provision and secure cloud infrastructure using Terraform / AWS CloudFormation.
- Fully automate GitLab CI/CD pipelines for application builds, tests, and deployments, integrated with Docker containers and AWS ECS/EKS.
- Build continuous integration workflows with automated security checks, testing, and performance validation.
- Maintain a self-service developer portal providing access to system health, deployment status, logs, and documentation for a seamless developer experience.
- Create AWS CloudWatch dashboards and CloudWatch alarms for real-time monitoring of system health, performance, and availability.
- Set up centralized logging via CloudWatch Logs for application performance analysis and troubleshooting.
- Produce complete documentation for all automated systems, infrastructure code, CI/CD pipelines, and monitoring setups.
- Monitoring with Splunk: ability to create dashboards and alerts and integrate with tools like MS Teams.

Required Skills:
- Master's or bachelor's degree in Computer Science/IT or equivalent.
- Expertise in shell scripting.
- Familiarity with Windows and Linux operating systems.
- Experience with Git for version control.
- Ansible (good to have).
- Familiarity with CI/CD pipelines (GitLab).
- Docker, Kubernetes, OpenShift; strong Kubernetes administration skills.
- Experience with Infrastructure as Code: Terraform and AWS CloudFormation.
- Familiarity with AWS services such as EC2, Lambda, Fargate, VPC, S3, ECS, EKS.
- Nice to have: familiarity with observability and monitoring tools such as OpenTelemetry, Grafana, the ELK stack, and Prometheus.
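The CloudWatch alarm work this listing describes is commonly scripted. As a minimal, hedged sketch (not taken from the listing itself), the following Python/boto3 snippet creates a CPU-utilization alarm for a hypothetical ECS service; every name in it is a placeholder and the listing does not prescribe this exact approach.

```python
# Hedged sketch: create a CPU-utilization alarm for a hypothetical ECS service.
# Assumes boto3 is installed and AWS credentials are configured; all names
# (alarm, cluster, service) are illustrative placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-app-high-cpu",          # placeholder alarm name
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "example-cluster"},   # placeholder
        {"Name": "ServiceName", "Value": "example-service"},   # placeholder
    ],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=3,        # three consecutive breaches before alarming
    Threshold=80.0,             # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # AlarmActions would normally point at an SNS topic that notifies the team
)
```

In practice such alarms are usually defined in Terraform or CloudFormation alongside the rest of the infrastructure; the boto3 form above is just the quickest way to show the moving parts.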
Posted 1 day ago
8.0 - 15.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Principal Consultant - VMware/Linux Data Center Operations Engineer
Location: Bangalore/Hyderabad/Delhi NCR/Kolkata
Experience: 8-15 years
Immediate joiners preferred.
Mandatory Skills: Linux, VMware, AWS, Kernel, Ansible
Kindly share your resume to nsenthil.kumar@genpact.com with the subject "VMware/Linux SME", along with your notice period.

In this role, you will assist in managing and maintaining VMware and Linux infrastructure within our data center environment. This role supports the availability, reliability, and performance of our virtualized and Linux-based systems.

Responsibilities:
- Assist in managing and maintaining VMware infrastructure, including vSphere, ESXi hosts, and virtual machines.
- Support the administration of Linux servers, ensuring availability, performance, and security.
- Monitor system performance and availability, performing regular maintenance tasks to ensure optimal operation of VMware and Linux environments.
- Provide L1/L2 support for VMware and Linux-related incidents and service requests, escalating issues as needed for timely resolution.
- Use scripting languages (e.g., Bash, PowerShell) to automate routine tasks and improve efficiency.
- Maintain accurate documentation of system configurations, operational procedures, and troubleshooting steps.
- Generate reports on system performance and health metrics.
- Implement and adhere to security policies, procedures, and best practices for VMware and Linux environments, ensuring compliance with industry standards and regulatory requirements.
- Work collaboratively with network engineers, system administrators, and other IT teams to ensure integrated and efficient operations of VMware and Linux systems.
- Assist in managing changes to the VMware and Linux environments through rigorous testing, coordination, and documentation to minimize risks and disruptions.

Qualifications we seek in you!
Minimum Qualifications / Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Experience supporting VMware and Linux systems within a data center or IT infrastructure environment.
- Familiarity with VMware technologies such as vSphere, ESXi, and vCenter.
- Basic understanding of Linux operating systems (e.g., Red Hat, CentOS, Ubuntu).
- Experience with system management and monitoring tools.
- Knowledge of scripting languages (e.g., Bash, PowerShell) for automation and task execution.

Preferred Qualifications / Skills:
- Certifications related to VMware (e.g., VCP - VMware Certified Professional) and Linux (e.g., RHCSA - Red Hat Certified System Administrator).
- Experience with configuration management tools (e.g., Ansible, Puppet).
- Familiarity with ITIL or other IT service management frameworks.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work effectively in a team environment and collaborate across departments.
- Customer-focused with a commitment to delivering quality support services.
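The "automate routine tasks" responsibility above is the kind of thing that starts very small. A minimal, hedged sketch follows, using only the Python standard library; the mount points and the 85% threshold are illustrative assumptions, not anything the listing specifies.

```python
# Hedged sketch of routine Linux maintenance automation: a disk-usage check.
# Uses only the Python standard library; mount points and threshold are
# illustrative and should be adjusted to the hosts being monitored.
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]   # placeholder list of mount points
THRESHOLD_PERCENT = 85

def check_disk_usage(paths, threshold):
    """Return (path, percent_used) tuples for paths above the threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)            # total, used, free in bytes
        percent_used = usage.used / usage.total * 100
        if percent_used >= threshold:
            alerts.append((path, round(percent_used, 1)))
    return alerts

if __name__ == "__main__":
    for path, pct in check_disk_usage(MOUNT_POINTS, THRESHOLD_PERCENT):
        print(f"WARNING: {path} is {pct}% full")
```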
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
You may be thinking: what is a Junior Solution Architect at Red Hat? As part of the customer account team, a Junior Solution Architect at Red Hat is a technical person who collaborates with customers and helps them to be successful by using Red Hat products and services. The role of a Junior Solution Architect is to understand our customers' challenges and requirements and address them with Red Hat technologies and solutions. We are looking for passionate individuals early in their technical career and based in Bangalore, India to help us unlock the world's potential.

What You Will Do
During your first year as a Junior Solution Architect, you will engage in a cohort-based program starting in September 2025. Through this experience, you will:
- Learn all about Red Hat and how we support our customers to address their technical, developmental, and strategic business challenges with our portfolio and services
- Build a strong support network of your peers, manager, mentor, and other Red Hatters within and outside of your team
- Develop technical skills in the Red Hat portfolio that span cloud, automation, and AI solutions by completing training programs and attaining industry-recognized certifications
- Apply your technical skills as you engage in meaningful customer interactions, experiences, and job shadowing opportunities
- Shadow and assist more experienced Solution Architects as they deliver proof of value to our customers and partners through presentations, demonstrations, workshops, and pilot projects
- Enhance your professional capabilities through real-world experiences working directly with customers and participating in skill development opportunities
- Gain understanding of the processes and tools associated with enterprise-level solution architecture
- Focus on your personal and professional development as you grow your career at Red Hat
- Support the Sales organization's goals to deliver customer business value, advance opportunities by obtaining technical wins, educate customers, and increase sales pipeline and revenue

What You Will Bring
- Passion and curiosity for open source technology and a desire to build a career within the tech industry and expertise in emerging technologies
- Strong technical skills in computer science, IT, AI, or related fields gained through university programs, up-skilling bootcamps, certificate programs, military experience, etc.
- Demonstrated experience applying technical, analytical, and problem-solving skills in an enterprise IT-related project; direct experience with Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat Ansible Automation Platform, and related technologies is preferred
- Motivation to engage in self-directed learning on new technologies
- Willingness to travel to customer sites depending on assignments and to work both in-person from a Red Hat office location and remotely
- Effective communication (written and verbal) and presentation skills
- Ability to work independently and collaboratively with internal teams and external customers
- Full professional proficiency in written and spoken English

Thank you for your interest in Red Hat! To learn more about Life at Red Hat, follow us on Instagram.

About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 1 day ago
7.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Senior Cloud DevOps Engineer
Experience: 7-15 years
Notice Period: Immediate to 15 days
Location: Hyderabad

We are seeking a highly skilled GCP DevOps Engineer to join our dynamic team.

Job Description
- Deep GCP Services Mastery: profound understanding and hands-on experience with core GCP services (Compute Engine, Cloud Run, Cloud Storage, VPC, IAM, Cloud SQL, BigQuery, Cloud Operations Suite).
- Infrastructure as Code (IaC) & Configuration Management: expertise in Terraform for GCP, and proficiency with tools like Ansible for automating infrastructure provisioning and management.
- CI/CD Pipeline Design & Automation: skill in building and managing sophisticated CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI) for applications and infrastructure on GCP.
- Containerisation & Orchestration: advanced knowledge of Docker and extensive experience deploying, managing, and scaling applications on Cloud Run and/or Google Kubernetes Engine (GKE).
- API Management & Gateway Proficiency: experience with API design, security, and lifecycle management, utilizing tools like Google Cloud API Gateway or Apigee for robust API delivery.
- Advanced Monitoring, Logging & Observability: expertise in implementing and utilizing comprehensive monitoring solutions (e.g., Google Cloud Operations Suite, Prometheus, Grafana) for proactive issue detection and system insight.
- DevSecOps & GCP Security Best Practices: strong ability to integrate security into all stages of the DevOps lifecycle, implement GCP security best practices (IAM, network security, data protection), and ensure compliance.
- Scripting & Programming for Automation: proficient in scripting languages (Python, Bash, Go) to automate operational tasks, build custom tools, and manage infrastructure programmatically.
- GCP Networking Design & Management: in-depth understanding of GCP networking (VPC, Load Balancing, DNS, firewalls) and the ability to design secure and scalable network architectures.
- Application Deployment Strategies & Microservices on GCP: knowledge of various deployment techniques (blue/green, canary) and experience deploying and managing microservices architectures within the GCP ecosystem.
- Leadership, Mentorship & Cross-Functional Collaboration: proven ability to lead and mentor DevOps teams, drive technical vision, and effectively collaborate with development, operations, and security teams.
- System Architecture, Performance Optimization & Troubleshooting: strong skills in designing scalable and resilient systems on GCP, identifying and resolving performance bottlenecks, and complex troubleshooting across the stack.

Regards,
ValueLabs
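The listing's "scripting for automation" line covers everyday tasks such as moving build artifacts around GCP. A minimal, hedged sketch follows; it assumes the `google-cloud-storage` package and application-default credentials, and the bucket and object names are placeholders rather than anything from the listing.

```python
# Hedged sketch of everyday GCP automation in Python: upload a build artifact
# to Cloud Storage. Requires google-cloud-storage and ambient GCP credentials;
# bucket, file, and object names are illustrative placeholders.
from google.cloud import storage

def upload_artifact(bucket_name, source_path, destination_blob):
    client = storage.Client()              # picks up application-default credentials
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination_blob)
    blob.upload_from_filename(source_path)
    print(f"Uploaded {source_path} to gs://{bucket_name}/{destination_blob}")

if __name__ == "__main__":
    upload_artifact("example-artifacts-bucket", "app.tar.gz", "releases/app-1.0.0.tar.gz")
```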
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
AJA Consulting Services LLP, founded by Phaniraj Jaligama, is committed to empowering youth and creating employment opportunities in both IT and non-IT sectors. With a focus on skill development, AJA provides exceptional resource augmentation, staffing solutions, interns pool management, and corporate campus engagements for a diverse range of clients. Through its flagship CODING TUTOR platform, AJA trains fresh graduates and IT job seekers in full-stack development, enabling them to transition seamlessly into industry roles. Based in Hyderabad, AJA operates from a state-of-the-art facility in Q City.

Role Description
We're hiring a Senior DevOps/Site Reliability Engineer with 5–6 years of hands-on experience in managing cloud infrastructure, CI/CD pipelines, and Kubernetes environments. You'll also mentor junior engineers and lead real-time DevOps initiatives.

🔧 What You'll Do
* Build and manage scalable, fault-tolerant infrastructure (AWS/GCP/Azure)
* Automate CI/CD with Jenkins, GitHub Actions or CircleCI
* Work with IaC tools: Terraform, Ansible, CloudFormation
* Set up observability with Prometheus, Grafana, Datadog
* Mentor engineers on best practices, tooling, and automation

✅ What You Bring
* 5–6 years in DevOps/SRE roles
* Strong scripting (Bash/Python/Go) and automation skills
* Kubernetes & Docker expertise
* Experience in production monitoring, alerting, and RCA
* Excellent communication and team mentorship skills

💡 Bonus: GitOps, Service Mesh, ELK/EFK, Vault

📩 Apply now by emailing your resume to a.malla@ajacs.in
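For the "observability with Prometheus, Grafana" item above, custom exporters are a common starting point. A minimal, hedged sketch follows; it requires the `prometheus_client` package, and the metric name, port, and the "queue depth" being measured are illustrative assumptions, not details from the listing.

```python
# Hedged sketch of a tiny custom Prometheus exporter. Requires prometheus_client;
# the metric, port, and the measured value are illustrative placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("example_job_queue_depth", "Number of jobs waiting in the queue")

def read_queue_depth():
    # Placeholder for a real check (database query, API call, etc.).
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(8000)          # Prometheus scrapes http://host:8000/metrics
    while True:
        QUEUE_DEPTH.set(read_queue_depth())
        time.sleep(15)
```

A Grafana panel would then graph `example_job_queue_depth` and an alert rule would fire when it stays above a chosen threshold.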
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Senior DevOps Engineer to join our Life Sciences & Healthcare DevOps team. This is an exciting opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you love coding in Python or any scripting language, have experience with Linux, and ideally have worked in a cloud environment, we'd love to hear from you! We specialize in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have experience in these areas, we'd be eager to connect with you.

About You – Experience, Education, Skills, and Accomplishments
- At least 7+ years of professional software development experience and 5+ years as a DevOps Engineer or in a similar role, with experience on various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Spinnaker, Docker, Packer, Ansible, CloudFormation, Terraform, or similar CI/CD orchestrator tools.
- At least 3+ years of AWS experience managing resources in some subset of the following services: S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue, and Lambda.
- 5+ years of experience with Bash/Python scripting.
- Wide knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols.
- Willingness to be on-call as needed for critical production issues.
- Good understanding of the SDLC, patching, releases, and basic systems administration activities.
It would be great if you also had:
- AWS Solution Architect certification.
- Python programming experience.

What will you be doing in this role?
- Design, develop, and maintain the product's cloud infrastructure architecture, including microservices, as well as developing infrastructure-as-code and automated scripts meant for building or deploying workloads in various environments through CI/CD pipelines.
- Collaborate with the rest of the Technology engineering team, the cloud operations team, and application teams to provide end-to-end infrastructure setup.
- Design and deploy secure, resilient, and scalable Infrastructure as Code per our developer requirements while upholding the InfoSec and Infrastructure guardrails through code.
- Keep up with industry best practices, trends, and standards; identify automation opportunities and design and develop automation solutions that improve operations, efficiency, security, and visibility.
- Take ownership and accountability of the performance, availability, security, and reliability of the products running across public cloud and multiple regions worldwide.
- Document solutions and maintain technical specifications.

Product you will be developing
The products rely on container orchestration (AWS ECS, EKS), Jenkins, various AWS services (such as OpenSearch, S3, IAM, EC2, RDS, VPC, Route53, Lambda, CloudFront), Databricks, Datadog, and Terraform, and you will be working to support the development team building them.

About the Team
The Life Sciences & Healthcare Content DevOps team mainly focuses on DevOps operations on production infrastructure related to Life Sciences & Healthcare Content products. Our team consists of five members and reports to the DevOps Manager. As a team we provide DevOps support for over 40 different application products that are internal to Clarivate and are the source for customer-facing products. The team is also responsible for the change process on the production environment, incident management and monitoring, and handling customer-raised and internal user service requests.

Hours of Work
Shift timing: 12 PM to 9 PM. Must provide on-call support during non-business hours per week based on team bandwidth.

At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
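The Clarivate listing above mentions upholding "InfoSec and Infrastructure guardrails through code". As a hedged illustration of that idea (not the team's actual tooling), the following Python/boto3 sketch lists EC2 instances missing a required tag; the tag key and region are illustrative assumptions.

```python
# Hedged sketch of a small guardrail check: find EC2 instances missing an
# "Owner" tag. Assumes boto3 and AWS credentials; tag key and region are
# illustrative placeholders.
import boto3

REQUIRED_TAG = "Owner"

def find_untagged_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"{instance_id} is missing the {REQUIRED_TAG} tag")
```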
Posted 1 day ago
55.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Job Description
- Design, implement, and maintain reusable and efficient Ansible playbooks for automating infrastructure and application deployments.
- Automate provisioning, configuration management, and patching of systems to ensure scalability and consistency.
- Troubleshoot and resolve issues related to Ansible playbooks, infrastructure, and deployments.
- Monitor and optimize infrastructure and automation scripts for performance, security, and scalability.
- Document automated processes, systems architecture, and troubleshooting procedures.
- Promote DevOps best practices and coding standards across the team.

Primary Skills
- Ansible (Playbooks, Roles, Modules)
- Ansible Tower / AWX
- Python (for scripting and automation)

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
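Troubleshooting Ansible playbooks, as described above, often starts with a dry run. A minimal, hedged sketch follows: Python wrapping `ansible-playbook` in check mode so it can be called from a pipeline stage. It assumes `ansible-playbook` is on PATH; the playbook and inventory file names are placeholders.

```python
# Hedged sketch: run an Ansible dry run ("check mode") from Python so it can
# be invoked as a pipeline stage. Assumes ansible-playbook is installed;
# playbook and inventory paths are illustrative placeholders.
import subprocess
import sys

def run_playbook_check(playbook, inventory):
    cmd = [
        "ansible-playbook",
        playbook,
        "-i", inventory,
        "--check",      # report what would change without changing it
        "--diff",       # show file/template diffs for changed tasks
    ]
    result = subprocess.run(cmd)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_playbook_check("site.yml", "inventories/dev/hosts.ini"))
```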
Posted 1 day ago
12.0 - 18.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title / Role: Technical Architect
Key Skills: Ansible, Terraform, YAML, Network Operations
Experience: 12-18 years
Location: Greater Noida

We at Coforge are seeking a "Technical Architect" with the following skill set:

Key Responsibilities:
- Build new DC environments using a Terraform/Ansible/YAML codebase.
- Build and maintain CI/CD pipelines using GitLab.
- Develop Ansible playbooks for POAP and network configuration tasks.
- Use YAML for structured configuration and parameter-driven deployment.
- Collaborate with internal teams to align delivery with standards and timelines.
- Suggest and implement improvements to tooling, workflows, and automation strategies.
- Support US and UK hours.

Required Skills:
- Strong hands-on experience with Terraform and Ansible.
- Proficiency with GitLab CI/CD and pipeline automation.
- Experience working with YAML in config-as-code environments.
- Good understanding of networking fundamentals (e.g., VLANs, routing, device provisioning).
- Python scripting skills.
- Self-driven with a contractor mindset and excellent communication skills.
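"Parameter-driven deployment" with YAML usually means generating or templating structured parameter files that playbooks or Terraform modules consume. A hedged sketch follows; it requires PyYAML, and the schema (site, vlans, devices) is purely illustrative rather than anything the listing defines.

```python
# Hedged sketch of parameter-driven config-as-code: render a per-site YAML
# parameter file for network device provisioning. Requires PyYAML; the schema
# and values are illustrative placeholders.
import yaml

def build_site_params(site, vlan_ids, devices):
    params = {
        "site": site,
        "vlans": [{"id": vid, "name": f"vlan{vid}"} for vid in vlan_ids],
        "devices": [{"hostname": name, "role": "leaf"} for name in devices],
    }
    return yaml.safe_dump(params, sort_keys=False)

if __name__ == "__main__":
    rendered = build_site_params("dc1", [110, 120], ["leaf01", "leaf02"])
    with open("dc1-params.yml", "w") as handle:
        handle.write(rendered)
    print(rendered)
```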
Posted 1 day ago
4.0 years
0 Lacs
India
Remote
This role is for one of Weekday's clients.
Salary range: Rs 1200000 - Rs 1600000 (i.e., INR 12-16 LPA)
Min Experience: 4 years
Location: Remote (India)
Job Type: Full-time

Requirements

About the Role:
We are looking for a skilled and experienced Java Developer with a strong background in migration projects and hands-on experience with Microsoft Azure to join our dynamic development team. The ideal candidate will play a critical role in modernizing legacy systems and ensuring seamless migration to cloud-native environments. If you're passionate about designing robust, scalable applications and navigating cloud-based transformations, we'd love to hear from you. As part of this role, you will be involved in analyzing legacy Java applications, developing strategies for their migration, implementing enhancements, and deploying them on Azure cloud infrastructure. You will collaborate closely with DevOps, QA, and solution architects to ensure high-performance, secure, and scalable systems.

Key Responsibilities:
- Lead or contribute to the migration of legacy systems to modern Java-based architectures on Microsoft Azure.
- Analyze existing monolithic or on-prem systems to plan and execute cloud migration strategies.
- Design and develop Java applications, APIs, and services using Spring Boot and modern frameworks.
- Ensure smooth integration with Azure cloud components such as Azure App Services, Azure SQL, Azure Storage, etc.
- Optimize code for performance and scalability across distributed systems.
- Collaborate with solution architects and stakeholders to define migration goals, timelines, and deliverables.
- Implement automation tools and pipelines to streamline migration and deployment processes.
- Work closely with QA and DevOps teams to establish continuous integration and deployment pipelines.
- Troubleshoot issues in migration and production environments, and provide root cause analysis.
- Create documentation, including technical specifications, migration runbooks, and architectural diagrams.

Required Skills and Qualifications:
- 4+ years of experience in Java development, with strong hands-on expertise in Java 8+, Spring/Spring Boot, and object-oriented programming principles.
- Proven experience in legacy system modernization and application migration projects.
- Strong knowledge of Azure services and cloud-native development, especially in deploying Java apps on Azure.
- Experience with RESTful API design, microservices, and containerized environments (Docker/Kubernetes preferred).
- Familiarity with databases such as Azure SQL, PostgreSQL, or MySQL, including data migration and schema evolution.
- Understanding of CI/CD pipelines, source control (Git), and build tools (Maven/Gradle).
- Strong analytical, problem-solving, and communication skills.
- Experience working in Agile or Scrum development environments.

Preferred Skills (Good to Have):
- Knowledge of other cloud platforms (AWS, GCP) is a plus.
- Familiarity with DevOps tools such as Azure DevOps, Terraform, or Ansible.
- Experience in performance tuning, system monitoring, and cost optimization on Azure.
- Exposure to container orchestration tools like Kubernetes.
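Migration cutovers like the ones described above are usually followed by a smoke test of the relocated service. The hedged sketch below checks a hypothetical health endpoint; the listing itself is Java-centric, and Python is used here only to stay consistent with the other sketches in this document. The URL and the Spring Boot actuator-style payload are assumptions.

```python
# Hedged sketch of a post-migration smoke test: confirm a migrated service's
# health endpoint answers from its new Azure home. URL and expected payload
# (Spring Boot actuator style) are illustrative assumptions.
import sys
import requests

HEALTH_URL = "https://example-app.azurewebsites.net/actuator/health"  # placeholder

def smoke_test(url, timeout=5.0):
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")
        return False
    return response.status_code == 200 and response.json().get("status") == "UP"

if __name__ == "__main__":
    sys.exit(0 if smoke_test(HEALTH_URL) else 1)
```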
Posted 1 day ago
0.0 years
0 Lacs
Udaipur, Rajasthan
On-site
Job Title: DevOps Intern
Location: Udaipur, Rajasthan (Work from Office)
Type: Internship (Full-time, In-office)
Duration: 3–6 months (with potential for full-time conversion)
Eligibility: Final-year students / fresh graduates / early-career professionals (2024–2025 pass-outs)

About the Role:
We are looking for a passionate and driven DevOps Intern to join our tech team in Udaipur. This is an exciting opportunity for individuals who have a foundational understanding of DevOps practices and hands-on exposure to AWS cloud services. If you are AWS Certified and eager to work in a real-world, collaborative environment, we'd love to hear from you!

Key Responsibilities:
- Assist in designing, implementing, and maintaining CI/CD pipelines.
- Support the automation of infrastructure using tools like Terraform, CloudFormation, or similar.
- Monitor application performance and infrastructure health using tools like CloudWatch, Prometheus, or Grafana.
- Work closely with the development team to support code deployments and cloud environments.
- Participate in improving system reliability, scalability, and security on AWS.
- Document workflows, best practices, and setup procedures.

Required Skills & Qualifications:
- Basic understanding of DevOps principles, CI/CD, and Infrastructure as Code (IaC).
- Exposure to AWS services such as EC2, S3, IAM, RDS, Lambda, etc.
- AWS certification (Cloud Practitioner or higher preferred).
- Familiarity with scripting languages like Bash, Python, or Shell.
- Comfortable working with Git, Docker, and Linux environments.
- Strong problem-solving skills and eagerness to learn.

Nice to Have:
- Hands-on experience with DevOps tools like Jenkins, Ansible, Docker, Kubernetes, etc.
- Experience with version control systems like GitHub or GitLab.
- Exposure to Agile/Scrum environments.

What We Offer:
- Exposure to live projects and real-world DevOps practices.
- Mentorship from experienced cloud and DevOps professionals.
- Certificate of Internship and potential Pre-Placement Offer (PPO).
- A dynamic and collaborative work culture in our Udaipur office.

Job Types: Full-time, Permanent
Pay: From ₹5,000.00 per month
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Udaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Have you completed a certificate or course in DevOps? Yes/No
Education: Bachelor's (Preferred)
Work Location: In person
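A typical first task for an intern working with the toolchain above is verifying the local environment. The hedged sketch below checks that common CLIs are installed, using only the standard library; the tool list is an illustrative assumption.

```python
# Hedged sketch of a local-environment sanity check: confirm the DevOps CLIs
# mentioned in the listing are on PATH. Standard library only; the tool list
# is illustrative.
import shutil

REQUIRED_TOOLS = ["git", "docker", "terraform", "aws"]

def check_toolchain(tools):
    """Return the subset of tools that cannot be found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

if __name__ == "__main__":
    missing = check_toolchain(REQUIRED_TOOLS)
    if missing:
        print("Missing tools:", ", ".join(missing))
    else:
        print("All required tools are on PATH")
```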
Posted 1 day ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role and Responsibilities
In this position you will be working with a web development team (both frontend and backend) to build innovative products from scratch.

What you'll do:
- Collaborate with teams such as Design, Content, and Product Management to plan, build, and test innovative AI-infused products through various projects.
- Write well-tested code for APIs and tools in Python, with CI systems such as Jenkins or Travis, driving the product forward with quality in mind.
- This is a perfect fit for you if you are looking to have a large impact and innovate with the latest technologies like LLMs and generative AI!

How we'll help you grow:
- You will have access to all the technical training courses you need to become the expert you want to be.
- You will learn directly from senior members/leaders in this field.
- You will have the opportunity to work directly with multiple clients.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- 10+ years of experience in software development using functional and/or object-oriented programming languages such as JavaScript or Python.
- Deep knowledge of back-end development in JavaScript or Python, REST APIs, and database technologies (Db2) and/or SQL databases.
- Experience with cloud-native applications, working with Docker, Kubernetes, and OpenShift.
- Sound knowledge of databases, handling APIs, network requests, and general data manipulation.
- Understanding of large-scale application development and cloud architecture, with work experience.
- Experience with cloud deployment and building CI/CD pipelines using tools such as Jenkins, Travis, etc.
- Solid knowledge of Agile methodology and practices, such as Scrum, Test Driven Development (TDD), etc.
- Experience with modern frontend JavaScript frameworks, such as React or equivalent.
- Experience building RESTful APIs and web services in Node.js and similar technologies.
- Experience building and scaling web applications.

Preferred Technical and Professional Experience
- You can mentor and guide junior developers.
- Experience with infrastructure-as-code languages such as Terraform and Ansible.
- Experience with Continuous Integration / Continuous Delivery (CI/CD) methodologies.
- Experience using container management technologies such as Kubernetes and Docker.
- Experience with any public cloud services.
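Since the listing asks for RESTful API experience on the Python/JavaScript back end, here is a minimal, hedged sketch of a Python REST service using Flask. Flask is one common choice, not something the listing mandates, and the in-memory list stands in for a real database such as Db2 or a SQL store.

```python
# Hedged sketch of a minimal REST API in Python with Flask: a health check
# plus list/create endpoints backed by an in-memory placeholder store.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = []   # stand-in for a real database

@app.get("/health")
def health():
    return jsonify(status="ok")

@app.get("/items")
def list_items():
    return jsonify(items=ITEMS)

@app.post("/items")
def create_item():
    payload = request.get_json(silent=True) or {}
    ITEMS.append(payload)
    return jsonify(created=payload), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Running the script and calling `GET /health` should return `{"status": "ok"}`; a production setup would add tests, a WSGI server, and containerization for the Kubernetes/OpenShift environments the listing mentions.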
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
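The log-triage responsibility in the listing above ("monitor and analyze logs using AWS CloudWatch") often boils down to pulling recent error events during an incident. A hedged Python/boto3 sketch follows; the log group name and one-hour window are placeholders, not part of the listing.

```python
# Hedged sketch of incident log triage: pull recent ERROR entries from a
# CloudWatch Logs group. Assumes boto3 and AWS credentials; the log group
# and time window are illustrative placeholders.
import time
import boto3

LOG_GROUP = "/example/app/production"   # placeholder log group

def recent_errors(log_group, minutes=60):
    logs = boto3.client("logs")
    start_time = int((time.time() - minutes * 60) * 1000)   # epoch milliseconds
    events = []
    kwargs = {"logGroupName": log_group, "filterPattern": "ERROR", "startTime": start_time}
    while True:
        response = logs.filter_log_events(**kwargs)
        events.extend(response.get("events", []))
        token = response.get("nextToken")
        if not token:
            break
        kwargs["nextToken"] = token          # keep paginating until exhausted
    return events

if __name__ == "__main__":
    for event in recent_errors(LOG_GROUP):
        print(event["timestamp"], event["message"].strip())
```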
Posted 1 day ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Development Operations (India)
Travel Required: Up to 10%
Clearance Required: None

Guidehouse SEAS Platform Engineering team is seeking fresher DevOps Infrastructure Engineers. The ideal candidate should be interested in learning new open source infrastructure and DevOps tools. This role is to support and develop infrastructure for Guidehouse internal projects. This position will be part of the Solutions Engineering and Architecture team and will require working with users across business segments.

What You Will Do
- Collaboratively build and maintain infrastructure for internal stakeholders and external clients (using Terraform)
- Understanding of cloud concepts
- Support internal Dockerized platforms for internal analytics users (Posit containerized products)
- Administer Linux servers (RedHat and Ubuntu)
- Ready to work in the 2 PM to 11 PM shift.

What You Will Need
- Candidates should be from a computer background (B.Tech Computer Science, B.Sc CS, BCA, etc.)
- Basic Git version control knowledge
- Linux training (e.g. RedHat or Ubuntu)
- Understanding of cloud computing basics
- Proficiency in at least one scripting language (e.g. Bash, Python)

What Would Be Nice To Have
- AZ-900 or AWS CCP certification
- Experience with Docker containers
- RedHat Certified Engineer (RHCE) certification
- Infra, CI/CD, or config management experience (e.g. Ansible, GitHub Actions, Jenkins, Puppet)
- System Administrator level experience with Linux (Red Hat/CentOS or Debian/Ubuntu preferred)
- Knowledge of DevOps tools such as Terraform, Docker, Kubernetes.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer – Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks, SOPs, and mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
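To complement the CloudWatch sketch shown after the earlier, near-identical EY listing, here is a hedged illustration of the Kubernetes (EKS/AKS) support work this posting describes: a quick pass over a cluster for pods that are not healthy. It assumes the official `kubernetes` Python client and a kubeconfig with access to the cluster.

```python
# Hedged sketch of a quick cluster health pass on EKS/AKS: list pods that are
# not Running or Succeeded. Requires the `kubernetes` Python client and a
# kubeconfig (or in-cluster config when run inside a pod).
from kubernetes import client, config

HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods():
    config.load_kube_config()            # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces().items:
        phase = pod.status.phase
        if phase not in HEALTHY_PHASES:
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")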
Posted 1 day ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements

Position Summary:
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.

Job Responsibilities:
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
- Identifies and resolves issues utilizing structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
- Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
- Implements industry best practices while performing Hadoop cluster administration tasks.
- Works in an Agile model with a strong understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
- Addresses organizational obstacles to enhance processes and workflows.
- Adopts and learns new technologies based on demand and supports team members by coaching and assisting.

Education: Bachelor's degree in computer science, Information Systems, or another related field with 14+ years of IT and Infrastructure engineering work experience.

Experience: 14+ years total IT experience and 10+ years relevant experience in Big Data database technologies.

Technical Skills:
- Big Data Platform Management: expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL.
- Data Infrastructure & Security: proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
- Performance Tuning & Optimization: skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Backup & Recovery: experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Linux & Troubleshooting: strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
- DevOps & Scripting: proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
- Agile & Collaboration: strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
- ITSM Process & Tools: knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements:
- Automation and Scripting: proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
- Analytical and Problem-Solving Skills: strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
- 24x7 Support: ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Team Management and Leadership: proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
- Communication Skills: exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
- Stakeholder Management: prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
- Business Presentations: skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
- Collaboration and Independence: demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple - to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
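The "scripts to automate routine cluster management tasks" responsibility above often means wrapping Hadoop's own CLIs. The hedged sketch below shells out to `hdfs dfsadmin -report` and flags high DFS usage; it assumes the `hdfs` CLI is on PATH and that the caller has permission to run it, and the report format can vary between Hadoop releases, so the parsing is deliberately loose and illustrative.

```python
# Hedged sketch of a routine Hadoop check: run `hdfs dfsadmin -report` and
# flag high DFS usage. Assumes the hdfs CLI is available; parsing is loose
# because report formatting can differ across Hadoop versions.
import re
import subprocess

THRESHOLD_PERCENT = 80.0

def dfs_used_percentages(report_text):
    # Report lines typically look like: "DFS Used%: 42.17%"
    return [float(value) for value in re.findall(r"DFS Used%:\s*([\d.]+)%", report_text)]

if __name__ == "__main__":
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"], capture_output=True, text=True, check=True
    ).stdout
    for pct in dfs_used_percentages(report):
        if pct >= THRESHOLD_PERCENT:
            print(f"WARNING: DFS usage at {pct}%")
```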
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
- Work as a Java / Spring / Spring Boot engineer for the DSP Foundation team.
- Work as a DevOps engineer, owning and delivering the automation scripts that enable pipeline deployments.

Requirements
To be successful in this role, you should meet the following requirements:
- Relevant work experience of at least 3 years in Java, J2EE, Spring, Spring MVC, and Spring Boot. Java server-side development experience is essential.
- Good knowledge of GitHub (version control system).
- Good knowledge of tools like Maven, Jenkins, GitHub, and Ansible.
- Knowledge of shell scripting and Python.
- Participate in intra-day and overnight production support as part of DevSecOps practices.
- Must be able to debug the existing code, extend the functionality, and/or fix issues if any.
- Good communication and interpersonal skills.
- The ability to work comfortably both within a team and independently as required.
- Able to adapt to working in different roles and on different technologies.

You'll achieve more when you join HSBC.
www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
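For the "automation scripts enabling pipeline deployments" point above, one common building block is triggering a Jenkins job remotely. The hedged sketch below does this via the Jenkins REST API with a user API token; the URL, job name, credentials, and parameters are placeholders, and depending on how the instance is configured a CSRF crumb may also be required.

```python
# Hedged sketch: trigger a Jenkins job through its REST API using a user API
# token. All names and the token are placeholders; never hard-code real tokens.
import requests

JENKINS_URL = "https://jenkins.example.com"      # placeholder
JOB_NAME = "example-deploy"                      # placeholder
USER = "ci-bot"                                  # placeholder
API_TOKEN = "REPLACE_ME"                         # read from a secret store in practice

def trigger_build(parameters=None):
    if parameters:
        endpoint = f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters"
    else:
        endpoint = f"{JENKINS_URL}/job/{JOB_NAME}/build"
    response = requests.post(endpoint, auth=(USER, API_TOKEN), params=parameters or {})
    response.raise_for_status()
    return response.status_code    # Jenkins normally answers 201 when the build is queued

if __name__ == "__main__":
    print("Queued, HTTP", trigger_build({"ENVIRONMENT": "dev"}))
```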
Posted 1 day ago
6.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Company Description
GfK is seeking a Middleware Engineer with hands-on Java & Python experience and proven analytical and problem-solving skills. The ideal candidate will be responsible for the configuration, deployment, and management of middleware systems to support enterprise applications. This role involves working closely with development, operations and infrastructure teams to ensure the seamless integration of applications and systems while optimizing performance and reliability.

Job Description
- Install, configure and maintain middleware technologies (experience with any of these: WebSphere, WebLogic, Tomcat, JBoss, Kafka, RabbitMQ or similar)
- Ensure high availability, scalability and reliability of middleware systems
- Design and implement solutions for system and application integration
- Automate routine tasks, processes, and legacy data fusion
- Optimize middleware performance and recommend improvements
- Design and development of middleware components
- Design and implement APIs necessary for integration and/or data consumption
- Work independently and collaboratively on a multi-disciplined project team in an Agile development environment
- Be actively involved in the design, development and testing activities for the Big Data product
- Provide feedback to development teams on code/architecture optimization

Qualifications
Education: Bachelor of Science degree from an accredited university

Required Skills and Experience
- 6+ years of hands-on experience developing in Java, Spring, and Python
- Hands-on experience with the Spring Tool Suite, including Spring Boot, Spring Boot OAuth, Spring Security, Spring Data JPA, and Spring Batch
- Understanding of relational databases, such as Oracle, SQL Server, MySQL, Postgres or similar
- Fluency in Java/J2EE, JSP, and web services; must have experience with Java 8 or higher
- Experience with JMS, Kafka, IBM MQ or similar
- Experience using software project tracking tools such as Jira
- Familiarity with Azure services
- Proven experience with CI/CD
- Proven experience with Jenkins, Ansible, Docker, Kubernetes
- Proven experience with version control (GitHub, Bitbucket)
- Familiarity with Linux OS/concepts
- Strong written and verbal communication skills
- Self-motivated and ability to work well in a team

Additional Information
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
- Recharge and revitalize with help of wellness plans made for you and your family
- Plan your future with financial wellness tools
- Stay relevant and upskill yourself with career development opportunities

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 day ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.
Your Role And Responsibilities
The IBM Storage Defender Support team supports complex integrated storage products end to end, including Storage Defender, Spectrum Protect, Spectrum Protect Plus, and Copy Data Management. This position involves working remotely with our IBM customers, which include some of the world's top research, automotive, banking, health care, and technology providers. Candidates must be able to assist with operating systems (AIX, Linux, Unix, Windows), SAN, network protocols, clouds, and storage devices. They will work in a virtual environment with colleagues around the globe and will be exposed to many different types of technologies.
Responsibilities must include, but are not limited to:
Provide remote troubleshooting and analysis assistance for usage and configuration questions
Review diagnostic information to assist in isolating a problem cause (which could include assistance interpreting traces and dumps)
Identify known defects and fixes to resolve problems
Develop best practice articles and support utilities to improve support quality and productivity
Respond to escalated customer calls, complaints, and queries
The job requires a flexible schedule to ensure 24x7 support operations and weekend on-call coverage, including extending/taking shifts to cover North America working hours.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
Excellent communication skills - both verbal and written
Provide remote troubleshooting and analysis assistance for usage and configuration questions
Preferred Professional And Technical Expertise
At least 8 years of in-depth experience with Spectrum Protect (Storage Protect) or competing products in the data protection domain
Experience troubleshooting network, OS, or SaaS-based application/software issues
Experience with Cohesity DataProtect, Storage Protect, and/or other data protection products is an added advantage
Working knowledge of the Python, Go, and Java programming languages
Working knowledge of Red Hat, OpenShift, or Ansible administration is preferred
Strong networking and troubleshooting skills
Cloud certification is an added advantage
Knowledge of Object Storage and Cloud Storage is preferred
Preferred Technical And Professional Experience
The following minimum experience is required for the role:
At least 8-10 years of experience with data protection or storage software as an administrator or solution architect, or with client-server technologies
Debugging and analysis are performed via telephone as well as electronically, so candidates must possess strong customer interaction skills and be able to clearly articulate solutions and options
Must be familiar with and able to interpret complex software problems that span multiple client and server platforms, including UNIX, Linux, AIX, and Windows
Focus on storage area networks (SAN), network protocols, cloud, and storage devices is preferred; hands-on experience with storage virtualization is a plus
Candidates must be flexible in schedule and availability; second shift and weekend coverage may be required
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Any Bachelor's degree - BE/BSc IT/BTech/MCA
Must have experience administering and maintaining various CI/CD tools like Jenkins, Artifactory, Docker, Ansible, and OpenShift
Must have experience working with builds and deployments for various programming languages like Java, Node.js, and Python
Nice to have experience with .NET & Go source code
Must have extensive experience working with build and package management tools like Maven, Gradle, npm, Yarn, and pip
Good to have experience with scan tools like Nexus IQ, SonarQube/Checkmarx, Prisma
Good to have experience with functional, API, and performance test automation tools
Must have experience with Git and GitHub Enterprise administration
Must have experience working with enterprise-standard GitHub workflows and reusable workflows
Must have experience writing and understanding custom GitHub Actions and reusable workflows
Experience working closely with application developers to help them onboard onto CI/CD tools and build and deploy their code through to production
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Delhi, India
On-site
Location: Bengaluru / Delhi
Reports To: Chief Revenue Officer
Position Overview:
We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI Cloud Platform company that's making waves in the industry. The role is a customer-facing technical position that will work closely with sales teams to understand client requirements, design tailored solutions and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.
Key Responsibilities:
Solution Design & Customization: Work closely with customers to understand their business challenges and technical requirements. Design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
Sales Support & Enablement: Collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
Customer Engagement: Engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company. Conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
Proof of Concept (PoC): Lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions. Collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
Product Demos & Presentations: Deliver compelling product demos and presentations tailored to the customer’s business and technical needs, helping organizations unlock innovation and growth through AI. Simplify complex technical concepts to ensure that both business and technical stakeholders understand the value proposition.
Proposal Development & RFPs: Assist in crafting technical proposals, responding to RFPs (Requests for Proposal), and providing technical content that highlights the company’s offerings, differentiators, and technical value.
Technical Workshops & Trainings: Facilitate customer workshops and training sessions to enable customers to understand the architecture, functionality, and capabilities of the solutions offered.
Collaboration with Product & Engineering Teams: Provide feedback to product management and engineering teams based on customer interactions and market demands. Help shape future product offerings and improvements.
Market & Competitive Analysis: Stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud and Networking, to provide strategic insights to sales and product teams.
Documentation & Reporting: Create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans. Track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.
Key Skills and Qualifications:
Experience: Minimum of 8-10 years of experience in a pre-sales or technical sales role, with a focus on AI, Cloud and Networking solutions.
Technical Expertise: Solid understanding of Cloud computing, Data Center infrastructure, Networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies. Experience with architecture design and solutioning across these domains, especially in hybrid cloud and multi-cloud environments. Familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
Sales Collaboration: Ability to work alongside sales teams, providing the technical expertise needed to close complex deals. Experience in delivering customer-focused presentations and demos.
Presentation & Communication Skills: Exceptional ability to articulate technical solutions to both technical and non-technical stakeholders. Strong verbal and written communication skills.
Customer-Focused Mindset: Excellent customer service skills with a consultative approach to solving customer problems. Ability to understand business challenges and align technical solutions accordingly. Having the mindset to build rapport with customers and become their trusted advisor.
Problem-Solving & Creativity: Strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
Certifications: Degree in Computer Science, Engineering, or a related field; Cloud and AI/ML certifications are highly desirable.
Team Player: Ability to work collaboratively with cross-functional teams including product, engineering, and delivery teams.
Preferred Qualifications:
Industry Experience: Experience in delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
Technical Expertise in AI/ML: A deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
Experience with DevOps Tools: Familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.
Posted 1 day ago
6.0 years
0 Lacs
India
On-site
Job Title: Site Reliability Engineer
About noon
noon.com is a technology leader with a simple mission: to be the best place to buy and sell things. In doing this we hope to accelerate the digital economy of the Middle East, empowering regional talent and businesses to meet the full range of consumers' online needs.
noon operates without boundaries; we are aggressively and voraciously ambitious. Starting in 2017 with noon.com, the region’s homegrown e-commerce platform and leading online shopping destination, noon is now a digital ecosystem of products and services - noon, noon Food, Noon in Minutes, NowNow, SIVVI, noon One, and noon Pay.
At noon we have the courage to pursue what seems impossible, we work hard to get things done, we go to great lengths to ensure that the experience of everyone from our customers to our sellers or noon Bandidos is stellar but above all, we are grateful for the opportunities we have. If you feel the above values resonate with you – you will enjoy this incredible journey with us!
Job Description
As a Site Reliability Engineer (SRE) at noon payments, you will play a crucial role in maintaining and enhancing the reliability, availability, and performance of our cloud-based infrastructure and services. You will be responsible for automating deployments, optimizing systems, and ensuring seamless performance across our platforms. This position requires a strong foundation in cloud infrastructure management, particularly with Azure AKS and GCP GKE, alongside hands-on experience with Azure DevOps and monitoring tools like Datadog.
You will:
Cloud Infrastructure Management: Manage and optimize cloud environments across Azure and GCP, ensuring efficient resource utilization, high system availability, and scalability (AKS, GKE).
Infrastructure as Code: Utilize Terraform for infrastructure provisioning, ensuring consistent and scalable deployments, and managing infrastructure via Azure DevOps pipelines.
Configuration Management: Implement and manage system configurations using Ansible to ensure consistency and streamline updates across different environments.
Continuous Integration/Continuous Deployment (CI/CD): Develop, maintain, and optimize CI/CD pipelines within Azure DevOps to automate testing and deployment processes, reducing time from development to production.
Monitoring and Observability: Set up and maintain comprehensive monitoring and observability solutions using Datadog to track system health and performance, and to proactively detect issues.
Container Orchestration: Deploy, manage, and optimize Kubernetes clusters to support scalable and resilient application deployments.
Incident Management: Participate in a 24/7 on-call or roster-based team to respond to incidents, conduct root cause analysis, and implement solutions to minimize downtime and ensure system reliability.
Performance Tuning: Continuously monitor system performance, identify bottlenecks, and implement optimizations to improve efficiency and response times.
Capacity Planning: Plan and manage system capacity to ensure resources meet current and future demands, enabling seamless service delivery.
Collaboration: Work closely with Network Operations Center (NOC) and DevOps teams to troubleshoot issues, optimize deployment processes, and drive continuous improvement.
Documentation: Create and maintain detailed documentation for system configurations, deployment processes, and incident reports.
Skill Requirements
Bachelor’s degree in Computer Science, Information Technology, or any other related discipline, or equivalent related experience.
Cloud, ITIL, and CKA certifications are a plus.
6+ years of directly related or relevant experience, preferably in information security.
Extensive experience with cloud platforms such as Azure, GCP, and Huawei Cloud.
Proficiency with Terraform for infrastructure automation and Ansible for configuration management.
Hands-on experience with Kubernetes for container orchestration, mainly AKS and GKE.
Expertise in monitoring and observability tools such as Datadog.
Familiarity with Azure VMSS and GCP MIG for virtual machine scaling and management.
Experience in a 24/7 on-call or roster-based team environment, focusing on system uptime and incident response.
Strong understanding of SRE processes and best practices for system reliability, availability, and performance.
Excellent problem-solving skills and the ability to handle complex technical issues under pressure.
Effective communication skills and a collaborative approach to working with diverse teams.
Experience with payment gateway projects or similar high-transaction systems is preferred.
Additional knowledge in advanced monitoring techniques, performance tuning, and capacity planning is a plus.
Who will excel?
We’re looking for candidates who thrive in a fast-paced, dynamic start-up environment. We’re searching for problem solvers, people who operate with a bias for action and have a deep understanding of the importance of resourcefulness over reliance.
Candor is our only default. Demanding unequivocal high standards should be non-negotiable because quality matters. We want people who are radically candid, cohorts who commit to settling for nothing but the best - in hiring, in accepting work from colleagues, and in your own work.
Ours is not an easy mission, but it is a meaningful one. Every hire must actively raise the bar of talent in the company to help us reach our vision.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking forward to hiring Full-Stack (Java + Angular + AWS) professionals at the level of Sr. Software Engineer, who thrive on challenges and desire to make a real difference in the business world. With an environment of extraordinary innovation and unprecedented growth, this is an exciting opportunity for a self-starter who enjoys working in a fast-paced, quality-oriented, and team environment.
You are required to have skills in the following areas:
Minimum 5 years of experience in Java and related technologies
Good understanding of the Spring framework - Spring Core, MVC, Boot, and the microservices pattern
Working knowledge of building microservices and RESTful web services using any framework (Spring Boot, JAX-RS, Jersey)
Hands-on experience in web services development and a solid understanding of Java web technologies using Java 8
Solid understanding of UI basics: HTML, CSS, JavaScript, jQuery, Ajax
Hands-on experience with TypeScript and Angular 9+ with modular architecture
Minimum 2+ years of working experience in UI design using the Angular framework, along with knowledge of Jasmine/Karma
Good understanding of message queues and experience with any one of them (Kafka / RabbitMQ / ActiveMQ)
Expertise in relational databases (MySQL / MS SQL / Oracle) or NoSQL databases
Working experience in DevOps:
Build tools - Maven / Gradle
Version control - Git, GitHub / Bitbucket
CI/CD - Jenkins, Ansible, Artifactory
Good understanding of building and deploying applications on the AWS cloud platform
Understanding and expertise in maintaining code quality (TDD, JUnit, Mockito, PowerMock, SonarQube, SonarLint)
Working knowledge of Agile processes and tools - Scrum / Kanban, Jira, Confluence
Proficiency in interpersonal skills, problem solving, planning & execution, and impactful communication
Positive, flexible, learning, and can-do attitude
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TCS presents an excellent opportunity for a Kubernetes / OpenShift Engineer
Greetings from TCS!!!
Job Title: OpenShift Engineer
Location: Pan India
Experience required: 8 to 12 yrs
Keywords: Red Hat OpenShift, Kubernetes, ServiceMesh, Ansible, Docker, Helm
Must-Have
OpenShift and Kubernetes hands-on experience
Knowledge of Network Policies, Routes, Services
Istio ServiceMesh
HashiCorp Vault
Public cloud experience
Container migration experience
ELK stack deployment & configuration
Agile methodology / Kanban board
Linux background with Bash shell scripting knowledge
Firewall configuration on Red Hat
Helm configuration or UCD
ELK stack & Dynatrace, Kibana and Grafana
Kubernetes / OpenShift cluster configuration and patching
Good-to-Have
Design experience of the OpenShift cluster
Red Hat OpenShift certified
ARO or AKS
**Mandatory documents: Updated CV, Aadhaar or PAN card copy, passport-size photo**
Note: Freshers and ex-TCSers, please do not apply.
Thanks & Regards,
Supriya Kashid
Human Resource Team (TAG)
Tata Consultancy Services
Mailto: supriya.kashid@tcs.com
Posted 1 day ago
5.0 - 10.0 years
13 - 23 Lacs
Bangalore Rural
Work from Office
JOB DESCRIPTION:
Strong embedded development experience, with good knowledge and hands-on experience in the design, development, and debugging of Board Support Packages (BSP) on one or more operating systems such as Linux/Android, QNX, and hypervisor-based embedded systems.
The main responsibility is to provide direct support to OEM customers with the design, development, and debugging of reference-design SW-related issues, and to help customize/optimize software to meet the product requirements.
The candidate must quickly ramp up onto an existing project, understand automotive platform software driver architecture, read/write technical specifications/requirements, demonstrate strong analytical and problem-solving abilities, and work closely with external customers to customize and launch their new products.
Skillset: C, C++, Linux/Android, QNX / RTOS, UART, SPI, I2C, V4L2, MIPI CSI, DSI, ALSA, Android Audio Framework, Camera HAL, Audio HAL, Codec2, DRM, Surface Flinger, HW Compositor
Posted 1 day ago
Ansible is a popular automation tool widely used in the IT industry, and the demand for Ansible professionals is on the rise in India. Job seekers with expertise in Ansible can explore opportunities across IT services firms, product companies, and consulting firms.
Here are 5 major cities actively hiring for Ansible roles in India:
- Bangalore
- Pune
- Hyderabad
- Chennai
- Mumbai
The estimated salary range for Ansible professionals in India varies based on experience:
- Entry-level: ₹4-6 lakhs per annum
- Mid-level: ₹8-12 lakhs per annum
- Experienced: ₹15-20 lakhs per annum
In the Ansible domain, a typical career progression may look like:
- Junior Ansible Engineer
- Ansible Developer
- Senior Ansible Engineer
- Ansible Architect
- Ansible Consultant
Apart from Ansible, professionals in this field are often expected to have or develop skills like:
- Linux administration
- Scripting languages (Python, Bash) - a short illustrative example follows this list
- Configuration management tools (Puppet, Chef)
- Cloud platforms (AWS, Azure)
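To make the scripting-plus-Ansible pairing above concrete, here is a minimal illustrative sketch in Python that wraps an Ansible ad-hoc ping from a script. It is not from any of the postings above; the inventory file name hosts.ini and the ping_hosts helper are hypothetical examples, and the sketch assumes the ansible CLI is installed and on PATH.

    # Minimal sketch: driving an Ansible ad-hoc command from Python.
    # Assumes the `ansible` CLI is installed and an inventory file named
    # "hosts.ini" exists (hypothetical example path).
    import subprocess
    import sys

    def ping_hosts(inventory: str = "hosts.ini", group: str = "all") -> bool:
        """Run an Ansible ad-hoc ping against a host group and report success."""
        result = subprocess.run(
            ["ansible", group, "-i", inventory, "-m", "ping"],
            capture_output=True,
            text=True,
        )
        # The ansible CLI exits non-zero if any host in the group fails.
        print(result.stdout)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        sys.exit(0 if ping_hosts() else 1)

In practice such a wrapper is only a thin convenience layer; the same check can be run directly as "ansible all -i hosts.ini -m ping", but embedding it in Python makes it easy to combine with other scripted checks.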
Here are 25 interview questions for Ansible roles (a short illustrative sketch for the custom-module question appears after this list):
- What is Ansible and how does it work? (basic)
- Explain the difference between Ansible and Puppet. (basic)
- How do you define playbooks in Ansible? (basic)
- What is an Ansible role? (basic)
- How do you handle errors in Ansible playbooks? (medium)
- Explain the concept of Ansible Tower. (medium)
- How do you secure sensitive data in Ansible playbooks? (medium)
- What are Ansible facts? (basic)
- Explain the difference between an Ansible ad-hoc command and a playbook. (basic)
- How do you create custom modules in Ansible? (advanced)
- How do you integrate Ansible with version control systems like Git? (medium)
- What is dynamic inventory in Ansible? (medium)
- How do you handle dependencies between tasks in Ansible playbooks? (medium)
- Explain the use of Ansible Vault. (medium)
- How do you troubleshoot issues in Ansible automation? (medium)
- What are some best practices for writing Ansible playbooks? (medium)
- How do you scale Ansible for large infrastructure? (advanced)
- Explain the concept of idempotency in Ansible. (basic)
- How do you handle network devices with Ansible? (advanced)
- What is the purpose of Ansible Galaxy? (basic)
- How do you automate infrastructure provisioning using Ansible? (advanced)
- Explain how Ansible communicates with remote servers. (basic)
- How do you test Ansible playbooks? (medium)
- What is Ansible Container and how is it used? (advanced)
- How do you monitor Ansible tasks and jobs? (medium)
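For the custom-module question above, the following is a minimal illustrative sketch (not a production module) of what a custom Ansible module written in Python can look like. The module name hello_file and its parameters are hypothetical; the sketch assumes the file is placed in a playbook's library/ directory so Ansible can load it, and it is meant to be invoked by Ansible rather than run standalone. It also touches the idempotency question, because it only reports changed=True when it actually writes.

    # Minimal sketch of a custom Ansible module (hypothetical name: hello_file).
    # Place in a playbook's library/ directory; invoked by Ansible, not run directly.
    from ansible.module_utils.basic import AnsibleModule
    import os

    def main():
        module = AnsibleModule(
            argument_spec=dict(
                path=dict(type="str", required=True),
                content=dict(type="str", default="hello"),
            ),
            supports_check_mode=True,
        )
        path = module.params["path"]
        content = module.params["content"]

        # Idempotency: if the file already has the desired content, report no change.
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current == content:
            module.exit_json(changed=False, msg="already up to date")

        # Honour check mode: report what would change without writing anything.
        if module.check_mode:
            module.exit_json(changed=True, msg="would write file")

        with open(path, "w") as f:
            f.write(content)
        module.exit_json(changed=True, msg="file written")

    if __name__ == "__main__":
        main()

A playbook task could then call it like any other module, for example "hello_file: path=/tmp/demo.txt content=hi" (hypothetical usage); running the same task twice should report changed only on the first run.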
As the demand for Ansible professionals continues to grow in India, job seekers should focus on enhancing their skills and preparing confidently for interviews. By mastering Ansible and related technologies, you can open up exciting career opportunities in the IT industry. Good luck with your job search!