
1943 Bash Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 8.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Senior DevOps Engineer | Experience: 6-8 years | Location: Bangalore

Capco, a Wipro company, is a global technology and management consulting firm. Capco was named Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Job Overview: We are seeking a highly skilled Senior DevOps Engineer to join our dynamic team. The ideal candidate will have extensive experience in building, deploying, and maintaining scalable infrastructure using modern DevOps tools and methodologies. You will collaborate with cross-functional teams to enhance automation, improve deployment processes, and ensure the reliability and performance of cloud-based applications.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using Jenkins, GitLab, and other relevant tools.
- Manage and scale containerized applications using Docker and orchestration tools like Kubernetes.
- Build and manage infrastructure as code using Terraform and Ansible.
- Develop and maintain scalable, secure, and cost-effective cloud infrastructure on GCP or other cloud providers.
- Automate repetitive tasks using Python and Bash scripting.
- Ensure high availability and performance of Linux-based systems.
- Collaborate with development teams to streamline build and release processes using tools such as Gradle and Maven.
- Monitor system performance, troubleshoot issues, and implement proactive solutions.
- Advocate and implement best practices for DevOps and cloud-native solutions.
- Maintain and enforce security and compliance standards across the infrastructure.

Mandatory Skills and Qualifications:
- 6 to 8 years of hands-on experience in DevOps or related roles.
- Proficiency in containerization technologies like Docker and orchestration platforms like Kubernetes.
- Strong expertise in GCP services or other cloud providers.
- Advanced knowledge of Terraform and Ansible for infrastructure provisioning and configuration management.
- Experience with CI/CD tools like Jenkins and GitLab CI/CD.
- Solid scripting skills in Python and Bash.
- Deep understanding of Linux systems, including system administration and performance tuning.
- Working knowledge of build tools like Gradle and Maven.
- Strong version control experience using Git/Bitbucket.
- Excellent troubleshooting and problem-solving skills.
- Strong knowledge of system monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack).

Preferred Skills:
- Familiarity with other cloud platforms such as Azure.
- Knowledge of security best practices and tools (e.g., secrets management, vulnerability scans).
- Experience in implementing DevSecOps practices.
- Knowledge of service mesh technologies like Istio, Envoy, etc.

Soft Skills:
- Strong communication and collaboration skills.
- Ability to work in an agile, fast-paced environment.
- Attention to detail and ability to prioritize tasks effectively.
Education: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
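The automation duty above ("Automate repetitive tasks using Python and Bash scripting") can be pictured with a minimal sketch of the kind of check such a role scripts; the paths and threshold here are illustrative, not from the listing:

```python
import shutil

def disk_usage_report(paths, threshold_pct=80.0):
    """Return (path, used_pct) pairs for filesystems over the threshold.

    A minimal stand-in for a repetitive DevOps check; the 80% threshold
    and the paths passed in are hypothetical examples.
    """
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)
        used_pct = 100.0 * usage.used / usage.total
        if used_pct >= threshold_pct:
            alerts.append((path, round(used_pct, 1)))
    return alerts

if __name__ == "__main__":
    # Report any filesystem above 80% utilisation.
    for path, pct in disk_usage_report(["/"]):
        print(f"WARNING: {path} is {pct}% full")
```

In practice a script like this would run from cron and feed an alerting channel rather than print.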

Posted 1 week ago


5.0 - 10.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was named Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Location: Bangalore/Mumbai

Key skills and expertise:
- 5+ years of experience working with Microsoft Power Platform and Dynamics, and with highly scalable and reliable servers.
- Spend considerable time on production management, including incident and problem management, capacity management, monitoring, event management, change management, and plant hygiene.
- Troubleshoot issues across the entire technology stack: hardware, software, application, and network.
- Participate in the on-call rotation and in periodic maintenance calls with specialists from other timezones.
- Proactively identify and address system reliability risks.
- Work closely with development teams to design, build, and maintain systems from a reliability, stability, and resiliency perspective.
- Identify and drive opportunities to improve automation for our platforms; scope and create automation for deployment, management, and visibility of our services.
- Represent the RPE organization in design reviews and operations readiness exercises for new and existing products/services.

Technical Skills:
- Enterprise tools like Prometheus, Grafana, Splunk, and Apica.
- UNIX/Linux support and cloud-based services.
- Ansible, GitHub, or any automation/configuration/release management tools.
- Automation experience in scripting languages such as Python, Bash, Perl, or Ruby (one of these is sufficient).
- Awareness of, and the ability to reason about, modern software and systems architecture, including load balancing, databases, queueing, caching, distributed systems failure modes, microservices, cloud, etc.
- Experience with Azure networks, Service Bus, Azure Virtual Machines, and Azure SQL is an advantage.

If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
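The proactive-reliability duties above often reduce to well-known SRE patterns, such as retrying a flaky dependency with exponential backoff. A minimal sketch (the attempt count and delays are illustrative, not prescribed by the listing):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff.

    A common reliability wrapper for transient failures (network calls,
    API requests). The last failure is re-raised so callers still see it.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off 1x, 2x, 4x, ... the base delay between attempts.
            time.sleep(base_delay * (2 ** attempt))
```

Production versions usually add jitter and retry only on specific exception types to avoid hammering a failing service.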

Posted 1 week ago


6.0 - 10.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was named Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

Location: Pune | Experience: 6-10 years

Responsibilities:
- Engage with end users to determine requirements and use cases.
- Implement algorithms on the edge using Python.
- Design, build, and deploy apps for the Edge Gateway.
- Build reusable code and libraries for future use.
- Collaborate with other team members and stakeholders.

Qualifications and Experience:
- Bachelor of Science in Software Engineering, Computer Science, or a related field; a Master's or postgraduate degree is a plus.
- 5+ years of experience developing applications in Python.
- Good understanding of, and hands-on experience with, the software development lifecycle and the principles of OOP, SOLID, abstraction, encapsulation, and modularity.
- Good understanding of and experience with communication protocols (MQTT, Kafka).
- Good understanding of container technologies such as Docker and Kubernetes.
- Knowledge of and experience in continuous integration and continuous deployment.
- Experience with Agile methodologies, Git source code management, test-driven development, and integration testing.
- Knowledge of code optimization to work within reduced resources in real time (GPU, TPU).
- Knowledge of IoT architecture, network topologies, and IoT security is a bonus.
- Experience working with Linux and Bash scripting.
- Experience with small single-board computers (Raspberry Pi) is a plus.
- Experience with virtual machines (VirtualBox, VMware, etc.) is a plus.
- Self-starter, able to work independently in a highly dynamic start-up environment.
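As a concrete illustration of the MQTT experience asked for above: MQTT routes messages by topic, and subscriptions use `+` (exactly one level) and `#` (all remaining levels, last position only) wildcards. A real edge application would use a client library such as paho-mqtt; the matcher below is only an illustrative stand-in for the wildcard rules:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    Implements the standard '+' (single-level) and '#' (multi-level,
    last position only) wildcard semantics from the MQTT specification.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            # '#' is only valid as the final filter level.
            return i == len(f_parts) - 1
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)
```

For example, a gateway subscribed to `sensors/#` receives every sensor reading regardless of topic depth, while `sensors/+/temp` receives only the temperature sub-topic of each sensor.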

Posted 1 week ago


5.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BCA, BSc
Service Line: Data & Analytics Unit

Responsibilities:
- Set up sensible permission defaults for seamless access management of cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
- Understanding of best practices for security, access management, hybrid cloud, etc.

Additional Responsibilities:
- Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
- Templatise infrastructure-as-code creation with Pulumi and Terraform, using advanced techniques for modularisation.
- Extend existing Helm charts for repetitive work and orchestration, and write Terraform/Pulumi creation code.
- Automate complicated manual infrastructure setup with Ansible, Chef, etc.

Certifications:
- AWS Certified Advanced Networking - Specialty
- AWS Certified DevOps Engineer - Professional (DOP-C02)

Technical and Professional Requirements:
- Should be able to write Bash scripts that monitor existing running infrastructure and report out.
- Should be able to extend existing IaC code in Pulumi TypeScript.
- Ability to debug and fix Kubernetes deployment failures and network connectivity, ingress, and volume issues with kubectl.
- Good knowledge of networking basics to debug basic networking and connectivity issues with tools like dig, bash, ping, curl, ssh, etc.
- Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and the ability to create dashboards and alerts when and where needed.
- Knowledge of AWS VPC, subnetting, ALB/NLB, and egress/ingress.
- Knowledge of disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager).

Preferred Skills: Technology-Cloud Platform-Amazon Webservices DevOps-AWS DevOps
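The monitoring-and-report scripting described above (the role mentions Bash plus tools like ping and curl) can be sketched with a standard-library TCP connectivity check; the hosts and ports you would pass in are hypothetical:

```python
import socket

def check_endpoints(endpoints, timeout=2.0):
    """Return {(host, port): bool} reachability for each TCP endpoint.

    A stdlib stand-in for the kind of monitoring/report script the role
    describes; a Bash equivalent would loop over curl or nc.
    """
    results = {}
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[(host, port)] = True
        except OSError:
            # Refused, timed out, or unresolvable all count as down.
            results[(host, port)] = False
    return results
```

A cron job would feed the False entries into a dashboard or alert channel.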

Posted 1 week ago


5.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Educational Requirements: Master of Engineering, Master of Science, Master of Technology, Bachelor of Comp. Applications, Bachelor of Science, Bachelor of Engineering, Bachelor of Technology
Service Line: Cloud & Infrastructure Services

Responsibilities: As an Infra AI Automation SME, you will work on CI/CD implementations and automation through Ansible, Terraform, Puppet, Kubernetes, and PowerShell, hands-on with on-prem and cloud infrastructure technologies.
- Design, develop, and implement Terraform configurations for provisioning and managing cloud infrastructure across various platforms (AWS, Azure, GCP, etc.).
- Create modular and reusable Terraform modules for consistent and efficient infrastructure management.
- Automate infrastructure deployments and lifecycle management, including provisioning, configuration, and updates.
- Integrate Terraform with CI/CD pipelines for seamless and continuous deployments.
- Troubleshoot and debug Terraform code to identify and resolve issues.
- Stay up to date with the latest Terraform features and best practices.
- Collaborate with developers, DevOps engineers, and other stakeholders to understand requirements and design optimal infrastructure solutions.
- Experience with Terraform, including writing and deploying infrastructure configurations.
- Strong understanding of IaC principles and best practices.
- Proficiency in scripting languages like Python or Bash for writing Terraform modules and custom functionality.
- Document Terraform configurations for clarity and maintainability.

Additional Responsibilities:
- Good communication skills.
- Good analytical and problem-solving skills.

Technical and Professional Requirements:
- At least 6+ years of experience with infrastructure automation tools.
- Proven experience in designing, developing, and implementing automation solutions for infrastructure tasks.
- Strong proficiency in one or more scripting languages relevant to infrastructure automation (e.g., Python, Bash, PowerShell).
- Hands-on experience with infrastructure-as-code (IaC) tools such as Ansible or Terraform.
- Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet) is highly desirable.
- Good working knowledge of Docker or Kubernetes.
- Understanding of IT infrastructure components (servers, networking, storage, cloud).
- Experience with integrating automation with existing IT management tools.

Preferred Skills: Technology-Microsoft Technologies-Windows PowerShell; Technology-Infra_ToolAdministration-Infra_ToolAdministration-Others; Technology-DevOps-Continuous delivery - Environment management and provisioning-Ansible; Technology-Container Platform-Docker; Technology-Container Platform-Kubernetes
Generic Skills: Technology-Cloud Platform-AWS Core services; Technology-Cloud Platform-Azure Core services
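One way to picture the "Python or Bash for writing Terraform modules and custom functionality" work above is programmatic templating of per-environment variable files. This is a hypothetical sketch (the variable names are invented, and real pipelines often prefer native Terraform modules and workspaces instead):

```python
def render_tfvars(variables):
    """Render a Python dict as Terraform tfvars-style assignments.

    Keeps per-environment values in code or data rather than hand-edited
    files. Supports strings, numbers, booleans, and flat lists.
    """
    def fmt(value):
        # bool must be checked before int: isinstance(True, int) is True.
        if isinstance(value, bool):
            return "true" if value else "false"
        if isinstance(value, (int, float)):
            return str(value)
        if isinstance(value, list):
            return "[" + ", ".join(fmt(v) for v in value) + "]"
        return f'"{value}"'
    return "\n".join(f"{k} = {fmt(v)}" for k, v in variables.items())
```

The rendered string would be written to, say, `prod.auto.tfvars` by a CI step before `terraform plan` runs.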

Posted 1 week ago


4.0 - 9.0 years

16 - 20 Lacs

Bengaluru

Work from Office

Requirements:
- 10-12 years of experience in supporting IT infrastructure.
- Good understanding of operating systems (Windows & Linux), cloud, and infrastructure components like backup/storage and network.
- Experience in designing, building/setting up, and managing end-to-end infrastructure components.
- Cloud Architect certification is preferred, or a TOGAF certified architect.
- Hands-on knowledge of working with application teams and building landing zones per business/application requirements.
- Ability to write technical solutions, high-level architecture designs, and RFP responses.
- Good understanding of application functionality and some development experience for automation, e.g., shell scripting, Terraform/Ansible.
- At least 7 years of hands-on experience with the Red Hat OpenShift platform (RHOS) and IT operational experience in a global enterprise environment.

Cloud Architect / Lead Cloud Architect for RHOS. Responsibilities:
- Consult with clients and propose architectural solutions to help move and improve infrastructure from on-premise to cloud, or to optimize cloud spend from one public cloud to another.
- Be the first to experiment with new-age cloud offerings; help define best practice as a thought leader for cloud, automation & DevOps; be a solution visionary and technology expert across multiple channels.
- Good understanding of cloud design principles, sizing, multi-zone/cluster setup, resiliency, and DR design.
- TOGAF or equivalent certifications.
- Use your experience in AWS to build hybrid-cloud solutions for customers. Multi-cloud experience is a must.
- Provide leadership to project teams and facilitate the definition of project deliverables around core cloud-based technology and methods.

Technical Skills:
- Proficiency in Linux, containerization, and Kubernetes.
- Experience with Red Hat OpenShift and its ecosystem, including Red Hat OpenShift Origin, Red Hat OpenShift Enterprise, and Red Hat OpenShift Online.
- Certifications: Red Hat Certified Engineer (RHCE) or Red Hat Certified Specialist in OpenShift Administration (RHOSA) certification is highly desirable.
- Cloud computing: Experience with cloud computing concepts, including scalability, reliability, and security.
- Scripting: Strong scripting skills in languages such as Python, Bash, or Ruby.
- Troubleshooting: Strong troubleshooting and analytical skills to identify and resolve complex issues.
- Communication: Excellent communication and collaboration skills to work effectively with development teams.
- Linux administration: Experience with Linux administration, including configuration, deployment, and troubleshooting.
- Containerization: Experience with containerization, including Docker, Kubernetes, and Red Hat OpenShift.
- VMware administration: Experience with VMware administration, including vSphere, vSAN, and NSX.
- VMware certification: VMware VCP or VCDX certification is highly desirable.

Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Linux, VMware. Preferred technical and professional experience: Kubernetes, OpenShift.

Posted 1 week ago


2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

AIX is IBM's leading open-standards-based UNIX operating system, providing a scalable, secure, and robust infrastructure solution for enterprise customers. As an AIX backend developer, you will be responsible for the design, development, and support of new feature functions, enabling new features for image management in the AIX operating system. You will work with product managers, senior leaders, and customers to understand business needs and implement them in AIX, adhere to the AIX development process, and ensure successful delivery for the respective component.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
- 5-10 years of experience in system-level software development or build engineering.
- Strong proficiency in C or C++, with a solid understanding of compilation processes, linking, and runtime behavior.
- Hands-on experience with modern compiler toolchains, particularly LLVM-based compilers, and familiarity with debugging tools like GDB.
- Experience working with large, complex codebases and optimizing build performance and reliability.
- Proficiency with build systems and tools such as Make, CMake, and Ninja, and with scripting languages (e.g., Bash, Python).
- Familiarity with enterprise operating systems such as AIX, Unix, and Linux.
- Ability to troubleshoot and resolve build and compilation issues across multiple platforms and architectures.
- Strong problem-solving skills and attention to detail in diagnosing low-level system or toolchain issues.
- Proven ability to collaborate effectively within globally distributed teams.
- Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.

Preferred technical and professional experience:
- Experience adapting existing codebases to work with evolving compiler technologies and toolchains.
- Exposure to cross-compilation environments and multi-target build configurations.
- Demonstrated adaptability and eagerness to learn new tools, frameworks, and technologies.
- Flexibility to contribute across development, testing, and support roles as needed.
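Build systems like Make resolve target dependencies into a valid build order; at its core that is a topological sort. A small sketch using the standard library (the target names are hypothetical):

```python
from graphlib import TopologicalSorter

def build_order(dependencies):
    """Return a valid build order for Make-style target dependencies.

    dependencies maps each target to the set of targets it depends on;
    every dependency is ordered before the targets that need it.
    Raises graphlib.CycleError on circular dependencies, just as Make
    reports a dependency cycle.
    """
    return list(TopologicalSorter(dependencies).static_order())
```

For example, an executable depending on a library that depends on an object file must be built last; any order the function returns respects that.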

Posted 1 week ago


2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

A hands-on engineering position responsible for designing, automating, and maintaining robust build systems and deployment pipelines for AI/ML components, with direct development responsibilities in C++ and Python. The role supports both model training infrastructure and high-performance inference systems.

Responsibilities:
- Design and implement robust build automation systems that support large, distributed AI/C++/Python codebases.
- Develop tools and scripts that enable developers and researchers to rapidly iterate, test, and deploy across diverse environments.
- Integrate C++ components with Python-based AI workflows, ensuring compatibility, performance, and maintainability.
- Lead the creation of portable, reproducible development environments, ensuring parity between development and production.
- Maintain and extend CI/CD pipelines for Linux and z/OS, implementing best practices in automated testing, artifact management, and release validation.
- Collaborate with cross-functional teams, including AI researchers, system architects, and mainframe engineers, to align infrastructure with strategic goals.
- Proactively monitor and improve build performance, automation coverage, and system reliability.
- Contribute to internal documentation, process improvements, and knowledge sharing to scale your impact across teams.

Required education: Bachelor's Degree. Preferred education: Bachelor's Degree.

Required technical and professional expertise:
- Strong programming skills in C++ and Python, with a deep understanding of both compiled and interpreted language paradigms.
- Hands-on experience building and maintaining complex automation pipelines (CI/CD) using tools like Jenkins or GitLab CI.
- In-depth experience with build tools and systems such as CMake, Make, Meson, or Ninja, including custom script development and cross-compilation.
- Experience with multi-platform development, specifically on Linux and IBM z/OS environments, including an understanding of their respective toolchains and constraints.
- Experience integrating native C++ code with Python, leveraging pybind11, Cython, or similar tools for high-performance interoperability.
- Proven ability to troubleshoot and resolve build-time, runtime, and integration issues in large-scale, multi-component systems.
- Comfortable with shell scripting (Bash, Zsh, etc.) and system-level operations.
- Familiarity with containerization technologies like Docker for development and deployment environments.

Preferred technical and professional experience:
- Working knowledge of AI/ML frameworks such as PyTorch, TensorFlow, or ONNX, including an understanding of how they integrate into production environments.
- Experience developing or maintaining software on IBM z/OS mainframe systems; familiarity with z/OS build and packaging workflows.
- Understanding of system performance tuning, especially in high-throughput compute or I/O environments (e.g., large model training or inference).
- Knowledge of GPU computing and low-level profiling/debugging tools.
- Experience managing long-lifecycle enterprise systems and ensuring compatibility across releases and deployments.
- Background contributing to or maintaining open-source projects in the infrastructure, DevOps, or AI tooling space.
- Proficiency in distributed systems, microservice architecture, and REST APIs.
- Experience collaborating with cross-functional teams to integrate MLOps pipelines with CI/CD tools, ensuring seamless integration of AI/ML models into production workflows.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Demonstrated excellence in interpersonal skills, fostering collaboration across diverse teams.
- Proven track record of ensuring compliance with industry best practices and standards in AI engineering.
- High standards of code quality, performance, and security in AI projects.
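Reproducible builds and artifact management, as described in this role, often hinge on content hashing: if the digest of the source tree matches the digest recorded for the last artifact, the cached build can be reused. A hypothetical sketch (the file patterns are illustrative):

```python
import hashlib
from pathlib import Path

def tree_digest(root, patterns=("*.cpp", "*.h", "*.py")):
    """Hash a source tree's contents to decide whether a rebuild is needed.

    Hashes both relative paths and file bytes, in sorted order, so the
    digest is stable across machines and directory listing order.
    """
    h = hashlib.sha256()
    for pattern in patterns:
        for path in sorted(Path(root).rglob(pattern)):
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()
```

A CI step would compare this digest with the one stored alongside the previous artifact and skip the expensive compile when they match.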

Posted 1 week ago


1.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

The Technical Support Engineer role on the z/OS support team involves supporting components of the z/OS operating system on the mainframe. This position will help design and execute productivity aids required for the infrastructure. Some programming skills and a basic engineering understanding are required; computer science skills are highly desired. The Technical Support Engineer will experience firsthand what a mainframe customer expects from IBM, as well as the technical complexity of this machine.

Required education: Bachelor's Degree.

Required technical and professional expertise:
- Passion to pursue a career path in Computer Engineering or Computer Science.
- Fundamental education in software design and/or test, and in computer architecture.
- Knowledge of any programming language: C, C++, Java, Assembly.
- Good debugging skills.
- Scripting knowledge: Python, JavaScript, Perl, Bash, etc.
- Strong communication skills.

Preferred technical and professional experience:
- Development knowledge of Unix/Linux kernel functionality.
- Knowledge of LAN drivers.
- FPGA experience.
- Experience in embedded systems development.
- Knowledge of web and mobile application development tools (Git/GitHub, IntelliJ, etc.).

Posted 1 week ago


4.0 - 8.0 years

0 - 0 Lacs

Chennai

On-site

Job Description: DevOps Engineer, On-Premise Kubernetes (K8s)
Location: Chennai | Experience: 4-8 years | Employment Type: Full-Time | Work Mode: Work from office | Notice period: 15 days to immediate joiners

Key Responsibilities:

Kubernetes (K8s) On-Premise Setup and Management:
- Design, deploy, and manage highly available Kubernetes clusters in an on-premise environment.
- Handle day-to-day Kubernetes operations, including upgrades, patches, node scaling, storage provisioning, and networking.
- Develop and maintain Helm charts, operators, and K8s manifests for deployments.

DevOps Tools & CI/CD:
- Create and maintain CI/CD pipelines using TeamCity and GitLab CI/CD for seamless application delivery.
- Integrate automated build, test, and deployment stages with artifact management tools like Nexus.
- Ensure robust rollback strategies and canary/blue-green deployments.

Configuration Management & Automation:
- Use Ansible for server provisioning, configuration automation, and environment orchestration.
- Write and maintain reusable Ansible playbooks and roles for application and infrastructure deployment.

Monitoring & Alerting:
- Set up and manage monitoring and alerting systems for infrastructure and application health using tools like Prometheus, Grafana, Nagios, or similar.
- Perform proactive issue identification and root cause analysis, and implement corrective actions.

Scripting & Tooling:
- Develop internal tools, automation scripts, and monitoring utilities using Python and Bash.
- Manage cron jobs, scheduled tasks, and health-check tools to improve platform reliability.

Source Code & Artifact Management:
- Manage and secure repositories in GitLab; enforce branching strategies and code review policies.
- Store and manage artifacts, Docker images, and packages in Nexus Repository Manager.

Required Skills & Qualifications:
- Strong hands-on experience managing on-prem Kubernetes clusters.
- Deep understanding of DevOps principles, infrastructure as code (IaC), and automation.
- Experience with TeamCity/Jenkins, Ansible, GitLab, Nexus, and monitoring tools.
- Proficiency in Python and Bash scripting for automation and tooling.
- Solid understanding of Linux system administration, including networking, file systems, services, and security.
- Familiarity with containerization using Docker and container orchestration using Kubernetes.
- Working knowledge of networking, load balancing, and security policies.
- Good understanding of the SDLC, CI/CD pipelines, and agile delivery processes.

Good to Have:
- Exposure to service mesh (Istio/Linkerd), ingress controllers, and K8s operators.
- Experience with secret management tools (e.g., Vault, Sealed Secrets).
- Understanding of RBAC, IAM, and audit logging in on-prem Kubernetes environments.
- Knowledge of the observability stack: Prometheus, Alertmanager, Elasticsearch, Fluentd, Kibana (EFK stack).
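The rollback strategies and blue-green deployments this listing mentions reduce, at their core, to an atomic routing flip that remembers the previous target. A minimal, illustrative model (the blue/green environment names are the conventional ones, not from the listing):

```python
class BlueGreenRouter:
    """Minimal model of a blue/green deployment switch with rollback.

    In a real cluster the flip would update an ingress or service
    selector; here it is just state, to show the control flow.
    """
    def __init__(self):
        self.live = "blue"
        self.previous = None

    def promote(self):
        """Point traffic at the idle colour, remembering the old one."""
        self.previous = self.live
        self.live = "green" if self.live == "blue" else "blue"

    def rollback(self):
        """Revert to the previously live colour, if any."""
        if self.previous is None:
            raise RuntimeError("nothing to roll back to")
        self.live, self.previous = self.previous, None
```

Because the old environment keeps running until the next promote, rollback is a single pointer swap rather than a redeploy.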

Posted 1 week ago


5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

We are seeking a skilled Azure DevOps Engineer to design, implement, and manage CI/CD pipelines, infrastructure automation, and the secure deployment of microservices and AI components. The ideal candidate will ensure seamless integration across multiple Azure services, manage environments across dev/test/prod, and work closely with AI, frontend, and backend teams to enable iterative delivery and high availability.

Key Responsibilities:
- Set up and manage CI/CD pipelines for backend APIs, Azure Functions, web frontends, and AI components using Azure DevOps.
- Manage infrastructure provisioning using Infrastructure-as-Code (IaC) tools like ARM templates or Terraform.
- Maintain environments across dev, QA, UAT, and production with secure, role-based access.
- Configure and manage Azure services, including: Azure OpenAI Service; Azure Functions & Logic Apps; Azure Key Vault; Azure Storage and App Services; Azure SQL/NoSQL stores; Azure Monitor and Application Insights.
- Implement secure secrets and API key management through Azure Key Vault.
- Collaborate with developers to automate deployments, testing, and rollback strategies.
- Monitor and optimize infrastructure cost, performance, and uptime.
- Ensure code repositories (Git), branches, and build policies are aligned with project governance.
- Support data connectivity and scheduled ingestion from external APIs and internal sources (Power BI, OLAP cubes).
- Implement logging, alerting, and monitoring using Azure Monitor and Log Analytics.

Requirements:
- 5+ years of experience in DevOps or cloud engineering, with at least 2+ years on Azure.
- Strong experience with Azure DevOps pipelines, YAML-based workflows, and Git integration.
- Proficiency in Azure services like Functions, Logic Apps, Key Vault, Monitor, AD, etc.
- Experience setting up and managing LLM/AI workloads on Azure (e.g., OpenAI integration, vector databases).
- Strong scripting skills (PowerShell, Bash, Python, etc.).
- Familiarity with containerization (Docker) and orchestration (Kubernetes is a plus).
- Hands-on experience with IaC tools such as Terraform or Bicep.
- Knowledge of security and access control best practices in Azure environments.
- Ability to work in agile teams and collaborate with developers, data engineers, and QA.

Good to have:
- Azure certifications: AZ-104, AZ-400, or AZ-305 preferred.
- Experience with hybrid integration scenarios (on-prem to cloud).
- Exposure to Power BI XMLA connectivity, OLAP cube gateways, or financial systems is a plus.
- Familiarity with cost management and FinOps practices on Azure.
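Secret management as described above also implies keeping secrets out of logs and alerts. One common complementary technique is scrubbing log lines before they reach a sink; this sketch is hypothetical (the key-name patterns are invented, and a real pipeline would additionally mask the exact values it fetched from Key Vault):

```python
import re

# Hypothetical secret-bearing key names; extend to match your naming.
_SECRET = re.compile(r"(?i)\b(api[_-]?key|password|secret)(\s*[=:]\s*)\S+")

def scrub(line):
    """Mask the value of secret-bearing assignments in a log line."""
    return _SECRET.sub(r"\1\2***", line)
```

Hooked into a logging formatter, this keeps an accidental `password=...` out of Log Analytics.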

Posted 1 week ago


10.0 - 15.0 years

8 - 12 Lacs

Greater Noida

Work from Office

Job Description: Build, provision and maintain Linux Operating system on bare-metal Cisco UCS blades for Oracle RAC infrastructure. Deploy and Manage Linux & Windows operating system hosts on VMware and RHV/OVM/oVirt/KVM infrastructure. Deploy VMs on cloud infrastructure like OCI/Azure and apply SEI compliance & hardening customizations. Create Terraform Code to deploy VMs and application in OCI/Azure/AWS cloud and on-prem VMWare/KVM environment Design, implement and manage the lifecycle of automation/orchestration software like Ansible, Terraform, Jenkins, Gitlab Strong automation expertise with scripting languages like bash, perl, Ansible, Python Act as last level of escalation on the Technical services team for all System related issues Work with other infrastructure engineers to resolve Networking and Storage subsystem related issues Analyze production environment to detect critical deficiencies and recommend solutions for improvement Ensure that the operating system adheres to SEI security and compliance requirements Monitor the infrastructure and respond to alerts and automate monitoring tools. Skills Bachelor’s Degree in Computer Science in related discipline or equivalent work experience. A minimum of 5 years supporting multiple environments such as production, pre-production and development. This role is a very hands-on role with shell, Python scripting and automation of Infrastructure using Ansible and Terraform. Working experience with Git for code management and follow the DevOps principles. Follow the infrastructure as code principal and develop an automation to daily and repeatable setup and configuration. 
Install and configure Linux on UCS blades and on VMware and KVM/oVirt-based virtualization platforms. Perform bare-metal installation of UCS nodes, with a good understanding of diagnosing hardware-related issues and working with the datacenter team to address them. Good understanding of the TCP/IP stack, network VLANs and subnets; troubleshoot firewall and network issues occurring in the environment and work with the network team to resolve them. Good understanding of SAN and NAS environments with multipath/PowerPath support. Support the Oracle RAC database environment. Work with L1 and L2 teams on troubleshooting.
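The "strong automation expertise with scripting languages like Bash" above usually means small defensive utilities around storage and hardware checks. A minimal, hypothetical sketch: sample `multipath -ll`-style output is inlined, since the real command needs live SAN hardware, and the path threshold and parsing are assumptions for illustration.

```shell
#!/usr/bin/env bash
# check_multipath.sh -- warn about LUNs with fewer active paths than expected.
# Hypothetical helper; sample input mimics `multipath -ll` output.
set -euo pipefail

min_paths=${1:-2}   # assumed default: every LUN should have >= 2 active paths

count_paths() {
    awk -v min="$min_paths" '
        /^mpath/       { dev = $1; paths[dev] = 0 }  # device header line
        /active ready/ { paths[dev]++ }              # one healthy path
        END {
            for (d in paths)
                if (paths[d] < min)
                    printf "WARN %s has %d path(s)\n", d, paths[d]
        }'
}

count_paths <<'EOF'
mpatha (36000000000000001) dm-0
  `- 1:0:0:1 sdb 8:16 active ready running
  `- 2:0:0:1 sdc 8:32 active ready running
mpathb (36000000000000002) dm-1
  `- 1:0:0:2 sdd 8:48 active ready running
EOF
# prints: WARN mpathb has 1 path(s)
```

On a real host you would replace the here-document with a pipe from `multipath -ll`.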

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai

Hybrid

This role supports deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. The DevOps engineer will also help plan projects and be involved in project management decisions. ESSENTIAL RESPONSIBILITIES Building and implementing new development tools and infrastructure Understanding the needs of stakeholders and conveying them to developers Working on ways to automate and improve development and release processes Testing and examining code written by others and analyzing results Ensuring that systems are safe and secure against cybersecurity threats Identifying technical problems and developing software updates and fixes Working with software developers and software engineers to ensure that development follows established processes and works as intended Planning projects and being involved in project management decisions. Required Bachelor's degree in Engineering, Master of Computer Applications or a related field 6-10 years of experience as a DevOps Engineer Experience with DevOps tools and technologies: deployment tools like UCD and Jenkins; GitLab; build tools like Ant and Maven; binary repository tools like Artifactory; test automation tools like JUnit, NUnit, Selenium, JMeter and LoadRunner; code analysis tools like SonarQube; deployment automation tools like Ansible; OS experience on Mainframe/AS400/Micro Focus, Windows and Linux; scripting knowledge in Python, Shell, and PowerShell. Excellent communication skills. Interested candidates, please share your updated resume to Femina.periyanayagam@thryvedigital.com
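As a concrete example of the shell-scripting skills listed above, here is a small Bash gate that lets a release pipeline proceed only on a well-formed version tag. The `vMAJOR.MINOR.PATCH` convention is an assumption for illustration, not something the posting specifies.

```shell
#!/usr/bin/env bash
# Gate a deployment on a well-formed release tag (assumed vMAJOR.MINOR.PATCH).
is_release_tag() {
    [[ $1 =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]
}

for tag in v1.4.0 v2.0 release-3 v0.9.12; do
    if is_release_tag "$tag"; then
        echo "deploy $tag"
    else
        echo "skip $tag"
    fi
done
```

The loop prints `deploy v1.4.0`, `skip v2.0`, `skip release-3`, `deploy v0.9.12`; a CI job would call `is_release_tag` on the pushed tag instead.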

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Grade Level (for internal use): 09 The Role: Platform Engineer Department overview PVR DevOps is a global team that provides specialized technical builds across a suite of products. DevOps members work closely with the Development, Testing and Client Services teams to build and develop applications using the latest technologies to ensure the highest availability and resilience of all services. Our work helps ensure that PVR continues to provide high quality service and maintain client satisfaction. Position Summary S&P Global is seeking a highly motivated engineer to join our PVR DevOps team in Noida. DevOps is a rapidly growing team at the heart of ensuring the availability and correct operation of our valuations, market and trade data applications. The team prides itself on its flexibility and technical diversity to maintain service availability and contribute improvements through design and development. Duties & accountabilities The role of Principal DevOps Engineer is primarily focused on building functional systems that improve our customer experience. Responsibilities include: Creating infrastructure and environments to support our platforms and applications using Terraform and related technologies to ensure all our environments are controlled and consistent. Implementing DevOps technologies and processes, e.g. containerisation, CI/CD, infrastructure as code, metrics, monitoring, etc. Automating wherever possible Supporting, monitoring, maintaining and improving our infrastructure and the live running of our applications Maintaining the health of cloud accounts for security, cost and best practices Providing assistance to other functional areas such as development, test and client services.
Knowledge, Skills & Experience Strong background with at least 3 to 5 years of experience in Linux/Unix administration across IaaS, PaaS and SaaS models Deployment, maintenance and support of enterprise applications on AWS, including (but not limited to) Route53, ELB, VPC, EC2, S3, ECS and SQS Good understanding of Terraform and similar infrastructure-as-code technologies Strong experience with SQL and NoSQL databases such as MySQL, PostgreSQL, DB2, MongoDB and DynamoDB Experience with automation/configuration management using toolsets such as Chef, Puppet or equivalent Experience of enterprise systems deployed as microservices through code pipelines utilizing containerization Working knowledge of and ability to write scripts in languages including Bash and Python, and an ability to understand Java, JavaScript and PHP Personal competencies Personal Impact Confident individual able to represent the team at various levels Strong analytical and problem-solving skills Demonstrated ability to work independently with minimal supervision Highly organised with very good attention to detail Takes ownership of issues and drives through to resolution Flexible and willing to adapt to changing situations in a fast-moving environment Communication Demonstrates a global mindset, respects cultural differences and is open to new ideas and approaches Able to build relationships with all teams, identifying and focusing on their needs Ability to communicate effectively at business and technical levels is essential Experience working in a global team Teamwork An effective team player and strong collaborator across technology and all relevant areas of the business Enthusiastic with a drive to succeed Thrives in a pressurized environment with a can-do attitude Must be able to work under own initiative
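A small illustration of the Bash scripting this role asks for: a generic retry wrapper of the kind often placed around eventually-consistent cloud CLI calls. Everything here (the attempt count, the linear backoff, the demo `flaky` command and marker file) is an assumption for illustration.

```shell
#!/usr/bin/env bash
# retry -- re-run a command a few times with a short backoff; a common
# wrapper around cloud CLI calls whose APIs are eventually consistent.
retry() {
    local attempts=$1; shift
    local n=1
    until "$@"; do
        if (( n >= attempts )); then
            echo "retry: giving up after $n attempts" >&2
            return 1
        fi
        sleep "$n"      # linear backoff; exponential is also common
        (( n++ ))
    done
}

# Demo: fails on the first call, succeeds once a marker file exists.
flaky() {
    if [ -e /tmp/retry_demo_marker ]; then
        echo "ok"
    else
        touch /tmp/retry_demo_marker
        return 1
    fi
}

rm -f /tmp/retry_demo_marker
retry 3 flaky            # prints "ok" on the second attempt
rm -f /tmp/retry_demo_marker
```

In a pipeline this would wrap a real command, e.g. `retry 5 aws s3 ls "$bucket"`.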

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Noida

Work from Office

We are looking for a Senior DevOps Engineer to join our DevOps team in India. This is an amazing opportunity to work on key products in the Intellectual Property space, leveraging tools like Kubernetes, Terraform, Datadog, Jenkins, and a wide range of AWS services. We have a great skill set in Kubernetes, AWS, CI/CD automation, and monitoring, and we would love to speak with you if you have experience with Kubernetes, Terraform, and pipelines. About You experience, education, skills, and accomplishments Bachelor's degree in Computer Science, Engineering, or equivalent experience. 5+ years of overall experience in development, with 3+ years in a DevOps-focused role. 2+ years of AWS experience managing services like RDS, EC2, IAM, Route53, VPC, EKS, Beanstalk, WAF, CloudFront, and Lambda. Hands-on experience with Kubernetes, Docker, and Terraform, including writing Dockerfiles and Kubernetes manifests. Experience building pipelines using Jenkins, Bamboo, or similar tools. Hands-on experience working with monitoring tools. Scripting knowledge in Linux/Bash. It would be great if you also have: Experience with Helm and other Kubernetes tools. Python programming experience. What will you be doing in this role? Upgrade and enhance Kubernetes clusters. Troubleshoot environment-related issues alongside development teams. Extend and improve monitoring capabilities. Develop and optimize CI/CD pipelines. Write infrastructure as code using Terraform. Share innovative ideas and contribute to team improvement. About the Team Our team supports several products within the Intellectual Property space, built on technologies like Kubernetes (EKS), Jenkins, AWS services (Beanstalk, IAM, EC2, RDS, Route53, Lambda, CloudFront, WAF, S3), Bamboo, and Kong. We work closely with Developers and QA to deploy and improve systems, and bring in-depth knowledge of AWS, networking, monitoring, security, and infrastructure configuration.
Hours of Work This is a full-time opportunity with Clarivate: 9 hours per day, including a 1-hour lunch break.
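For a sense of the day-to-day scripting behind "upgrade and enhance Kubernetes clusters", here is a tiny hypothetical Bash helper that picks the highest version from a list of image or chart tags; it assumes GNU `sort -V` (version sort) is available.

```shell
#!/usr/bin/env bash
# latest_tag -- pick the highest version from a list of image/chart tags,
# e.g. when scripting a cluster or Helm upgrade. Hypothetical helper.
latest_tag() {
    printf '%s\n' "$@" | sort -V | tail -n 1
}

latest_tag 1.27.3 1.9.8 1.27.10 1.26.15   # prints 1.27.10
```

In practice the argument list would come from a registry query rather than being hard-coded.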

Posted 1 week ago

Apply

3.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

We are looking for an Infrastructure Development Engineer to join our team. This is an exciting opportunity to work on Clarivate's Ex Libris project. This position will enable you to participate in a highly professional infrastructure engineering team that collaborates with the other engineering and operations teams to shape our emerging IT and cloud management solutions using innovative technologies. About you At least 3 years of experience with system engineering in a cloud environment for tools, infrastructure and automation development Proven development experience with Python and Bash scripting (3 years' experience). Strong background in Linux/Unix administration and systems at scale (RedHat/CentOS/Oracle Linux) (3 years' experience). Experience with containers and Kubernetes at various levels: system administration and operations of multiple clusters (2 years' experience). Experience with DevOps and CI/CD concepts and tools (2 years' experience). Understanding of systems architecture and infrastructure: networking, security, storage and systems (3 years' experience). It would be great if you also had... Experience with configuration management using Ansible/Terraform Experience with log management tools using Elasticsearch Experience with virtualization platforms: RedHat KVM/VMware vSphere Experience with version control tools based on Git: GitHub/GitLab Experience with monitoring tools and alerting systems based on Prometheus and Grafana Knowledge of secrets management tools: HashiCorp Vault What will you be doing in this role? Design and implement pragmatic, scalable solutions, favoring simplicity and flexibility to meet business needs. Research, evaluate, and recommend standards, tools, technologies, and services to support infrastructure strategy. Ensure application and infrastructure architectures are stable, highly available, secure, and compliant with internal policies and external regulations.
Design, develop, and maintain automation solutions that support and optimize on-premises cloud infrastructure. Build, manage, and enhance CI/CD pipelines to support reliable, repeatable, and secure software delivery processes. Administer and support private cloud management tools, ensuring efficient orchestration, provisioning, and lifecycle management of infrastructure resources.

Posted 1 week ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Hyderabad, Bengaluru, Secunderabad

Work from Office

We are looking for a Senior DevOps Engineer to join our Life Sciences & Healthcare DevOps team. This is an exciting opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you love coding in Python or any scripting language, have experience with Linux, and ideally have worked in a cloud environment, we'd love to hear from you! We specialize in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have experience in these areas, we'd be eager to connect with you. About You experience, education, skills, and accomplishments At least 7+ years of professional software development experience and 5+ years as a DevOps Engineer or with similar skill sets, with experience in various CI/CD and configuration management tools, e.g. Jenkins, Maven, Gradle, Spinnaker, Docker, Packer, Ansible, CloudFormation, Terraform, or similar CI/CD orchestrator tools. At least 3+ years of AWS experience managing resources in some subset of the following services: S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route53, VPC, CloudFront, Glue and Lambda. 5+ years of experience with Bash/Python scripting. Wide knowledge of operating system administration, programming languages, cloud platform deployment, and networking protocols Be on call as needed for critical production issues. Good understanding of the SDLC, patching, releases, and basic systems administration activities It would be great if you also had AWS Solutions Architect certifications and Python programming experience. What will you be doing in this role? Design, develop and maintain the product's cloud infrastructure architecture, including microservices, as well as developing infrastructure-as-code and automated scripts meant for building or deploying workloads in various environments through CI/CD pipelines.
Collaborate with the rest of the technology engineering team, the cloud operations team and application teams to provide end-to-end infrastructure setup. Design and deploy secure, resilient, and scalable infrastructure as code per our developer requirements while upholding the InfoSec and infrastructure guardrails through code. Keep up with industry best practices, trends, and standards; identify automation opportunities; and design and develop automation solutions that improve operations, efficiency, security, and visibility. Ownership and accountability for the performance, availability, security, and reliability of the product(s) running across public cloud and multiple regions worldwide. Document solutions and maintain technical specifications. Product you will be developing The products rely on container orchestration (AWS ECS, EKS), Jenkins, various AWS services (such as OpenSearch, S3, IAM, EC2, RDS, VPC, Route53, Lambda, CloudFront), Databricks, Datadog and Terraform, and you will be working to support the development teams that build them. About the Team The Life Sciences & Healthcare Content DevOps team mainly focuses on DevOps operations on production infrastructure related to Life Sciences & Healthcare Content products. Our team consists of five members and reports to the DevOps Manager. We provide DevOps support for almost 40+ different application products internal to Clarivate, which are the source for customer-facing products. The team is also responsible for the change process on the production environment, incident management and monitoring, and handles customer-raised and internal user service requests. Hours of Work Shift timing 12 PM to 9 PM. Must provide on-call support during non-business hours per week based on team bandwidth

Posted 1 week ago

Apply

4.0 - 9.0 years

8 - 12 Lacs

Mumbai

Work from Office

We are looking for Role: AWS Infrastructure Engineer Experience: 4+ yrs Job Location: Bavdhan, Pune Work Mode: Remote Job Description: Skilled AWS Infrastructure Engineer with expertise in AWS services, Linux, and Windows systems. The ideal candidate will design, deploy, and manage scalable, secure cloud infrastructure while supporting hybrid environments. Key Responsibilities: Hands-on experience with multi-cloud environments (e.g., Azure, AWS, GCP) Design and maintain AWS infrastructure (EC2, S3, VPC, RDS, IAM, Lambda and other AWS services). Implement security best practices (IAM, GuardDuty, Security Hub, WAF). Configure and troubleshoot AWS networking, hybrid connectivity and URL filtering solutions (VPC, TGW, Route 53, VPNs, Direct Connect). Experience managing physical firewalls (Palo Alto, Cisco, etc.) Manage, troubleshoot, configure and optimize services like Apache, NGINX, and MySQL/PostgreSQL on Linux/Windows. Ensure Linux/Windows server compliance with patch management and security updates. Provide L2/L3 support for Linux and Windows systems, ensuring minimal downtime and quick resolution of incidents Collaborate with DevOps, application, and database teams to ensure seamless integration of infrastructure solutions Automate tasks using Terraform, CloudFormation, or scripting (Bash, Python). Monitor and optimize cloud resources using CloudWatch, Trusted Advisor, and Cost Explorer Requirements: 4+ years of AWS experience and system administration in Linux & Windows. Proficiency in AWS networking, security, and automation tools. Certifications: AWS Solutions Architect (required), RHCSA/MCSE (preferred). Strong communication and problem-solving skills Web servers: Apache2, NGINX, IIS; OS: Ubuntu, Windows Server; Certification: AWS Solutions Architect, Associate level -- Muugddha Vanjarii 7822804824 mugdha.vanjari@sunbrilotechnologies.com
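To illustrate the "scripting (Bash, Python)" automation the posting mentions, here is a sketch of a helper that keeps generated AWS resource names lowercase and DNS-safe. The `<env>-<app>-<component>` scheme is an illustrative assumption, not the employer's actual convention.

```shell
#!/usr/bin/env bash
# make_name -- build lowercase, DNS-safe resource names as
# <env>-<app>-<component>. Naming scheme is an assumed example.
make_name() {
    local env=$1 app=$2 component=$3
    printf '%s-%s-%s\n' "$env" "$app" "$component" \
        | tr '[:upper:]' '[:lower:]' \
        | tr -cd 'a-z0-9\n-'    # drop anything outside [a-z0-9-]
}

make_name Prod Billing API   # prints prod-billing-api
```

A Terraform or CloudFormation wrapper script could call this before tagging or creating resources.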

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Mumbai

Work from Office

We are looking for an experienced DevOps Engineer to join our infrastructure and platform team. You will play a key role in designing, implementing, and maintaining our CI/CD pipelines, automating infrastructure, ensuring system reliability, and improving overall developer productivity. The ideal candidate is well-versed in on-prem and cloud platforms, infrastructure as code, and modern DevOps practices. Role & responsibilities Design, build, and maintain CI/CD pipelines using tools like Jenkins and GitLab CI. Automate infrastructure provisioning and configuration using Terraform, Ansible, or CloudFormation. Manage and monitor production and staging environments across on-prem and cloud platforms (AWS). Implement containerization and orchestration using Docker and Kubernetes. Ensure system availability, scalability, and performance via monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, Datadog). Maintain and improve infrastructure security, compliance, and cost optimization. Collaborate with development, QA, and security teams to streamline code deployment and feedback loops. Participate in on-call rotations and troubleshoot production incidents. Write clear and maintainable documentation for infrastructure, deployments, and processes. Preferred candidate profile 3-15 years of experience in DevOps, SRE, or infrastructure engineering. Proficiency in scripting languages like Bash, Python, or Go. Strong hands-on experience with cloud platforms (preferably AWS). Deep understanding of the Docker and Kubernetes ecosystem. Experience with infrastructure automation tools such as Ansible, Terraform or Chef. Familiarity with source control (Git), branching strategies, and code review practices. Solid experience with Linux administration, system performance tuning, and troubleshooting. Knowledge of networking concepts, load balancers, VPNs, DNS, and firewalls.
Experience with monitoring/logging tools like Prometheus, Grafana, ELK, Splunk, Datadog or Nagios, and log shippers like Filebeat, Fluentd and Fluent Bit. Familiarity with security tools like Vault, AWS IAM, or cloud workload protection. Experience in high-availability, multi-region architecture design. Strong understanding of creating RPM packages and Yum repos. Strong understanding of JMeter scripting and test case writing. Strong understanding of artifact repository managers (JFrog, Nexus, Maven, NPM, NVM). Installation of open-source/enterprise tools from source files or RPM packages. Strong understanding of the tech stack (Redis, MySQL, Nginx, RabbitMQ, Tomcat, Apache, JBoss). Implement cloud-native solutions including load balancers, VPCs, IAM, Auto Scaling groups, CDNs, S3, Route 53, etc. SAST tools like SonarQube, Checkmarx, JFrog Xray. Expertise in configuring and upgrading API gateways, preferably Google Apigee, Kong, etc.
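As an example of the log-tooling skills above, here is a quick Bash/awk triage helper that tallies HTTP status codes from access-log-style lines; the two-field `<path> <status>` input format is assumed for illustration (real logs would need a different field index).

```shell
#!/usr/bin/env bash
# status_summary -- tally HTTP status codes from "<path> <status>" lines;
# a quick triage helper. The two-field input format is assumed.
status_summary() {
    awk '{ counts[$2]++ } END { for (s in counts) print s, counts[s] }' | sort
}

status_summary <<'EOF'
/api/v1/items 200
/api/v1/items 200
/api/v1/login 401
/healthz 200
EOF
# prints:
# 200 3
# 401 1
EOF_MARKER_NOT_NEEDED=1
```

On a live box you would pipe a `tail` or `journalctl` stream into `status_summary` instead of the here-document.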

Posted 1 week ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Mumbai, Hyderabad

Work from Office

Objectives Of This Role Design and implement efficient, scalable backend services using Python. Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions. Build APIs, services, and scripts to support data processing pipelines and front-end applications. Automate recurring tasks and ensure robust integration with cloud services. Maintain high standards of software quality and performance using clean coding principles and testing practices. Collaborate within the team to upskill and unblock each other for faster and better outcomes. Primary Skills Python Development Proficient in Python 3 and its ecosystem Frameworks: Flask / Django / FastAPI RESTful API development Understanding of OOPs and SOLID design principles Asynchronous programming (asyncio, aiohttp) Experience with task queues (Celery, RQ) Rust programming experience for systems-level or performance-critical components Testing & Automation Unit Testing: PyTest / unittest Automation tools: Ansible / Terraform (good to have) CI/CD pipelines DevOps & Cloud Docker, Kubernetes (basic knowledge expected) Cloud platforms: AWS / Azure / GCP GIT and GitOps workflows Familiarity with containerized deployment & serverless architecture Bonus Skills Data handling libraries: Pandas / NumPy Experience with scripting: Bash / PowerShell Functional programming concepts Familiarity with front-end integration (REST API usage, JSON handling) Other Skills Innovation and thought leadership Interest in learning new tools, languages, workflows Strong communication and collaboration skills Basic understanding of UI/UX principles Skills:- Rust, Python, Artificial Intelligence (AI), Machine Learning (ML), Data Science, Data Analytics and pandas

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for a highly skilled Performance Testing Engineer with expertise in Apache JMeter to join our QA team. The ideal candidate will be responsible for designing and executing performance tests, as well as gathering performance requirements from stakeholders to ensure systems meet expected load, responsiveness, and scalability criteria. As a Performance Engineer at Boomi, you will validate and recommend performance optimizations in our computing infrastructure and software. You will collaborate with Product Development and Site Reliability Engineering teams on performance monitoring, tuning, and tooling. Your responsibilities will include analyzing software architecture, working on capacity planning, identifying KPIs, and designing scalability and resiliency tests using tools like JMeter, BlazeMeter, and NeoLoad. Essential requirements for this role include expertise in performance engineering fundamentals, monitoring performance using native Linux OS and APM tools, understanding AWS services for infrastructure analysis, and experience with tools like New Relic and Splunk. You should also be skilled in analyzing heap dumps, thread dumps, and SQL slow query logs, and recommending optimal resource configurations in Cloud, Virtual Machine, and Container technologies. Desirable requirements include experience in writing custom monitoring tools using Java, Python, or similar languages, capacity planning using AI/ML, and performance tuning in Java or similar application code. At Boomi, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. If you are passionate about solving challenging problems, working with cutting-edge technology, and making a real impact, we encourage you to explore a career with Boomi.
Join us in Bangalore/Hyderabad, India, and be a part of our Performance, Scalability, and Resiliency (PSR) Engineering team to do the best work of your career and make a profound social impact.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an AWS Cloud Engineer at Talent Worx, you will be responsible for designing, deploying, and managing AWS cloud solutions to meet our organizational objectives. Your expertise in AWS technologies will be crucial in building scalable, secure, and robust cloud architectures that ensure optimal performance and efficiency. Key Responsibilities: - Design, implement, and manage AWS cloud infrastructure following best practices - Develop cloud-based solutions utilizing AWS services such as EC2, S3, Lambda, RDS, and VPC - Automate deployment, scaling, and management of cloud applications using Infrastructure as Code (IaC) tools like AWS CloudFormation and Terraform - Implement security measures and best practices, including IAM, VPC security, and data protection - Monitor system performance, troubleshoot issues, and optimize cloud resources for cost and performance - Collaborate with development teams to set up CI/CD pipelines for streamlined deployment workflows - Conduct cloud cost analysis and optimization to drive efficiency - Stay updated on AWS features and industry trends for continuous innovation and improvement Required Skills and Qualifications: - 3+ years of experience in cloud engineering or related field, focusing on AWS technologies - Proficiency in AWS services like EC2, S3, EBS, RDS, Lambda, and CloudFormation - Experience with scripting and programming languages (e.g., Python, Bash) for automation - Strong understanding of networking concepts (VPC, subnetting, NAT, VPN) - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes - Knowledge of DevOps principles, CI/CD tools, and practices - AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified Developer) preferred - Analytical skills, attention to detail, and excellent communication abilities - Bachelor's degree in Computer Science, Information Technology, or related field Join Talent Worx and enjoy benefits like global exposure, accelerated 
career growth, diverse learning opportunities, collaborative culture, cross-functional mobility, access to cutting-edge tools, a focus on purpose and impact, and mentorship for leadership development.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You are an experienced and motivated DevOps and Cloud Engineer with a strong background in cloud infrastructure, automation, and continuous integration/delivery practices. Your role involves designing, implementing, and maintaining scalable, secure, and high-performance cloud environments on platforms like AWS, Azure, or GCP. You will collaborate closely with development and operations teams to ensure seamless workflow. Your key responsibilities include designing, deploying, and managing cloud infrastructure, building and maintaining CI/CD pipelines, automating infrastructure provisioning, monitoring system performance, managing container orchestration platforms, supporting application deployment, and ensuring security best practices in cloud and DevOps workflows. Troubleshooting and resolving infrastructure and deployment issues, along with maintaining up-to-date documentation for systems and processes, are also part of your role. To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with a minimum of 5 years of experience in DevOps, Cloud Engineering, or similar roles. Proficiency in scripting languages like Python or Bash, hands-on experience with cloud platforms, knowledge of CI/CD tools and practices, and familiarity with containerization and orchestration are essential. Additionally, you should have a strong understanding of cloud security and compliance standards, and excellent analytical, troubleshooting, and communication skills. Preferred qualifications include certifications like AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or equivalent, as well as experience with GitOps, microservices, or serverless architecture. Join our technology team in Trivandrum and contribute to building and maintaining cutting-edge cloud environments while enhancing our DevOps practices.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be working as a Tech Support Lead at Level AI's office in Noida, India. In this role, you will be responsible for overseeing technical support operations and contributing to the growth of the support team. Initially, you will handle customer tickets directly and later transition into a managerial position where you will lead and develop a team of support engineers. Your primary responsibilities will include automating processes using AI, gaining in-depth knowledge of product implementation, resolving complex customer support tickets during PST business hours, and documenting patterns and processes. You will also act as the main escalation point for urgent technical issues, build and expand the technical support team in Noida, and lead the hiring, onboarding, and training processes for support engineers. Additionally, you will be involved in developing operational processes, collaborating with cross-functional teams, and monitoring key support metrics to drive continuous improvement. To be successful in this role, you should have at least 5 years of experience in technical support roles within the SaaS or enterprise software industry, with a minimum of 2 years in a leadership position. Strong troubleshooting skills in areas such as APIs, cloud platforms, and log analysis are essential, along with proficiency in tools like Zendesk, Jira, or Salesforce. Excellent communication skills, the ability to work independently, and manage a globally distributed team are also required. Additionally, a Bachelor's degree in Computer Science or a related field, experience supporting enterprise clients, and familiarity with SQL, scripting languages, or ITIL certification are preferred qualifications. If you are someone who enjoys working in a dynamic startup environment, is ready to take on challenges, and aspires to grow into a senior leadership role, this opportunity at Level AI could be the perfect fit for you. 
Join us in shaping the future of AI-driven customer experience and be part of a team that is revolutionizing the contact center industry.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

At Capgemini Engineering, the world leader in engineering services, you will be part of a global team of engineers, scientists, and architects dedicated to helping the world's most innovative companies reach their full potential. Our digital and software technology experts are known for thinking outside the box and providing unique R&D and engineering services across various industries. Join us for a career filled with opportunities where you can truly make a difference, and where each day brings new challenges. You will be responsible for developing, maintaining, and optimizing automation scripts using Python and Bash. Your role will involve understanding cloud infrastructure using VMware and OpenStack, as well as designing and executing stability, capacity, and robustness test plans to assess software performance under different conditions. You will analyze test results and offer actionable insights to development teams to enhance performance. Collaboration with developers and product teams will be essential to grasp application requirements and define performance criteria. You will also develop and manage automated test scripts for performance testing, monitor application performance in production, and identify areas for enhancement. Documenting testing processes, results, and findings clearly and comprehensively will be a key part of your responsibilities. Staying up-to-date with industry trends and best practices in performance and robustness testing will be crucial. Additionally, you will manage the end-to-end lifecycle of Virtual Network Functions (VNFs), including deployment, scaling, upgrading, and retirement. Capgemini is a global business and technology transformation partner, committed to helping organizations accelerate their digital and sustainable journey while creating tangible impact for enterprises and society. 
With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its 55-year heritage to unlock technology's value for clients. The company delivers end-to-end services and solutions, blending strengths from strategy and design to engineering, powered by leading capabilities in AI, generative AI, cloud, and data, alongside deep industry expertise and a strong partner ecosystem.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies