4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As an SMO OSS Integration Consultant with 4 to 8 years of experience based in Bangalore, you will be expected to demonstrate the following expertise:
Technical Expertise:
- Strong knowledge of SMO platforms and their integration with OSS systems.
- Familiarity with OSS functions such as inventory management, fault management, and performance monitoring.
- Hands-on experience with O-RAN interfaces such as A1, E2, and O1.
Protocols and Standards:
- In-depth knowledge of 5G standards, including 3GPP, O-RAN, and TM Forum.
- Familiarity with protocols such as HTTP/REST APIs, NETCONF, and YANG.
Programming and Scripting:
- Proficiency in Python, Bash, or similar languages for scripting and automation.
- Experience with AI/ML frameworks and their application in network optimization.
Tools and Platforms:
- Experience with tools such as Prometheus, Grafana, Kubernetes, Helm, and Ansible for monitoring and deployment.
- Familiarity with cloud-native deployments, including OpenShift, AWS, and Azure.
If you meet the requirements above and are prepared to contribute effectively in a fast-paced environment, please contact Bala at bala@cssrecruit.com for further details.
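For illustration of the NETCONF/YANG (O1) skills this role calls for, here is a minimal Python sketch of pulling a device's running configuration. The host, credentials, and use of the ncclient library are assumptions for the example, not details from the posting.

```python
"""Minimal sketch: fetch the running configuration from a network element
over NETCONF (the transport used by the O-RAN O1 interface).
Host, port, and credentials are placeholders; ncclient is assumed installed."""
from ncclient import manager

O1_HOST = "o-du.example.net"   # hypothetical management address
O1_PORT = 830                  # standard NETCONF-over-SSH port

def fetch_running_config(host: str, port: int, user: str, password: str) -> str:
    """Open a NETCONF session and return the running datastore as XML."""
    with manager.connect(
        host=host,
        port=port,
        username=user,
        password=password,
        hostkey_verify=False,   # lab-only shortcut; verify host keys in production
    ) as session:
        return session.get_config(source="running").data_xml

if __name__ == "__main__":
    print(fetch_running_config(O1_HOST, O1_PORT, "admin", "admin"))
```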
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a skilled OS Engineer at Viraaj HR Solutions, you will play a crucial role in designing and implementing OS solutions for network forwarding systems. Your responsibilities will include monitoring performance, optimizing network traffic, debugging OS forwarding functionalities, and collaborating with the engineering team for system architecture design. It will be your duty to maintain system security, document configurations, conduct system reviews, and provide technical support to junior engineers. Your qualifications should include a Bachelor's degree in Computer Science, Engineering, or a related field, along with proven experience in OS engineering and network management. A strong understanding of TCP/IP, DNS, Linux-based operating systems, and virtualization tools like VMware or KVM is essential. You should also have proficiency in scripting languages such as Python or Bash, experience with cloud services like AWS or Azure, and knowledge of security mechanisms and best practices. In this role, you will need excellent problem-solving and analytical skills, the ability to work both independently and in a team environment, and strong documentation and reporting capabilities. Your communication skills should be effective for technical and non-technical audiences, and you should be detail-oriented with a focus on quality outcomes. Experience with configuration management tools like Ansible or Puppet, as well as skills in rabbitmq, gnmi, automation tools, and performance tuning will be advantageous. Your adaptability to changing technologies and environments, along with your proficiency in skills related to network forwarding systems, configuration management, security management, and cloud services, will be key in ensuring the success of our projects. Join our dynamic team and contribute to our mission of empowering businesses with the right people while promoting a culture of growth and collaboration.,
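As a small illustration of the monitoring and scripting work described above, here is a hedged Python sketch that samples per-interface traffic counters on a Linux forwarding node. The sample window is arbitrary; it reads only the standard /proc/net/dev file.

```python
"""Minimal sketch: sample RX/TX byte counters from /proc/net/dev and report
per-interface throughput over a short window. Standard library only."""
import time

def read_counters() -> dict[str, tuple[int, int]]:
    """Return {interface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    window = 5                                  # seconds; arbitrary sample window
    before = read_counters()
    time.sleep(window)
    after = read_counters()
    for iface, (rx0, tx0) in before.items():
        rx1, tx1 = after.get(iface, (rx0, tx0))
        print(f"{iface}: rx {(rx1 - rx0) / window:.0f} B/s, tx {(tx1 - tx0) / window:.0f} B/s")
```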
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Chandigarh
On-site
Adventus.io is a B2B2C SaaS-based marketplace supporting institutions, recruiters, and students within the international student placement sector. Our innovative platform allows institutions, recruiters, and students to directly connect with one another, resulting in matching the right international study experience with students across the world. Founded in 2018, we are on a mission to change the way the world accesses international education. Behind the technology, we have over 500 amazingly talented humans making it all happen. We are looking for ambitious self-starters who want to be part of our vision and create a positive legacy. You will work in an agile environment alongside application developers on a vast array of initiatives as we deploy exciting new application features to AWS hosted environments. A portion of your time will be spent assisting the Data Analytics team in building our big data collection and analytics capabilities to uncover customer, product, and operational insights. Collaborate with other Software Engineers & Data Engineers to evaluate and identify optimal cloud architectures for custom solutions. You will design, build, and deploy AWS applications at the direction of other architects including data processing, statistical modeling, and advanced analytics. Design for scale, including systems that auto-scale and auto-heal. Via automation, you will relentlessly strive to eliminate manual toil. Maintain cloud stacks utilized in running our custom solutions, troubleshoot infrastructure-related issues causing solution outage or degradation, and implement necessary fixes. You will implement monitoring tools and dashboards to evaluate health, usage, and availability of custom solutions running in the cloud. Assist with building, testing, and maintaining CI/CD pipelines, infrastructure, and other tools to allow for the speedy deployment and release of solutions in the cloud. Consistently improve the current state by regularly reviewing existing cloud solutions and making recommendations for improvements (such as resiliency, reliability, autoscaling, and cost control), and incorporating modern infrastructure as code deployment practices using tools such as CloudFormation, Terraform, Ansible, etc. Identify, analyze, and resolve infrastructure vulnerabilities and application deployment issues. You will collaborate with our Security Guild members to implement company-preferred security and compliance policies across the cloud infrastructure running our custom solutions. Build strong cross-functional partnerships. This role will interact with business and engineering teams, representing many different types of personalities and opinions. Minimum 4+ years of work experience as a DevOps Engineer building AWS cloud solutions. Strong experience in deploying infrastructure as code using tools like Terraform and CloudFormation. Strong experience working with AWS services like ECS, EC2, RDS, CloudWatch, Systems Manager, EventBridge, ElastiCache, S3, and Lambda. Strong scripting experience with languages like Bash and Python. Understanding of Full Stack development. Proficiency with GIT. Experience in container orchestration (Kubernetes). Implementing CI/CD pipeline in the project. Sustained track record of making significant, self-directed, and end-to-end contributions to building, monitoring, securing, and maintaining cloud-native solutions, including data processing and analytics solutions through services such as Segment, BigQuery, and Kafka. 
Exposure to ETL, automation tools such as AWS Glue, and presentation-layer services such as Data Studio and Tableau. Knowledge of web services, APIs, and REST. Exposure to deploying applications and microservices written in languages such as PHP and NodeJS to AWS. A belief in simple solutions (not easy solutions) and the ability to accept consensus even when you may not agree. Strong interpersonal skills: you communicate technical details articulately and have demonstrable creative thinking and problem-solving abilities, with a passion for learning new technologies quickly.
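To illustrate the kind of AWS scripting this posting mentions, here is a hedged boto3 sketch that inventories running EC2 instances and S3 buckets. The region is an assumption, and credentials are expected to come from the usual AWS environment; this is not a prescribed part of the role.

```python
"""Minimal sketch: a small boto3 inventory check (running EC2 instances and
S3 bucket names). Region is a placeholder; AWS credentials must already be
configured in the environment."""
import boto3

def running_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of EC2 instances currently in the 'running' state."""
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]

def bucket_names() -> list[str]:
    """Return the names of all S3 buckets visible to the caller."""
    s3 = boto3.client("s3")
    return [b["Name"] for b in s3.list_buckets()["Buckets"]]

if __name__ == "__main__":
    print("Running EC2 instances:", running_instances())
    print("S3 buckets:", bucket_names())
```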
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Platform Support Engineer in Pune, India, your primary responsibility will be to ensure the successful design, implementation, and resolution of issues within the system. You should possess expertise in AWS, Azure, Kubernetes, and Terraform, along with proficiency in managing tools like Docker, RabbitMQ, PostgreSQL, and scripting in Bash and Python. Your role will involve working closely with AWS and Azure environments, as well as supporting technologies such as Terraform, Helm, and Kubernetes. You will be responsible for both the design and implementation of systems to effectively address product issues and resolve cloud or hardware-related problems. Furthermore, you will be involved in the development of platform and installation tools using a combination of Bash and Python. The platform you will be working on utilizes services like Kubernetes, Docker, RabbitMQ, PostgreSQL, and various cloud-specific services. This position offers you the opportunity to gain in-depth knowledge and experience in managing these technologies. With a minimum of 4 years of experience, you will contribute to the seamless operation and continuous improvement of the platform while also staying updated on emerging trends and best practices in the field of platform support engineering.,
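As an illustration of the Kubernetes support work described above, here is a hedged Python sketch that lists pods not in a healthy phase. It assumes the official kubernetes Python client and a local kubeconfig; cluster details are placeholders.

```python
"""Minimal sketch: list pods that are not Running/Succeeded across all
namespaces -- a typical first-pass platform health check."""
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    """Return (namespace, pod, phase) for pods outside Running/Succeeded."""
    config.load_kube_config()                  # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```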
Posted 2 days ago
1.0 - 5.0 years
0 Lacs
Punjab
On-site
As a Part-Time DevOps Engineer at Ditinus, you will be joining a dynamic team dedicated to delivering cutting-edge solutions in IT services. Your role will involve maintaining and enhancing our infrastructure and deployment processes to ensure reliability, scalability, and efficiency of our systems. Collaborating with development teams, you will implement and manage CI/CD pipelines, optimize cloud infrastructure on platforms like AWS, Azure, and Google Cloud, and automate deployment processes using tools like Ansible, Terraform, or similar. Monitoring system performance, troubleshooting issues, and implementing security best practices will be crucial aspects of your responsibilities. To excel in this role, you should have proven experience as a DevOps Engineer, a strong understanding of cloud platforms, proficiency in scripting languages such as Python and Bash, and experience with containerization technologies like Docker and Kubernetes. Familiarity with CI/CD tools like Jenkins, GitLab CI, or CircleCI, as well as version control systems like Git, will be beneficial. Your problem-solving skills, attention to detail, and ability to communicate effectively and work collaboratively will be essential in ensuring the success of our systems. This part-time position requires 22 hours per week, Monday to Friday, with the work location being in person at Sohana, Mohali, Punjab. If you are an experienced DevOps Engineer looking for a part-time opportunity to contribute to a collaborative and innovative team, we encourage you to apply. Please note that the final round of interviews will be conducted face-to-face. We look forward to welcoming you to our team at Ditinus. Contact no: 8264166124 Job Type: Part-time Experience: - Total work: 1 year (Preferred) - DevOps: 1 year (Preferred),
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
Job Description: You are a skilled and experienced Site Reliability Engineering (SRE) Consultant with over 7 years of experience. As an SRE Consultant, you will be responsible for implementing, maintaining, and enhancing the reliability, scalability, and performance of systems. Your role will involve collaborating closely with development teams to design and deploy robust and scalable solutions. Your responsibilities will include implementing best practices for reliability, scalability, and performance, collaborating with development teams to meet SRE standards, monitoring system performance, troubleshooting issues, implementing automation for process optimization, planning and executing system upgrades and migrations, providing on-call support for critical incidents, documenting processes and procedures, and staying updated on industry trends and best practices. To excel in this role, you should have a Bachelor's degree in Computer Science or a related field, at least 3 years of experience in Site Reliability Engineering or a related field, strong knowledge of cloud technologies and platforms such as AWS, GCP, and Azure, experience with monitoring and alerting tools like Prometheus, Grafana, and Datadog, proficiency in scripting and automation using tools like Python and Bash, strong problem-solving skills, attention to detail, excellent communication and teamwork skills, and the ability to work independently and collaborate effectively with cross-functional teams. Key Skills: SRE Engineer, Site Reliability, Resiliency, Cloud Technologies, AWS, GCP, Azure, Monitoring Tools, Automation, Problem Solving, Communication Skills, Teamwork, Computer Science, Reliability, Performance, Scalability, On-Call Support.,
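For the monitoring-tool experience this listing asks for, here is a hedged Python sketch that queries the standard Prometheus HTTP API for scrape targets that are down. The Prometheus URL is a placeholder and only the requests library is assumed.

```python
"""Minimal sketch: ask Prometheus for every target reporting up == 0,
a common alert-triage starting point."""
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def down_targets(base_url: str = PROM_URL) -> list[dict]:
    """Return the label sets of all scrape targets currently down."""
    resp = requests.get(
        f"{base_url}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"Prometheus query failed: {payload}")
    return [sample["metric"] for sample in payload["data"]["result"]]

if __name__ == "__main__":
    for target in down_targets():
        print(f"DOWN: {target.get('job')} / {target.get('instance')}")
```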
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a MuleSoft Admin with over 5 years of experience, you will be joining a dynamic and collaborative team at SheWork.in, a shared employment platform focused on hiring India's best tech professionals. Our platform promotes personal and professional growth in an inclusive ecosystem with diversity and inclusion as core values, empowering more women in tech to create the future workforce through innovation, creativity, and sustainability. We are looking for an expert in MuleSoft Runtime Fabric (RTF) administration who can take ownership of RTF environments and work hands-on with Red Hat OpenShift (ROSA), Anypoint FlexGateway, and more. Immediate joiners are preferred for this PAN India position.
Key Responsibilities:
- Install, configure, and manage RTF instances on Red Hat OpenShift (ROSA)
- Handle deployment, scaling, cluster management, and load balancing
- Configure and maintain FlexGateway, secrets, and CI/CD pipelines
- Monitor application and RTF health with alerting systems
- Collaborate with Dev and Infra teams to ensure smooth integration and performance
- Ensure compliance with security best practices, access control, and encryption
Required Skills:
- Deep knowledge of MuleSoft RTF, Anypoint Platform, and API Manager
- Hands-on experience with Red Hat OpenShift and Linux administration
- Strong understanding of networking, load balancing, and firewalls
- Proficiency in scripting with Python and Bash
- Strong troubleshooting and performance optimization skills
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are a skilled AWS Developer sought to provide part-time, offline freelance support for our expanding tech team. Your role involves leveraging your hands-on experience with AWS services to tackle technical challenges in cloud infrastructure and development. It is essential to work on-site in Medavakkam, Chennai, collaborating closely with our team to deliver effective solutions. Your responsibilities will encompass designing, implementing, and managing scalable and secure cloud infrastructure solutions utilizing AWS services like EC2, S3, RDS, Lambda, among others. You will also be tasked with developing and deploying applications on AWS to ensure optimal performance and scalability. Identifying and resolving issues related to AWS services and integrating these services with existing systems will be crucial aspects of your role. Furthermore, maintaining clear and comprehensive documentation for implemented solutions, configurations, and processes, as well as providing technical guidance to support project goals, are key responsibilities. To excel in this position, you must have proven experience as an AWS Developer, a strong understanding of AWS services and cloud computing concepts, proficiency in AWS tools and services such as EC2, S3, IAM, RDS, Lambda, CloudFormation, and scripting languages like Python and Bash. Your problem-solving skills, analytical mindset, and ability to troubleshoot complex technical issues are vital. Excellent communication skills are also required to collaborate effectively with team members and provide technical support. You should hold a degree in Computer Science, Engineering, or a related field, or possess equivalent practical experience. This part-time job opportunity based in Medavakkam, Chennai, requires 7.5 hours per week, and necessitates a minimum of 6 years of experience in AWS and 7 years of total work experience.,
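As an illustration of the Lambda and S3 integration work this posting describes, here is a hedged sketch of a small Lambda handler that persists incoming events to a bucket. The bucket name is a placeholder; boto3 ships with the Lambda Python runtime.

```python
"""Minimal sketch: an AWS Lambda handler that writes the incoming event
payload to S3 as a timestamped JSON object."""
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "example-ingest-bucket"   # hypothetical bucket name

def lambda_handler(event, context):
    """Persist the event payload and return a simple status response."""
    key = f"events/{datetime.now(timezone.utc).isoformat()}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(event).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 200, "body": json.dumps({"stored_as": key})}
```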
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Enterprise Data Platform (EDP) team at Macquarie, you will play a crucial role in managing Macquarie's Corporate Data Platform. The businesses supported by the platform rely heavily on it for various use cases such as data science, self-service analytics, operational analytics, and reporting. At Macquarie, we believe in leveraging the strengths of diverse individuals and empowering them to explore endless possibilities. With a global presence in 31 markets and a history of 56 years of unbroken profitability, you will join a collaborative and supportive team where every member contributes ideas and drives positive outcomes. In this role, your responsibilities will include delivering new platform capabilities utilizing AWS and Kubernetes to enhance resilience and capabilities that redefine how the business leverages the platform. You will be involved in deploying tools, introducing new technologies, and automating processes to enhance efficiency. Additionally, you will focus on improving CI/CD pipelines, supporting platform applications, and ensuring smooth operations. To be successful in this role, you should have at least 3 years of experience in Cloud, DevOps, or Data Engineering with hands-on proficiency in AWS and Kubernetes. You should also possess expertise in Big Data technologies like Hive, Spark, and Presto, along with strong scripting skills in Python and Bash. A background in DevOps, Agile, Scrum, and Continuous Delivery environments is essential, along with excellent communication skills to collaborate effectively with cross-functional teams. Your passion for problem-solving, continuous learning, and keen interest in Big Data and Cloud technologies will be invaluable in this role. At Macquarie, we value individuals who are enthusiastic about building a better future with us. If you are excited about this opportunity and working at Macquarie, we encourage you to apply. As part of Macquarie, you will have access to a wide range of benefits such as wellbeing leave, paid parental leave, company-subsidized childcare services, volunteer leave, comprehensive medical and life insurance cover, employee assistance programs, learning and development opportunities, and flexible working arrangements. Technology plays a critical role at Macquarie, enabling every aspect of our operations and driving innovation in connecting people and data, building platforms, and designing future technology solutions. Our commitment to diversity, equity, and inclusion is unwavering, and we aim to provide reasonable adjustments to support individuals who may require assistance during the recruitment process and in their working arrangements. If you need additional support, please inform us during the application process.,
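To make the Spark side of this role concrete, here is a hedged PySpark sketch that aggregates events by day from a Parquet dataset. The input path and column names are assumptions, and a working Spark environment is presumed.

```python
"""Minimal sketch: count events per day from a Parquet dataset that has an
`event_time` column. Path and schema are illustrative."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def daily_event_counts(input_path: str):
    """Return a DataFrame of (event_date, count) ordered by date."""
    spark = SparkSession.builder.appName("daily-event-counts").getOrCreate()
    df = spark.read.parquet(input_path)
    return (
        df.withColumn("event_date", F.to_date("event_time"))
          .groupBy("event_date")
          .count()
          .orderBy("event_date")
    )

if __name__ == "__main__":
    daily_event_counts("s3://example-bucket/events/").show(truncate=False)
```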
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
Karnataka
On-site
You should have at least 7 years of experience in the Information Security field, with direct experience in SOAR or other automation solutions. Your expertise should include Palo Alto XSOAR with an understanding of SOC operations, focused on resolving security incidents and automating related tasks. A minimum of 5 years of hands-on experience in SOC / incident response is required. Additionally, you should have experience with SOAR or other automation solutions (e.g., IT automation, SIEM, case management) and a strong background in triaging security events using tools such as SIEM, SOAR, and XDR in a security operations environment. Proficiency in scripting and development (e.g., Bash, Perl, Python, or Java), along with a solid understanding of regular expressions, is crucial for this role. This position falls under the Others category and is a full-time role located in Bangalore/Pune. The ideal candidate should have 7-10 years of relevant experience and be available to start immediately.
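As an illustration of the scripting-plus-regular-expressions skills this role emphasises, here is a hedged Python sketch that triages failed SSH logins from a syslog-style file, the sort of enrichment step often automated in a SOAR playbook. The log format and threshold are assumptions.

```python
"""Minimal sketch: count failed SSH logins per source IP and flag noisy
sources. Standard library only; log path and threshold are illustrative."""
import re
from collections import Counter

# Example line: "... sshd[1234]: Failed password for admin from 203.0.113.7 port 51514 ssh2"
FAILED_LOGIN = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # flag sources with 5 or more failures in the sampled window

def suspicious_sources(log_lines) -> dict[str, int]:
    """Return {source_ip: failure_count} for IPs at or above the threshold."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group("ip")] += 1
    return {ip: n for ip, n in counts.items() if n >= THRESHOLD}

if __name__ == "__main__":
    with open("/var/log/auth.log") as f:
        for ip, count in suspicious_sources(f).items():
            print(f"{ip}: {count} failed logins")
```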
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
Dehradun, Uttarakhand
On-site
As a software development intern at VIZON, you will have the exciting opportunity to work with cutting-edge technologies such as XR, AI, and cloud computing. The role requires knowledge of AWS, Bash, Linux, and C++/Python. Your day-to-day responsibilities will include customizing Linux using Bash and automating tasks with scripts. You will also optimize and deploy AR/VR, AI, and computer vision applications for Linux, Windows, and WSL on x86 and ARM systems and custom hardware devices. Additionally, you will collaborate with team members to troubleshoot and resolve software issues. VIZON is a deep-tech startup building a futuristic and immersive ecosystem of Indian tech products powered by emerging technologies like XR (AR/VR) and AI (ML/CV) to redefine the way humans interact with technology and with each other. To learn more about VIZON, visit our website at https://vizon.tech.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Lead / Staff Software Engineer on the Black Duck SRE team, you will play a key role in transforming our R&D products through the adoption of advanced cloud, containerization, microservices, modern software delivery, and other cutting-edge technologies. You will be a key member of the team, working independently on tools and scripts for automated provisioning, deployment, and monitoring. The position is based in Bangalore (near Dairy Circle Flyover) with a hybrid work mode.
Key Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-7 years of experience in Site Reliability Engineering / DevOps Engineering.
- Strong hands-on experience with containerization and orchestration using Docker, Kubernetes (K8s), and Helm to secure, optimize, and scale K8s.
- Deep understanding of cloud platforms and services in AWS / GCP / Azure (preferably GCP) to optimize cost, security, and performance.
- Solid experience with Infrastructure as Code (IaC) using Terraform / CloudFormation / Pulumi (preferably Terraform): write modules, manage state.
- Proficient in scripting and automation using Bash, Python, or Golang: automate tasks, handle errors.
- Experienced in CI/CD pipelines and GitOps using Git / GitHub / GitLab / Bitbucket / ArgoCD / Harness.io: implement GitOps for deployments.
- Strong background in monitoring and observability using Prometheus / Grafana / ELK Stack / Datadog / New Relic: configure alerts, analyze trends.
- Good understanding of networking and security using firewalls, VPN, IAM, RBAC, TLS, SSO, and Zero Trust: implement IAM, TLS, logging.
- Experience with backup and disaster recovery using Velero, snapshots, and DR planning: implement backup solutions.
- Basic understanding of messaging concepts using RabbitMQ / Kafka / Pub/Sub / SQS.
- Familiarity with configuration management using Ansible / Chef / Puppet / SaltStack: run existing playbooks.
Key Responsibilities:
- Design and develop scalable, modular solutions that promote reuse and are easily integrated into our diverse product suite.
- Collaborate with cross-functional teams to understand their needs and incorporate user feedback into development.
- Establish best practices for modern software architecture, including microservices, serverless computing, and API-first strategies.
- Drive the strategy for containerization and orchestration using Docker, Kubernetes, or equivalent technologies.
- Ensure the platform's infrastructure is robust, secure, and compliant with industry standards.
What We Offer:
- An opportunity to be part of a dynamic and innovative team committed to making a difference in the technology landscape.
- Competitive compensation package, including benefits and flexible work arrangements.
- A collaborative, inclusive, and diverse work environment where creativity and innovation are valued.
- Continuous learning and professional development opportunities to grow your expertise within the industry.
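As a small illustration of the "automate tasks, handle errors" expectation above, here is a hedged Python sketch of a retry-with-backoff wrapper around a shell command. The kubectl example is a placeholder; nothing beyond the standard library is required.

```python
"""Minimal sketch: run a command with retries and exponential backoff,
a common pattern for flaky operational steps in automation scripts."""
import subprocess
import time

def run_with_retry(cmd: list[str], attempts: int = 3, base_delay: float = 2.0) -> str:
    """Run a command, retrying with exponential backoff; return its stdout."""
    for attempt in range(1, attempts + 1):
        try:
            result = subprocess.run(cmd, check=True, capture_output=True, text=True)
            return result.stdout
        except subprocess.CalledProcessError as exc:
            if attempt == attempts:
                raise RuntimeError(f"{' '.join(cmd)} failed after {attempts} attempts") from exc
            delay = base_delay * (2 ** (attempt - 1))
            print(f"Attempt {attempt} failed (exit {exc.returncode}); retrying in {delay:.0f}s")
            time.sleep(delay)

if __name__ == "__main__":
    # Example: confirm cluster reachability before a deployment step.
    print(run_with_retry(["kubectl", "get", "nodes", "--no-headers"]))
```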
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
A career at HARMAN Automotive is an opportunity to be part of a global, multi-disciplinary team dedicated to leveraging the innovative power of technology to shape the future. At HARMAN Automotive, we provide you with the platform to accelerate your career growth by engineering audio systems and integrated technology platforms that enhance the driving experience. By combining creativity, thorough research, and a collaborative spirit with design and engineering excellence, we aim to advance in-vehicle infotainment, safety, efficiency, and enjoyment. As a member of the Build and Integration (B&I) team in India, you will play a crucial role as the resident expert on DevOps and CI/CD practices. Your expertise in tools like git/Gerrit, Jenkins, and Python will be unmatched as you skillfully build and manage our CI/CD toolchain. Your proficiency will be a valuable resource for your colleagues when they face challenges in implementing unit testing or delta static code analysis. Your passion for working in an international team and contributing to our shared vision of making HARMAN a leading software company will be evident. Your commitment to refining our software integration processes to enhance efficiency and maintain exceptional quality aligns perfectly with our organizational goals. In this role, you will: - Implement best-in-class CI/CD with automated L1/L2 testing. - Demonstrate knowledge on YOCTO builds, creating layers, and recipes. - Install and manage our Jenkins setup. - Implement and integrate continuous delivery pipelines with Jenkins. - Manage permissions on our build environment. - Set up, configure, and monitor our version control system (git/Gerrit). - Define and run a multi-container environment using Docker Compose. - Continuously measure and monitor CI/CD performance. - Develop and enhance scripts to automate the integration flow. - Manage day-to-day software integration issues. To be successful in this role, you should have: - A Bachelor's degree in computer science or a related field. - Over 10 years of experience in DevOps and CI/CD. - Extensive experience in automating CI/CD pipelines. - Strong understanding of cutting-edge technologies for automation infrastructure deployment. - Proficiency in scripting languages such as Python, Groovy, and Bash. - Technical understanding and familiarity with continuous build and version control systems. - Good knowledge of containerization & orchestration technologies (Docker, Kubernetes). What makes you eligible for this role: - Intrinsic motivation, achievement orientation, and a deep passion for technology. - Quick grasp of concepts and structured reasoning ability. - Strong attention to detail and analytical skills. - Fast learner with the ability to synthesize information accurately. - Demonstrated critical thinking and problem-solving skills. At HARMAN, we offer a flexible work environment, employee discounts on renowned products, extensive training opportunities through HARMAN University, competitive wellness benefits, tuition reimbursement, access to fitness center and cafeteria, and an inclusive and diverse work environment that supports professional and personal development. HARMAN is dedicated to creating a welcoming, empowering, and inclusive environment for all employees. We encourage you to share your ideas, voice your perspective, and be yourself within a culture that values uniqueness. 
We promote continuous learning and provide opportunities for training, development, and continuing education to help you thrive in your career. HARMAN is a pioneer in unleashing next-level technology since the 1920s, amplifying the sense of sound and creating integrated technology platforms that make the world smarter, safer, and more connected. Through innovative automotive, lifestyle, and digital transformation solutions, we aim to turn ordinary moments into extraordinary experiences. With a portfolio under 16 iconic brands like JBL, Mark Levinson, and Revel, we set high engineering and design standards to exceed customer expectations. If you are ready to innovate and make a lasting impact, join our talent community at HARMAN today.,
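As an illustration of the CI/CD monitoring duties in this listing, here is a hedged Python sketch that checks the result of a job's latest build through the standard Jenkins JSON API. The server URL, job name, and credentials are placeholders; only the requests library is assumed.

```python
"""Minimal sketch: report the status of the most recent build of a Jenkins job
via /job/<name>/lastBuild/api/json."""
import requests

JENKINS_URL = "https://jenkins.example.internal"   # hypothetical server
JOB_NAME = "platform-integration"                  # hypothetical job name
AUTH = ("ci-bot", "api-token")                     # placeholder user + API token

def last_build_status(job: str = JOB_NAME) -> tuple[int, str]:
    """Return (build number, result) for the job's most recent build."""
    resp = requests.get(
        f"{JENKINS_URL}/job/{job}/lastBuild/api/json",
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return data["number"], data.get("result") or "IN_PROGRESS"

if __name__ == "__main__":
    number, result = last_build_status()
    print(f"{JOB_NAME} #{number}: {result}")
```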
Posted 2 days ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
As a Cloud DevOps Engineer at Oracle's CGIU Enterprise Communications Platform engineering team, you will play a crucial role in supporting the development team throughout their DevOps life cycle. Your expertise in cloud technologies, DevOps practices, and best methodologies will be essential in helping the team achieve their business goals and maintain a competitive edge. Your responsibilities will include designing, implementing, and managing automated build, deployment, and testing systems. You will lead initiatives to enhance build and deployment processes for high-volume, high availability systems while monitoring production systems for performance and availability, proactively resolving any issues that arise. Developing and maintaining infrastructure as code using tools like Terraform and creating CI/CD pipelines with GitLab CI/CD will be key tasks. Continuous improvement of system scalability and security, adherence to standard methodologies, and collaboration with multi-functional teams for successful project delivery will be part of your daily activities. You will also work closely with the security team to ensure compliance with industry standards and implement security measures to safeguard against threats. Mandatory Skills: - 7+ years of experience as a DevOps Engineer - Bachelor's degree in engineering or Computer Science - Proficiency in Java/Python programming and experience with AWS or other public Cloud platforms - Hands-on experience with Terraform, GitLab CI, Jenkins, Docker, Kubernetes, and troubleshooting within Kubernetes environment - Scripting skills in Bash/Python, familiarity with REST APIs, and a strong background in Linux - Expertise in developing and maintaining CI/CD pipelines and a solid understanding of DevOps culture and Agile Methodology Good to have: - Experience in SaaS and multi-tenant development - Knowledge of cloud security and cybersecurity in a cloud context - Familiarity with Java, ELK stack, and prior experience in telecom and networking Soft Skills: - Excellent command of spoken and written English - Ability to multitask and adapt to changing priorities - Strong team skills, proactive attitude, focus on quality, and drive to make a difference in a fast-paced environment Joining Oracle's dynamic engineering division will involve active participation in defining and evolving standard practices and procedures. You will be responsible for software development tasks associated with designing, developing, and debugging software applications or operating systems. Oracle, a global leader in cloud solutions, thrives on innovation and inclusivity. With a commitment to fostering an inclusive workforce that empowers everyone to contribute, Oracle offers a diverse range of global opportunities with a focus on work-life balance, competitive benefits, and support for employee well-being. At Oracle, we value diversity and inclusion, supporting employees with disabilities throughout the employment process. If you require accessibility assistance or accommodation due to a disability, please reach out to us at accommodation-request_mb@oracle.com or call +1 888 404 2494 in the United States.,
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
You should have at least 4 years of experience in DevOps, with a background in IT/Computer Science engineering covering both Windows and Linux environments. Your responsibilities will include the maintenance and enhancement of Fortran/C++ based libraries and associated system libraries. Additionally, you will be tasked with creating make and CMake files for compiling the tools. You should be proficient in scripting with Bash and Perl and in using Git, and proficiency in programming and debugging in C/C++ and/or Fortran will be highly valued. An understanding of developing, coding, and deploying distribution mechanisms is essential for this role. Strong cross-functional collaboration and communication skills are also key requirements for effective coordination across IT and Engineering teams.
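To illustrate the build-automation side of this role, here is a hedged Python sketch that drives a CMake configure-and-build cycle via subprocess. The directory names and build type are assumptions; CMake must be on the PATH.

```python
"""Minimal sketch: configure and build a CMake project from Python,
stopping on the first failing step."""
import subprocess
import sys

def build(source_dir: str = ".", build_dir: str = "build", config: str = "Release") -> None:
    """Configure the project, then compile it in parallel."""
    subprocess.run(
        ["cmake", "-S", source_dir, "-B", build_dir, f"-DCMAKE_BUILD_TYPE={config}"],
        check=True,
    )
    subprocess.run(["cmake", "--build", build_dir, "--parallel"], check=True)

if __name__ == "__main__":
    try:
        build()
    except subprocess.CalledProcessError as exc:
        sys.exit(f"Build step failed with exit code {exc.returncode}")
```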
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Linux System Administrator at the Marvell Bangalore office, with the option of a hybrid work model, your primary responsibility will be the deployment, configuration, and maintenance of Linux systems and associated infrastructure. You are expected to have a deep technical understanding of Linux environments, strong problem-solving capabilities, and the ability to excel in collaborative team environments.
Your key responsibilities will include:
- Server Management: Installing, configuring, and maintaining Linux operating systems such as Ubuntu, CentOS, Red Hat, SUSE, and Debian.
- Performance Monitoring: Monitoring server performance, diagnosing issues, and applying performance tuning for optimal system operations.
- Security Management: Implementing security measures, including firewall configurations, access controls, and regular patching, to ensure compliance with security policies and best practices.
- Backup and Recovery: Developing and managing backup strategies, performing regular backups, and testing recovery procedures to ensure data integrity and availability.
- User Support: Providing technical support to end users and internal teams for issues related to Linux server access, performance, and connectivity.
- Documentation: Maintaining accurate and up-to-date documentation of system configurations, procedures, and changes.
- Project Involvement: Participating in IT projects such as system upgrades, migrations, and new technology implementations, collaborating with cross-functional teams to achieve project goals.
- Automation: Using scripting and automation tools to streamline administrative tasks and enhance system efficiency.
- Troubleshooting: Diagnosing and resolving complex technical issues related to Linux systems, applications, and infrastructure.
Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- Experience: Minimum of 5-8 years of experience in Linux system administration or similar roles.
- Certifications: Preferred certifications include Red Hat Certified System Administrator (RHCSA), CompTIA Linux+, or similar.
Technical Skills:
- Proficiency with Linux operating systems such as Ubuntu, CentOS, Red Hat, SUSE, and Debian.
- Experience with virtualization technologies such as KVM, VMware, or Docker.
- Knowledge of system monitoring tools, performance tuning, and networking concepts.
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef is a plus.
- Proficiency in scripting languages such as Bash, Python, or Perl.
Soft Skills:
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and interpersonal skills.
- Ability to manage multiple tasks effectively and prioritize in a fast-paced environment.
- Proactive and self-motivated, with a strong sense of responsibility.
Join us at the Marvell Bangalore office as a Linux System Administrator and contribute your expertise to our team for a rewarding experience in system administration.
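As an illustration of the backup-and-recovery and automation duties above, here is a hedged Python sketch of a dated tar.gz backup with simple retention. The source paths, destination, and retention count are assumptions; it uses only the standard library and could be scheduled via cron.

```python
"""Minimal sketch: archive selected directories to /backup with a date-stamped
name and prune archives beyond a retention count."""
import tarfile
from datetime import date
from pathlib import Path

SOURCES = [Path("/etc"), Path("/var/www")]        # hypothetical paths to protect
DEST = Path("/backup")                            # hypothetical backup directory
KEEP = 7                                          # keep the newest 7 archives

def create_backup() -> Path:
    """Write a compressed archive of SOURCES and delete archives beyond KEEP."""
    DEST.mkdir(parents=True, exist_ok=True)
    archive = DEST / f"system-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCES:
            if src.exists():
                tar.add(src, arcname=src.name)
    archives = sorted(DEST.glob("system-*.tar.gz"), reverse=True)
    for old in archives[KEEP:]:                   # prune anything older than the newest KEEP
        old.unlink()
    return archive

if __name__ == "__main__":
    print(f"Backup written to {create_backup()}")
```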
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Kochi, Kerala
On-site
As a Database Administrator at P Square Solutions LLC, a part of Neology Inc, you will be responsible for designing, creating, and managing efficient database tables, schemas, and relationships based on business requirements. Your role will involve ensuring that data models are scalable, secure, and optimized for performance while participating in data modeling activities to support data integrity and normalization. Your key responsibilities will include developing and maintaining complex PostgreSQL functions, stored procedures, and triggers, as well as implementing business logic within the database using SQL, PL/pgSQL, and other scripting languages. Collaboration with application developers to integrate databases into system architecture will also be a significant part of your role. You will be expected to analyze slow-running queries and optimize them to improve database performance, along with implementing indexing strategies, partitioning, and other performance enhancement techniques. Continuous monitoring, troubleshooting, and improvement of query execution times and database efficiency will be essential aspects of your work. Data management and integrity will be crucial, and you will need to ensure the accuracy and consistency of data by implementing proper data validation, referential integrity, and constraints. Additionally, performing data migrations and transformations to meet changing business requirements and maintaining updated documentation for database structures, procedures, and standards will be part of your daily tasks. To excel in this role, you should have a strong proficiency in PostgreSQL, including PL/pgSQL and SQL, and proven experience in designing and managing relational database schemas and tables. Expertise in creating efficient PostgreSQL functions, stored procedures, views, and triggers, as well as hands-on experience with database performance tuning, query optimization, and indexing, will be required. Experience working with Linux-based environments for hosting and managing PostgreSQL databases and scripting experience with Python, Bash, or other automation tools for database operations are also preferred technical skills. Knowledge of version control systems like Git and experience working within Agile development teams will be beneficial. Hands-on experience with query optimization, indexing, statistics, and familiarity with NoSQL databases like Redis, cloud-based database services such as AWS RDS, AWS Aurora, Azure Database for PostgreSQL, performance tuning tools like Toad, PEM (Postgres, enterprise manager), Redgate, and data archival and purging processes will be considered an added advantage for this role.,
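To illustrate the performance-analysis work this posting describes, here is a hedged Python sketch that flags long-running active queries via pg_stat_activity. The connection string is a placeholder and the psycopg2 driver is an assumed dependency.

```python
"""Minimal sketch: list active PostgreSQL statements running longer than
30 seconds, a routine first step when chasing slow queries."""
import psycopg2

DSN = "host=localhost dbname=appdb user=dba password=secret"  # placeholder DSN

LONG_RUNNING_SQL = """
    SELECT pid, now() - query_start AS duration, left(query, 120) AS query
    FROM pg_stat_activity
    WHERE state = 'active'
      AND now() - query_start > interval '30 seconds'
    ORDER BY duration DESC;
"""

def long_running_queries(dsn: str = DSN):
    """Return (pid, duration, query) rows for slow active statements."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(LONG_RUNNING_SQL)
            return cur.fetchall()

if __name__ == "__main__":
    for pid, duration, query in long_running_queries():
        print(f"pid={pid} running for {duration}: {query}")
```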
Posted 2 days ago
8.0 - 12.0 years
30 - 45 Lacs
Coimbatore
Work from Office
We are seeking a highly skilled Lead Platform Engineer with 7+ years of experience to drive innovation at the intersection of DevOps, cloud automation, and artificial intelligence. The ideal candidate will have deep expertise in generative AI, machine learning, and AIOps, coupled with advanced knowledge of cloud infrastructure automation and modern engineering practices. This role involves leading the design, development, and implementation of transformative automation solutions using AI/ML and generative AI technologies. Responsibilities Architect automated workflows for cloud infrastructure provisioning and management using IaC tools like Terraform Build and optimize automation frameworks to enable scalable multi-cloud infrastructure deployment and management Develop and enhance service catalog components with integration into platforms such as Backstage, leveraging GenAI models for code generation Implement CI/CD pipelines to streamline code builds, testing, and deployments, ensuring continuous delivery across diverse cloud environments Write and maintain automation scripts using Python, Bash, or similar scripting languages Act as deployment orchestrator, driving smooth, automated deployments across cloud ecosystems Design and implement generative AI models such as RAG and agentic workflows using frameworks like Langchain or platforms like Bedrock, Vertex, Azure AI Build and manage vector document sources and vector databases (e.g., Amazon Kendra, Opensearch) for AI-driven applications Prepare datasets, apply feature engineering, and optimize inputs for AI/ML models to enhance training and inference outcomes Create and integrate agentic workflows using approaches like ReAct patterns or Langraph engineering with cloud GenAI platforms Evaluate model performance and select appropriate large language models (LLMs) for specific use cases while preventing model decay through prompt/flow engineering Develop MLOps pipelines to deploy RAG or agentic flows, monitoring and iterating to ensure long-term operational performance Collaborate with cross-functional teams to develop innovative cloud automation and AIOps capabilities, driving operational efficiency Requirements Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field 7+ years of experience in cloud infrastructure automation, DevOps, and scripting Expertise in IaC tools such as Terraform, CloudFormation, or similar technologies Strong proficiency with Python and generative AI frameworks (RAG, agentic workflows) Proven experience working with GenAI platforms like Bedrock, Vertex AI, or Azure AI Competency in building and managing vector databases like Opensearch or Amazon Kendra Proficiency in data preparation, feature engineering, and dataset optimization for AI model development Background in designing and operating CI/CD pipelines and automating deployment workflows Knowledge of cloud automation tools, service catalogs, and integration platforms (e.g., Backstage) Nice to have Familiarity with data streaming solutions and data lake architectures for real-time AI insights Understanding of ReAct patterns and Langraph engineering for agentic workflows Skills in integrating GenAI models into existing operational platforms for enhanced automation Showcase of experience driving AIOps initiatives in large-scale environments Flexibility to adapt and utilize emerging AI/ML technologies in solving complex operational challenges
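To make the retrieval step of a RAG workflow concrete, here is a heavily hedged Python sketch that ranks documents by cosine similarity to a query embedding. The embed() function is a stand-in (a real pipeline would call Bedrock, Vertex AI, Azure AI, or a local model), so the ranking here only demonstrates the mechanics; numpy is the only assumed dependency.

```python
"""Minimal sketch: top-k retrieval by cosine similarity over embeddings.
The embedding function is a placeholder, not a real model."""
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a unit-norm pseudo-random vector per text
    (stable within one process run); swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def top_k(query: str, documents: list[str], k: int = 2) -> list[tuple[float, str]]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scored = [(float(np.dot(q, embed(doc))), doc) for doc in documents]
    return sorted(scored, reverse=True)[:k]

if __name__ == "__main__":
    corpus = [
        "Terraform module for provisioning a VPC",
        "Runbook for restarting the payment service",
        "Guide to rotating IAM credentials",
    ]
    for score, doc in top_k("how do I provision networking with IaC?", corpus):
        print(f"{score:+.3f}  {doc}")
```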
Posted 3 days ago
4.0 - 8.0 years
9 - 14 Lacs
Chennai
Work from Office
Job Description
Immediate:
- A deep understanding of observability, preferably Dynatrace (or other tools if well versed); provisioning and setting up metrics, alerts, and silences in any observability tool (Dynatrace, Prometheus, Thanos, or Grafana).
- Development work (not just support and running scripts, but actual development) on Chef (basic syntax, recipes, cookbooks), Ansible (basic syntax, tasks, playbooks), or Terraform (basic syntax), and GitLab CI/CD configuration, pipelines, and jobs.
- Proficiency in scripting (Python, PowerShell, Bash, etc.); this becomes the enabler for automation.
- Proposes ideas and solutions within the Infrastructure Department to reduce the workload through automation.
- Cloud resource provisioning and configuration through CLI/API, especially Azure and GCP; AWS experience is also acceptable.
- Troubleshooting with an SRE approach and mindset. Provides emergency response, either by being on-call or by reacting to symptoms according to monitoring, with escalation when needed.
- Improves documentation all around, whether in application documentation or in runbooks, explaining the why rather than stopping at the what.
- Root cause analysis and corrective actions.
- Strong concepts around scale and redundancy for design, troubleshooting, and implementation.
Mid Term:
- Kubernetes: basic understanding, CLI, service re-provisioning.
- Operating system (Linux) configuration, package management, startup, and troubleshooting.
- System architecture and design: plan, design, and execute solutions to reach specific goals agreed within the team.
Long Term:
- Block and object storage configuration.
- Networking: VPCs, proxies, and CDNs.
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf.
Posted 3 days ago
2.0 - 6.0 years
6 - 9 Lacs
Hyderabad
Work from Office
OPENTEXT - THE INFORMATION COMPANY OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do powering innovation, transforming work, and empowering digital knowledge workers. Were hiring talent that AI cant replace to help us shape the future of information management. Join us. Your Impact An OpenText Content Server Consultant is responsible for the technical delivery of the xECM based solutions. Such delivery activities encompass development, testing, deployment and documentation of specific software components either providing extensions to specific items of core product functionality or implementing specific system integration components. This role has a heavy deployment and administration emphasis. Engagements are usually long term, but some relatively short ones requiring only specific services like an upgrade or a migration also happen. The nature of work may include full application lifecycle activities right from development, deployment/provisioning, testing, migration, decommissioning and ongoing run & maintain (upgrades, patching etc.) support. The role is customer facing and requires excellent interpersonal skills with the ability to communicate to a wide range of stake holders (internally and externally), both verbally and in writing. What the Role offers Work within an OpenText technical delivery team in order to Participate and contribute to deployment activities. Participate in the day to day administration of the systems, including Incident & Problem Management Participate in planning and execution of new implementations, upgrades and patching activities. Participate in the advanced configuration of ECM software components, in line with project and customer time scales. Actively contribute in automating provisioning, patching and upgrade activities where possible to achieve operational efficiencies. Perform code reviews and periodic quality checks to ensure delivery quality is maintained. Prepare, maintain and submit activity/progress reports and time recording/management reports in accordance with published procedures. Keep project managers informed of activities and alert of any issues promptly. Provide inputs as part of engagement closure on project learnings and suggest improvements. Utilize exceptional written and verbal communication skills while supporting customers via web, telephone, or email, while demonstrating a high level of customer focus and empathy. Respond to and solve customer technical requests, show an understanding of the customers managed hosted environment and applications within the Open Text enabling resolution of complex technical issues. Document or Implement proposed solutions. Respond to and troubleshoot alerts from monitoring of applications, servers and devices sufficient to meet service level agreements Collaborating on cross-team and cross-product technical issues with a variety of resources including Product support, IT, and Professional Services. 
What you need to succeed Well versed with deployment, administration and troubleshooting of the OpenText xECM platform and surrounding components (Content Server, Archive Center, Brava, OTDS, Search & Indexing) and integrations with SAP, SuccessFactors, Salesforce. Good experience/knowledge on following Experience working in an ITIL aligned service delivery organisation. Knowledge of Windows, UNIX, and Application administration skills in a TCP/IP networked environment. Experience working with relational DBMS (PostgreSQL/Postgres, Oracle, MS SQL Server, mySQL). Independently construct moderate complexity SQL s without guidance. Programming/scripting is highly desirable, (ie. Oscript, Java, JavaScript, PowerShell, Bash etc.) Familiarity with configuration and management of web/application servers (IIS, Apache, Tomcat, JBoss, etc.). Good understanding of object-oriented programming, Web Services, LDAP configuration. Experience in installing and configuring xECM in HA and knowledge in DR setup/drill. Experince in patching, major upgrades and data migration activities. Candidate should possess Team player Customer Focus and Alertness Attention to detail Always learning Critical Thinking Highly motivated Good Written and Oral Communication Knowledge sharing, blogs OpenTexts efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. . Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenTexts vibrant workplace.
Posted 3 days ago
3.0 - 8.0 years
13 - 14 Lacs
Hyderabad
Work from Office
OPENTEXT - THE INFORMATION COMPANY OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do powering innovation, transforming work, and empowering digital knowledge workers. Were hiring talent that AI cant replace to help us shape the future of information management. Join us. Your Impact An OpenText Content Server Consultant is responsible for the technical delivery of the xECM based solutions. Such delivery activities encompass development, testing, deployment and documentation of specific software components either providing extensions to specific items of core product functionality or implementing specific system integration components. This role has a heavy deployment and administration emphasis. Engagements are usually long term, but some relatively short ones requiring only specific services like an upgrade or a migration also happen. The nature of work may include full application lifecycle activities right from development, deployment/provisioning, testing, migration, decommissioning and ongoing run & maintain (upgrades, patching etc.) support. The role is customer facing and requires excellent interpersonal skills with the ability to communicate to a wide range of stake holders (internally and externally), both verbally and in writing. What the Role offers Work within an OpenText technical delivery team in order to Participate and contribute to deployment activities. Participate in the day to day administration of the systems, including Incident & Problem Management Participate in planning and execution of new implementations, upgrades and patching activities. Participate in the advanced configuration of ECM software components, in line with project and customer time scales. Actively contribute in automating provisioning, patching and upgrade activities where possible to achieve operational efficiencies. Perform code reviews and periodic quality checks to ensure delivery quality is maintained. Prepare, maintain and submit activity/progress reports and time recording/management reports in accordance with published procedures. Keep project managers informed of activities and alert of any issues promptly. Provide inputs as part of engagement closure on project learnings and suggest improvements. Utilize exceptional written and verbal communication skills while supporting customers via web, telephone, or email, while demonstrating a high level of customer focus and empathy. Respond to and solve customer technical requests, show an understanding of the customers managed hosted environment and applications within the Open Text enabling resolution of complex technical issues. Document or Implement proposed solutions. Respond to and troubleshoot alerts from monitoring of applications, servers and devices sufficient to meet service level agreements Collaborating on cross-team and cross-product technical issues with a variety of resources including Product support, IT, and Professional Services. 
What you need to succeed Well versed with deployment, administration and troubleshooting of the OpenText xECM platform and surrounding components (Content Server, Archive Center, Brava, OTDS, Search & Indexing) and integrations with SAP, SuccessFactors, Salesforce. Good experience/knowledge on following Experience working in an ITIL aligned service delivery organisation. Knowledge of Windows, UNIX, and Application administration skills in a TCP/IP networked environment. Experience working with relational DBMS (PostgreSQL/Postgres, Oracle, MS SQL Server, mySQL). Independently construct moderate complexity SQL s without guidance. Programming/scripting is highly desirable, (ie. Oscript, Java, JavaScript, PowerShell, Bash etc.) Familiarity with configuration and management of web/application servers (IIS, Apache, Tomcat, JBoss, etc.). Good understanding of object-oriented programming, Web Services, LDAP configuration. Experience in installing and configuring xECM in HA and knowledge in DR setup/drill. Experince in patching, major upgrades and data migration activities. Candidate should possess Team player Customer Focus and Alertness Attention to detail Always learning Critical Thinking Highly motivated Good Written and Oral Communication Knowledge sharing, blogs
Posted 3 days ago
5.0 - 8.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives. Responsibilities Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms Create service catalog components compatible with automation platforms like Backstage Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation Architect CI/CD pipelines for automated build, test, and deployment processes Maintain deployment automation scripts utilizing technologies such as Python or Bash Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions Engineer data pipelines to stream real-time operational insights that support AI-driven automation Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows Collaborate with cross-functional teams to design and refine automation and AI-driven processes Research emerging tools and technologies to enhance operational efficiency and scalability Requirements Bachelor's or Master's degree in Computer Science, Engineering, or related field 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation Expertise in Python and generative AI frameworks like RAG and agent-based workflows Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments Nice to have Background in Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools Understanding of comprehensive AIOps processes to refine cloud-based automation solutions
Posted 3 days ago
5.0 - 8.0 years
15 - 30 Lacs
Chennai
Work from Office
We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities:
- Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
- Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms
- Create and integrate service catalog components with automation platforms like Backstage
- Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation
- Architect and implement CI/CD pipelines for automated build, test, and deployment processes
- Build and maintain deployment automation scripts using technologies such as Python or Bash
- Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis
- Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions
- Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
- Engineer data pipelines for streaming real-time operational insights to support AI-driven automation
- Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
- Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows
- Collaborate with cross-functional teams to design and improve automation and AI-driven processes
- Continuously research emerging tools and technologies to improve operational efficiency and scalability

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
- Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
- Expertise in Python and generative AI frameworks like RAG and agent-based workflows
- Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
- Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions
- Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
- Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have:
- Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools
- Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions
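As a rough illustration of the deployment-automation scripting referenced above, the sketch below wraps Terraform in a small Python helper suitable for a CI/CD job. It assumes a terraform binary on the PATH, an infra/ working directory, and a "dev" default workspace; all three are assumptions for the example, not project specifics.

```python
# Rough sketch of a CI/CD deployment step that wraps Terraform from Python.
# The workspace name and directory layout are assumptions for illustration.
import subprocess
import sys

def terraform(*args: str, cwd: str = "infra/") -> None:
    """Run a terraform command and fail the pipeline on a non-zero exit."""
    cmd = ["terraform", *args]
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy(workspace: str) -> None:
    # Plan first, then apply the saved plan so the reviewed changes are exactly what ships.
    terraform("init", "-input=false")
    terraform("workspace", "select", workspace)
    terraform("plan", "-input=false", "-out=tfplan")
    terraform("apply", "-input=false", "-auto-approve", "tfplan")

if __name__ == "__main__":
    try:
        deploy(sys.argv[1] if len(sys.argv) > 1 else "dev")
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```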
Posted 3 days ago
5.0 - 8.0 years
15 - 30 Lacs
Coimbatore
Work from Office
We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities:
- Design, build, and maintain cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
- Develop scalable frameworks for managing infrastructure provisioning, deployment, and configuration across multiple cloud platforms
- Create and integrate service catalog components with automation platforms like Backstage
- Leverage generative AI models to enhance service catalog capabilities, including automated code generation and validation
- Architect and implement CI/CD pipelines for automated build, test, and deployment processes
- Build and maintain deployment automation scripts using technologies such as Python or Bash
- Design and implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis
- Utilize AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for building advanced generative AI solutions
- Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
- Engineer data pipelines for streaming real-time operational insights to support AI-driven automation
- Create MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay
- Evaluate and select appropriate LLM models for specific AIOps use cases, integrating them efficiently into workflows
- Collaborate with cross-functional teams to design and improve automation and AI-driven processes
- Continuously research emerging tools and technologies to improve operational efficiency and scalability

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
- Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
- Expertise in Python and generative AI frameworks like RAG and agent-based workflows
- Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
- Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions
- Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
- Proven experience in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have:
- Familiarity with Flow Engineering tools such as Langraph or platform-specific workflow orchestration tools
- Understanding of end-to-end AIOps processes to enhance cloud-based automation solutions
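The MLOps responsibility of monitoring generative AI models and avoiding model decay can be approached in many ways; below is a minimal sketch of one common drift check, a Population Stability Index (PSI) over model scores. The score samples and the 0.2 alert threshold are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative sketch of a lightweight model-decay check an MLOps pipeline
# might schedule; the thresholds and the PSI-style bin comparison are
# assumptions for illustration, not a prescribed methodology.
import math
from collections import Counter

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between two score samples in [0, 1]."""
    def bucket(xs):
        counts = Counter(min(int(x * bins), bins - 1) for x in xs)
        # Floor each proportion at 1e-6 to avoid log(0) for empty bins.
        return [max(counts.get(b, 0) / len(xs), 1e-6) for b in range(bins)]
    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]    # training-time scores
live     = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.99]  # recent production scores

drift = psi(baseline, live)
print(f"PSI = {drift:.3f}")
if drift > 0.2:  # common rule-of-thumb threshold
    print("Significant drift -> trigger retraining / rollback review")
```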
Posted 3 days ago
5.0 - 7.0 years
5 - 5 Lacs
Mumbai, Chennai, Gurugram
Work from Office
We are seeking a skilled Site Reliability Engineer to support the administration of Azure Kubernetes Service (AKS) clusters running critical, always-on middleware that processes thousands of transactions per second (TPS). The ideal candidate will operate with a mindset aligned to achieving 99.999% (five-nines) availability.

Key Responsibilities:
- Own and manage AKS cluster deployments, cutovers, base image updates, and daily operational tasks
- Test and implement Infrastructure as Code (IaC) changes using best practices
- Apply software engineering principles to IT operations to maintain scalable and reliable production environments
- Write and maintain IaC and automation code for: monitoring and alerting, log analysis, disaster recovery testing, incident response, and documentation-as-code

Mandatory Skills:
- Strong experience with Terraform
- In-depth knowledge of Azure Cloud
- Proficiency in Kubernetes cluster creation and lifecycle management (deployment-only experience is not sufficient)
- Hands-on experience with CI/CD tools (GitHub Actions preferred)
- Bash and Python scripting skills

Desirable Skills:
- Exposure to Azure Databricks and Azure Data Factory
- Experience with secret management using HashiCorp Vault
- Familiarity with monitoring tools (any)

Required Skills: Azure, Kubernetes, Terraform, DevOps
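As an example of the kind of automation code this role writes for log analysis and alerting, here is a minimal Python sketch that flags minutes with an unusually high ERROR count in a middleware log. The log format, file path, and alert threshold are assumptions for illustration, not the actual middleware's logging contract.

```python
# Minimal sketch of a log-analysis job an SRE automation pipeline might run
# against middleware logs; the log format, file path, and alert threshold
# are illustrative assumptions.
import re
from collections import Counter

# Expects lines like "2024-01-01T12:00:30Z ERROR ..." (assumed format).
LOG_LINE = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}\S*\s+(?P<level>[A-Z]+)\s")

def error_spikes(lines, per_minute_threshold: int = 50):
    """Return the minutes in which ERROR counts exceed the threshold."""
    errors_per_minute = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            errors_per_minute[m.group("ts")] += 1
    return {minute: n for minute, n in errors_per_minute.items()
            if n > per_minute_threshold}

if __name__ == "__main__":
    with open("middleware.log") as fh:  # path is an assumption
        spikes = error_spikes(fh)
    for minute, count in sorted(spikes.items()):
        print(f"ALERT: {count} errors at {minute} -- page the on-call")
```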
Posted 3 days ago