6.0 - 11.0 years
12 - 22 Lacs
Hyderabad
Work from Office
ideyaLabs is seeking a highly skilled and motivated DevOps Lead to drive our CI/CD pipelines, infrastructure automation, and cloud operations. The ideal candidate will lead a team of DevOps engineers and collaborate closely with development, QA, and IT teams to enhance our deployment processes, scalability, and system reliability. Key Responsibilities: Lead and mentor a team of DevOps engineers to deliver scalable and secure infrastructure. Design, build, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, GitHub Actions, etc. Architect and automate infrastructure provisioning using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Pulumi. Manage and optimize cloud infrastructure (AWS/Azure/GCP) to ensure high availability and performance. Implement monitoring, logging, and alerting systems using tools like Prometheus, Grafana, ELK, Datadog, or New Relic. Champion DevSecOps practices and integrate security at every stage of the DevOps lifecycle. Collaborate with software development teams to ensure smooth deployments and fast recovery from incidents. Define and enforce SRE/DevOps best practices, SLAs, and disaster recovery procedures. Continuously evaluate emerging tools and technologies to improve system efficiency and team productivity. Required Skills & Qualifications: Bachelor's or Master's in Computer Science, Engineering, or a related field. Proven experience leading DevOps/SRE/Platform teams. Strong expertise in cloud platforms (AWS, Azure, or GCP). Proficient with containerization and orchestration (Docker, Kubernetes, Helm). Hands-on experience with IaC and configuration management tools (Terraform, Ansible, Chef, Puppet). Excellent scripting skills (Python, Bash, or Go preferred). Familiarity with GitOps and modern release strategies (Blue/Green, Canary deployments). Deep understanding of network, system, and security principles. Working knowledge of PCI DSS and SOC 2 procedures. Strong communication and stakeholder management skills. Preferred Qualifications: Certifications like AWS Certified DevOps Engineer, CKAD/CKA, and Azure DevOps Expert. Experience with service mesh technologies (Istio, Linkerd). Exposure to FinOps or cost optimization practices in the cloud.
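The release strategies named in this posting (Blue/Green, Canary) typically rest on an automated health gate that decides whether a new version is promoted or rolled back. Below is a minimal Python sketch of such a gate, assuming a hypothetical /healthz endpoint, probe count, and error budget; it illustrates the idea rather than any specific team's implementation.

```python
"""Minimal canary health gate: probe a service endpoint and decide whether to
promote or roll back. The URL, JSON shape, and thresholds are hypothetical."""
import json
import time
import urllib.request

CANARY_URL = "https://canary.example.internal/healthz"  # hypothetical endpoint
ERROR_BUDGET = 0.01     # promote only if at most 1% of probes fail
SAMPLES = 30            # number of probes
INTERVAL_SECONDS = 10


def probe(url: str) -> bool:
    """Return True if the endpoint answers 200 with a body like {"status": "ok"}."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = json.loads(resp.read().decode("utf-8"))
            return resp.status == 200 and body.get("status") == "ok"
    except Exception:
        return False


def main() -> None:
    failures = 0
    for _ in range(SAMPLES):
        if not probe(CANARY_URL):
            failures += 1
        time.sleep(INTERVAL_SECONDS)
    error_rate = failures / SAMPLES
    verdict = "promote" if error_rate <= ERROR_BUDGET else "rollback"
    print(f"error_rate={error_rate:.2%} -> {verdict}")


if __name__ == "__main__":
    main()
```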
Posted 3 weeks ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Job Title: Jr. DevOps Engineer Location: Bangalore (Work from Office 5 days a week) Experience: 2+ Years Joining: Immediate Joiners Preferred Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab CI. Manage cloud infrastructure (preferably AWS) with a focus on scalability, reliability, and security. Deploy and manage containerized applications using Docker and Kubernetes. Automate infrastructure provisioning using tools like Terraform or Ansible. Monitor system performance and troubleshoot issues using tools like Prometheus, Grafana, ELK, etc. Collaborate with development, QA, and operations teams to ensure seamless deployments. Technical Skills Required: CI/CD: Jenkins, Git, GitHub/GitLab Cloud: AWS (EC2, S3, IAM, CloudWatch) Containers: Docker, Kubernetes IaC: Terraform / Ansible Scripting: Bash / Python Monitoring: Prometheus, Grafana, ELK Stack Eligibility: Minimum 2 years of DevOps experience. Strong troubleshooting and communication skills. Willing to work full-time from our Bangalore office.
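As a concrete illustration of the AWS (CloudWatch) plus Python scripting skills listed above, here is a small hedged sketch that pulls recent EC2 CPU metrics with boto3. AWS credentials are assumed to be configured, and the instance ID and the 80% flag are placeholders.

```python
"""Sketch: fetch average EC2 CPU utilisation from CloudWatch with boto3."""
from datetime import datetime, timedelta, timezone

import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)

# Print the last hour of datapoints and flag anything above an arbitrary 80%.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "HIGH" if point["Average"] > 80 else "ok"
    print(f'{point["Timestamp"]:%H:%M} avg CPU {point["Average"]:.1f}% {flag}')
```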
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
We are seeking a skilled DevOps Engineer with strong experience in Google Cloud Platform (GCP) to support AI/ML project infrastructure. The ideal candidate will work closely with data scientists, ML engineers, and developers to build and manage scalable, secure, and automated pipelines for AI/ML model training, testing, and deployment. Responsibilities: Design and manage cloud infrastructure to support AI/ML workloads on GCP. Develop and maintain CI/CD pipelines for ML models and applications. Automate model training, validation, deployment, and monitoring processes using tools like Kubeflow, Vertex AI, Cloud Composer, Airflow, etc. Set up and manage infrastructure as code (IaC) using tools such as Terraform or Deployment Manager. Implement robust security, monitoring, logging, and alerting systems using Cloud Monitoring, Cloud Logging, Prometheus, Grafana, etc. Collaborate with ML engineers and data scientists to optimize compute environments (e.g., GPU/TPU instances, notebooks). Manage and maintain containerized environments using Docker and Kubernetes (GKE). Ensure cost-efficient cloud resource utilization and governance. Required Skills Bachelor's degree in engineering or relevant field Must have 4 years of proven experience as DevOps Engineer with at least 1 year on GCP Strong experience with DevOps tools and methodologies in production environments Proficiency in scripting with Python, Bash, or Shell Experience with Terraform, Ansible, or other IaC tools. Deep understanding of Docker, Kubernetes, and container orchestration Knowledge of CI/CD pipelines, automated testing, and model deployment best practices. Familiarity with ML lifecycle tools such as MLflow, Kubeflow Pipelines, or TensorFlow Extended (TFX). Experience in designing conversational flows for AI Agents/chatbot
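Automating model validation and deployment usually comes down to a promotion gate somewhere in the CI/CD pipeline. The sketch below shows the shape of such a gate in plain Python; the metrics-file layout, baseline numbers, and thresholds are hypothetical, and a real pipeline would read them from Vertex AI, MLflow, or Kubeflow metadata instead.

```python
"""Sketch of a validation gate a CI/CD stage could run before deploying a
retrained model. Metrics layout and thresholds are hypothetical."""
import json
import sys
from pathlib import Path

BASELINE = {"auc": 0.91, "latency_ms_p95": 120}   # assumed production baseline
MIN_AUC_GAIN = 0.002                              # candidate must improve AUC
MAX_LATENCY_REGRESSION = 1.10                     # and stay within a 10% latency budget


def main(metrics_path: str) -> int:
    candidate = json.loads(Path(metrics_path).read_text())
    auc_ok = candidate["auc"] >= BASELINE["auc"] + MIN_AUC_GAIN
    latency_ok = (
        candidate["latency_ms_p95"]
        <= BASELINE["latency_ms_p95"] * MAX_LATENCY_REGRESSION
    )
    if auc_ok and latency_ok:
        print("PASS: candidate model cleared for deployment")
        return 0
    print("FAIL: keep current model in production")
    return 1


if __name__ == "__main__":
    # Usage: python gate.py candidate_metrics.json  (non-zero exit fails the stage)
    sys.exit(main(sys.argv[1]))
```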
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Must have a minimum of 4 years of relevant experience. Proficient in Python with hands-on experience building ETL pipelines for data extraction, transformation, and validation. Strong SQL skills for working with structured data. Familiar with Grafana or Kibana for data visualization and monitoring dashboards. Experience with databases such as MongoDB, Elasticsearch, and MySQL. Comfortable working in Linux environments using common Unix tools. Hands-on experience with Git, Docker, and virtual machines.
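For context on the ETL expectation above, here is a toy extract-transform-validate-load sketch in Python. It uses an in-memory SQLite table and printed output purely to stay self-contained; a real pipeline would read from MySQL or MongoDB and bulk-index into Elasticsearch, and the table and field names are invented.

```python
"""Tiny extract-transform-validate-load sketch with an in-memory SQLite stand-in."""
import sqlite3


def extract(conn: sqlite3.Connection) -> list[tuple]:
    return conn.execute("SELECT id, email, amount FROM orders").fetchall()


def transform(rows: list[tuple]) -> list[dict]:
    cleaned = []
    for oid, email, amount in rows:
        if not email or "@" not in email:   # validation: drop malformed records
            continue
        cleaned.append(
            {"id": oid, "email": email.lower(), "amount": round(float(amount), 2)}
        )
    return cleaned


def load(records: list[dict]) -> None:
    for rec in records:                      # stand-in for an Elasticsearch bulk index
        print("indexing", rec)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, email TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(1, "A@Example.com", 10.5), (2, "", 3.0), (3, "b@example.com", 7.25)],
    )
    load(transform(extract(conn)))
```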
Posted 3 weeks ago
7.0 - 12.0 years
0 - 1 Lacs
Dhule
Work from Office
Key Responsibilities AI Model Deployment & Integration: Deploy and manage AI/ML models, including traditional machine learning and GenAI solutions (e.g., LLMs, RAG systems). Implement automated CI/CD pipelines for seamless deployment and scaling of AI models. Ensure efficient model integration into existing enterprise applications and workflows in collaboration with AI Engineers. Optimize AI infrastructure for performance and cost efficiency in cloud environments (AWS, Azure, GCP). Monitoring & Performance Management: Develop and implement monitoring solutions to track model performance, latency, drift, and cost metrics. Set up alerts and automated workflows to manage performance degradation and retraining triggers. Ensure responsible AI by monitoring for issues such as bias, hallucinations, and security vulnerabilities in GenAI outputs. Collaborate with Data Scientists to establish feedback loops for continuous model improvement. Automation & MLOps Best Practices: Establish scalable MLOps practices to support the continuous deployment and maintenance of AI models. Automate model retraining, versioning, and rollback strategies to ensure reliability and compliance. Utilize infrastructure-as-code (Terraform, CloudFormation) to manage AI pipelines. Security & Compliance: Implement security measures to prevent prompt injections, data leakage, and unauthorized model access. Work closely with compliance teams to ensure AI solutions adhere to privacy and regulatory standards (HIPAA, GDPR). Regularly audit AI pipelines for ethical AI practices and data governance. Collaboration & Process Improvement: Work closely with AI Engineers, Product Managers, and IT teams to align AI operational processes with business needs. Contribute to the development of AI Ops documentation, playbooks, and best practices. Continuously evaluate emerging GenAI operational tools and processes to drive innovation. Qualifications & Skills Education: Bachelors or Masters degree in Computer Science, Data Engineering, AI, or a related field. Relevant certifications in cloud platforms (AWS, Azure, GCP) or MLOps frameworks are a plus. Experience: 3+ years of experience in AI/ML operations, MLOps, or DevOps for AI-driven solutions. Hands-on experience deploying and managing AI models, including LLMs and GenAI solutions, in production environments. Experience working with cloud AI platforms such as Azure AI, AWS SageMaker, or Google Vertex AI. Technical Skills: Proficiency in MLOps tools and frameworks such as MLflow, Kubeflow, or Airflow. Hands-on experience with monitoring tools (Prometheus, Grafana, ELK Stack) for AI performance tracking. Experience with containerization and orchestration tools (Docker, Kubernetes) to support AI workloads. Familiarity with automation scripting using Python, Bash, or PowerShell. Understanding of GenAI-specific operational challenges such as response monitoring, token management, and prompt optimization. Knowledge of CI/CD pipelines (Jenkins, GitHub Actions) for AI model deployment. Strong understanding of AI security principles, including data privacy and governance considerations.
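Drift monitoring, mentioned above as a retraining trigger, is often implemented with a distribution-comparison metric such as the Population Stability Index. The sketch below computes PSI in plain Python; the bin count, toy score data, and the 0.2 alert threshold are illustrative conventions rather than a prescribed standard.

```python
"""Population Stability Index (PSI) sketch for model-drift monitoring:
compare training-time score distribution against live traffic."""
import math
from collections import Counter


def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket(values: list[float]) -> list[float]:
        counts = Counter(
            min(max(int((v - lo) / width), 0), bins - 1) for v in values
        )
        # Floor each share at a tiny value to avoid log(0) for empty buckets.
        return [max(counts.get(i, 0) / len(values), 1e-6) for i in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


if __name__ == "__main__":
    baseline = [i / 100 for i in range(100)]                 # toy training-time scores
    live = [min(1.0, i / 100 + 0.15) for i in range(100)]    # shifted live scores
    value = psi(baseline, live)
    print(f"PSI={value:.3f}", "-> investigate drift" if value > 0.2 else "-> stable")
```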
Posted 3 weeks ago
6.0 - 10.0 years
12 - 22 Lacs
Gurugram
Work from Office
We are seeking a Senior Solution Consultant with 6-10 years of experience in API integration, chatbot development, and data analysis tools. The role involves designing solutions, managing virtual assistants' architecture, gathering business requirements, and leading end-to-end delivery. The candidate should have experience in APIs like OData, REST, SOAP, as well as tools like Postman, Excel, and Google Sheets. Familiarity with NoSQL databases such as MongoDB, natural language processing, chatbot platforms (Dialogflow, Microsoft Bot Framework), and analytical tools like Power BI, Tableau, or Grafana is essential. The consultant will work closely with cross-functional teams, mentor other consultants, and ensure seamless integration with customer systems.
Posted 3 weeks ago
5.0 - 8.0 years
12 - 18 Lacs
Mumbai, Hyderabad, Chennai
Work from Office
We are seeking an experienced AWS Platform Engineer Developer to architect and manage secure, scalable AWS environments in compliance with industry regulations such as GDPR, FCA, and PRA. The role involves deploying and maintaining EKS clusters, Istio service mesh, and Kong API Gateway, implementing robust security measures using Dynatrace, Fortigate, and AWS-native security services (Security Hub, GuardDuty, WAF), and automating infrastructure provisioning with Terraform and CloudFormation. Responsibilities also include enforcing Privileged Access Management (PAM) policies, integrating observability tools (Dynatrace, Grafana, Prometheus), and collaborating with teams on container orchestration using Kubernetes and Docker. Experience in serverless technologies like AWS Lambda and API Gateway, as well as container security scanning tools such as Trivy and Aqua Security, is preferred.
Posted 3 weeks ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
A skilled DevOps Engineer to manage and optimize both on-premises and AWS cloud infrastructure. The ideal candidate will have expertise in DevOps tools, automation, system administration, and CI/CD pipeline management while ensuring security, scalability, and reliability. Key Responsibilities: 1. AWS & On-Premises Solution Architecture: o Design, deploy, and manage scalable, fault-tolerant infrastructure across both on-premises and AWS cloud environments. o Work with AWS services like EC2, IAM, VPC, CloudWatch, GuardDuty, AWS Security Hub, Amazon Inspector, AWS WAF, and Amazon RDS with Multi-AZ. o Configure ASG and implement load balancing techniques such as ALB and NLB. o Optimize cost and performance leveraging Elastic Load Balancing and EFS. o Implement logging and monitoring with CloudWatch, CloudTrail, and on-premises monitoring solutions. 2. DevOps Automation & CI/CD: o Develop and maintain CI/CD pipelines using Jenkins and GitLab for seamless code deployment across cloud and on-premises environments. o Automate infrastructure provisioning using Ansible, and CloudFormation. o Implement CI/CD pipeline setups using GitLab, Maven, Gradle, and deploy on Nginx and Tomcat. o Ensure code quality and coverage using SonarQube. o Monitor and troubleshoot pipelines and infrastructure using Prometheus, Grafana, Nagios, and New Relic. 3. System Administration & Infrastructure Management: o Manage and maintain Linux and Windows systems across cloud and on-premises environments, ensuring timely updates and security patches. o Configure and maintain web/application servers like Apache Tomcat and web servers like Nginx and Node.js. o Implement robust security measures, SSL/TLS configurations, and secure communications. o Configure DNS and SSL certificates. o Maintain and optimize on-premises storage, networking, and compute resources. 4. Collaboration & Documentation: o Collaborate with development, security, and operations teams to optimize deployment and infrastructure processes. o Provide best practices and recommendations for hybrid cloud and on-premises architecture, DevOps, and security. o Document infrastructure designs, security configurations, and disaster recovery plans for both environments. Required Skills & Qualifications: Cloud & On-Premises Expertise: Extensive knowledge of AWS services (EC2, IAM, VPC, RDS, etc.) and experience managing on-premises infrastructure. DevOps Tools: Proficiency in SCM tools (Git, GitLab), CI/CD (Jenkins, GitLab CI/CD), and containerization. Code Quality & Monitoring: Experience with SonarQube, Prometheus, Grafana, Nagios, and New Relic. Operating Systems: Experience managing Linux/Windows servers and working with CentOS, Fedora, Debian, and Windows platforms. Application & Web Servers: Hands-on experience with Apache Tomcat, Nginx, and Node.js. Security & Networking: Expertise in DNS configuration, SSL/TLS implementation, and AWS security services. Soft Skills: Strong problem-solving abilities, effective communication, and proactive learning. Preferred Qualifications: AWS certifications (Solutions Architect, DevOps Engineer) and a bachelors degree in Computer Science or related field. Experience with hybrid cloud environments and on-premises infrastructure automation.
Posted 3 weeks ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
• Primary Skills: Prometheus, Grafana, Datadog, Alerting Techniques, Alert Triage and Incident Management, Application Issues RCA/Debugging, SQL. • Proven L3 level experience in managing large-scale, distributed systems in production environments. Required Candidate profile: Drive SRE transformations by building frameworks and migrating traditional IT support to modern SRE practices. Collaborate closely with development and operations teams to improve system observability.
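Alert triage against Prometheus typically starts with an instant query over its HTTP API. Below is a hedged Python sketch using the standard /api/v1/query endpoint via the requests library; the server URL, the PromQL expression, and the flagging threshold are placeholders chosen for illustration.

```python
"""Sketch of an alert-triage helper: run an instant PromQL query against the
Prometheus HTTP API and flag instances with an elevated 5xx rate."""
import requests

PROM_URL = "http://prometheus.example.internal:9090"   # placeholder server
QUERY = 'sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))'
RATE_THRESHOLD = 0.05                                   # placeholder: 5xx requests/second


def main() -> None:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")
    for series in payload["data"]["result"]:
        instance = series["metric"].get("instance", "unknown")
        _, value = series["value"]          # [unix_timestamp, "value-as-string"]
        rate = float(value)
        if rate > RATE_THRESHOLD:
            print(f"TRIAGE: {instance} is serving {rate:.2f} 5xx req/s")


if __name__ == "__main__":
    main()
```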
Posted 3 weeks ago
3.0 - 8.0 years
3 - 8 Lacs
Chennai
Work from Office
Job Description: Shift Timing - 5PM IST - 2AM IST We are seeking a System Administrator to join our team. The ideal candidate will have a solid foundation in networking, Linux & Windows experience, and strong English communication skills. This role offers the opportunity to gain hands-on training in advanced monitoring tools, firewall management, and SSL certificate platforms. Key Responsibilities: Monitor network infrastructure and services to ensure uptime and performance. Respond to alerts and escalate issues as per defined procedures. Perform Linux & Windows system checks and log analysis. Work closely with senior engineers and customer teams for issue resolution. Maintain documentation related to network incidents, changes, and monitoring procedures. Participate in regular training sessions to develop skills in the following areas: BEST Monitoring Platform End User Troubleshooting Firewall Management IT Infra Security Incident Management DigiCert SSL Certificate Management Required Skills & Qualifications: Proficient English communication skills (both verbal and written). Basic hands-on experience with Linux (any distribution). Basic hands-on experience with Windows OSs Strong understanding of networking fundamentals, including: TCP/IP, DNS, DHCP, Subnetting Routers, switches, firewalls (concepts) Network troubleshooting tools (ping, traceroute, netstat, etc.) Strong analytical and problem-solving abilities. Willingness to learn and work in a customer-focused environment. Flexible to work in shifts (Eastern Time Zone) Preferred (Good to Have): Experience in using or exposure to any network monitoring tools. Basic understanding of firewall configurations and SSL certificates. Any certification such as CompTIA Network+, CCNA, or RHCSA is a plus. Interested can reach us at careers.tag@techaffinity.com
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Chennai
Work from Office
Design and develop backend components and RESTful APIs using Java (11+) and Spring Boot. Build and maintain scalable microservices with a strong emphasis on clean architecture. Write reliable and efficient SQL queries; work with relational and optionally NoSQL (MongoDB) databases. Apply DSA fundamentals in solving problems, optimizing code, and building performant features. Follow and advocate for SOLID principles, clean code, and test-driven development. Collaborate across product, design, and QA to build meaningful, high-quality features. Contribute to internal tools or AI-powered enhancements to accelerate workflows. Participate in code reviews, peer discussions, and technical design sessions. What We're Looking For: 1-2 years of backend development experience using Java and Spring Boot. Solid understanding and application of Data Structures and Algorithms in real-world scenarios. Strong foundation in Object-Oriented Programming and adherence to SOLID principles. Hands-on experience with SQL databases and understanding of performance tuning. Familiarity with MongoDB or other NoSQL databases (good to have). Curiosity or exposure to AI/ML, generative APIs, or automation use cases. Good communication skills, debugging ability, and a mindset for continuous learning. Bonus Points For: Familiarity with cloud environments (AWS). Experience with Git and CI/CD pipelines (e.g., GitHub Actions, Jenkins). Exposure to monitoring/logging tools like Prometheus, Grafana, or ELK. Past experience in competitive programming, hackathons, or personal projects.
Posted 3 weeks ago
2.0 - 7.0 years
4 - 9 Lacs
Pune, Coimbatore
Work from Office
Job Summary : We are seeking a skilled Erlang Developer to join our backend engineering team. The ideal candidate will have a strong background in Erlang, with working experience in Elixir and RabbitMQ. You will play a key role in designing, building, and maintaining scalable, fault-tolerant systems used in high-availability environments. Key Responsibilities : - Design, develop, test, and maintain scalable Erlang-based backend applications. - Collaborate with cross-functional teams to understand requirements and deliver efficient solutions. - Integrate messaging systems such as RabbitMQ to ensure smooth communication between services. - Write reusable, testable, and efficient code in Erlang and Elixir. - Monitor system performance and troubleshoot issues in production. - Ensure high availability and responsiveness of services. - Participate in code reviews and contribute to best practices in functional programming. Required Skills : - Proficiency in Erlang with hands-on development experience. - Working knowledge of Elixir and the Phoenix framework. - Strong experience with RabbitMQ and messaging systems. - Good understanding of distributed systems and concurrency. - Experience with version control systems like Git. - Familiarity with CI/CD pipelines and containerization (Docker is a plus). Preferred Qualifications : - Experience working in telecom, fintech, or real-time systems. - Knowledge of OTP (Open Telecom Platform) and BEAM VM internals. - Familiarity with monitoring tools like Prometheus, Grafana, etc.
Posted 3 weeks ago
3.0 - 8.0 years
11 - 21 Lacs
Noida
Hybrid
About us- We are a California based business and IT consulting firm founded in 2013. Our head office is in Dublin, CA, and our Global Delivery Centre is in Noida & Bangalore, India. Over the last eleven years, we have managed over a thousand solution engagements where we helped our clients transform their businesses through better analytics, organizational process changes, and cloud CRM solutions. We have worked with companies of all sizes and types, including Start-Ups, Fortune 100, and government agencies. Over 80% of our business comes from our existing customers. We provide business and technology solutions using big data and cloud technologies, including Salesforce.com, Microsoft Dynamics, Marketo, and other platforms. Our business solutions include business modeling/business architecture and business analytics services for sales, marketing, human resources, and customer service areas. We provide implementation services for cloud CRMs (Salesforce.com and Microsoft Dynamics) and development services for Force.com and .Net platforms. We also set up business intelligence solutions for both traditional and big data technologies. We use the distributed agile delivery model for our development and configuration projects. Our U.S. Team executes product discovery, business, and technical architecture, user experience design, and project management. Our Offshore Team focuses on the development, testing, and configuration. URL - https://www.mirketa.com/ Job Title: Software Engineer Experience 3+ years Location: Noida Work mode - Hybrid If you crave impact and live for the edge of what's possible in enterprise AI, join us to redefine the world of support engineering. You Bring 3+ years experience developing large-scale applications in cloud production environments (AWS, GCP, Azure, or private cloud). Strong programming skills in Python (preferred), or other OOP ( e.g. Java/C++). Familiarity with NoSQL DBs, Message Queues, Vector DBs, API servers, Jupyter Notebooks. Preferably, a good understanding of different stages of complex ML/AI solutions (search and recommendation systems): data science, distributed data pipelines, LLM orchestration. Knowledge of docker, Kubernetes, telemetry (OTLP, Grafana, Prometheus) would be a plus. Excellent problem-solving skills. Ability to find creative solutions to real-world problems and take end-to-end ownership of building and shipping. Thrive in fast-paced development cycles for full-stack solutions. A customer-obsessed mindset. Expect Novel challenges and the opportunity to re-invent and simplify complex systems. A fast-paced environment where a bias for action and ownership are paramount. To strive for the highest standards and think big, delivering groundbreaking solutions. We offer: A chance to define the future of AI in the enterprise. Recognition and opportunity for significant equity and growth with the company. Competitive compensation and benefits. What we value Customer obsession: Always starting with the customer and working backward. Ownership and Long-term thinking: Taking initiative to solve problems in scalable and extensible ways. Pride in Craftsmanship: Attention to detail is as much as building reliable, maintainable, and secure systems that scale. Continuous Learning: Curiosity, and the desire to raise the performance bar Trust, Respect, and Care: Earning trust and care for everyone we work with.
Posted 3 weeks ago
3.0 - 7.0 years
5 - 15 Lacs
Noida, Gurugram, Bengaluru
Hybrid
Our Exciting Opportunity: This job role is responsible for the coordination, management, and execution of proactive and reactive maintenance activities that require a higher level of support than the one offered by 2nd Level Operations. This shall ensure that the services provided to customers are continuously available and performing to Service Level Agreement (SLA) performance levels! We believe in trust: we trust each other to do the right things! • We believe in taking decisions as close to the product and technical expertise as possible. • We believe in creativity: trying new things and learning from our mistakes. • We believe in sharing our insights and helping one another to build an even better user plane. • We truly believe in happiness: we enjoy and feel passionate about what we do and value each other's technical competence deeply. What you will do • Provide operational support for MITO and DevOps tools • Handle customer configuration activities • Handle regular maintenance activities • Manage stakeholder communication for ticket handling • Adhere to the operational SLAs • Be able to support 24x7 on a need basis • Analyze customer-reported application issues and deliver the resolution • Propose solution scenarios with identified components You will bring • Extensive maintenance/support experience in a leading technical role and a deep understanding of the underlying processes, methods, and tools • Working knowledge of Java technologies and Python • Experience working with tools like SVN/Git, Jenkins, Docker, Kubernetes, and Zabbix • Extensive experience working in a RedHat Linux environment and with MySQL databases • Working knowledge of IIS (web server), Citrix, RDP, and Windows Servers • Software development life-cycle (SDLC) exposure with technical leadership ability across design, coding, testing, and integration phases • An ability to learn new technologies/systems and assimilate new information quickly in a fast-paced and constantly changing environment • Strategic thinker with a strong service orientation. Minimum experience required: 3 to 7 years. Qualification: BE/B.Tech or MCA (regular).
Posted 3 weeks ago
3.0 - 8.0 years
11 - 15 Lacs
Mumbai
Work from Office
About the Job: The Red Hat India Services team is looking for a Consultant to join us in Mumbai, India. In this role, you will help us ensure that our engagements are not just a technology implementation, but an organisational transformation. As a consultant, you will work with our lead architect in our engagements, co-creating innovative software solutions using emerging open source technology and modern software design methods in an agile environment. You'll be coached by the team to facilitate the design and technical delivery of our solutions. As you do so, you'll create enthusiasm for building great software using principles of open source and agile culture. You'll support everything from scoping to delivering the engagements. Successful applicants must reside in a city where Red Hat has mentioned the location. What will you do Participate in all aspects of agile software development, including design, implementation, and deployment Design client-side and server-side architecture Develop and manage well-functioning databases and applications Write effective APIs Architect and provide guidance on building end-to-end systems optimized for speed and scale Engage with inspiring designers and front-end engineers, and collaborate with leading back-end engineers to create reliable APIs Collaborate across time zones via Slack, GitHub comments, documents, and frequent videoconferences This position requires frequent on-site work with clients and availability to travel up to 50-80%. What will you bring At least 3 years of experience in building large-scale software applications Extensive experience in OpenShift, Kafka, 3scale Hands-on experience with Service Mesh technologies, including Istio, for traffic management, security, and observability in microservices architectures Knowledge of DevOps (CI/CD, Git, ArgoCD) Monitoring using Prometheus, Grafana Experience in building web applications Experience in designing and integrating RESTful APIs Excellent debugging and optimisation skills Experience in unit/integration testing Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache), and UI/UX design About Red Hat Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com . General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 3 weeks ago
3.0 - 8.0 years
30 - 35 Lacs
Bengaluru
Work from Office
The IT AI Application Platform team is seeking a Senior Site Reliability Engineer (SRE) to develop, scale, and operate our AI Application Platform based on Red Hat technologies, including OpenShift AI (RHOAI) and Red Hat Enterprise Linux AI (RHEL AI). As an SRE you will contribute to running core AI services at scale by enabling customer self-service, making our monitoring system more sustainable, and eliminating toil through automation. On the IT AI Application Platform team, you will have the opportunity to influence the complex challenges of scale which are unique to Red Hat IT managed AI platform services, while using your skills in coding, operations, and large-scale distributed system design. We develop, deploy, and maintain Red Hats next-generation Ai application deployment environment for custom applications and services across a range of hybrid cloud infrastructures. We are a global team operating on-premise and in the public cloud, using the latest technologies from Red Hat and beyond. Red Hat relies on teamwork and openness for its success. We are a global team and strive to cultivate a transparent environment that makes room for different voices. We learn from our failures in a blameless environment to support the continuous improvement of the team. At Red Hat, your individual contributions have more visibility than most large companies, and visibility means career opportunities and growth. What you will do The day-to-day responsibilities of an SRE involve working with live systems and coding automation. As an SRE you will be expected to Build and manage our large scale infrastructure and platform services, including public cloud, private cloud, and datacenter-based Automate cloud infrastructure through use of technologies (e.g. auto scaling, load balancing, etc.), scripting (bash, python and golang), monitoring and alerting solutions (e.g. Splunk, Splunk IM, Prometheus, Grafana, Catchpoint etc) Design, develop, and become expert in AI capabilities leveraging emerging industry standards Participate in the design and development of software like Kubernetes operators, webhooks, cli-tools.. Implement and maintain intelligent infrastructure and application monitoring designed to enable application engineering teams Ensure the production environment is operating in accordance with established procedures and best practices Provide escalation support for high severity and critical platform-impacting events Provide feedback around bugs and feature improvements to the various Red Hat Product Engineering teams Contribute software tests and participate in peer review to increase the quality of our codebase Help and develop peers capabilities through knowledge sharing, mentoring, and collaboration Participate in a regular on-call schedule, supporting the operation needs of our tenants Practice sustainable incident response and blameless postmortems Work within a small agile team to develop and improve SRE methodologies, support your peers, plan and self-improve What you will bring A bachelor's degree in Computer Science or a related technical field involving software or systems engineering is required. However, hands-on experience that demonstrates your ability and interest in Site Reliability Engineering are valuable to us, and may be considered in lieu of degree requirements. You must have some experience programming in at least one of these languagesPython, Golang, Java, C, C++ or another object-oriented language. 
You must have experience working with public clouds such as AWS, GCP, or Azure. You must also have the ability to collaboratively troubleshoot and solve problems in a team setting. As an SRE you will be most successful if you have some experience troubleshooting an as-a-service offering (SaaS, PaaS, etc.) and some experience working with complex distributed systems. We like to see a demonstrated ability to debug, optimize code and automate routine tasks. We are Red Hat, so you need a basic understanding of Unix/Linux operating systems. Desired skills 3+ years of experience of using cloud providers and technologies (Google, Azure, Amazon, OpenStack etc) 1+ years of experience administering a kubernetes based production environment 2+ years of experience with enterprise systems monitoring 2+ years of experience with enterprise configuration management software like Ansible by Red Hat, Puppet, or Chef 2+ years of experience programming with at least one object-oriented language; Golang, Java, or Python are preferred 2+ years of experience delivering a hosted service Demonstrated ability to quickly and accurately troubleshoot system issues Solid understanding of standard TCP/IP networking and common protocols like DNS and HTTP Demonstrated comfort with collaboration, open communication and reaching across functional boundaries Passion for understanding users needs and delivering outstanding user experiences Independent problem-solving and self-direction Works well alone and as part of a global team Experience working with Agile development methodologies #LI-SH4 About Red Hat Red Hat is the worlds leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat Red Hats culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO) Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. 
If you need assistance completing our online job application, email application-assistance@redhat.com . General inquiries, such as those regarding the status of a job application, will not receive a reply.
Posted 3 weeks ago
4.0 - 8.0 years
15 - 19 Lacs
Bengaluru
Work from Office
We are seeking a DevOps & Odoo Tech Lead (India) to spearhead the rollout and support of the Services QT tool within Odoo, ensuring robust infrastructure, configuration, and operational excellence. You'll design and implement APIs for seamless integration with Nokia's service automation platforms and external systems; architect and manage the DevOps environment, including CI/CD pipelines, containerization, infrastructure as code, high-availability Odoo deployments, monitoring, and automation; and resolve complex performance and integration issues. As team leader, you will coach and mentor DevOps staff, manage agile release cycles, and drive best practices for operational stability, scalability, and security. You have: Deep understanding of Odoo architecture (frontend, backend, database structure) and proficiency in Linux (Ubuntu, Debian, CentOS), as Odoo primarily runs on Linux-based environments. Experience in installing, configuring, and optimizing Odoo (Community and Enterprise editions) and system monitoring using tools like Prometheus, Grafana, or the ELK stack. Knowledge of Odoo modules, customization, and development (Python, XML, JavaScript) and ability to manage Odoo scaling (multi-instance, multi-database). Expertise in Odoo performance tuning (load balancing, caching, database optimization). Experience with Git and GitHub/GitLab CI/CD for version control and deployment automation. Experience in setting up and managing virtual machines (VMs), bare-metal servers, and containers, and automating deployments using Ansible, Terraform, or shell scripting. It would be nice if you also had: Expertise in PostgreSQL (Odoo's database). Experience in AWS, Google Cloud, Azure, or DigitalOcean for cloud-based Odoo hosting. Expertise in network security, firewalls, and VPNs. Responsibilities: Define, design, and oversee the development of APIs required from Nokia products (and other new-tech vendors) to enable seamless integration with Nokia's service automation platforms. Act as the primary technical liaison for both internal and external service software teams, guiding effective integration with Nokia's service automation components. Diagnose and resolve complex performance and reliability issues within service operations automation using deep expertise in DevOps, infrastructure, and Odoo tuning. Use in-depth business domain knowledge to align architectural and DevOps strategies with service automation goals and customer objectives. Provide structured mentoring, best practices, and real-time guidance to Managed Services DevOps staff, taskforces, and workteams. Coordinate task allocation, monitor progress, and coach team members, contributing feedback for formal performance evaluations. Lead release management within Scrum/Agile cycles, including planning, execution, regression testing, and post-release reviews to meet customer requirements. Administer and optimize Odoo deployments on Linux or cloud platforms, handling installation, configuration, performance tuning, HA, and backups, while implementing CI/CD pipelines, containerization, infrastructure automation, monitoring, and security best practices.
Posted 3 weeks ago
4.0 - 8.0 years
20 - 25 Lacs
Mumbai
Work from Office
Required Qualification: BE/ B Tech/ MCA Skill, Knowledge &Trainings: Own and manage the CI/CD pipelines for automated build, test, and deployment. Design and implement robust deployment strategies for microservices and web applications. Set up and maintain monitoring, alerting, and logging frameworks (e.g., Prometheus, Grafana, ELK) Build automations which will help optimize software delivery. Improve reliability, quality, and time-to-market of our suite of software solutions. Will be responsible for availability, latency, performance efficiency, change management, monitoring, emergency response and capacity planning. Will create services that will do automatic provisioning of test environments, automation of release management process, setting up pre-emptive monitoring of logs and creating dashboards for metrics visualisations Partner with development teams to improve services through rigorous testing and release procedures. Run our infrastructure with Gitlab CI/CD, Kubernetes, Kafka, NGINX and ELK stack. Co-ordinate with infra teams and developers to improvise the incident management process. Responsible for L1 support as well. Good Communication and Presentation skills Core Competencies(Must Have): Elastic, Logstash, Kibana or AppDynamics CI/CD Gitlab/Jenkins Other KeySkills SSO technologies Ansible Python Linux Administration Additional Competencies (Nice to have): Kubernetes Kafka, MQ NGINX or APIGEE Redis Experience in working with outsourced vendor teams for application development Appreciation of Enterprise Functional Architecture in Capital Markets Job Purpose: We are looking for a skilled and proactive Site Reliability Engineer (SRE) with strong expertise in deployment automation, monitoring, and infrastructure reliability. The ideal candidate will be responsible for managing the end-to-end deployment lifecycle, ensuring the availability, scalability, and performance of our production and non-production environments. Area of Operations Key Responsibility Deployment & Release Management Own and manage the CI/CD pipelines for automated build, test, and deployment. Design and implement robust deployment strategies for microservices and web applications. Monitor and troubleshoot deployment issues and rollbacks, ensuring zero-downtime deployment where possible System Reliability & Performance Set up and maintain monitoring, alerting, and logging frameworks (e.g., Prometheus, Grafana, ELK) Any Other Requirement: Should be a good team player. Would be required to work with multiple projects / teams concurrently
Posted 3 weeks ago
8.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Company Overview: Maximus is a leading innovator in the government space, providing transformative solutions in the management and service delivery of government health and human services programs. We pride ourselves on our commitment to excellence, innovation, and a customer-first approach, driven by our core values. This has fostered our continual support of public programs and improved access to government services for citizens. Maximus continues to grow its Digital Solutions organization to better serve the needs of our organization and our customers in the government, health, and human services space, while improving access to government services for citizens. We use an approach grounded in design thinking, lean, and agile to help solve complicated problems and turn bold ideas into delightful solutions. Job Description: We are seeking a hands-on and strategic Lead DevOps Engineer to architect, implement, and lead the automation and CI/CD practices across our cloud infrastructure. This role demands deep expertise in cloud-native technologies and modern DevOps tooling, with a strong emphasis on AWS, Kubernetes, ArgoCD, and Infrastructure as Code. The ideal candidate is also expected to be a motivated self-starter with a proactive approach to resolving problems and issues with minimal supervision. Key Responsibilities: Design and manage scalable infrastructure across AWS and Azure using Terraform (IaC). Define and maintain reusable Terraform modules to enforce infrastructure standards and best practices. Implement secrets management, configuration management, and automated environment provisioning. Architect and maintain robust CI/CD pipelines using Jenkins and ArgoCD. Implement GitOps workflows for continuous delivery and environment promotion. Automate testing, security scanning, and deployment processes across multiple environments. Design and manage containerized applications with Docker. Deploy and manage scalable, secure workloads using Kubernetes (EKS/ECS/GKE/AKS/self-managed). Create and maintain Helm charts, Kustomize configs, or other manifest templates. Manage Git repositories, branching strategies, and code review workflows. Promote version control best practices, including commit hygiene and semantic release tagging. Set up and operate observability stacks such as Prometheus, Grafana, ELK, Loki, and Alertmanager. Define SLAs, SLOs, and SLIs for critical services. Lead incident response, perform root cause analysis, and publish post-mortem documentation. Integrate security tools and checks directly into CI/CD workflows. Manage access control and secrets, and ensure compliance with standards such as FedRAMP. Mentor and guide DevOps engineers to build a high-performing team. Collaborate closely with software engineers, QA, product managers, and security teams. Promote a culture of automation, reliability, and continuous improvement. Qualifications: Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience). 8+ years of experience in DevOps or a similar role, with a strong security focus. Preferred: AWS Certified Cloud Practitioner, AWS Certified DevOps Engineer – Professional, AWS Certified Solutions Architect, or similar. Knowledge of cloud platforms (AWS; Azure good to have) and containerization technologies (Docker, Kubernetes), with a key focus on AWS, EKS, and ECS. Experience with infrastructure as code (IaC) tools such as Terraform.
Proficiency in CI/CD tools like AWS CodePipeline, Jenkins, and Azure DevOps Server. Familiarity with programming and scripting languages (e.g., Python, Bash, Go). Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Strong communication skills, with the ability to convey complex security concepts to technical and non-technical stakeholders. Preferred Qualifications: Strong understanding and working experience with enterprise applications and containerized application workloads. Knowledge of networking concepts. Knowledge of network security principles and technologies (e.g., firewalls, VPNs, IDS/IPS).
Posted 3 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
Are you someone who jumps into action when something breaks? Do you enjoy digging into dashboards, logs, and alerts to find out what went wrong? We're looking for a Product Support Engineer who loves solving problems and working closely with teams to keep systems running smoothly. In this role, you'll be the first point of contact for production issues. You'll monitor system health, investigate alerts, and work with Engineering, DevOps, Product, and Customer Support teams to fix problems fast. From resolving customer support tickets to setting up alerts and dashboards, your work will directly impact the stability and reliability of our services. If you're hands-on with monitoring tools, understand cloud basics, can dig into logs, and are eager to take ownership of production support, we'd love to connect with you. What You'll Do: Monitor system health and performance using tools like Grafana, New Relic, Datadog, Sumo Logic, Dynatrace, etc. Create and maintain dashboards, alerts, and log queries to improve visibility and issue detection. Respond to and resolve support tickets by working closely with customer support and engineering teams. Use Jira to track issues, bugs, and tasks; keep them updated with clear status and progress. Document processes, known issues, and solutions in Confluence and maintain operational playbooks. Troubleshoot and analyze production issues using logs and monitoring data. Support root cause analysis and contribute to post-incident reviews. Assist in automating routine tasks and improving support workflows. Communicate effectively with both technical and non-technical stakeholders. Apply basic SQL and programming knowledge for debugging and data checks. Collaborate with engineering, DevOps, product, and customer support teams to ensure fast resolution and continuous improvement. What We're Looking For: B.Tech / MCA in Computer Science, IT, or a related field. 3+ years of experience in a technical support, product support, or site operations role. Hands-on experience with monitoring and observability tools like Grafana, New Relic, Datadog, Sumo Logic, Dynatrace, etc. Experience in creating dashboards, setting up alerts, and analyzing logs. Working knowledge of Jira (issue tracking) and Confluence (documentation). Basic understanding of cloud platforms such as AWS or GCP. Strong problem-solving skills with a proactive mindset. Familiarity with SQL for basic querying and troubleshooting. Basic programming or scripting experience (e.g., Python, Bash). Good communication skills and ability to collaborate across teams (engineering, DevOps, product, and support).
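Since log digging is central to the triage work described above, here is a small hedged Python sketch that counts 5xx responses per endpoint in an access log; the common-log-style line format and the file path argument are assumptions made for illustration.

```python
"""Quick log check a support engineer might run while triaging a ticket:
count HTTP 5xx responses per endpoint in an access log."""
import re
import sys
from collections import Counter

# Assumed line shape, e.g.:
# 10.1.2.3 - - [12/Jul/2025:10:15:32 +0000] "GET /api/orders HTTP/1.1" 502 174
LINE_RE = re.compile(r'"(?:GET|POST|PUT|DELETE|PATCH) (?P<path>\S+) [^"]*" (?P<status>\d{3})')


def main(path: str) -> None:
    errors: Counter[str] = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.search(line)
            if m and m.group("status").startswith("5"):
                errors[m.group("path")] += 1
    for endpoint, count in errors.most_common(10):
        print(f"{count:6d}  {endpoint}")


if __name__ == "__main__":
    # Usage: python count_5xx.py access.log
    main(sys.argv[1])
```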
Posted 3 weeks ago
6.0 - 10.0 years
7 - 11 Lacs
Mumbai
Work from Office
We are looking for an experienced DevOps Engineer (Level 2/3) to design, automate, and optimize cloud infrastructure. You will play a key role in CI/CD automation, cloud management, observability, and security, ensuring scalable and reliable systems. Key Responsibilities: Design and manage AWS environments using Terraform/Ansible. Build and optimize deployment pipelines (Jenkins, ArgoCD, AWS CodePipeline). Deploy and maintain EKS and ECS clusters. Implement OpenTelemetry, Prometheus, and Grafana for logs, metrics, and tracing. Manage and scale cloud-native microservices efficiently. Required Skills: Proven experience in DevOps, system administration, or software development. Strong knowledge of AWS. Programming languages: Python, Go, and Bash are good to have. Experience with IaC tools like Terraform and Ansible. Solid understanding of CI/CD tools (Jenkins, ArgoCD, AWS CodePipeline). Experience with containers and orchestration tools like Kubernetes (EKS). Understanding of the OpenTelemetry observability stack (logs, metrics, traces). Good to have: Experience with container orchestration platforms (e.g., EKS, ECS). Familiarity with serverless architecture and tools (e.g., AWS Lambda). Experience using monitoring tools like Datadog/New Relic, CloudWatch, Prometheus/Grafana. Experience managing 20+ cloud-native microservices. Previous experience working in a startup. Education & Experience: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Years of relevant experience in DevOps or a similar role.
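Because the role calls for OpenTelemetry alongside Prometheus and Grafana, here is a minimal tracing sketch in Python, assuming the opentelemetry-sdk package is installed. It exports spans to the console for demonstration; a real setup would point an OTLP exporter at a collector, and the service and span names here are invented.

```python
"""Minimal OpenTelemetry tracing sketch: nested spans exported to the console."""
import time

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure a tracer provider with a console exporter (swap in OTLP for production).
provider = TracerProvider(resource=Resource.create({"service.name": "orders-api"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def handle_order(order_id: str) -> None:
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("load_from_db"):
            time.sleep(0.05)            # stand-in for a database call
        with tracer.start_as_current_span("publish_event"):
            time.sleep(0.02)            # stand-in for a message-queue publish


if __name__ == "__main__":
    handle_order("A-1001")
    provider.shutdown()                  # flush remaining spans before exit
```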
Posted 3 weeks ago
3.0 - 7.0 years
4 - 8 Lacs
Chennai
Work from Office
Role & responsibilities Collaborate with stakeholders to clarify problem statements, narrow down the analysis scope, and use findings to improve product outcomes. Create replicable data analyses using open-source technologies, summarize findings effectively, and communicate assumptions clearly. Build data pipelines to transition ad-hoc analyses into production-ready dashboards for fellow engineers' use. Develop, deploy, and maintain metrics, applications, and tools to empower engineers to access and utilize data insights independently. Write well-structured and thoroughly tested code, ensuring maintainability and scalability for future enhancements. Stay updated on relevant technologies and propose new ones for team adoption.
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Pune
Work from Office
BMC is looking for an Experienced Full Stack Developer to join our amazing Information Experience (IX) team! In this role, you will develop and maintain SaaS web application as a part of the IX team, to ensure seamless user experiences. You'll collaborate closely with UI/UX designers & IX team to bring design concepts to life and be a part of an amazing global team. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Develop and maintain SaaS web applications using Java, Spring Boot, Microservices, React/Angular, etc., while writing clean, maintainable, and efficient code. Ensure web applications are fully responsive and provide an excellent user experience across various devices and screen sizes. Collaborate with UI/UX designers to implement visually appealing web applications based on design concepts. Utilize Kubernetes and containerization tools (e.g., Docker, helm) for microservices deployment, scaling, and management in a cloud-native environment. Design and optimize databases (e.g., MySQL, PostgreSQL) for efficient data storage and retrieval in a microservices environment. Conduct thorough testing and debugging to ensure high-quality, bug-free software. Use version control systems (e.g., Git) to manage codebase and collaborate effectively with team members. Work agile within a Scrum team to meet deadlines and deliver high-quality features. Foster effective collaboration with other teams for joint feature development. To ensure youre set up for success, you will bring the following skillset & experience: 5-8 years of experience as a UI Developer with Angular must have, preferably in enterprise software companies Experience in designing, building, and maintaining complex microservices-based applications. Experience in Java (including Spring Boot framework) Experience front-end technologies, such as HTML, CSS, and JavaScript In-depth proficiency in modern frameworks like React & Angular. Experience with relational and/or NoSQL databases and ability to design efficient database schemas for microservices applications. Good analytical and problem-solving abilities to tackle complex technical challenges and provide effective solutions. Excellent communication, and interpersonal skills. Whilst these are nice to have, our team can help you develop in the following skills: Experience working in enterprise software companies. Working knowledge of cloud platforms like AWS, Azure, or Google Cloud for deploying and managing applications. Experience with Atlassian products (Jira, Confluence). Familiarity with DevOps practices and experience in setting up continuous integration and continuous deployment pipelines. Knowledge of Grafana and BMC Suite applications. B.Sc. in Computer Science, or related field
Posted 3 weeks ago
3.0 - 8.0 years
8 - 18 Lacs
Mumbai, Thane, Ahmedabad
Work from Office
Department: App Modernization Job Overview: We are looking for skilled Node.js Developers to join our team supporting TATA AIAs digital initiatives. The selected candidate(s) will be involved in designing and developing scalable microservices-based applications with a focus on Node.js, JavaScript, PostgreSQL, and cloud-native DevOps practices. The role offers opportunities to work on a modern technology stack including Prometheus, Grafana, Azure, and CI/CD pipelines, while contributing to real-time digital transformation projects. Key Responsibilities: Backend Development Design and implement RESTful APIs and backend services using Node.js (Express/Fastify). Develop scalable microservices with performance and reliability in mind. Monitoring and Performance Implement observability using Prometheus and Grafana for metrics collection, alerting, and monitoring. Database & Data Handling Work with PostgreSQL or other relational databases for data modeling and efficient query handling. CI/CD & DevOps Participate in Azure DevOps workflows, including build, deploy, and testing pipelines. Ensure zero-touch deployment and follow modern CI/CD practices. Collaboration & Version Control Collaborate using JIRA, Bitbucket, GIT, and contribute to an Agile development environment. Qualifications: Education: Bachelors degree in Computer Science, Information Technology, or a related field. Experience: 3 to 10 years of hands-on experience in Node.js development based on the role band. Skills: Primary: Node.js Prometheus Microservices Grafana Good Knowledge In: PostgreSQL Azure DevOps Additional Skills: Proficiency in project management tools like JIRA Experience with version control systems such as Bitbucket and GIT Familiarity with CI/CD pipelines and automation tools
Posted 3 weeks ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
SRE (Site Reliability Engineer Node.js Focus) Location: Hyderabad Interview Date: 10th July: 5:00 PM – 6:00 PM Core Skills Required: Node.js backend expertise Site Reliability Engineering best practices Monitoring: Prometheus, Grafana, Azure Monitor CI/CD Automation, Infrastructure as Code Incident Management & Production Support Good to Have: Experience in Kubernetes / Docker Load testing and capacity planning Scripting (Python, PowerShell, Bash) Additional Info Interviews will be conducted virtually. Candidates must be B.E. / B.Tech graduates from recognized institutions. Strong communication and client-facing skills are essential. Immediate joiners or short notice candidates will be prioritized. How to Apply: Send your CV to: careers@gigaswartechnologies.com Or apply here: tinyurl.com/2kansjhe
Posted 3 weeks ago