12.0 - 16.0 years
0 Lacs
pune, maharashtra
On-site
As the Director of Technology for Supply-chain, Logistics, Omni, and Corporate Systems at Williams-Sonoma's Technology Center in Pune, India, you will lead engineering teams in delivering high-value, high-quality features. Your responsibilities will encompass attracting, recruiting, and retaining top engineering talent, and influencing architectural discussions, strategic planning, and decision-making to ensure impactful, compelling, and scalable solutions are delivered incrementally. Your success in this role will be driven by your agility, results orientation, strategic thinking, and innovative approach to delivering software products at scale.

You will oversee engineering project delivery, define and execute an engineering strategy aligned with the company's business goals, and ensure high-quality deliverables through robust processes for code reviews, testing, and deployment. Collaboration will be a key aspect of the role: you will actively engage with Product Management, Business Stakeholders, and other Engineering Teams to define project requirements and deliver customer-centric solutions. You will also focus on talent acquisition and development: building a strong and diverse engineering team, implementing an onboarding program, coaching team members in technical expertise and leadership, and maintaining a strong talent pipeline.

The role further involves performance management, technology leadership, continuous education and domain expertise, resource planning and execution, organizational improvement, system understanding and technical oversight, and innovation and transformation, along with additional responsibilities as required. Your expertise in managing projects, technical leadership, analytical skills, business relationships, communication excellence, and execution and results orientation will be critical for success in this role.
To qualify for this position, you should have extensive industry experience in developing and delivering Supply Chain and Logistics solutions, along with leadership and team-management experience, project lifecycle management skills, technical leadership capabilities, analytical and decision-making skills, expertise in business relationships and conflict management, communication excellence, interpersonal effectiveness, execution and results orientation, vendor and stakeholder management proficiency, and self-motivation and independence. Additionally, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, and meet core technical criteria such as expertise in Java frameworks, RESTful API design, microservices architecture, database management, cloud platforms, CI/CD pipelines, containerization, logging and monitoring tools, error tracking mechanisms, event-driven architectures, Git workflows, and Agile tools.

Join Williams-Sonoma, Inc., a premier specialty retailer with a rich history dating back to 1956, and be part of a dynamic team dedicated to delivering high-quality products for the kitchen and home. Take on the challenge of driving innovation and transforming the organization into a leading technology entity with cutting-edge solutions that enhance customer experiences and maintain a competitive edge in the global market.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Cloud Engineer at AVP level in Bangalore, India, you will be responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud Platform (GCP). Your key responsibilities will include designing, deploying, and managing scalable, secure, and cost-effective cloud environments on GCP; developing Infrastructure as Code (IaC) using tools like Terraform; ensuring security best practices, IAM policies, and compliance with organizational and regulatory standards; configuring and managing VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking; setting up CI/CD pipelines for automated deployments; implementing monitoring and alerting using tools like Stackdriver; optimizing cloud spending; designing disaster recovery and backup strategies; deploying and managing GCP databases; and managing containerized applications using GKE and Cloud Run.

You will be part of the Platform Engineering Team, which builds and maintains the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity.

To excel in this role, you should have strong experience with GCP services, proficiency in scripting and Infrastructure as Code, knowledge of DevOps practices and CI/CD tools, an understanding of security, IAM, networking, and compliance in cloud environments, experience with monitoring tools, and strong problem-solving skills; Google Cloud certifications are a plus. You will receive training, development, coaching, and support to help you excel in your career, along with a culture of continuous learning and a range of flexible benefits tailored to suit your needs.

The company strives for a positive, fair, and inclusive work environment where employees are empowered to excel together every day. For further information about the company and its teams, please visit the company website: https://www.db.com/company/company.htm. The Deutsche Bank Group welcomes applications from all individuals and promotes a culture of shared successes and collaboration.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Lead DevOps Engineer at GrowExx, you will collaborate with cross-functional teams to define, design, and implement DevOps infrastructure while adhering to best practices of Infrastructure as Code (IaC). Your primary goal will be to ensure a robust and stable CI/CD process that maximizes efficiency and achieves 100% automation. You will be responsible for analyzing system requirements comprehensively to develop effective test automation strategies for applications. Additionally, you will design infrastructure using cloud platforms such as AWS, GCP, Azure, or others, manage code repositories like GitHub, GitLab, or Bitbucket, and automate software quality gateways using SonarQube.

In this position, you will design branching and merging strategies, create CI pipelines using tools like Jenkins, CircleCI, or Bitbucket Pipelines, and establish automated build and deployment processes with rollback mechanisms. Identifying and mitigating infrastructure security and performance risks will be crucial, along with designing disaster recovery and backup policies and infrastructure/application monitoring processes. Your role will also involve formulating DevOps strategies for projects with a focus on quality, performance, and cost considerations. Conducting cost/benefit analysis for proposed infrastructures, automating software delivery processes for distributed development teams, and promoting software craftsmanship will be key responsibilities. You will be expected to identify new tools and processes, and train teams on their adoption.

Key Skills:
- Hands-on experience with LLM models and evaluation metrics for LLMs.
- Proficiency in managing infrastructure on cloud platforms like AWS, GCP, or Azure.
- Expertise in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Pulumi.
- Managing code repositories using GitHub, GitLab, or Bitbucket, and implementing effective branching and merging strategies.
- Designing and maintaining robust CI/CD pipelines with tools like Jenkins, CircleCI, or Bitbucket Pipelines.
- Automating software quality checks using SonarQube.
- Understanding of automated build and deployment processes, including rollback mechanisms.
- Knowledge of infrastructure security best practices and risk mitigation.
- Designing disaster recovery and backup strategies.
- Experience with monitoring tools like Prometheus, Grafana, ELK, Datadog, or New Relic.
- Defining DevOps strategies aligned with project goals.
- Conducting cost-benefit analyses for optimal infrastructure solutions.
- Automating software delivery processes for distributed teams.
- Passion for software craftsmanship and evangelizing DevOps best practices.
- Strong leadership, communication, and training skills.

Education and Experience:
- B.Tech, B.E., BCA, MCA, or M.E. degree.
- 8+ years of relevant experience, including team-leading experience.
- Experience in Agile methodologies (Scrum and Kanban), project management, planning, risk identification, and mitigation.

Analytical and Personal Skills:
- Strong logical reasoning and analytical skills.
- Effective communication in English (written and verbal).
- Ownership and accountability in work.
- Interest in new technologies and trends.
- Multi-tasking and team management abilities.
- Coaching and mentoring skills.
- Managing multiple stakeholders and resolving conflicts diplomatically.
- Forward-thinking mindset.
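The "automated build and deployment processes with rollback mechanisms" requirement above can be sketched minimally: one common pattern keeps each release in its own directory and flips a `current` symlink, so a rollback is just re-pointing the link. This is an illustrative sketch, not GrowExx's actual tooling; the paths and function names are hypothetical.

```python
import os

def activate_release(releases_dir: str, current_link: str, version: str) -> None:
    """Atomically point the `current` symlink at a release directory."""
    target = os.path.join(releases_dir, version)
    if not os.path.isdir(target):
        raise FileNotFoundError(f"release {version} not found")
    # Build the new link beside the old one, then rename over it:
    # os.replace is atomic on POSIX, so traffic never sees a missing link.
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, current_link)

def rollback(releases_dir: str, current_link: str, history: list[str]) -> str:
    """Re-activate the previous release recorded in deploy history."""
    if len(history) < 2:
        raise RuntimeError("no previous release to roll back to")
    previous = history[-2]
    activate_release(releases_dir, current_link, previous)
    return previous
```

Deploying a new version appends to the history list; if its health checks fail, `rollback` re-points `current` at the prior release without rebuilding anything.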
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have hands-on experience with Ingress controllers such as Traefik, Nginx, or Envoy. Familiarity with configuration and release management tools like Concourse, Ansible, Chef, or Puppet would be beneficial. Additionally, you should have practical exposure to tools like Prometheus, Grafana, ELK, and Sleuth. Proficiency in utilizing various unit, integration, and end-to-end testing frameworks is also required.
Posted 1 week ago
2.0 - 6.0 years
0 - 0 Lacs
rajasthan
On-site
As a Staff Software Engineer at SpoonLabs, you will contribute to the architecture of Spoon and Vigloo, playing a key role in designing and implementing scalable and efficient architecture solutions. Your primary focus will be on XP (eXtreme Programming) practices such as Simple Design, Small Releases, TDD, and Pair Programming to ensure high-quality deliverables. You will collaborate with the team to continuously improve the architecture and maintain a sustainable codebase.

In this role, you will work closely with the Spoon and Vigloo teams to drive innovation and deliver cutting-edge solutions. Spoon can be accessed at https://www.spooncast.net/kr, and Vigloo at https://www.vigloo.com/ko.

Key responsibilities include participating in CI/CD processes, leveraging technologies like Spring Boot and Kotlin/Java, and working with AWS, Kubernetes, and Docker. Additionally, you will have the opportunity to explore Reactive Programming and Kotlin Coroutines, along with monitoring tools such as Datadog, Prometheus, and Sentry. You will identify and implement best practices to continuously improve the architecture, collaborate with cross-functional teams to ensure seamless integration and deployment, and be responsible for the scalability and performance of the applications.

The ideal candidate has a deep understanding of XP practices and is proficient in Spring Boot and Kotlin/Java. Experience with AWS, Kubernetes, Docker, and CI/CD DevOps practices is highly desirable; knowledge of Reactive Programming and Kotlin Coroutines is an added advantage. If you are passionate about building robust and scalable software architectures and enjoy working in a dynamic environment, we would love to hear from you. Please send your resume to recruit@spoonlabs.com.

Join us at SpoonLabs to be part of a forward-thinking team that values innovation, collaboration, and excellence. Don't miss the opportunity to participate in industry events like AWS re:Invent, Digital Marketing Summit, and MAU Conference. Enhance your skills and grow your career with us!
Posted 1 week ago
1.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
About SpotDraft
SpotDraft is an end-to-end CLM for high-growth companies. We are building a product to ensure convenient, fast, and easy contracting for businesses. We know the potential to be unlocked if legal teams are equipped with the right kind of tools and systems, so here we are, building them. Currently, customers like PhonePe, Chargebee, Unacademy, Meesho, and Cred use SpotDraft to streamline contracting within their organisations. On average, SpotDraft saves legal counsels within the company 10 hours per week and helps close deals 25% faster.

Job Summary
As a Jr DevOps Engineer, you will be responsible for planning, building, and optimizing the cloud infrastructure and CI/CD pipelines for the applications which power SpotDraft. You will work closely with product teams across the organization and help them ship code and reduce manual processes. You will work directly with the engineering leaders, including the CTO, to deliver the best experience for users by ensuring high availability of all systems. We follow the GitOps pattern to deploy infrastructure using Terraform and ArgoCD, and we leverage tools like Sentry, DataDog, and Prometheus to efficiently monitor our Kubernetes cluster and workloads.

Key Responsibilities
- Developing and maintaining CI/CD workflows on GitHub.
- Provisioning and maintaining cloud infrastructure on GCP and AWS using Terraform.
- Setting up logging, monitoring, and alerting of applications and infrastructure using DataDog and GCP.
- Automating deployment of applications to Kubernetes using ArgoCD, Helm, Kustomize, and Terraform.
- Designing and promoting efficient DevOps processes and practices.
- Continuously optimizing infrastructure to reduce cloud costs.

Requirements
- Proficiency with Docker and Kubernetes.
- Proficiency in Git.
- Proficiency in any scripting language (Bash, Python, etc.).
- Experience with any of the major clouds.
- Experience working on Linux-based infrastructure.
- Experience with open-source monitoring tools like Prometheus.
- Experience with any ingress controllers (nginx, traefik, etc.).

Working at SpotDraft
When you join SpotDraft, you will be joining an ambitious team that is passionate about creating a globally recognized legal tech company. We set each other up for success and encourage everyone in the team to play an active role in building the company. You will have the opportunity to work alongside one of the most talent-dense teams and to build your professional network through interacting with influential and highly sought-after founders, investors, venture capitalists, and market leaders, with hands-on impact and space for complete ownership of end-to-end processes. We are an outcome-driven organisation and trust each other to drive outcomes whilst being audacious with our goals.

Our Core Values
- Our business is to delight Customers
- Be Transparent
- Be Direct
- Be Audacious
- Outcomes over everything else
- Be 1% better every day
- Elevate each other
- Be passionate
- Take Ownership
Posted 1 week ago
7.0 - 12.0 years
7 - 11 Lacs
Mumbai, Bengaluru
Work from Office
Location: PAN India, as per the company's designated LTIM locations. Shift Type: Rotational shifts, including night shifts and weekend availability. Experience: 7+ years.

Job Summary
We are looking for a skilled and adaptable Site Reliability Engineer (SRE) / Observability Engineer to join our dynamic project team. The ideal candidate will play a critical role in ensuring system reliability, scalability, observability, and performance while collaborating closely with development and operations teams. This position requires strong technical expertise, problem-solving abilities, and a commitment to 24/7 operational excellence.

Key Responsibilities

Site Reliability Engineering:
- Design, build, and maintain scalable and reliable infrastructure.
- Automate system provisioning and configuration using tools like Terraform, Ansible, Chef, or Puppet.
- Develop tools and scripts in Python, Go, Java, or Bash for automation and monitoring.
- Administer and optimize Linux/Unix systems with a strong understanding of TCP/IP, DNS, load balancers, and firewalls.
- Implement and manage cloud infrastructure across AWS or Kubernetes.
- Maintain and enhance CI/CD pipelines using tools like Jenkins and ArgoCD.
- Monitor systems using Prometheus, Grafana, Nagios, or Datadog and respond to incidents efficiently.
- Conduct postmortems and define SLAs/SLOs for system reliability and performance.
- Plan for capacity and performance using benchmarking tools and implement autoscaling and failover systems.

Observability Engineering:
- Instrument services with relevant metrics, logs, and traces using OpenTelemetry, Prometheus, Jaeger, Zipkin, etc.
- Build and manage observability pipelines using Grafana, ELK Stack, Splunk, Datadog, or Honeycomb.
- Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation platforms.
- Design actionable alerts and dashboards to improve system observability and reduce alert fatigue.
- Partner with developers to promote observability best practices and define key performance indicators (KPIs).

Required Skills and Qualifications
- Proven experience as an SRE or Observability Engineer in complex production environments.
- Hands-on expertise in Linux/Unix systems and cloud infrastructure (AWS/Kubernetes).
- Strong programming and scripting skills in Python, Go, Bash, or Java.
- Deep understanding of monitoring, logging, and alerting systems.
- Experience with modern Infrastructure as Code and CI/CD practices.
- Ability to analyze and troubleshoot production issues in real time.
- Excellent communication skills to collaborate with cross-functional teams and stakeholders.
- Flexibility to work in rotational shifts, including night shifts and weekends, as required by project demands.
- A proactive mindset with a focus on continuous improvement and reliability.
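To make the "define SLAs/SLOs" responsibility concrete, here is a small, self-contained sketch of the error-budget arithmetic that typically backs SLO-based alerting. The numbers follow the common burn-rate convention, not any project-specific policy.

```python
def error_budget(slo: float, window_minutes: int) -> float:
    """Total allowed downtime (minutes) in the window for a given SLO."""
    return window_minutes * (1.0 - slo)

def burn_rate(errors: int, requests: int, slo: float) -> float:
    """How fast the error budget is being consumed.

    A burn rate of 1.0 means the service fails at exactly the rate the
    SLO allows; ~14.4 on a 99.9% SLO exhausts a 30-day budget in ~2 days.
    """
    if requests == 0:
        return 0.0
    observed_error_ratio = errors / requests
    allowed_error_ratio = 1.0 - slo
    return observed_error_ratio / allowed_error_ratio

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of downtime budget.
budget = error_budget(0.999, 30 * 24 * 60)
```

Multi-window alerting then pages when the burn rate is high over both a short and a long window, which filters out brief blips while still catching sustained budget loss.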
Posted 1 week ago
4.0 - 6.0 years
9 - 12 Lacs
Mumbai
Work from Office
The candidate will be responsible for the maintenance, optimization, and day-to-day operations of an open-source observability platform, and must have expertise in Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and OpenTelemetry to ensure the health, performance, and reliability of our critical systems and applications. The role involves designing, configuring, and implementing observability solutions, triaging and resolving observability-related issues, developing custom dashboards and alerts, and collaborating with development and operations teams to enhance our monitoring capabilities.

Key Responsibilities:

Platform Management & Maintenance:
* Administer, maintain, and optimize existing Prometheus, Grafana, and ELK Stack deployments, ensuring high availability and performance.
* Perform regular upgrades, patching, and configuration management of observability tools.
* Monitor the health and performance of the observability infrastructure itself, proactively identifying and resolving issues.
* Manage data retention, storage, and archiving strategies for metrics, logs, and traces.

Monitoring & Alerting:
* Design, configure, and implement monitoring solutions using Prometheus and Grafana for various applications, services, and infrastructure components.
* Develop and refine PromQL queries to extract meaningful insights from time-series data.
* Configure and manage alerting rules in Prometheus Alertmanager and Grafana to ensure timely notification of critical events.
* Collaborate with development teams to define appropriate metrics, logging standards, and tracing instrumentation.

Logging & Tracing:
* Manage and optimize the ELK Stack for centralized log aggregation, analysis, and visualization.
* Configure and implement Logstash pipelines for efficient data ingestion and transformation.
* Develop Kibana dashboards and searches for effective log correlation and troubleshooting.
* Design, implement, and manage distributed tracing solutions using OpenTelemetry, ensuring end-to-end visibility across microservices.
* Assist development teams in adopting OpenTelemetry for comprehensive application instrumentation.

Troubleshooting & Support (L2 Focus):
* Serve as an L2 escalation point for observability-related incidents, performing root cause analysis and implementing solutions.
* Debug and resolve issues related to data collection, processing, visualization, and alerting.
* Provide guidance and support to development and operations teams on how to effectively use observability tools for troubleshooting and performance analysis.
* Create and maintain comprehensive documentation, runbooks, and troubleshooting guides.

Dashboarding & Visualization:
* Develop, customize, and maintain Grafana dashboards to provide actionable insights into system performance, application health, and business metrics.
* Create meaningful visualizations and reports for various stakeholders.

Collaboration & Improvement:
* Work closely with SRE, DevOps, Development, and Infrastructure teams to integrate observability best practices throughout the software development lifecycle.
* Participate in on-call rotations as needed to support critical observability infrastructure.
* Continuously research and evaluate new open-source observability tools and technologies to improve our capabilities.
* Contribute to the automation of observability tasks and workflows.

Required Skills and Experience:
* Strong communication and interpersonal skills, with the ability to collaborate effectively with technical and non-technical stakeholders.

Nice to Have:
* Experience with Infrastructure as Code (IaC) tools like Terraform or Ansible for managing observability infrastructure.
* Familiarity with other observability tools like Loki, Tempo, Jaeger, or similar.
* Understanding of ITIL processes and incident management.
* Experience with CI/CD pipelines and integrating observability into the deployment process.
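Since the role centers on PromQL and Alertmanager, a brief sketch of how the Prometheus HTTP API is typically consumed may be useful. Only the documented `/api/v1/query` instant-query response shape is assumed; the base URL and label names below are placeholders.

```python
import json
from urllib.parse import urlencode

def instant_query_url(base_url: str, promql: str) -> str:
    """Build the URL for a Prometheus instant query (/api/v1/query)."""
    return f"{base_url}/api/v1/query?{urlencode({'query': promql})}"

def parse_vector(body: str) -> dict[str, float]:
    """Flatten an instant-vector response body into {instance: value}."""
    payload = json.loads(body)
    if payload.get("status") != "success":
        raise ValueError("query failed")
    out = {}
    for sample in payload["data"]["result"]:
        instance = sample["metric"].get("instance", "unknown")
        ts, value = sample["value"]  # [unix_timestamp, "stringified float"]
        out[instance] = float(value)
    return out
```

Applied to the response for a query like `sum by (instance) (rate(node_cpu_seconds_total[5m]))`, `parse_vector` yields one entry per instance, ready to feed a dashboard or a threshold check.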
Posted 1 week ago
1.0 - 6.0 years
3 - 6 Lacs
Pune
Work from Office
We are looking for a highly skilled Observability Engineer with 1 to 6 years of experience to join our team in the IT Services & Consulting industry. The ideal candidate will have expertise in designing and implementing monitoring solutions, ensuring high availability and scalability.

Roles and Responsibilities:
- Design and implement scalable monitoring solutions using various tools and technologies.
- Collaborate with cross-functional teams to identify and prioritize monitoring requirements.
- Develop and maintain dashboards and reports to provide insights into system performance.
- Troubleshoot issues and optimize system performance for improved reliability.
- Ensure compliance with security and regulatory standards.
- Participate in on-call rotations for 24/7 support.

Job Requirements:
- Strong understanding of monitoring tools and technologies such as Prometheus, Grafana, and the ELK stack.
- Experience with cloud-based platforms such as AWS or Azure.
- Knowledge of programming languages such as Python or Java.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
Posted 1 week ago
5.0 - 7.0 years
15 - 18 Lacs
Pune
Hybrid
So, what’s the role all about?
We are seeking a skilled and experienced DevOps Engineer to design, produce, and test high-quality software that meets specified functional and non-functional requirements within the given time and resource constraints.

How will you make an impact?
- Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments.
- Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness.
- Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows.
- Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment.
- Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance.
- Implement and maintain robust configuration management practices and infrastructure-as-code principles.
- Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency.
- Perform ongoing maintenance and upgrades (production and non-production); occasional weekend or after-hours work as needed.

Have you got what it takes?
- Experience: 5-8 years in DevOps or a similar role.
- Cloud expertise: proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar.
- CI/CD tools: hands-on experience with Jenkins pipelines (declarative and scripted).
- Scripting skills: proficiency in either shell scripting or PowerShell.
- Programming knowledge: familiarity with at least one programming language (e.g., Python, Java, or Go). Note: scripting/programming is integral to this role and will be a key focus in the interview process.
- Version control: experience with Git and Git-based workflows.
- Monitoring tools: familiarity with tools like CloudWatch, Prometheus, or similar.
- Problem-solving: strong analytical and troubleshooting skills in a fast-paced environment.
- CDK knowledge in AWS DevOps.

You will have an advantage if you also have:
- Prior experience in development or automation.
- Windows system administration experience.
- Experience with monitoring and log analysis tools.
- Jenkins pipeline knowledge.

What’s in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7876
Reporting into: Tech Manager
Role Type: Individual Contributor
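The monitoring-and-troubleshooting responsibility above often reduces to gating a deployment on a health check with retries. A minimal sketch follows; the check itself is a placeholder callable, not any specific pipeline's endpoint.

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Poll a post-deployment health check with exponential backoff.

    `check` is any zero-argument callable returning True once the service
    is up; `sleep` is injectable so unit tests don't actually wait.
    """
    for attempt in range(attempts):
        if check():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False
```

In a Jenkins pipeline this would typically wrap an HTTP probe of the service's health endpoint, with a rollback stage triggered when it returns False.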
Posted 1 week ago
2.0 - 7.0 years
15 - 20 Lacs
Bengaluru
Work from Office
As an AIOps Expert, you will take full ownership of deliverables, meeting defined quality standards within the agreed timeline and budget. You will design, implement, and manage AIOps solutions to automate and optimize AI/ML workflows, and collaborate with data scientists, engineers, and other stakeholders to ensure seamless integration of AI/ML models into production. Responsibilities include monitoring and maintaining the health and performance of AI/ML systems, developing and maintaining CI/CD pipelines for AI/ML models, implementing best practices for model versioning, testing, and deployment, troubleshooting and resolving issues related to AI/ML infrastructure and workflows, and staying up to date with the latest AIOps, MLOps, and Kubernetes tools and technologies.

Requirements and skills:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 2-7 years of relevant experience.
- Proven experience in AIOps, MLOps, or related fields.
- Strong proficiency in Python and experience with FastAPI.
- Strong hands-on expertise in Kubernetes (or AKS).
- Hands-on experience with MS Azure and its AI/ML services, including Azure ML Flow.
- Proficiency in using DevContainer for development.
- Knowledge of CI/CD tools such as Jenkins, GitHub Actions, or Azure DevOps.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.

Preferred Skills:
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar.
- Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
- Understanding of data versioning tools like DVC or MLflow.
- Experience with infrastructure as code (IaC) tools like Terraform or Ansible.
- Proficiency in Azure-specific tools and services, such as Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, and Azure Monitor with Application Insights.
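As a sketch of the "model versioning" practice the posting mentions: real registries (MLflow, Azure ML) expose roughly this contract of register / latest / promote. The class below is a toy in-memory illustration under that assumption; all names are hypothetical.

```python
class ModelRegistry:
    """Toy in-memory model registry illustrating version pinning.

    Real deployments would use MLflow or Azure ML's registry; this only
    shows the versioning contract: register, latest, promote.
    """
    def __init__(self):
        self._models = {}   # name -> {version: metadata}
        self._staged = {}   # name -> version promoted to production

    def register(self, name, version, metadata=None):
        self._models.setdefault(name, {})[version] = metadata or {}

    def latest(self, name):
        versions = self._models.get(name, {})
        if not versions:
            raise KeyError(name)
        # Compare numerically so "1.10.0" sorts after "1.9.0".
        return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

    def promote(self, name, version):
        if version not in self._models.get(name, {}):
            raise KeyError(f"{name}:{version} not registered")
        self._staged[name] = version

    def production_version(self, name):
        return self._staged.get(name)
```

Separating "latest registered" from "promoted to production" is the point: a CI pipeline can register every trained model while deployment only ever serves an explicitly promoted version, which also makes rollback a one-line re-promotion.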
Posted 1 week ago
4.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
- JIRA Data Center architecture (multi-node clustering, shared databases, and file systems)
- Linux systems administration
- Scripting (Bash, Python, or Groovy)
- Observability stack (ELK, Prometheus, Grafana, etc.)
- Cloud providers (AWS, GCP, or Azure), if hosted on cloud infrastructure
- PostgreSQL / Oracle as JIRA backend DBs
- Familiarity with the Atlassian tools ecosystem: Confluence, Bitbucket, Crowd

Technologies / OS: Python, Terraform, Docker, Linux, Windows, Network, CI/CD tools, JIRA system admin, JIRA functional admin, GitHub, GitHub Actions, Jenkins, SonarQube, JFrog Artifactory, Sonatype Nexus, JFrog Xray, Kubernetes

Soft Skills: Communication, Customer service, Autonomy, Problem-solving, Adaptability, Team spirit, Time management
Posted 1 week ago
3.0 - 8.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Automation of infrastructure, CI/CD, and transversal tools (Pre-Trade) to improve the reliability and performance of software systems. Building effective monitoring systems for production using Python scripting. Upgrading applications and infrastructure to recent frameworks and technologies. Automation of configuration and deployment on Unix servers using Python. Obsolescence and vulnerabilities management.

Qualification: BE / B.Tech in Computer Science or Electronics Engineering with an excellent academic track record.
- Minimum of 3 years' experience and expertise in Python programming: developing automation modules in Python.
- Experience in infrastructure automation or expertise in DevOps or a similar role.
- Strong expertise in Unix systems: Unix commands, shell scripting, and other Unix utilities.
- Monitoring automation expertise (Prometheus, Grafana, Kibana, etc.).
- Experience in building pipeline automation in different programming languages.
- Experience with Jenkins and Git.
- Proactive approach to problem analysis and performance issues.
- Good collaboration and communication skills.
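"Building effective monitoring systems for production using Python scripting" can start as small as a threshold check. This sketch uses only the standard library; the mount paths are placeholders, and the usage function is injectable so the logic can be tested without touching real filesystems.

```python
import shutil

def disk_alerts(paths, threshold=0.9, usage_fn=shutil.disk_usage):
    """Return alert messages for filesystems above a usage threshold.

    `usage_fn` defaults to shutil.disk_usage, which returns an object
    with `total`, `used`, and `free` attributes (bytes).
    """
    alerts = []
    for path in paths:
        usage = usage_fn(path)
        used_fraction = usage.used / usage.total
        if used_fraction >= threshold:
            alerts.append(f"{path}: {used_fraction:.0%} used")
    return alerts
```

Feeding the returned messages into a mail gateway or chat webhook turns this into a cron-driven monitor; a production setup would more likely export the fraction as a Prometheus gauge and alert via Alertmanager instead.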
Posted 1 week ago
5.0 - 9.0 years
17 - 20 Lacs
Bengaluru
Work from Office
The Opportunity
A DevOps role at FICO is an opportunity to work with cutting-edge cloud technologies on a team focused on delivering secure cloud solutions and products to enterprise customers. - VP, DevOps Engineering

What You'll Contribute
- Design, implement, and maintain Kubernetes clusters in AWS environments.
- Develop and manage CI/CD pipelines using Tekton, ArgoCD, Flux, or similar tools.
- Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications.
- Collaborate with development teams to optimize application deployments and performance on Kubernetes.
- Automate infrastructure provisioning and configuration management using AWS services and tools.
- Ensure security and compliance in the cloud infrastructure.

What We're Seeking
- Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS).
- Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, and ECR.
- Experience with Tekton for building CI/CD pipelines.
- Strong understanding of observability tools like Prometheus, Grafana, or similar.
- Scripting and automation skills (e.g., Bash, GitHub workflows).
- Knowledge of cloud platforms and container orchestration.
- Experience with infrastructure as code tools (Terraform, CloudFormation).
- Knowledge of Helm.
- Understanding of security best practices in cloud and Kubernetes environments.
- Proven experience in delivering microservices and Kubernetes-based systems.
Posted 1 week ago
7.0 - 12.0 years
10 - 20 Lacs
Chennai
Work from Office
Dear Candidate, Greetings from Genworx.ai About Us Genworx.ai is a pioneering startup at the forefront of generative AI innovation, dedicated to transforming how enterprises harness artificial intelligence. We specialize in developing sophisticated AI agents and platforms that bridge the gap between cutting-edge AI technology and practical business applications. We have an opening for the Principal DevOps Engineer position at Genworx.ai. Please find below a detailed job description for your understanding. Required Skills and Qualifications: Job Title: Principal DevOps Engineer Experience: 8+ years, with at least 5+ years in cloud automation Education: Bachelor's or Master's degree in Computer Science, Engineering or a related field Work Location: Chennai Job Type: Full-Time Website: https://genworx.ai/ Key Responsibilities: Cloud Strategy and Automation Leadership: Architect and lead the implementation of cloud automation strategies with a primary focus on GCP. Integrate multi-cloud environments by leveraging AWS and/or Microsoft Azure as needed. Define best practices for Infrastructure as Code (IaC) and automation frameworks. Technical Architecture & DevOps Practice: Design scalable, secure, and efficient CI/CD pipelines using industry-leading tools. Lead the development and maintenance of automated configuration management systems. Establish processes for continuous integration, delivery, and deployment of cloud-native applications. Develop solutions for cloud optimization and performance tuning. Create reference architectures for DevOps solutions and best practices. Establish standards for cloud architecture, versioning, and governance. Lead cost optimization initiatives for cloud infrastructure using GenAI. Security, Compliance, & Best Practices: Enforce cloud security standards and best practices across all automation and deployment processes. Implement role-based access controls and ensure compliance with relevant regulatory standards.
Continuously evaluate and enhance cloud infrastructure to mitigate risks and maintain high security. Research & Innovation: Drive research into emerging GenAI technologies and techniques in cloud automation and DevOps. Lead proof-of-concept development for new AI capabilities. Collaborate with research teams on model implementation and support. Guide the implementation of novel AI architectures. Leadership & Mentorship: Provide technical leadership and mentorship to teams in cloud automation, DevOps practices, and emerging AI technologies. Drive strategic decisions and foster an environment of innovation and continuous improvement. Act as a subject matter expert and liaison between technical teams, research teams, and business stakeholders. Technical Expertise: Cloud Platforms: Deep GCP expertise with additional experience in AWS and/or Microsoft Azure. DevOps & Automation Tools: Proficiency in CI/CD tools (e.g., GitHub Actions, GitLab, Azure DevOps) and Infrastructure as Code (e.g., Terraform). Containerization & Orchestration: Experience with Docker, Kubernetes, and container orchestration frameworks. Scripting & Programming: Strong coding skills in Python, Shell scripting, or similar languages. Observability: Familiarity with tools like Splunk, Datadog, Prometheus, Grafana, and similar solutions. Security: In-depth understanding of cloud security, identity management, and compliance requirements. Interested candidates, kindly send your updated resume and a link to your portfolio to anandraj@genworx.ai. Thank you Regards, Anandraj B Lead Recruiter Mail ID: anandraj@genworx.ai Contact: 9656859037 Website: https://genworx.ai/
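As one small illustration of the Infrastructure as Code work described above: Terraform also accepts JSON-syntax (.tf.json) files, so provisioning scripts can emit resources programmatically. The sketch below builds a google_storage_bucket resource as a plain dict; the bucket name and argument choices are illustrative:

```python
import json

def gcs_bucket_tf(name, location="US", force_destroy=False):
    """Build a Terraform JSON-syntax (.tf.json) document declaring a
    google_storage_bucket resource. Argument names mirror the google
    provider's schema; values here are illustrative."""
    return {
        "resource": {
            "google_storage_bucket": {
                name: {
                    "name": name,
                    "location": location,
                    "force_destroy": force_destroy,
                    "uniform_bucket_level_access": True,
                }
            }
        }
    }

if __name__ == "__main__":
    # Written to e.g. bucket.tf.json, this is consumable by `terraform plan`.
    print(json.dumps(gcs_bucket_tf("demo-artifacts"), indent=2))
```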
Posted 1 week ago
7.0 - 12.0 years
16 - 20 Lacs
Bengaluru
Work from Office
DevOps As a Lead DevOps / Site Reliability Engineer, you will be supporting production and development environments, from creating new and improving existing tools and processes to automating deployment and monitoring procedures, leading the continuous integration effort, administering source control systems, and deploying and maintaining production infrastructure and applications. What you'll do day to day Design and implementation of monitoring strategies. Improving reliability, stability, and performance of production systems. Leading automation of engineering and operations processes. Systems administration and management of production, pre-production, and test environments. Design and optimization of CI/CD pipelines. Maintenance and administration of source control systems. On-call support of production systems. What you must have 7+ years of experience as an SRE, DevOps, or TechOps Engineer. 5+ years of tools development or automation using Python, Perl, Java, or Go. 3+ years of containerization and orchestration experience. Solid experience in managing production environments in a public cloud, AWS preferred. Proficiency in Linux system administration. Experience with monitoring and observability tools: Prometheus, Loki, Grafana. Experience with at least two of the following: Puppet, Salt, Ansible, Terraform. Experience in setting up and supporting CI/CD pipelines.
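The monitoring strategies an SRE role like this designs often start from SLOs and error budgets. A minimal sketch of a request-based error-budget calculation (the SLO target and counts below are illustrative):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for a request-based SLO.
    The budget is (1 - slo_target) * total_requests allowed failures;
    spend is failed_requests against that budget, floored at 0."""
    budget = (1.0 - slo_target) * total_requests
    if budget == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / budget)
```

For a 99.9% SLO over one million requests, 500 failures leave half the budget, a common trigger point for slowing down risky releases.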
Posted 1 week ago
3.0 - 8.0 years
4 - 9 Lacs
Noida
Work from Office
Job Title: OpenStack Engineer / Cloud Operations Engineer Experience: 3+ Years Location: Noida Employment Type: Full-time Job Responsibilities: Manage and support production-grade OpenStack environments (Nova, Neutron, Glance, Keystone, Cinder, Horizon, etc.) Handle day-to-day operations: provisioning, system upgrades, patching, incident response, and troubleshooting Automate tasks and workflows using Ansible, Terraform, Bash, or Python Ensure system observability using tools like Prometheus, Grafana, Zabbix, or ELK Stack Maintain high availability, backup, and disaster recovery strategies Collaborate with DevOps, platform engineering, and network/security teams for seamless cloud operations Maintain documentation, playbooks, and runbooks for operations and incident response Desired Skills: Strong knowledge of OpenStack cloud computing Experience in Linux administration and shell scripting Familiarity with CI/CD tools Hands-on knowledge of monitoring tools: Grafana, Prometheus, Zabbix, or ELK Stack Exposure to virtualization and infrastructure as code Value Adds: Strong analytical and problem-solving skills Good communication and team collaboration Key Skills: OpenStack, Linux, Shell Scripting, Ansible, Terraform, Bash, Python, CI/CD, Grafana, Prometheus, Zabbix, ELK Stack, Virtualization, Cloud Operations, Monitoring Tools Interested Candidates: Please share your updated resume along with the following details to Anurag.yadav@Softenger.com WhatsApp: 7385556898 Total Experience Relevant Experience Current CTC Expected CTC Notice Period Current Location Willing to Relocate to Noida
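Day-to-day incident triage of the kind described above is often scripted in Python. The sketch below assumes a hypothetical "service LEVEL message" log format and an illustrative alert threshold; a real OpenStack deployment's log layout will differ:

```python
import re
from collections import Counter

# Hypothetical log line format: "<service> <LEVEL> <message>"
LOG_LINE = re.compile(r"^(?P<svc>[\w-]+)\s+(?P<level>ERROR|WARNING|INFO)\s+.*$")

def triage(lines, threshold=3):
    """Count ERROR lines per service and return the services whose
    error count meets the (illustrative) alerting threshold."""
    errors = Counter()
    for line in lines:
        m = LOG_LINE.match(line.strip())
        if m and m.group("level") == "ERROR":
            errors[m.group("svc")] += 1
    return [svc for svc, n in errors.items() if n >= threshold]
```

Feeding aggregated logs through a filter like this is a cheap first pass before paging anyone.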
Posted 1 week ago
3.0 - 8.0 years
6 - 12 Lacs
Gurugram
Work from Office
Location: NCR Team Type: Platform Operations Shift Model: 24x7 Rotational Coverage / On-call Support (L2/L3) Team Overview The OpenShift Container Platform (OCP) Operations Team is responsible for the continuous availability, health, and performance of OpenShift clusters that support mission-critical workloads. The team operates under a tiered structure (L2, L3) to manage day-to-day operations, incident management, automation, and lifecycle management of the container platform. This team is central to supporting stakeholders by ensuring the container orchestration layer is secure, resilient, scalable, and optimized. L2 – OCP Support & Platform Engineering (Platform Analyst) Role Focus: Advanced Troubleshooting, Change Management, Automation Experience: 3–6 years Resources: 5 Key Responsibilities: Analyze and resolve platform issues related to workloads, PVCs, ingress, services, and image registries. Implement configuration changes via YAML/Helm/Kustomize. Maintain Operators, upgrade OpenShift clusters, and validate post-patching health. Work with CI/CD pipelines and DevOps teams for build & deploy troubleshooting. Manage and automate namespace provisioning, RBAC, and NetworkPolicies. Maintain logs, monitoring, and alerting tools (Prometheus, EFK, Grafana). Participate in CR and patch planning cycles. L3 – OCP Platform Architect & Automation Lead (Platform SME) Role Focus: Architecture, Lifecycle Management, Platform Governance Experience: 6+ years Resources: 2 Key Responsibilities: Own lifecycle management: upgrades, patching, cluster DR, backup strategy. Automate platform operations via GitOps, Ansible, Terraform. Lead SEV1 issue resolution, post-mortems, and RCA reviews. Define compliance standards: RBAC, SCCs, Network Segmentation, CIS hardening. Integrate OCP with IDPs (ArgoCD, Vault, Harbor, GitLab). Drive platform observability and performance tuning initiatives. Mentor L1/L2 team members and lead operational best practices.
Core Tools & Technology Stack Container Platform: OpenShift, Kubernetes CLI Tools: oc, kubectl, Helm, Kustomize Monitoring: Prometheus, Grafana, Thanos Logging: Fluentd, EFK Stack, Loki CI/CD: Jenkins, GitLab CI, ArgoCD, Tekton Automation: Ansible, Terraform Security: Vault, SCCs, RBAC, NetworkPolicies
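As a concrete example of the NetworkPolicy work in the L2/L3 duties above: a namespace-wide default-deny baseline is a common starting point before layering allow rules. Here it is built as a plain Python dict mirroring the Kubernetes manifest (the policy name is illustrative):

```python
def default_deny_policy(namespace):
    """Kubernetes NetworkPolicy manifest (as a dict) denying all ingress
    and egress in a namespace: an empty podSelector selects every pod,
    and listing both policyTypes with no rules denies all traffic."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector = all pods in the namespace
            "policyTypes": ["Ingress", "Egress"],
        },
    }
```

Serialized to YAML and applied with `oc apply -f`, this is the baseline that explicit allow policies are then layered on.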
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As an AI Ops Expert, you will be responsible for the delivery of projects with defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will be tasked with building resilient MLOps pipelines and ensuring adherence to governance standards. Additionally, you will design, implement, and oversee AIops solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities. Troubleshooting and resolving issues related to AI/ML infrastructure and workflows will require your expertise, along with staying updated on the latest AI Ops, MLOps, and Kubernetes tools and technologies. To be successful in this role, you must possess a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIops, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with Fast API are required, as well as strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including Azure ML Flow, is essential. Additionally, you should be proficient in using DevContainer for development and have knowledge of CI/CD tools like Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary. 
Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services like Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred. Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to various initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.
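Monitoring the health of AI/ML systems in production, as this role requires, frequently relies on drift metrics. A minimal stdlib sketch of the Population Stability Index over binned feature distributions (the common 0.2 "significant drift" rule of thumb is an illustrative threshold, not from the posting):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    given as lists of proportions: sum over bins of
    (actual - expected) * ln(actual / expected). Bins are clamped to
    eps to avoid log(0). A common rule of thumb treats PSI > 0.2 as
    drift worth alerting on."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total
```

Wiring a check like this into a scheduled pipeline gives a cheap early-warning signal before model quality degrades visibly.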
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
kochi, kerala
On-site
As a Java Backend Developer in our IoT domain team based in Kochi, you will be responsible for designing, developing, and deploying scalable microservices using Spring Boot, SQL databases, and AWS services. Your role will involve leading the backend development team, implementing DevOps best practices, and optimizing cloud infrastructure. Your key responsibilities will include architecting and implementing high-performance, secure backend services using Java (Spring Boot), developing RESTful APIs and event-driven microservices with a focus on scalability and reliability, designing and optimizing SQL databases (PostgreSQL, MySQL), and deploying applications on AWS using services like ECS, Lambda, RDS, S3, and API Gateway. You will also be responsible for implementing CI/CD pipelines, monitoring and improving backend performance, ensuring security best practices, and authentication using OAuth, JWT, and IAM roles. The required skills for this role include proficiency in Java (Spring Boot, Spring Cloud, Spring Security), microservices architecture, API development, SQL (PostgreSQL, MySQL), ORM (JPA, Hibernate), DevOps tools (Docker, Kubernetes, Terraform, CI/CD, GitHub Actions, Jenkins), AWS cloud services (EC2, Lambda, ECS, RDS, S3, IAM, API Gateway, CloudWatch), messaging systems (Kafka, RabbitMQ, SQS, MQTT), testing frameworks (JUnit, Mockito, Integration Testing), and logging & monitoring tools (ELK Stack, Prometheus, Grafana). Preferred skills that would be beneficial for this role include experience in the IoT domain, work experience in startups, event-driven architecture using Apache Kafka, knowledge of Infrastructure as Code (IaC) with Terraform, and exposure to serverless architectures. In return, we offer a competitive salary, performance-based incentives, the opportunity to lead and mentor a high-performing tech team, hands-on experience with cutting-edge cloud and microservices technologies, and a collaborative and fast-paced work environment. 
If you have any experience in the IoT domain and are looking for a full-time role with a day shift schedule in an in-person work environment, we encourage you to apply for this exciting opportunity in Kochi.
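The JWT authentication mentioned in this posting boils down to HMAC-signing a base64url header and payload. The sketch below shows HS256 from first principles in Python, for illustration only; production services should use a vetted library (on the JVM, for example, the Spring Security / jjwt support the posting implies):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Unpadded base64url, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce an HS256 JWT: base64url(header).base64url(payload).sig."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"},
                                separators=(",", ":")).encode())
    body = _b64url(json.dumps(payload, separators=(",", ":")).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = _b64url(hmac.new(secret, f"{header}.{body}".encode(),
                                hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Note the constant-time comparison: naive `==` on signatures invites timing attacks.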
Posted 1 week ago
12.0 - 20.0 years
0 Lacs
karnataka
On-site
You will be joining a Global Bank's GCC in Bengaluru, a strategic technology hub that drives innovation and delivers enterprise-scale solutions across global markets. In this leadership position, you will play a key role in shaping the engineering vision, leading technical solutioning, and developing high-impact platforms that cater to millions of customers. Your responsibilities will revolve around bridging technology and business needs, creating scalable, secure, and modern applications utilizing cloud-native and full-stack technologies to enable cutting-edge digital banking solutions. The ideal candidate is a seasoned engineering leader with extensive hands-on experience in software development, architecture, and solution design. You should possess expertise in full-stack development using technologies such as .NET Core, ReactJS, Node.js, TypeScript, Next.js, and Python. Additionally, experience in Microservices and BIAN architecture for financial platforms, a deep understanding of AWS cloud-native development and infrastructure, and proficiency in REST/GraphQL API design, TDD, and secure coding practices are essential. Hands-on experience with tools like GitHub, GitHub Actions, monitoring tools (Prometheus, Grafana), and AI-powered dev tools (e.g., GitHub Copilot) is also desirable. Familiarity with DevOps/DevSecOps pipelines and deployment automation is a plus. Moreover, you should possess excellent problem-solving and leadership skills, coupled with the ability to mentor and enhance team productivity. A degree in Computer Science, IT, or a related field (Bachelor's/Master's) will be advantageous for this position.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be responsible for designing, developing, and maintaining enterprise-grade search solutions using Apache Solr and SolrCloud. Your key tasks will include developing and optimizing search indexes and schema for various use cases such as product search, document search, or order/invoice search. Additionally, you will be required to integrate Solr with backend systems, databases, and APIs, implementing features like full-text search, faceted search, auto-suggestions, ranking, and relevancy tuning. It will also be part of your role to optimize search performance, indexing throughput, and query response time for efficient results. Your expertise in Apache Solr & SolrCloud, along with a strong understanding of Lucene, inverted index, analyzers, tokenizers, and search relevance tuning will be essential for this position. Proficiency in Java or Python for backend integration and development is required, as well as experience with RESTful APIs, data pipelines, and real-time indexing. Familiarity with Zookeeper, Docker, Kubernetes for SolrCloud deployments, and knowledge of JSON, XML, and schema design in Solr will also be necessary. Furthermore, your responsibilities will include ensuring data consistency and high availability using SolrCloud and Zookeeper for cluster coordination & configuration management. You will be expected to monitor the health of the search system and troubleshoot any issues that may arise in production. Collaboration with product teams, data engineers, and DevOps teams will be crucial for ensuring smooth delivery. Staying updated with new features of Apache Lucene/Solr and recommending improvements will also be part of your role. Preferred qualifications for this position include a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with Elasticsearch or other search technologies will be advantageous, as well as working knowledge of CI/CD pipelines and cloud platforms such as Azure.
Overall, your role will involve working on search solutions, optimizing performance, ensuring data consistency, and collaborating with cross-functional teams for successful project delivery.
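Because Solr's query API is parameter-driven, backend integration code often amounts to building a /select URL with the right parameters. A minimal sketch using Solr's standard parameter names (the base URL and field names are illustrative):

```python
from urllib.parse import urlencode

def solr_search_url(base, q, fields=None, facet_fields=None, rows=10):
    """Build a Solr /select URL with common parameters: edismax query
    parsing (defType), field list (fl), row count, JSON response (wt),
    and field faceting (facet, facet.field)."""
    params = [("q", q), ("defType", "edismax"), ("rows", rows), ("wt", "json")]
    if fields:
        params.append(("fl", ",".join(fields)))
    if facet_fields:
        params.append(("facet", "true"))
        params += [("facet.field", f) for f in facet_fields]
    return f"{base}/select?{urlencode(params)}"
```

Faceted product search, for instance, adds one `facet.field` pair per dimension (brand, category) on top of the base query.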
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
indore, madhya pradesh
On-site
The Modern Data Company is seeking a skilled and experienced DevOps Engineer to join our team. As a DevOps Engineer, you will play a crucial role in designing, implementing, and managing CI/CD pipelines for automated deployments. You will be responsible for maintaining and optimizing cloud infrastructure for scalability and performance, ensuring system security, monitoring, and incident response using industry best practices, and automating infrastructure provisioning using tools such as Terraform, Ansible, or similar technologies. Collaboration with development teams to streamline release processes, improve system reliability, troubleshoot and resolve infrastructure and deployment issues efficiently, implement containerization and orchestration (Docker, Kubernetes), and drive cost optimization strategies for cloud infrastructure will be key responsibilities in this role. The ideal candidate will have at least 5 years of experience in DevOps, SRE, or Cloud Engineering roles, with strong expertise in CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions. Proficiency in cloud platforms like AWS, Azure, or GCP, Infrastructure as Code tools like Terraform, CloudFormation, or Ansible, Kubernetes, Docker, monitoring and logging tools, scripting skills for automation, networking, security best practices, and Linux administration are required. Experience working in agile development environments is a plus. Nice to have skills include experience with serverless architectures, microservices, database management, performance tuning, and exposure to AI/ML deployment pipelines. In return, we offer a competitive salary and benefits package, the opportunity to work on cutting-edge AI technologies and products, a collaborative and innovative work environment, professional development opportunities, and career growth. 
If you are passionate about AI and data products, and eager to work in a dynamic team environment to make a significant impact in the world of AI and data, we encourage you to apply now and join our team at The Modern Data Company.
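The cost optimization strategies mentioned in this posting usually start with utilisation-based rightsizing signals. A deliberately crude sketch; the percentile choice and thresholds are illustrative assumptions, not a prescribed policy:

```python
def rightsizing_action(cpu_samples, low=20.0, high=70.0):
    """Rightsizing signal from CPU utilisation samples (percentages):
    recommend 'downsize' when the p95 stays under `low`, 'upsize'
    above `high`, otherwise 'keep'. Thresholds are illustrative."""
    s = sorted(cpu_samples)
    p95 = s[min(len(s) - 1, int(round(0.95 * (len(s) - 1))))]
    if p95 < low:
        return "downsize"
    if p95 > high:
        return "upsize"
    return "keep"
```

Real pipelines would pull the samples from CloudWatch or Prometheus and weigh memory and burst patterns too; the point is that the decision itself is simple once the data is in hand.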
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Senior DevOps Engineer at TechBlocks, you will be responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments. With 5-8 years of experience in DevOps engineering roles, your expertise in CI/CD, infrastructure automation, and Kubernetes will be crucial for the success of our projects. In this role, you will own the CI/CD strategy and configuration, implement DevSecOps practices, and drive an automation-first culture within the team. Your key responsibilities will include designing and implementing end-to-end CI/CD pipelines using tools like Jenkins, GitHub Actions, and Argo CD for production-grade deployments. You will also define branching strategies and workflow templates for development teams, automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests, and manage secrets lifecycle using Vault for secure deployments. Collaborating with engineering leads, you will review deployment readiness, ensure quality gates are met, and integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. Monitoring infrastructure health and capacity planning using tools like Prometheus, Grafana, and Datadog, you will implement alerting rules, auto-scaling, self-healing, and resilience strategies in Kubernetes. Additionally, you will drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers. Your role will be pivotal in ensuring the reliability, scalability, and security of our systems while fostering a culture of innovation and continuous learning within the team. TechBlocks is a global digital product engineering company with 16+ years of experience, helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. 
We believe in the power of technology and the impact it can have when coupled with a talented team. Join us at TechBlocks and be part of a dynamic, fast-moving environment where big ideas turn into real impact, shaping the future of digital transformation.
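The deployment-readiness and quality-gate reviews described in this role can be partly automated. A minimal sketch of a canary promote/rollback gate; the tolerance, traffic minimum, and return labels are all illustrative assumptions:

```python
def canary_gate(baseline_error_rate, canary_error_rate,
                tolerance=0.5, min_requests=100, canary_requests=0):
    """Promote/rollback decision for a canary rollout: refuse to judge
    on too little traffic, roll back if the canary's error rate exceeds
    the baseline by more than `tolerance` (relative), else promote."""
    if canary_requests < min_requests:
        return "insufficient-data"
    if canary_error_rate > baseline_error_rate * (1 + tolerance):
        return "rollback"
    return "promote"
```

A gate like this typically runs inside the pipeline (e.g. as an Argo Rollouts analysis step) fed by Prometheus or Datadog queries.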
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
As a DevOps Engineer, you will be responsible for designing, implementing, and managing CI/CD pipelines to streamline software development and deployment processes. Your role will involve overseeing Jenkins management for continuous integration and automation, as well as deploying and managing cloud infrastructure using AWS services. Additionally, you will configure and optimize brokers such as RabbitMQ, Kafka, or similar messaging systems to ensure efficient communication between microservices. Monitoring, troubleshooting, and enhancing system performance, security, and reliability will also be key aspects of your responsibilities. Collaboration with developers, QA, and IT teams is essential to optimize development workflows effectively. To excel in this role, you are required to have an AWS Certification (preferably AWS Certified DevOps Engineer, Solutions Architect, or equivalent) and strong experience in CI/CD pipeline automation using tools like Jenkins, GitLab CI/CD, or GitHub Actions. Proficiency in Jenkins management, including installation, configuration, and troubleshooting, is necessary. Knowledge of brokers for messaging and event-driven architectures, hands-on experience with containerization tools like Docker, and proficiency in scripting and automation (e.g., Python, Bash) are also essential. Experience with monitoring and logging tools such as Prometheus, Grafana, ELK stack, or CloudWatch, along with an understanding of networking, security, and cloud best practices, will be beneficial. Preferred skills for this role include experience in mobile and web application development environments and familiarity with Agile and DevOps methodologies. This is a full-time position with benefits such as paid sick time, paid time off, and a performance bonus. The work schedule is during the day shift from Monday to Friday, and the work location is in person.
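When retrying publishes to brokers such as RabbitMQ or Kafka, retries are usually spaced with jittered exponential backoff so that failing clients do not retry in lockstep. A sketch of the "full jitter" variant (parameter defaults are illustrative):

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0, seed=None):
    """'Full jitter' exponential backoff: for retry n, sleep a uniform
    random duration in [0, min(cap, base * 2**n)] seconds. Returns the
    whole schedule up front for inspection or testing."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * (2 ** n))) for n in range(attempts)]
```

In a real publisher, each failed send would sleep for the next delay in the schedule before retrying, with the cap bounding worst-case latency.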
Posted 2 weeks ago