2.0 - 6.0 years
0 Lacs
kolkata, west bengal
On-site
You are a passionate and customer-obsessed AWS Solutions Architect looking to join Workmates, the fastest-growing partner to the world's major cloud provider, AWS. Your role will involve driving innovation, building differentiated solutions, and defining new customer experiences that help customers maximize their AWS potential in their cloud journey. Working alongside industry specialist organizations and technology groups, you will play a key role in leading our customers towards cloud-native transformation. Choosing Workmates and the AWS Practice will enable you to elevate your AWS experience and skills in an innovative and collaborative environment. At Workmates, you will have the opportunity to work at AWS's fastest-growing partner, pioneering cloud transformation at the forefront of cloud advancements. Join Workmates in delivering innovative work as part of your extraordinary career. People are our biggest asset, and together we aim to achieve best-in-class cloud-native operations. Be part of our mission to drive innovation across Cloud Management, Media, DevOps, Automation, IoT, Security, and more, where independence and ownership are valued, allowing you to thrive and contribute your best.
Responsibilities:
- Building and maintaining cloud infrastructure environments
- Ensuring availability, performance, security, and scalability of production systems
- Collaborating with application teams to implement DevOps practices
- Creating solution prototypes and conducting proofs of concept for new tools
- Designing repeatable, automated, and scalable processes to enhance efficiency
- Automating and streamlining operations and processes
- Troubleshooting and diagnosing issues/outages and providing operational support
- Engaging in incident handling and supporting a culture of post-mortems and knowledge sharing

Requirements:
- 2+ years of hands-on experience in building and supporting large-scale environments
- Strong architecting and implementation experience with AWS Cloud
- Proficiency in AWS CloudFormation and Terraform
- Experience with Docker containers and container environment deployment
- Good understanding of and work experience with Kubernetes and EKS
- Sysadmin and infrastructure background (Linux internals, filesystems, networking)
- Proficiency in scripting, particularly writing Bash scripts
- Familiarity with CI/CD pipeline build and release
- Experience with CI/CD tools such as Jenkins, GitLab, or Travis CI
- Hands-on experience with AWS developer tools such as AWS CodePipeline, CodeBuild, CodeDeploy, AWS Lambda, AWS Step Functions, etc.
- Experience with log management solutions (ELK/EFK or similar)
- Experience with configuration management tools such as Ansible or similar
- Proficiency in modern monitoring and alerting tools such as CloudWatch, Prometheus, Grafana, Opsgenie, etc.
- Strong passion for automating routine tasks and solving production issues
- Experience in automation testing, script generation, and integration with CI/CD
- Familiarity with AWS security features (IAM, Security Groups, KMS, etc.)
- Good to have: experience with database technologies (MongoDB, MySQL, etc.)
Desired Skills:
- AWS Professional certifications
- CKA/CKAD certifications
- Knowledge of Python/Go
- Experience with service mesh and distributed tracing
- Familiarity with Scrum/Agile methodology

Join Workmates and be part of a team that values innovation, collaboration, and continuous improvement in the cloud technology landscape. Your expertise and skills will play a crucial role in driving customer success and shaping the future of cloud solutions.
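For a concrete sense of the "automating routine tasks" expectation above, here is a minimal sketch of the kind of housekeeping check such a role involves: applying a retention policy to snapshots. The IDs, dates, and 14-day window are illustrative assumptions; in practice the snapshot list would come from a cloud provider API call rather than a literal list.

```python
from datetime import datetime, timedelta

def snapshots_to_prune(snapshots, retention_days, now):
    """Return IDs of snapshots created before the retention cutoff.

    `snapshots` is a list of (snapshot_id, created_at) pairs; in a real
    pipeline this list would come from a cloud provider API call.
    """
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, created in snapshots if created < cutoff]

# Example: with a 14-day policy, only the January 1st snapshot is pruned.
snaps = [
    ("snap-old", datetime(2024, 1, 1)),
    ("snap-new", datetime(2024, 1, 30)),
]
print(snapshots_to_prune(snaps, 14, now=datetime(2024, 1, 31)))  # ['snap-old']
```

The same shape (fetch, filter by policy, act) covers most of the routine AWS housekeeping this posting alludes to.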
Posted 1 week ago
3.0 - 9.0 years
23 - 27 Lacs
Hyderabad
Work from Office
Are you ready to shape the future of learning through cutting-edge AI? As a Principal AI/Machine Learning Engineer at Skillsoft, you'll dive into the heart of innovation, crafting intelligent systems that empower millions worldwide. From designing generative AI solutions to pioneering agentic workflows, you'll collaborate with internal and external teams to transform knowledge into a catalyst for growth, unleashing your edge while helping others do the same. Join us in redefining eLearning for the world's leading organizations!

Responsibilities:
- Hands-on Principal AI/ML engineer, driving technical innovation
- Partner with product owners to define visionary AI features
- Collaborate cross-functionally to assess impacts of new AI capabilities
- Consult and guide teams to productize prototypes with provable accuracy
- Lead research and selection of COTS and development of in-house AI/ML technologies
- Evaluate foundational models and emerging AI advancements
- Explore new technologies and design patterns through impactful prototypes
- Present research and insights to inspire innovation across teams
- Guide design and testing of agentic workflows and prompt engineering
- Fine-tune models, validate efficacy with metrics, and ensure reliability
- Evaluate and guide synthetic data generation for training and validation
- Design and guide scalable data pipelines for AI/ML training and inference
- Oversee data analysis, curation, and preprocessing
- Collaborate with external partners on AI development and integration
- Establish AI design best practices and standards for alignment
- Contribute to patentable AI innovations
- Utilize and apply generative AI to increase productivity for yourself and the organization

Environment, Tools & Technologies:
- Agile/Scrum
- Operating systems: Mac, Linux
- JavaScript, Node.js, Python
- PyTorch, TensorFlow, Keras, OpenAI, Anthropic, and friends
- LangChain, LangGraph, etc.
- APIs: GraphQL, REST
- Docker, Kubernetes
- Amazon Web Services (AWS), MS Azure, SageMaker, NIMS
- SQL: Postgres RDS
- NoSQL: Cassandra, Elasticsearch (vector DB)
- Messaging: Kafka, RabbitMQ, SQS
- Monitoring: Prometheus, ELK
- GitHub, IDE (your choice)

Skills & Qualifications:
- 8+ years of relevant industry experience (with a Master's degree preferred)
- Experience with LLMs and fine-tuning foundation models
- Development experience, including unit testing
- Design and documentation experience for new APIs, data models, and service interactions
- Familiarity with, and the ability to explain: agentic AI development and testing; AI security and data privacy concerns; synthetic data generation and its concerns; foundation model fine-tuning; generative AI prompt engineering and its challenges

Attributes for Success:
- Proactive, independent, adaptable
- Collaborative team player
- Customer-service minded with an ownership mindset
- Excellent analytic and communication skills
- Ability and desire to coach and mentor other developers
- Passionate, curious, open to new ideas, with the ability to research and learn new technologies
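The synthetic-data duties above can be illustrated with a small sketch. The field names and label set are illustrative assumptions, not Skillsoft's actual schema; the point is that seeding the generator makes a validation dataset reproducible across runs.

```python
import random

def synthetic_records(n, seed=0):
    """Generate deterministic synthetic (prompt, label) pairs.

    Seeding makes the dataset reproducible for validation runs; the
    field names and label set here are illustrative placeholders.
    """
    rng = random.Random(seed)
    labels = ["beginner", "intermediate", "advanced"]
    return [
        {"prompt": f"question-{i}", "label": rng.choice(labels)}
        for i in range(n)
    ]

records = synthetic_records(3)
```

Determinism matters here: when a fine-tuning run regresses, you want to rule out the validation set itself as the variable.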
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are seeking a skilled and experienced DevOps Lead to join our team. The ideal candidate will have a solid background in building and deploying pipelines using Jenkins and GitHub Actions, familiarity with messaging systems like ArtemisMQ, and extensive expertise in 3-tier and microservices architecture, including Spring Cloud Services (SCS). Proficiency in Azure cloud services and deployment models is a crucial requirement. Your responsibilities will include designing, implementing, and maintaining CI/CD pipelines using Jenkins and GitHub Actions for Java applications; ensuring secure and efficient build and deployment processes; collaborating with development and operations teams to integrate security practices into the DevOps workflow; and managing and optimizing messaging systems, specifically ArtemisMQ. You will also be tasked with architecting and implementing solutions based on 3-tier and microservices architecture, utilizing Azure cloud services for application deployment and management, monitoring and troubleshooting system performance and security issues, and staying updated on industry trends in DevSecOps and cloud technologies. Additionally, mentoring and guiding team members on DevSecOps practices and tools will be part of your role. As a DevOps Lead, you will be expected to take ownership of parts of proposal documents, provide input into solution design based on your expertise, plan configuration activities, conduct solution and product demonstrations, and actively lead small projects. You will also contribute to unit-level and organizational initiatives aimed at delivering high-quality, value-adding solutions to customers.
In terms of technical requirements, you should have proven experience as a DevSecOps Lead or in a similar role, strong proficiency in Jenkins and GitHub Actions for building and deploying Java applications, the ability to execute CI/CD pipeline migrations from Jenkins to GitHub Actions for Azure deployments, familiarity with messaging systems such as ArtemisMQ, and extensive knowledge of 3-tier and microservices architecture, including Spring Cloud Services (SCS). Familiarity with infrastructure-as-code tools like Terraform or Ansible, knowledge of containerization and orchestration tools like Docker and Kubernetes, proficiency in Azure cloud services and AI services deployment, a strong understanding of security best practices in DevOps, and excellent problem-solving skills are also prerequisites. Effective communication, leadership skills, the ability to work in a fast-paced collaborative environment, and knowledge of tools like GitOps, Podman, Argo CD, Helm, Nexus, GitHub Container Registry, Grafana, and Prometheus are also desired. Furthermore, you should possess the ability to develop value-creating strategies, have good knowledge of software configuration management systems, stay updated on the latest technologies and industry trends, exhibit logical thinking and problem-solving skills, understand financial processes for various project types and pricing models, identify improvement areas in current processes and suggest technological solutions, and have client interfacing skills. Project and team management capabilities, along with domain knowledge of one or two industries, are also beneficial. Preferred skills include expertise in Azure DevOps within the Cloud Platform technology domain.
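To make the Jenkins-to-GitHub-Actions migration requirement concrete, here is a hedged sketch that emits a workflow skeleton from a list of Jenkins stage names. It is a naive starting point only: real Jenkinsfiles need per-step translation that no generator can fully automate, and the stage names used are hypothetical.

```python
def jenkins_stages_to_workflow(stages):
    """Emit a minimal GitHub Actions workflow skeleton from Jenkins stage names.

    Each Jenkins stage becomes a placeholder job; actual build steps must
    still be ported by hand.
    """
    lines = ["name: migrated-pipeline", "on: [push]", "jobs:"]
    for stage in stages:
        job = stage.lower().replace(" ", "-")  # job IDs must be lowercase-safe
        lines += [
            f"  {job}:",
            "    runs-on: ubuntu-latest",
            "    steps:",
            "      - uses: actions/checkout@v4",
            f"      - run: echo 'TODO: port Jenkins stage {stage}'",
        ]
    return "\n".join(lines)

workflow = jenkins_stages_to_workflow(["Build", "Deploy"])
```

A scaffold like this keeps the migration mechanical where it can be, while flagging every stage that still needs human attention.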
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
As an Azure DevOps Engineer, you will be a valuable member of our technology team, bringing your expertise in system administration, DevOps methodologies, and IT infrastructure management to ensure automation, scalability, and operational excellence are at the forefront of our operations. Your primary responsibilities will include managing and maintaining our enterprise IT infrastructure, which encompasses servers, networks, and cloud environments. You will design and implement DevOps pipelines for continuous integration and deployment (CI/CD), automate system tasks and workflows using scripting and configuration management tools, and monitor system performance to troubleshoot issues and ensure high availability and reliability. Collaboration with development, QA, and operations teams will be essential to streamline deployment and release processes, while also maintaining system security, compliance, and backup strategies. Documenting system configurations, operational procedures, and incident resolutions will also be part of your duties. To excel in this role, you should possess a Bachelor's degree in Information Technology, Computer Science, or a related field, along with 3-7 years of experience in DevOps, IT operations, or system administration. Your proficiency should include Linux/Windows server administration; CI/CD tools such as Jenkins, GitLab CI, and Azure DevOps; Infrastructure as Code tools like Terraform and Ansible; familiarity with cloud platforms like AWS, Azure, and GCP; and experience with monitoring tools like Prometheus, Grafana, and Nagios. A strong understanding of networking, security, and virtualization technologies is crucial, along with excellent problem-solving and communication skills. Preferred qualifications for this role include certifications in AWS, Azure, or DevOps tools, experience with containerization using Docker and Kubernetes, and familiarity with ITIL practices and incident management systems.
In your role as a Software Engineer, you will apply scientific methods to analyze and solve software engineering problems, developing and maintaining software solutions and applications. Your responsibilities will involve the development and application of software engineering practice and knowledge, requiring original thought and judgment and the ability to supervise other software engineers. Building your skills and expertise within the software engineering discipline is essential, as is collaborating as a team player with other software engineers and stakeholders to achieve project goals and standards.
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Managed Services Provider (MSP), we are looking for an experienced TechOps Lead to take charge of our cloud infrastructure operations team. Your primary responsibility will be ensuring the seamless delivery of high-quality, secure, and scalable managed services across multiple customer environments, predominantly on AWS and Azure. In this pivotal role, you will serve as the main point of contact for customers, offering strategic technical direction, overseeing day-to-day operations, and empowering a team of cloud engineers to address complex technical challenges. Conducting regular governance meetings with customers, you will provide insights and maintain strong, trust-based relationships. As our clients explore AI workloads and modern platforms, you will lead the team in rapidly adopting and integrating new technologies to keep us ahead of evolving industry trends.

Your key responsibilities will include:
- Acting as the primary technical and operational contact for customer accounts
- Leading governance meetings with customers to review SLAs, KPIs, incident metrics, and improvement initiatives
- Guiding the team in diagnosing and resolving complex technical problems in AWS, Azure, and hybrid environments
- Ensuring adherence to best practices in cloud operations, infrastructure as code, security, cost optimization, monitoring, and compliance
- Staying updated on emerging cloud, AI, and automation technologies to enhance our service offerings
- Overseeing incident, change, and problem management activities to ensure SLA compliance
- Identifying trends from incidents and metrics and driving proactive improvements
- Establishing runbooks, standard operating procedures, and automation to reduce toil and improve consistency

To be successful in this role, you should possess:
- 12+ years of overall experience, with at least 5 years managing or delivering cloud infrastructure services on Azure and/or AWS
- Strong hands-on skills in Terraform, DevOps tools, monitoring, logging, and alerting, plus exposure to AI workloads
- A solid understanding of networking, security, IAM, and cost optimization in cloud environments
- Experience leading technical teams in a managed services or consulting environment
- The ability to quickly learn new technologies and guide the team in adopting them to solve customer problems

Nice-to-have skills include exposure to container platforms, multi-cloud cost management tools, AI/ML Ops services, security frameworks, and relevant certifications such as AWS Solutions Architect, Azure Administrator, or Terraform Associate.
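The governance duties above lean on simple arithmetic. A sketch of the two calculations behind a typical SLA review, with illustrative incident data (the SLA targets and figures are made up for the example):

```python
def availability_pct(total_minutes, downtime_minutes):
    """Availability as a percentage of the reporting period, to 3 decimals."""
    return round(100.0 * (total_minutes - downtime_minutes) / total_minutes, 3)

def breaches(incidents, sla_minutes):
    """IDs of incidents whose resolution time exceeded the SLA target."""
    return [iid for iid, minutes in incidents if minutes > sla_minutes]

month = 30 * 24 * 60  # minutes in a 30-day reporting period
print(availability_pct(month, 22))  # 22 minutes of downtime -> 99.949
print(breaches([("INC-1", 45), ("INC-2", 20)], sla_minutes=30))
```

Rolling these numbers up per customer account is what turns raw incident tickets into the SLA/KPI view a governance meeting reviews.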
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
We are seeking a talented DevOps Engineer who possesses a deep understanding of the complete CI/CD cycle, from application development to deployment. As the ideal candidate, you have extensive experience working with version control systems and delivery platforms, enabling you to efficiently transition code into production. Collaboration with Development, QA, and Operations teams is a key aspect of this role to ensure high-quality code is ready for production environments. Your responsibilities will include deploying, managing, and providing comprehensive support for cloud services and CI/CD toolchains, and closely collaborating with various teams to deliver top-notch code. Proactively monitoring resource usage, upgrading resources for improved performance and security, profiling code for performance issues, and responding promptly to production incidents are essential tasks. Additionally, you will be responsible for maintaining existing Bash scripts, building C# code for Azure IoT device management, automating deployments with Azure/AWS, monitoring costs, managing code repositories, and identifying areas for process automation. The successful candidate must have excellent knowledge of the Azure CLI and Portal, be proficient in Bash and Python scripting, and be adept at handling Azure ecosystem components such as instances, VPNs, networks, security, IoT Hub/devices, and Azure Container Registry. An in-depth understanding of device/OS provisioning, Azure DevOps/GitLab CI/CD processes, Docker, Docker Swarm, Kubernetes, and cloud capacity planning is essential. Preferred qualifications include experience configuring backups, administering Linux distributions, developing applications with the Azure SDKs, familiarity with Prometheus, time-series databases, and modern monitoring systems, and expertise in system/database backups and restores.
Joining our team at Cogniphi offers numerous benefits, including involvement in niche projects in Computer Vision, AI, and Telematics across various industry sectors, medical insurance coverage, a psychologically safe workplace, competitive compensation packages, and opportunities for personal and professional growth. At Cogniphi, we are dedicated to revolutionizing the automation and cognitive technology landscape, focusing on scalability solutions, large-scale applications, and innovative platforms to achieve our ambitious mission of AI for all. Our hiring process involves an interview where we will discuss your qualifications and introduce you to Cogniphi. Clear feedback will be provided post-interview, and we strive to communicate our decisions promptly. We are a team of passionate individuals working on groundbreaking projects, supported by the billion-dollar companies UST Global and SunTec, creating a stimulating and challenging work environment at our office located in the eco-friendly Technopark Campus in Thiruvananthapuram, Kerala.
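The cloud capacity planning skill mentioned above often reduces to simple projections. A back-of-the-envelope sketch, with made-up numbers; a real forecast should use the historical usage series rather than a single growth figure:

```python
def days_until_full(capacity_gb, used_gb, daily_growth_gb):
    """Days of headroom left, assuming linear growth.

    Returns None when usage is not growing (no projected fill date).
    """
    if daily_growth_gb <= 0:
        return None
    return int((capacity_gb - used_gb) / daily_growth_gb)

# Example: a 500 GB volume at 380 GB, growing ~4 GB/day.
print(days_until_full(500, 380, 4))  # 30
```

Checks like this, run on a schedule, are what turn "monitoring resource usage" from reactive firefighting into planned upgrades.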
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a DevOps engineer at C1X AdTech Private Limited, a global technology company, your primary responsibility will be to manage the infrastructure, support development pipelines, and ensure system reliability. You will play a crucial role in automating deployment processes, maintaining server environments, monitoring system performance, and supporting engineering operations throughout the development lifecycle. You will design and manage scalable, cloud-native infrastructure using GCP services, Kubernetes, and Argo CD for high-availability applications. Additionally, you will implement and monitor observability tools such as Elasticsearch, Logstash, and Kibana to ensure full system visibility and support performance tuning. Enabling real-time data streaming and processing pipelines using Apache Kafka and GCP DataProc will be a key aspect of your role. You will also be responsible for automating CI/CD pipelines using GitHub Actions and Argo CD to facilitate faster, secure, and auditable releases across development and production environments. Your responsibilities will include building, managing, and monitoring Kubernetes clusters and containerized workloads using GKE and Argo CD; designing and maintaining CI/CD pipelines using GitHub Actions integrated with GitOps practices; configuring and maintaining real-time data pipelines using Apache Kafka and GCP DataProc; managing logging and observability infrastructure using Elasticsearch, Logstash, and Kibana (the ELK stack); setting up and securing GCP services including Artifact Registry, Compute Engine, Cloud Storage, VPC, and IAM; implementing caching and session stores using Redis for performance optimization; and monitoring system health, availability, and performance with tools like Prometheus, Grafana, and ELK.
Collaboration with development and QA teams to streamline deployment processes and ensure environment stability, as well as automating infrastructure provisioning and configuration using Bash, Python, or Terraform, will be essential aspects of your role. You will also be responsible for maintaining backup, failover, and recovery strategies for production environments. To qualify for this position, you should hold a Bachelor's degree in Computer Science, Engineering, or a related technical field, with 4-8 years of experience in DevOps, Cloud Infrastructure, or Site Reliability Engineering. Strong experience with Google Cloud Platform (GCP) services, including GKE, IAM, VPC, Artifact Registry, and DataProc, is required. Hands-on experience with Kubernetes, Argo CD, and GitHub Actions for CI/CD workflows; proficiency with Apache Kafka for real-time data streaming; experience managing the ELK Stack (Elasticsearch, Logstash, Kibana) in production; working knowledge of Redis for distributed caching and session management; scripting/automation skills in Bash, Python, Terraform, etc.; a solid understanding of containerization, infrastructure as code, and system monitoring; and familiarity with cloud security, IAM policies, and audit/compliance best practices are also essential qualifications for this role.
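The Redis session-management requirement above centers on TTL semantics. A minimal in-process stand-in that shows the expiry behavior (the `setex`/`get` names mirror Redis commands, but this is an illustrative sketch, not the redis-py client):

```python
import time

class TTLSessionStore:
    """In-process stand-in for a Redis session store with per-key TTLs.

    Illustrates expiry semantics without a Redis server; swap in a real
    Redis client for production, where TTLs must survive process restarts.
    """
    def __init__(self, clock=time.monotonic):
        self._clock = clock  # injectable clock makes expiry testable
        self._data = {}

    def setex(self, key, ttl_seconds, value):
        self._data[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if self._clock() >= expires_at:
            del self._data[key]  # lazy eviction on read
            return None
        return value
```

Injecting the clock is the key design choice: it lets expiry be exercised in unit tests without sleeping.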
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Transformation Engineering professional at Talworx, you will be expected to meet the following requirements: A Bachelor's degree in Computer Science, Information Systems, or a related field is preferred. You should have at least 5 years of experience in application development, deployment, and support. Your expertise should encompass a wide range of technologies, including Java, JEE, JSP, Spring, Spring Boot (microservices), Spring JPA, REST, JSON, JUnit, React, Python, JavaScript, HTML, and XML. Additionally, you should have a minimum of 3 years of experience in a Platform/Application Engineering role supporting on-premises and cloud-based deployments, with a preference for Azure. While not mandatory, the following skills would be beneficial for the role:
- At least 3 years of experience in Platform/Application Administration.
- Proficiency in software deployments on Linux and Windows systems.
- Familiarity with Spark, Docker, containers, Kubernetes, microservices, data analytics, visualization tools, and Git.
- Hands-on experience in building and supporting modern AI technologies such as Azure OpenAI and LLM infrastructure/applications.
- Experience in deploying and maintaining applications and infrastructure through configuration management software like Ansible and Terraform, following Infrastructure as Code (IaC) best practices.
- Strong scripting skills in languages like Bash and Python.
- Proficiency in using GitHub to manage application and infrastructure deployment lifecycles within a structured CI/CD environment.
- Familiarity with working in a structured ITSM change management environment.
- Knowledge of configuring monitoring solutions and creating dashboards using tools like Splunk, Wily, Prometheus, Grafana, Dynatrace, and Azure Monitor.

If you are passionate about driving transformation through engineering and possess the required qualifications and skills, we encourage you to apply and be a part of our dynamic team at Talworx.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
punjab
On-site
The key responsibilities for the Monitoring Platform Integrator role include designing, building, deploying, and configuring the new monitoring infrastructure to enhance efficiency and effectiveness. You will collaborate with the tech leads of system migrations to ensure proper monitoring of new platforms, establish alerting rules, and define escalation paths. It is essential to monitor the monitoring system itself and set up redundant escalation paths to detect failures. Developing and maintaining any required code base for solutions and customer-specific configurations is also part of the role. As a Monitoring Platform Integrator, you will focus on configuring the platform as automatically as possible, using technologies like service discovery, Ansible, and Git to minimize manual configuration. Additionally, you will assist tech leads and system owners in setting up Grafana and other dashboarding tools. Working closely with NOC teams and system owners, you will gather monitoring and alerting requirements to ensure smooth system transitions. You will also play a key role in transitioning custom monitoring scripts from Nagios to either the Prometheus or Icinga 2 platform and integrating existing monitoring systems into the new design. Qualifications for this position include a basic degree or diploma in IT and certifications such as Microsoft, Enterprise Linux, Cloud Foundations, AWS Cloud Practitioner, or similar DevOps-centered training. The ideal candidate should have over 5 years of experience in a systems admin role, focusing on implementing, developing, and maintaining enterprise-level platforms, preferably in the media industry. Proficiency in areas such as Docker and Kubernetes management, Red Hat/Oracle Linux/CentOS administration, AWS Cloud toolsets, and monitoring technologies like Prometheus, Grafana, and Nagios is crucial.
Experience with logging technologies such as Kibana, Elasticsearch, and CloudWatch, as well as orchestration management tools like Ansible, Terraform, or Puppet, is highly desirable. Strong skills in Python development, JSON, API integration, and NetBox are essential for this role. Knowledge of Go and Alerta.io may also be advantageous for the Monitoring Platform Integrator position.
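Much of the Nagios-to-Prometheus transition above comes down to reproducing alerting semantics. A sketch of how a Prometheus-style `for:` clause holds an alert until a threshold has been breached continuously, evaluated here over an illustrative list of (timestamp, value) samples:

```python
def alert_fires(samples, threshold, for_seconds):
    """Return True when the metric stays above `threshold` for `for_seconds`.

    Mirrors the semantics of a Prometheus alerting rule's `for:` clause:
    any sample at or below the threshold resets the pending window.
    """
    breach_start = None
    for ts, value in samples:
        if value > threshold:
            if breach_start is None:
                breach_start = ts  # pending: breach window opens
            if ts - breach_start >= for_seconds:
                return True  # breach sustained long enough: alert fires
        else:
            breach_start = None  # recovered: reset the pending window
    return False

samples = [(0, 10), (60, 95), (120, 96), (180, 97)]
print(alert_fires(samples, threshold=90, for_seconds=120))  # True
```

Getting this reset-on-recovery behavior right is exactly where hand-ported Nagios scripts tend to diverge from the new platform.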
Posted 1 week ago
5.0 - 10.0 years
27 - 40 Lacs
Noida, Pune, Bengaluru
Work from Office
Description: We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams.

Requirements:
- 7 to 12 years of overall experience in data engineering or related domains.
- Proven ability to work independently on analytics engines like Big Data and PySpark.
- Strong hands-on experience in Python programming, with a focus on data handling and backend services.
- Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus.
- Solid understanding of and hands-on experience with public cloud services (AWS, GCP, or Azure).
- Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus.
- Experience with IaC tools like Terraform and Helm.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Strong collaboration and communication skills to work in agile and cross-functional environments.

Job Responsibilities:
- Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes.
- Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies.
- Write high-quality, maintainable, and efficient code in Python for data engineering tasks.
- Develop and optimize complex queries using MySQL and work with caching systems like Redis.
- Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance.
- Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code).
- Leverage Docker and Kubernetes for containerization and orchestration of services.
- Collaborate with cross-functional teams, including engineering, product, and analytics, to deliver impactful data solutions.
- Contribute to system architecture decisions and influence best practices in cloud data infrastructure.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates and corporate parties, and provide discounts at popular stores and restaurants. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can drink coffee or tea with your colleagues over a game of table tennis!
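The pipeline responsibilities above follow the classic extract-transform-load shape. A pure-Python miniature of the transform step (a PySpark DataFrame job in practice; the row schema and the 83-to-1 conversion rate are made-up example values, not real FX data):

```python
def transform(rows):
    """Filter invalid rows and derive a field: the 'T' of a toy ETL job.

    In a real pipeline this would be a PySpark DataFrame transformation;
    the logic (validate, then derive) is the same either way.
    """
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # drop rows that fail validation
        # Derive a converted amount (83-to-1 is an illustrative rate).
        cleaned.append({**row, "amount_usd": round(row["amount"] / 83.0, 2)})
    return cleaned

rows = [{"id": 1, "amount": 830.0}, {"id": 2, "amount": None}]
out = transform(rows)
```

Keeping the transform a pure function of its input rows is what makes the job testable before it ever touches a cluster.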
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Noida
Work from Office
Senior Full Stack Engineer

We are seeking a Senior Full Stack Engineer to design, build, and scale a portfolio of cloud-native products including real-time speech-assessment tools, GenAI content services, and analytics dashboards used by customers worldwide. You will own end-to-end delivery across React/Next.js front-ends, Node/Python microservices, and a MongoDB-centric data layer, all orchestrated in containers on Kubernetes, while championing multi-tenant SaaS best practices and modern MLOps.

Role:

Product & Architecture
• Design multi-tenant SaaS services with isolated data planes, usage metering, and scalable tenancy patterns.
• Lead MERN-driven feature work: SSR/ISR dashboards in Next.js, REST/GraphQL APIs in Node.js or FastAPI, and event-driven pipelines for AI services.
• Build and integrate AI/ML & GenAI modules (speech scoring, LLM-based content generation, predictive analytics) into customer-facing workflows.

DevOps & Scale
• Containerise services with Docker, automate deployment via Helm/Kubernetes, and implement blue-green or canary roll-outs in CI/CD.
• Establish observability for latency, throughput, model inference time, and cost-per-tenant across microservices and ML workloads.

Leadership & Collaboration
• Conduct architecture reviews, mentor engineers, and promote a culture that pairs AI-generated code with rigorous human code review.
• Partner with Product and Data teams to align technical designs with measurable business KPIs for AI-driven products.
Required Skills & Experience
• Front-end: React 18, Next.js 14, TypeScript, modern CSS/Tailwind
• Back-end: Node 20 (Express/Nest) and Python 3.11 (FastAPI)
• Databases: MongoDB Atlas, aggregation pipelines, TTL/compound indexes
• AI/GenAI: practical ML model integration, REST/streaming inference, prompt engineering, model fine-tuning workflows
• Containerisation & cloud: Docker, Kubernetes, Helm, Terraform; production experience on AWS/GCP/Azure
• SaaS at scale: multi-tenant data isolation, per-tenant metering & rate limits, SLA design
• CI/CD & quality: GitHub Actions/GitLab CI, unit + integration testing (Jest, Pytest), E2E testing (Playwright/Cypress)

Preferred Candidate Profile
• Production experience with speech analytics or audio ML pipelines.
• Familiarity with LLMOps (vector DBs, retrieval-augmented generation).
• Terraform-driven multi-cloud deployments or FinOps optimization.
• OSS contributions in MERN, Kubernetes, or AI libraries.

Tech Stack & Tooling
React 18 • Next.js 14 • Node 20 • FastAPI • MongoDB Atlas • Redis • Docker • Kubernetes • Helm • Terraform • GitHub Actions • Prometheus + Grafana • OpenTelemetry • Python/Rust microservices for ML inference
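The per-tenant metering and rate limits listed above are commonly built as token buckets. A minimal sketch with illustrative capacity and refill numbers; a production version would typically keep the buckets in Redis so limits hold across service instances:

```python
class TenantRateLimiter:
    """Per-tenant token bucket: one common pattern for per-tenant rate limits.

    Each tenant gets an independent bucket that refills continuously; a
    request is allowed only when at least one whole token is available.
    """
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.buckets = {}  # tenant -> (tokens, last_timestamp)

    def allow(self, tenant, now):
        tokens, last = self.buckets.get(tenant, (self.capacity, now))
        # Refill in proportion to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens >= 1:
            self.buckets[tenant] = (tokens - 1, now)
            return True
        self.buckets[tenant] = (tokens, now)
        return False
```

Because each bucket is keyed by tenant, one noisy customer exhausts only its own budget, which is the isolation property multi-tenant SLA design is after.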
Posted 1 week ago
5.0 - 7.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities: We are looking for an experienced Cloud Engineer with a strong foundation in cloud infrastructure, DevOps, monitoring, and cost optimization. The ideal candidate will be responsible for designing scalable architectures, implementing CI/CD pipelines, and managing secure and efficient cloud environments using AWS, GCP, or Azure.
Key Responsibilities:
- Design and deploy scalable, secure, and cost-optimized infrastructure across cloud platforms (AWS, GCP, or Azure)
- Implement and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions
- Set up infrastructure monitoring, alerting, and logging systems (e.g., CloudWatch, Prometheus, Grafana)
- Collaborate with development and architecture teams to implement cloud-native solutions
- Manage infrastructure security, IAM policies, backups, and disaster recovery strategies
- Drive cloud cost control initiatives and resource optimization
- Troubleshoot production and staging issues related to infrastructure and deployments
Requirements
Must-Have Skills:
- 5-7 years of experience working with cloud platforms (AWS, GCP, or Azure)
- Strong hands-on experience in infrastructure provisioning and automation
- Expertise in DevOps tools and practices, especially CI/CD pipelines
- Good understanding of network configurations, VPCs, firewalls, IAM, and security best practices
- Experience with monitoring and log aggregation tools
- Solid knowledge of Linux system administration
- Familiarity with Git and version control workflows
Good to Have:
- Experience with Infrastructure as Code tools (Terraform, CloudFormation, Pulumi)
- Working knowledge of Kubernetes or other container orchestration platforms (EKS, GKE, AKS)
- Exposure to scripting languages like Python, Bash, or PowerShell
- Familiarity with serverless architecture and event-driven designs
- Awareness of cloud compliance and governance frameworks
Posted 1 week ago
12.0 - 17.0 years
12 - 17 Lacs
Pune
Work from Office
BMC is looking for a talented DevOps Engineer to join our family, someone who is just as passionate about solving issues with distributed systems as about automating, coding, and collaborating to tackle problems. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Monitor and manage infrastructure via IaC, ensuring optimal performance, security, and scalability. Define and develop, test, release, update, and support processes for DevOps operations. Troubleshoot and resolve issues related to deployment and operations. Select and validate tools and technologies which best fit business needs. Lead platform upgrades, migrations, and maintenance independently. Stay abreast of emerging technologies and industry trends, then utilize them to enhance the software infrastructure. Design, develop, test, and maintain CI/CD pipelines, with the ability to maintain continuous integration, delivery, and deployment (CI/CD) processes for a complex set of software requirements and products spread across multiple platforms. As every BMC employee, you will be given the opportunity to learn, be included in global projects, challenge yourself, and be the innovator when it comes to solving everyday problems. To ensure you're set up for success, you will bring the following skillset & experience: You can embrace, live, and breathe our BMC values every day! You have 12+ years of experience working with various DevOps concepts and tools like Terraform, Ansible, Packer, AWS, Jenkins, Git. Familiarity with IBM z/OS-based infrastructure is required, particularly in hybrid enterprise environments. Strong knowledge of shell scripting and other programming languages such as Python, C, Groovy, Java. Hands-on experience with containers & container orchestration tools like Docker, Podman, AWS ECS, Kubernetes, Docker Swarm, and infrastructure monitoring tools like Prometheus and Grafana.
Experience with designing, building, and maintaining cloud-native applications across major cloud platforms such as AWS, Azure or GCP is a strong plus. Knowledge of Data Protection, Privacy and Security domain. Understanding of agile methodologies and principles. Knowledge of databases and SQL. Excellent communication and collaboration skills, as well as the ability to work effectively in cross-functional teams including nearshore and offshore. Whilst these are nice to have, our team can help you develop in the following skills: Good to have Mainframe Storage management skills, Tapes, Catalogs etc. Good to have Mainframe knowledge (Z/OS JCL).
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Participate in all aspects of SaaS product development, from requirements analysis to product release and sustaining. Drive the adoption of the DevOps process and tools across the organization. Learn and implement cutting-edge technologies and tools to build best-of-class enterprise SaaS solutions. Deliver high-quality enterprise SaaS offerings on schedule. Develop the continuous delivery pipeline. Initiate projects and ideas to improve the team's results. On-board and mentor new employees. To ensure you're set up for success, you will bring the following skillset & experience: You can embrace, live, and breathe our BMC values every day! You have at least 7 years of experience in a DevOps/SRE role. You have experience as a Tech Lead. You have implemented CI/CD pipelines with best practices. You have experience in Kubernetes. You have knowledge of AWS/Azure cloud implementation. You have worked with Git repositories and JIRA. You are passionate about quality and demonstrate creativity and innovation in enhancing the product. You are a problem-solver with good analytical skills. You are a team player with effective communication skills. Whilst these are nice to have, our team can help you develop the following skills: SRE practices; GitHub/Spinnaker/Jenkins/Maven/JIRA etc.; automation playbooks (Ansible); Infrastructure as Code (IaC) using Terraform/CloudFormation Templates/ARM Templates; scripting in Bash/Python/Go; microservices, database, and API implementation; monitoring tools such as Prometheus/Jaeger/Grafana/AppDynamics, Datadog, Nagios, etc.; Agile/Scrum process.
Posted 1 week ago
9.0 - 12.0 years
0 - 3 Lacs
Chennai, Bengaluru, Mumbai (All Areas)
Work from Office
Primary skill: Azure Cloud Services. Experience: 9 to 12 years. Work Mode: WFO, 5 days. Location: Chennai/Bengaluru/Mumbai/Pune. Develop and maintain Azure cloud services in accordance with security standards. L3 level of experience in Infrastructure and Operations. L3 level of experience in Cloud Architecture and Engineering services. Good understanding of and experience in working on Microsoft Azure (IaaS). Deep understanding of IT infrastructure and networking. Experience in working with products like ServiceNow, Arago, Prometheus, etc. (any one). Experience with Agile and DevOps concepts. Admin/operations experience. Kindly send your updated resume to AnuroopaSharonP@hexaware.com
Posted 1 week ago
6.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
Senior SRE - SaaS
Our SRE role spans software, systems, and operations engineering. If your passion is building stable, scalable systems for a growing set of innovative products, as well as helping to reduce the friction of deploying these products for our engineering team, Pattern is the place for you. Come help us build a best-in-class platform for amazing growth.
Key Responsibilities
Infrastructure and Automation: Design, build, and manage scalable and reliable infrastructure in AWS (Postgres, Redis, Docker, queues, Kinesis streams, S3, etc.). Develop Python or shell scripts for automation, reducing operational toil. Implement and maintain CI/CD pipelines for efficient build and deployment processes using GitHub Actions.
Monitoring and Incident Response: Establish robust monitoring and alerting systems using observability methods, logs, and APM tools. Participate in on-call rotations to respond to incidents, troubleshoot problems, and ensure system reliability. Perform root cause analysis on production issues and implement preventative measures to mitigate future incidents.
Cloud Administration: Manage AWS resources, including Lambda functions, SQS, SNS, IAM, RDS, etc. Perform Snowflake administration and set up backup policies for various databases.
Reliability Engineering: Define Service Level Indicators (SLIs) and measure Service Level Objectives (SLOs) to maintain high system reliability. Utilise Infrastructure as Code (IaC) tools like Terraform for managing and provisioning infrastructure.
Collaboration and Empowerment: Collaborate with development teams to design scalable and reliable systems. Empower development teams to deliver value quickly and accurately. Document system architectures, procedures, runbooks, and best practices. Assist developers in creating automation scripts and workflows to streamline operational tasks and deployments.
Innovative Infrastructure Solutions: Spearhead the exploration of innovative infrastructure solutions and technologies aligned with industry best practices. Embrace a research-based approach to continuously improve system reliability, scalability, and performance. Encourage a culture of experimentation to test and implement novel ideas for system optimization.
Required Qualifications:
Bachelor's degree in a technical field or relevant work experience
6+ years of experience in engineering, development, and DevOps/SRE fields
3+ years of experience deploying and managing systems using Amazon Web Services
3+ years of experience on Software as a Service (SaaS) applications
Proven "doer" attitude with the ability to self-start and take a project to completion; demonstrated project ownership
Familiarity with container orchestration tools like Kubernetes, Fargate, etc.
Familiarity with Infrastructure as Code tooling like Terraform, CloudFormation, Ansible, Puppet
Experience working with CI/CD automated deployments using tools like GitHub Actions, Jenkins, CircleCI
Experience working with observability tools like Datadog, New Relic, Dynatrace, Grafana, Prometheus, etc.
Experience with Linux server management, bash scripting, SSH keys, SSL/TLS certificates, MFA, cron, and log files
Deep understanding of AWS networking (VPCs, subnets, security groups, route tables, internet gateways, NAT gateways, NACLs), IAM policies, DNS, Route 53, and domain management
Strong problem-solving and troubleshooting skills
Attention to detail: thoroughness in accomplishing tasks, ensuring accuracy and quality in all aspects of work
Excellent communication and collaboration abilities
Desire to help take Pattern to the next level through exploration and innovation
Preferred Qualifications:
Experience in deploying applications on ECS/Fargate with ELB/ALB and Auto Scaling Groups
Experience in deploying serverless applications with Lambda, API Gateway, Cognito, CloudFront
Experience in deploying applications built using JavaScript, Ruby, Go, Python. Experience with Infrastructure as Code (IaC) using Terraform. Experience with database administration for Snowflake, Postgres. AWS Certification would be a plus. A focus on adopting security best practices while building great tools.
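The SLI/SLO responsibility named in this posting reduces to simple error-budget arithmetic. A hedged sketch follows; the SLO target and request counts are made-up numbers, not figures from the posting.

```python
def error_budget(slo_target: float, total_requests: int, failed_requests: int):
    """Compute the observed SLI and how much of the SLO error budget is used."""
    allowed_failures = (1.0 - slo_target) * total_requests   # the error budget
    consumed = failed_requests / allowed_failures            # fraction of budget used
    observed_sli = 1.0 - failed_requests / total_requests
    return observed_sli, allowed_failures, consumed

# Hypothetical month: 10 million requests against a 99.9% availability SLO.
sli, budget, used = error_budget(0.999, 10_000_000, 4_200)
print(f"SLI={sli:.5f}, budget={budget:.0f} failures, {used:.0%} of budget used")
```

When `consumed` approaches 1.0, SRE practice is typically to slow feature releases and spend engineering time on reliability instead.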
Posted 1 week ago
5.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Job Description: As a Senior Cloud Engineer at NCSi, you will play a critical role in designing, implementing, and managing cloud infrastructure that meets our clients' needs. You will work closely with cross-functional teams to architect solutions, optimize existing systems, and ensure security and compliance across cloud environments. This position requires strong technical skills, a deep understanding of cloud services, and an ability to mentor junior engineers. Responsibilities: - Design and implement scalable cloud solutions using AWS, Azure, or Google Cloud platforms. - Manage cloud infrastructure with a focus on security, compliance, and cost optimization. - Collaborate with development and operations teams to streamline CI/CD pipelines for cloud-based applications. - Troubleshoot and resolve cloud service issues and performance bottlenecks. - Develop and maintain documentation for cloud architectures, procedures, and best practices. - Mentor junior engineers and provide technical guidance on cloud technologies and services. - Stay up to date with the latest cloud technologies and industry trends, and recommend improvements for existing infrastructure. Skills and Tools Required: - Strong experience with cloud platforms such as AWS, Azure, or Google Cloud. - Proficiency in cloud infrastructure management tools like Terraform, CloudFormation, or Azure Resource Manager. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. - Knowledge of programming/scripting languages such as Python, Go, or Bash for automation purposes. - Experience with monitoring and logging tools like Prometheus, Grafana, or ELK Stack. - Understanding of security best practices for cloud deployments, including IAM, VPC configurations, and data encryption. - Strong problem-solving skills, attention to detail, and ability to work in a collaborative team environment. 
- Excellent communication skills, both verbal and written, to convey complex technical concepts to non-technical stakeholders. Preferred Qualifications: - Cloud certifications from AWS, Azure, or Google Cloud (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). - Experience with Agile methodologies and DevOps practices. - Familiarity with database technologies, both SQL and NoSQL, as well as serverless architectures.
Posted 1 week ago
7.0 - 10.0 years
15 - 30 Lacs
Gurugram, Delhi / NCR
Work from Office
Work Environment: This role involves rotational shifts on a weekly basis. Shift allowances will be provided as per company policy. Employees will also have the flexibility to work from home during night shifts to support convenience and continuity. Job Responsibilities: System Monitoring and Incident Management: Monitor the health and performance of critical systems, applications, and services. Respond to incidents, troubleshoot issues, and ensure timely resolution to minimize downtime and service disruptions. Automation and Scripting: Develop and maintain automation scripts and tools to streamline operational tasks, deployment processes, and infrastructure management. Infrastructure Management: Manage and scale the underlying infrastructure, including servers, cloud services, and network components. Implement best practices for configuration management, monitoring, and disaster recovery. Release Management: Collaborate with development teams to ensure smooth and reliable software releases. Participate in the design and implementation of deployment strategies. Performance Optimization: Identify performance bottlenecks and optimize the system to improve reliability and response times. Capacity Planning: Analyze system capacity and plan for future growth to meet increasing demands. Security and Compliance: Implement security best practices and ensure compliance with relevant industry standards and regulations. Collaboration and Documentation: Work closely with cross-functional teams, including developers, product managers, and operations, to ensure efficient communication and knowledge sharing. Document processes, procedures, and troubleshooting guides. On-Call Support: Participate in an on-call rotation to handle urgent issues and incidents outside regular business hours. Qualifications: Experience with Cloud Technologies: Proficiency in working with one or more cloud platforms like AWS, Google Cloud Platform, or Microsoft Azure.
Programming and Scripting Skills: Strong knowledge of at least one programming language (e.g., Python, Java) and experience with shell scripting. System Administration: Hands-on Linux/Unix system experience; administration and networking concepts are good to have. Monitoring and Logging: Experience with monitoring tools such as Prometheus, Grafana, Nagios, and log management solutions like the ELK stack. Infrastructure as Code (IaC): Knowledge of Infrastructure as Code tools like Terraform or CloudFormation. Automation and Configuration Management: Experience with tools like Ansible, Chef, or Puppet for automating infrastructure management. Version Control: Familiarity with version control systems like Git. Problem-Solving Skills: Ability to analyze and troubleshoot complex technical issues and work with other teams to help streamline processes. Communication Skills: Strong verbal and written communication skills to collaborate effectively with team members and stakeholders. KPI/Metrics: Understanding of key SRE metrics such as availability, SLA/SLO, MTTA, and MTTR. Hands-on individuals with a BCA/MCA or B.Tech background.
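The MTTA and MTTR metrics named above are just averages over incident timestamps (time-to-acknowledge and time-to-resolve). An illustrative computation with fabricated incident data:

```python
from datetime import datetime

# Hypothetical incidents: (raised, acknowledged, resolved) timestamps.
incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 5), datetime(2024, 1, 1, 11, 0)),
    (datetime(2024, 1, 2, 2, 0),  datetime(2024, 1, 2, 2, 15), datetime(2024, 1, 2, 2, 45)),
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mtta = mean_minutes([ack - raised for raised, ack, _ in incidents])
mttr = mean_minutes([res - raised for raised, _, res in incidents])
print(f"MTTA={mtta:.1f} min, MTTR={mttr:.1f} min")  # MTTA=10.0 min, MTTR=52.5 min
```

In practice these timestamps would come from an incident-management tool rather than hard-coded literals.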
Posted 1 week ago
4.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Job Summary: We are seeking a skilled and proactive Linux System Administrator with strong experience in Kubernetes (K8s) to join our IT infrastructure team. The ideal candidate will be responsible for managing Linux-based systems and ensuring the stability, performance, and security of our containerized applications. Key Responsibilities: Administer, configure, and maintain Linux systems (preferably Rocky Linux, RHEL, or Ubuntu). Deploy, manage, and troubleshoot Kubernetes clusters (on-prem and/or cloud). Automate system tasks and deployments using tools like Ansible, Python, or shell scripting. Monitor and improve system performance and reliability. Implement security measures, patch management, and compliance practices. Collaborate with DevOps and development teams to support CI/CD pipelines. Manage container orchestration using Kubernetes, Helm, and related tools. Maintain system documentation and standard operating procedures. Required Qualifications: Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience). 5+ years of experience as a Linux Administrator. 3+ years of hands-on experience with Kubernetes in production environments. Strong knowledge of containers (Docker) and container orchestration. Experience with Linux shell scripting and automation tools. Familiarity with monitoring tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with hypervisor tools (VMware, OpenStack). Preferred Qualifications: Certifications such as CKA (Certified Kubernetes Administrator) and RHCE. Exposure to Infrastructure as Code (IaC) tools like Ansible. Understanding of networking concepts related to Kubernetes (Ingress, Services, DNS). Soft Skills: Strong troubleshooting and analytical skills. Excellent communication and collaboration abilities. Ability to work independently and in a team-oriented environment.
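Troubleshooting the Kubernetes clusters described above often starts with listing unhealthy pods. A stdlib-only sketch that filters `kubectl get pods -o json`-style output; the sample payload is fabricated and heavily trimmed, and a real script would read the output of the actual `kubectl` command.

```python
import json

# Trimmed-down stand-in for `kubectl get pods -o json` output.
raw = json.dumps({"items": [
    {"metadata": {"name": "api-6f9"},    "status": {"phase": "Running"}},
    {"metadata": {"name": "worker-2c1"}, "status": {"phase": "Failed"}},
    {"metadata": {"name": "cron-9aa"},   "status": {"phase": "Pending"}},
]})

def unhealthy_pods(payload: str) -> list[str]:
    """Return names of pods whose phase is neither Running nor Succeeded."""
    pods = json.loads(payload)["items"]
    return [p["metadata"]["name"] for p in pods
            if p["status"]["phase"] not in ("Running", "Succeeded")]

print(unhealthy_pods(raw))  # ['worker-2c1', 'cron-9aa']
```

Note that `status.phase` is a coarse signal; a thorough triage would also inspect per-container states and restart counts.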
Posted 1 week ago
18.0 - 22.0 years
20 - 25 Lacs
Pune
Work from Office
Senior Solution Architect for Cloud and Data Centre Infrastructure Managed Services
Objective: The Senior Solution Architect for Tooling will be responsible for designing, developing, and managing tooling solutions for IT infrastructure. With 15+ years of industry experience, the successful candidate will bring extensive technical expertise and leadership to shape the company's managed services strategy and deliver high-impact projects.
Responsibilities:
Lead the design and architecture of managed services for tools in the IT infrastructure, ensuring high availability, scalability, and security.
Design and develop solutions on data centre and cloud technology using GenAI and automation tools.
Collaborate with the IT team and senior stakeholders on complex IT projects and initiatives, providing technical guidance and expertise.
Good understanding of security and privacy for IT infrastructure, implementing advanced security measures such as encryption, identity management, and threat detection.
Understanding of monitoring tools used in IT infrastructure for peak performance and availability, such as Nagios, Zabbix, or Prometheus.
Analyse and resolve complex technical issues related to IT managed services implementation, employing troubleshooting tools and techniques.
Help customers develop and implement automation processes to streamline data centre and cloud operations and enhance efficiency, using tools like Ansible, Terraform, or CloudFormation.
Stay ahead of industry trends, emerging technologies, and best practices in IT services and make recommendations for continuous improvement.
Conduct comprehensive capacity planning to ensure the IT infrastructure can handle current and future demand, utilizing tools like Capacity Planner or vRealize Operations.
Provide solutions on disaster recovery and business continuity plans for the IT environment.
Engage with clients, stakeholders, and executive leadership to understand business requirements and translate them into technical solutions, creating detailed design documents and technical specifications.
Lead technical presentations and discussions with clients, partners, and internal teams, demonstrating deep technical knowledge and expertise in data centre and cloud managed services.
Participate in pre-sales activities, including conducting Proof of Concepts (PoC), building BoQs, supporting security designs, writing proposals, delivering product-specific presentations, training, and responding to RFPs and RFIs with commercial proposals.
Respond to customer requests for proposals, develop detailed designs, and create bills of materials for cost-effective solutions, ensuring alignment with best practices and customer requirements.
Qualifications:
Bachelor's or master's degree in computer science, engineering, or a related field
15+ years of experience in tooling and IT infrastructure
Strong understanding of enterprise IT architecture and data migration
Experience in system architecture, cloud, and project management
Excellent problem-solving and analytical skills
Strong communication, collaboration, and leadership abilities
Relevant certifications (e.g., AWAI, Gen AI, Tooling, Infrastructure Solutions Architect) are a plus
Knowledge of operating systems, compute, storage, backup, Active Directory, and security solutions
Desired Skills:
Expertise in tooling and IT infrastructure solutions (on-premises / cloud-hosted).
Proficiency in cloud computing technologies and architectures.
Has done PoCs and demos in the past.
Experience with cloud connectivity and network fundamentals.
Proficiency in IT management and orchestration tools (Terraform, Ansible, CloudFormation).
Strong understanding of cloud security and compliance (IAM, encryption, threat detection).
Knowledge of backup and disaster recovery solutions for cloud and data centre environments.
Familiarity with monitoring and performance management tools (Nagios, Zabbix, Prometheus).
Understanding of software-defined networking (SDN) and network function virtualization (NFV).
Experience with hybrid cloud environments and multi-cloud strategies.
Knowledge of containerization and orchestration products (Docker, Kubernetes).
Strong understanding of IT service management (ITSM) and ITIL processes.
Posted 1 week ago
5.0 - 9.0 years
17 - 20 Lacs
Bengaluru
Work from Office
We are seeking a skilled and motivated DevOps Architect to join our dynamic IT team. The ideal candidate will be responsible for collaborating with software developers, system administrators, and other team members to streamline our development and deployment processes. You will play a key role in automating and optimizing our infrastructure and software delivery pipelines, ensuring reliability, scalability, and efficiency. Key Responsibilities Infrastructure Automation: Design, implement, and maintain infrastructure as code (IaC) using tools like Terraform, Ansible, or similar. Automate the provisioning, configuration, and management of servers, databases, and networking components. Continuous Integration and Continuous Deployment (CI/CD): Develop and enhance CI/CD pipelines for smooth software delivery. Integrate code repositories, build tools, testing frameworks, and deployment mechanisms to achieve automated and reliable releases. Containerization and Orchestration: Utilize Docker and Kubernetes to containerize applications and manage their orchestration. Implement and optimize Kubernetes clusters for scalability, high availability, and performance. Monitoring and Logging: Implement monitoring solutions to track system performance, availability, and security. Set up log management tools to gather, store, and analyze logs for troubleshooting and insights. Security and Compliance: Collaborate with security teams to implement best practices in securing infrastructure and applications. Ensure compliance with industry standards and regulations. Environment Management: Maintain development, testing, and production environments, ensuring consistency across different stages of the software development lifecycle. Collaboration: Work closely with cross-functional teams to understand their needs and provide technical solutions. Collaborate with software developers to optimize code for deployment and troubleshoot issues.
Scripting and Automation: Develop scripts and tools to automate routine tasks and processes. Enhance efficiency by eliminating manual interventions wherever possible. Backup and Recovery: Design and implement backup and disaster recovery strategies to ensure data integrity and system availability. Technical Documentation: Create and maintain technical documentation, including system diagrams, configurations, and procedures. Education and Certification: Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred). Knowledge and Skills: Experience: 10+ years. Proven experience as a DevOps Engineer or in a similar role. Strong proficiency in cloud platforms, preferably GCP. Expertise in Infrastructure as Code (IaC) tools like Terraform or Ansible. Hands-on experience with containerization using Docker and orchestration with Kubernetes. Proficiency in scripting languages like Python, Bash, or PowerShell. Familiarity with CI/CD tools like Jenkins, GitLab CI/CD, or Travis CI. Experience with version control systems like Git. Solid understanding of networking, security, and system administration concepts. Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Strong problem-solving skills and the ability to troubleshoot complex issues. Excellent communication and collaboration skills. Relevant certifications (e.g., AWS Certified DevOps Engineer, Certified Kubernetes Administrator) are a plus.
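Routine-task automation like that described above usually needs retry logic for transient failures. A hedged, stdlib-only sketch of exponential backoff; the function names are hypothetical, and production code would often add random jitter to the delays.

```python
import time

def retry(task, attempts: int = 4, base_delay: float = 0.01):
    """Call `task` until it succeeds, doubling the delay between failures."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated flaky operation that succeeds on its third invocation.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt -> ok
```

The same pattern wraps cloud API calls, deployments, or health checks without changing the wrapped function.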
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
The role involves providing production support for trading applications and requires the candidate to be comfortable working in a rotational shift (7 AM - 4 PM / 11 AM - 8 PM / 1 PM - 10 PM). The applications have transitioned from on-premises to AWS cloud, necessitating strong experience in AWS services such as EC2, S3, and Kubernetes. Monitoring overnight batch jobs is also a key responsibility. Key Requirements: - Proficiency in AWS services like EC2, S3, Kubernetes, CloudWatch, etc. - Familiarity with monitoring tools like Datadog, Grafana, Prometheus. Good to have: - Basic understanding of SQL. - Experience in utilizing Control-M/Autosys schedulers.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a GCP DevOps Lead, you will be responsible for leading the architecture and infrastructure strategy using Google Cloud Platform (GCP). Your role will involve designing, implementing, and managing CI/CD pipelines, infrastructure as code, and deployment automation to ensure high availability, scalability, and performance of cloud environments. You will guide a team of DevOps engineers in daily operations and project execution, while also implementing and maintaining monitoring, logging, and alerting frameworks such as Stackdriver, Prometheus, and Grafana. Driving security best practices, collaborating with cross-functional teams, and optimizing cost, resource usage, and performance in GCP will be key aspects of your responsibilities. The ideal candidate should possess 7+ years of total experience in DevOps, Cloud, or Infrastructure roles, with at least 3 years of hands-on experience with Google Cloud Platform (GCP). Strong skills in CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build, along with proficiency in Docker and Kubernetes (GKE) are essential. Experience with Terraform, Ansible, or Deployment Manager, source control systems like Git and Bitbucket, scripting languages such as Python, Bash, or Go, and knowledge of networking components and monitoring tools are also required. Understanding DevSecOps practices and security compliance standards will be beneficial in this role.,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
You are a Java Developer with AI/ML experience, required to have at least 5 years of industry experience in Java, Spring Boot, and Spring Data, and a minimum of 2 years of AI/ML project or professional experience. You should possess a strong background in developing and consuming REST APIs and asynchronous messaging using technologies like Kafka or RabbitMQ. Your role involves integrating AI/ML models into Java services or making calls to external ML endpoints. You need to have a comprehensive understanding of the ML lifecycle encompassing training, validation, inference, monitoring, and retraining. Familiarity with tools such as TensorFlow, PyTorch, Scikit-Learn, or ONNX is essential. Previous experience in implementing domain-specific ML solutions like fraud detection, recommendation systems, or NLP chatbots is beneficial. Proficiency in working with various data formats including JSON, Parquet, Avro, and CSV is required. You should have a solid grasp of both SQL (PostgreSQL, MySQL) and NoSQL (Redis) database systems. Your responsibilities will include integrating machine learning models (both batch and real-time) into backend systems and APIs, optimizing and automating AI/ML workflows using MLOps best practices, and monitoring model performance, versioning, and rollbacks. Collaboration with cross-functional teams such as DevOps, SRE, and Product Engineering is necessary to ensure smooth deployment. Exposure to MLOps tools like MLflow, Kubeflow, or Seldon is desired. Experience with at least one cloud platform, preferably AWS, and knowledge of observability tools, metrics, events, logs, and traces (e.g., Prometheus, Grafana, OpenTelemetry, Splunk, Datadog, AppDynamics) are valuable skills in this role. Thank you. Aatmesh
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Solution Architect at Advantal Technologies, you will leverage your 8+ years of experience to design, architect, and deliver large-scale, high-traffic systems using Java, Angular/React, MySQL, MongoDB, and cloud-based platforms. Your role will be pivotal in driving the architecture and technology strategy for scalable, high-performance applications, ensuring optimization for huge traffic volumes, security, and reliability. You will lead the design and architecture of large-scale systems, ensuring scalability, fault-tolerance, and high performance. Designing microservices-based architectures and integrating with relational and NoSQL databases will be key responsibilities. You will architect solutions to handle millions of concurrent users, manage large-scale data across distributed databases, and implement cloud-based solutions on AWS, Azure, or Google Cloud Platform. Act as a technical leader, guiding development teams on best practices and effective use of technologies. Collaborate with stakeholders to align business objectives with technical solutions and provide performance optimization and monitoring. Design cloud-native solutions, implement CI/CD pipelines, and ensure security and compliance with industry best practices. Required skills include expertise in Java, Angular/React, MySQL, MongoDB, cloud platforms, Docker, Kubernetes, and security protocols. Strong communication, problem-solving, leadership, and mentoring skills are essential. A degree in Computer Science or a related field is required, along with relevant industry certifications. Preferred skills include experience with Kubernetes, event-driven architectures, and AI/ML implementations. If you are interested in this opportunity, please share your resume at kratika.vijaywargiya@advantal.net.
Posted 1 week ago