4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

RESPONSIBILITIES
• Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector databases, embedding and reranking models, governance and observability systems, and guardrails).
• Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
• Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
• Support vector database, feature store, and embedding store deployments (e.g., pgVector, Pinecone, Redis, Featureform, MongoDB Atlas).
• Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
• Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
• Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
• Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves:
• 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
• Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
• Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
• Proficiency in scripting languages such as Python and Bash (Go or similar is a nice plus).
• Experience with monitoring, logging, and alerting systems for AI/ML workloads.
• Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
• Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
• Familiarity with prompt engineering, model fine-tuning, and inference serving.
• Experience with secure AI deployment and compliance frameworks.
• Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
• Ability to work with a high level of initiative, accuracy, and attention to detail.
• Ability to prioritize multiple assignments effectively.
• Ability to meet established deadlines.
• Ability to interact successfully, efficiently, and professionally with staff and customers.
• Excellent organizational skills.
• Critical thinking ability ranging from moderately to highly complex.
• Flexibility in meeting the business needs of the customer and the company.
• Ability to work creatively and independently with latitude and minimal supervision.
• Ability to use experience and judgment in accomplishing assigned goals.
• Experience navigating organizational structure.
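The bonus attributes above mention drift detection. As a hedged illustration only (the two-sample z-statistic and the threshold of 3.0 are simplifying assumptions, not the employer's method), a minimal mean-shift drift check over two batches of a numeric feature could look like:

```python
import math

def mean_shift_z(baseline: list[float], current: list[float]) -> float:
    """Two-sample z-statistic for a shift in the mean between a baseline
    window and the current window of a numeric feature."""
    n1, n2 = len(baseline), len(current)
    m1 = sum(baseline) / n1
    m2 = sum(current) / n2
    # Sample variances (Bessel-corrected).
    v1 = sum((x - m1) ** 2 for x in baseline) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in current) / (n2 - 1)
    return (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)

def drifted(baseline: list[float], current: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift when the mean shift exceeds the (assumed) z threshold."""
    return abs(mean_shift_z(baseline, current)) > z_threshold
```

In practice teams typically use PSI or KS tests from a statistics library rather than a raw z-test; this pure-Python version only illustrates the idea of comparing a serving window against a training baseline.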
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
• 5+ years of experience in Software Engineering and 3+ years of experience with cloud-native architectures
• 2+ years implementing secure and compliant solutions for highly regulated environments
• 3+ years of experience with a container orchestration platform such as Kubernetes, EKS, ECS, AKS, or equivalent
• 2+ years of production system administration and infrastructure operations experience
• Excellence in container architecture, design, ecosystem, and/or development
• Experience with container-based CI/CD tools such as ArgoCD, Helm, Codefresh, GitHub Actions, GitLab, or equivalent
Posted 2 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: SRE Engineer with GCP Cloud (immediate joiners only, no fixed notice period)
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience: 7+ years

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of the systems. The role incorporates aspects of software engineering and operations, using DevOps skills to come up with efficient ways of managing and operating applications. The role requires a high level of responsibility and accountability to deliver technical solutions.

Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services.

Experience Required: 6–10 years of SRE or infrastructure engineering experience in cloud-native environments.

Mandatory:
• Cloud: GCP (GKE, Load Balancing, VPN, IAM)
• Observability: Prometheus, Grafana, ELK, Datadog
• Containers & Orchestration: Kubernetes, Docker
• Incident Management: On-call, RCA, SLIs/SLOs
• IaC: Terraform, Helm
• Incident Tools: PagerDuty, OpsGenie

Nice to Have:
• GCP Monitoring, SkyWalking
• Service Mesh, API Gateway
• GCP Spanner, MongoDB (basic)

Scope:
• Drive operational excellence and platform resilience
• Reduce MTTR, increase service availability
• Own incident and RCA processes

Roles and Responsibilities:
• Define and measure Service Level Indicators (SLIs), Service Level Objectives (SLOs), and manage error budgets across services.
• Lead incident management for critical production issues; drive root cause analysis (RCA) and postmortems.
• Create and maintain runbooks and standard operating procedures for high-availability services.
• Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption.
• Coordinate cross-functional war-room sessions during major incidents and maintain response logs.
• Develop and improve automated system recovery, alert suppression, and escalation logic.
• Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture.
• Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems.
• Analyze performance metrics and conduct regular reliability reviews with engineering leads.
• Participate in capacity planning, failover testing, and resilience architecture reviews.
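The SLO and error-budget work this role describes reduces to simple arithmetic: an availability SLO over a window implies a fixed budget of allowed downtime. A minimal sketch (the 99.9% target and 30-day window are illustrative assumptions, not figures from this posting):

```python
def error_budget_minutes(slo: float, window_days: int) -> float:
    """Minutes of allowed downtime for an availability SLO over a window."""
    return window_days * 24 * 60 * (1 - slo)

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means it is blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over a 30-day window allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999, 30), 1))    # → 43.2
print(round(budget_remaining(0.999, 30, 21.6), 2))  # → 0.5
```

Teams usually track this per service and gate risky deploys once the remaining budget drops below an agreed fraction; the gating policy itself is organizational, not code.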
Posted 2 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: DevOps Engineer
Experience: 8+ years
Location: Hyderabad/Ahmedabad

Job Overview
Dynamic, motivated individuals deliver exceptional solutions for the production resiliency of the systems. The role incorporates aspects of software engineering and operations, using DevOps skills to come up with efficient ways of managing and operating applications. The role requires a high level of responsibility and accountability to deliver technical solutions.

Summary: As a DevOps Engineer, you will support infrastructure provisioning, automation, and continuous deployment pipelines to streamline and scale our development lifecycle. You'll work closely with engineering teams to maintain a stable, high-performance CI/CD ecosystem and cloud infrastructure on GCP.

Experience Required: 4-6 years of hands-on DevOps experience with cloud and containerized deployments.

Mandatory:
• OS: Linux
• Cloud: GCP (VPC, Compute Engine, GKE, GCS, IAM)
• CI/CD: Jenkins, GitHub Actions, Bitbucket Pipelines
• Containers: Docker, Kubernetes
• IaC: Terraform, Helm
• Monitoring: Prometheus, Grafana
• Version Control: Git
• Security: Trivy, Vault, OWASP

Nice to Have:
• ELK Stack, Trivy, JFrog, Vault
• Basic scripting in Python or Bash
• Jira, Confluence

Scope:
• Implement and support CI/CD pipelines
• Maintain development, staging, and production environments
• Optimize resource utilization and infrastructure costs

Roles and Responsibilities:
• Assist in developing and maintaining CI/CD pipelines across various environments (dev, staging, prod) using Jenkins, GitHub Actions, or Bitbucket Pipelines.
• Collaborate with software developers to ensure proper configuration of build jobs, automated testing, and deployment scripts.
• Write and maintain scripts for infrastructure provisioning and automation using Terraform and Helm.
• Manage and troubleshoot containerized applications using Docker and Kubernetes on GCP.
• Monitor system health and performance using Prometheus and Grafana; raise alerts and participate in issue triage.
• Maintain secrets and configurations using Vault and KMS solutions under supervision.
• Participate in post-deployment verifications and rollout validation.
• Document configuration changes, CI/CD processes, and environment details in Confluence.
• Maintain Jira tickets related to DevOps issues and track resolutions effectively.
• Provide support in incident handling under guidance from senior team members.
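One of the responsibilities above is post-deployment verification and rollout validation. A minimal sketch of a readiness check against a deployment-status structure (the dict shape loosely mirrors the replica-count fields a Kubernetes Deployment status reports, but this helper and its usage are illustrative assumptions, not this team's tooling):

```python
def rollout_complete(status: dict, expected_replicas: int) -> bool:
    """True when all expected replicas are updated, ready, and available."""
    return all(
        status.get(field, 0) >= expected_replicas
        for field in ("updatedReplicas", "readyReplicas", "availableReplicas")
    )

healthy = {"updatedReplicas": 3, "readyReplicas": 3, "availableReplicas": 3}
stuck = {"updatedReplicas": 3, "readyReplicas": 1, "availableReplicas": 1}
print(rollout_complete(healthy, 3), rollout_complete(stuck, 3))  # → True False
```

In a real pipeline this check would poll the cluster API with a timeout and trigger a rollback on failure rather than evaluate hard-coded dicts.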
Posted 2 days ago
3.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Profile
Our client is a global IT services company with offices in India and the United States. It helps businesses with digital transformation, provides IT collaborations, and uses technology, innovation, and enterprise to have a positive impact on the world of business. With expertise in the fields of Data, IoT, AI, Cloud Infrastructure, and SAP, it helps accelerate digital transformation through key practice areas: IT staffing on demand, and innovation and growth through a focus on cost and problem solving.

Location & Work: New Delhi (On-Site), WFO
Employment Type: Full Time
Profile: Platform Engineer
Preferred Experience: 3-5 Years

The Role:
We are looking for a highly skilled Platform Engineer to join our infrastructure and data platform team. This role will focus on the integration and support of Posit for data science workloads, managing R language environments, and leveraging Kubernetes to build scalable, reliable, and secure data science infrastructure.

Responsibilities:
• Integrate and manage the Posit suite (Workbench, Connect, Package Manager) within containerized environments.
• Design and maintain scalable R environment integration (including versioning, dependency management, and environment isolation) for reproducible data science workflows.
• Deploy and orchestrate services using Kubernetes, including Helm-based Posit deployments.
• Automate provisioning, configuration, and scaling of infrastructure using IaC tools (Terraform, Ansible).
• Collaborate with data scientists to optimize R runtimes and streamline access to compute resources.
• Implement monitoring, alerting, and logging for Posit components and Kubernetes workloads.
• Ensure platform security and compliance, including authentication (e.g., LDAP, SSO), role-based access control (RBAC), and network policies.
• Support continuous improvement of DevOps pipelines for platform services.

Must-Have Qualifications
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Minimum 3+ years of experience in platform, DevOps, or infrastructure engineering.
• Hands-on experience with Posit (RStudio) products, including deployment, configuration, and user management.
• Proficiency in R integration practices in enterprise environments (e.g., dependency management, version control, reproducibility).
• Strong knowledge of Kubernetes, including Helm, pod security, and autoscaling.
• Experience with containerization tools (Docker, OCI images) and CI/CD pipelines.
• Familiarity with monitoring tools (Prometheus, Grafana) and centralized logging (ELK, Loki).
• Scripting experience in Bash, Python, or similar.

Preferred Qualifications
• Experience with cloud-native Posit deployments on AWS, GCP, or Azure.
• Familiarity with Shiny apps, RMarkdown, and their deployment through Posit Connect.
• Background in data science infrastructure, enabling reproducible workflows across R and Python.
• Exposure to JupyterHub or similar multi-user notebook environments.
• Knowledge of enterprise security controls, such as SSO, OAuth2, and network segmentation.

Application Method
Apply online on this portal or by email at careers@speedmart.co.in
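The role above stresses reproducible R environments through version pinning and dependency management. A hedged sketch of the core idea, comparing installed package versions against a lockfile-style manifest (the manifest shape is an assumption for illustration, not Posit's actual renv.lock schema):

```python
def lock_mismatches(lockfile: dict[str, str], installed: dict[str, str]) -> list[str]:
    """List human-readable discrepancies between pinned and installed versions."""
    problems = []
    for pkg, pinned in sorted(lockfile.items()):
        actual = installed.get(pkg)
        if actual is None:
            problems.append(f"{pkg}: pinned {pinned} but not installed")
        elif actual != pinned:
            problems.append(f"{pkg}: pinned {pinned}, installed {actual}")
    return problems

lock = {"dplyr": "1.1.4", "ggplot2": "3.5.1"}
env = {"dplyr": "1.1.4", "ggplot2": "3.4.0"}
print(lock_mismatches(lock, env))  # → ['ggplot2: pinned 3.5.1, installed 3.4.0']
```

In an actual Posit/renv setup the lockfile is JSON and restoration is done by renv itself; a platform team might run a check like this in CI to catch environment drift before it reaches users.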
Posted 2 days ago
8.0 years
0 Lacs
Mohali district, India
On-site
Position Title: Senior Technology Lead
Work Location: Mohali
Required Experience: 8-12 Years

Key Responsibilities

Tech Lead:
• Deliver robust and performant solutions across web and mobile platforms.
• Drive projects based on different full-stack technologies: backend (PHP, MySQL, Redis), frontend (JavaScript, React Native), and DevOps (Kubernetes, CI/CD).
• Ensure scalable architecture and design, and drive technical decisions aligned with business goals.
• Build a deep understanding of existing and new projects, both technologically and in client/domain-specific terms.
• Be hands-on and responsible for efficient, secure implementations across the technology stack (including third-party tools and services).
• Contribute to project planning by providing technical input on timelines, resource needs, risk assessment, and feasibility.
• Collaborate with project managers and stakeholders to define deliverables, break down tasks, and ensure alignment between technical execution and business objectives.
• Harness the power of AI in all related areas of application design and development.

Technical Leadership:
• Guide a cross-functional team across backend, frontend, DevOps, etc.
• Act as the technical point of contact, mentor team members, conduct code reviews, and collaborate closely with product and design teams to deliver robust and performant solutions across web and mobile platforms.
• Work with the team lead to set clear goals and objectives for the team, ensuring alignment with company vision and strategy.
• Stay current with emerging trends, technologies, and related frameworks, and introduce them to projects as appropriate.

Required Skills:
• 8+ years of hands-on software development experience, with at least 3 years as a tech lead or senior developer.
• Extensive experience with full-stack development, primarily mobile apps, using PHP, JavaScript/TypeScript, React Native, and MySQL.
• Good exposure to the mobile app ecosystem: app stores such as Google Play, Firebase, one or more payment systems, etc.
• Good understanding of concepts related to version management, CI/CD pipelines, etc.
• Working knowledge of Redis, Kubernetes, Helm charts, GitHub, etc.
• Excellent communication skills.
• Knowledge of NodeJS is a plus.

Share your CV at: hr.1@azalio.io
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for AWS DevOps Engineers who can exhibit their technology skills and influence team members to increase overall productivity and effectiveness by sharing in-depth knowledge of Cloud & DevOps. You will lead and contribute to defining industry-standard Cloud & DevOps best practices for the team to follow. This is an outstanding opportunity to apply and enhance both your technical and people-management skills, thereby adding value to the business and operations of CloudifyOps.

What will you do:
• Own the quality of Cloud and DevOps architecture, design, and delivery of the client engagement.
• Design the automation of Cloud & DevOps operations and processes with proper tools.
• Advise on implementing Cloud & DevOps best practices and provide architectural and design support to the team.
• Functionally split complex issues or problems into simpler, more straightforward solutions for the engineers in the team to execute.
• Assist in technical and design meetings with clients to help them adopt Cloud & DevOps tools, technologies, and practices.
• Take ownership of the end-to-end development and implementation quality of the team by managing dependencies and focusing on technology best practices.

What we are looking for:
• 4+ years of hands-on DevOps & Cloud experience.
• Experience assisting in architecting, designing, and developing Cloud & DevOps practices on the AWS cloud platform.
• Expertise in implementing CI/CD pipelines with various DevOps toolsets.
• Ability to evaluate, implement, and streamline DevOps practices for clients, thereby speeding up the software development and deployment process.
• Hands-on experience with CI/CD tools (e.g., Jenkins, SonarQube, Artifactory/Nexus) and source control (e.g., Git via Bitbucket, GitHub, GitLab, or SVN).
• Experience creating continuous delivery practices using Terraform, ARM Templates, or CloudFormation.
• Excellent knowledge of infrastructure configuration and automation tools (such as Ansible) in both development and production environments.
• Experience designing and building applications using container and serverless technologies. Kubernetes expertise is a must-have; knowledge of Service Mesh, tracing, Helm charts, etc. is an added advantage.
• Strong expertise in operating Linux environments and scripting languages such as Shell.
• Good presentation and communication skills.
• Willingness to be challenged and learn new skills, and, most importantly, to lead and grow your team by example.
Posted 2 days ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Software Development Engineers are experienced professionals who design, develop, test, deploy, maintain, and enhance software solutions. They have in-depth knowledge and subject-matter expertise in software development. Sr. Software Development Engineers interact with internal and external teams to train them on the products, work on projects independently, and collaborate with cross-functional teams to manage project priorities, deadlines, and deliverables. In this role, you will mentor and guide others by reviewing the code of more junior software engineers and encourage others to grow their technical skill set. Sr. Software Development Engineers are creative problem solvers who continuously drive improvements across the software development life cycle and ensure best practices are utilized.

About The Role:
In this role as Software Engineer, you will:
• Design, develop, and test software systems and/or applications for enhancements and new products.
• Write code according to coding specifications established for software solutions.
• Deliver software features with exceptional quality, meeting designated release plans and delivery commitments.
• Develop software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; and following the software development lifecycle.
• Prepare and install solutions by determining and designing system specifications, standards, and programming.
• Determine operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
• Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
• Improve operations by conducting systems analysis and recommending changes in policies and procedures.
• Update job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment, and by participating in educational opportunities, reading professional publications, maintaining personal networks, and participating in professional organizations.
• Protect operations by keeping information confidential.
• Provide information by collecting, analyzing, and summarizing development and service issues.
• Accomplish the engineering and organization mission by completing related results as needed.
• Collaborate with other designers and engineers.
• Break down customer requirements and problems into manageable tasks for the team.
• Clearly communicate technical concepts to stakeholders.

About You:
You are a fit for this position if your background includes:
• 3 to 6 years of experience in software development.
• Bachelor's degree in Systems Engineering or similar.
• Proficiency in Java/JavaScript/Angular.
• Experience with REST APIs and microservices.
• Strong problem solving and analytical thinking.
• Good written and verbal communication skills.

Required Skills: Amazon Web Services (AWS); Fiddler Web Debugger (Inactive); Git; GitHub; Gradle; Hypertext Transfer Protocol (HTTP); JUnit Testing; JUnit Testing Framework; Mockito; Mockito Unit Test Framework; PostgreSQL; Postman (Platform); Postman (Software); REST Client; RESTful APIs; Spring MVC (Model View Controller); Spring Web MVC; Structured Query Language (SQL).

Optional Skills: Apache Ant; Apache Ivy; Apache Tomcat; Azure DevOps; Eclipse Development; Eclipse IDE; GitHub Copilot; Helm (Tool); IntelliJ IDEA IDE (Integrated Development Environment); JetBrains IntelliJ IDEA; Kubernetes; Microsoft Azure DevOps Boards; Microsoft Azure DevOps Pipelines.

What’s in it For You?
Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. 
About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.

We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.

More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 3 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India
Job ID 767284

Join our Team

About this opportunity:
We are seeking a highly motivated and detail-oriented experienced Cloud Engineer to join our dynamic software DevOps team. You should be a curious professional, eager to grow, and an excellent team player! As a Cloud Engineer, you will work closely with our r-Apps DevOps team to gain exposure to cloud-native infrastructure, automation, and optimization tasks. You will support the implementation and maintenance of CI/CD, deployments, Helm, and security aspects of cloud-native applications and environments, assist with troubleshooting, and contribute to the SaaS/AaaS-based microservice solutions development team.

What you will do:
• AWS Cloud: Work with AWS Cloud pipelines and AWS CloudFormation (IaC).
• Kubernetes & Helm: Kubernetes administration and cloud-native application packaging/management using Helm charts.
• CI/CD: Design and implement CI/CD using Jenkins and Spinnaker.
• Automation & Scripting: Develop and maintain scripts to automate routine tasks using technologies such as Ansible, Python, and shell scripting.
• Monitoring & Optimization: Monitor microservice resources for performance and availability; assist in optimizing environments to enhance performance.
• Troubleshooting: Troubleshoot and resolve issues within AaaS applications, focusing on resource failures, performance degradation, and connectivity disruptions.
• Documentation: Assist in documenting DevOps infrastructure setups, processes, and workflows, and help maintain knowledge-base articles.
• Learning & Development: Continuously expand your knowledge of cloud technologies and cloud architecture; stay updated on the latest trends in cloud computing.

You will bring:
• Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
• Experience with cloud platforms like AWS.
• Proficiency in containerization and orchestration using Docker and Kubernetes.
• Proficiency in using Helm for managing Kubernetes applications, including creating and deploying Helm charts.
• Experience with CI/CD tools such as Jenkins, Spinnaker, and GitLab.
• Experience with monitoring tools such as Prometheus and Grafana.
• Experience implementing and managing security tools for CI/CD pipelines, cloud environments, and containerized applications.
• Experience with scripting and automation (e.g., Python, Bash, Ansible).
• Strong problem-solving skills and the ability to troubleshoot cloud-native infrastructure.
• Good communication skills and the ability to work effectively in a team environment.
• Eagerness to learn new technologies and contribute to cloud-native applications.
• Understanding of the software development lifecycle (SDLC) and agile methodologies.

Preferred qualifications:
• Certifications or hands-on experience with AWS.
• Exposure to AI services for DevOps.
• Predictive analysis on monitoring of AaaS applications.
• Ability to design and enforce security best practices across the entire DevOps lifecycle.
• Familiarity with industry security standards and frameworks (e.g., CIS, NIST, OWASP).

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be expected to:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS.
- Develop and maintain Helm charts for deploying containerized applications.
- Build and manage Docker images and registries for microservices.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitor and troubleshoot Kubernetes workloads and cluster health.
- Support CI/CD pipelines for containerized applications.
- Collaborate with development and DevOps teams to ensure seamless application delivery.
- Ensure security best practices are followed in container orchestration and cloud environments.
- Optimize performance and cost of cloud infrastructure.

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST APIs, API Gateway, Kafka and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience in writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- Solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., ELK stack, Fluentd).
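The Helm and Kubernetes duties above revolve around templating Deployment manifests for containerized services. As a rough sketch of the structure such a chart templates (the app name, image and port below are hypothetical, not part of this posting):

```python
def deployment_manifest(name: str, image: str, replicas: int = 2, port: int = 8080) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict.

    Illustrative only -- real charts express this as Helm-templated YAML.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels, or the
            # Deployment is rejected by the API server.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image, "ports": [{"containerPort": port}]}
                    ]
                },
            },
        },
    }

manifest = deployment_manifest("orders-api", "registry.example.com/orders-api:1.4.2", replicas=3)
print(manifest["kind"], manifest["spec"]["replicas"])  # Deployment 3
```

The selector/labels invariant in the comment is the detail most often broken when hand-editing charts, which is why keeping labels in one place is a common chart convention.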
Posted 3 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be expected to:
- Design, deploy, and manage Kubernetes clusters using Amazon EKS.
- Develop and maintain Helm charts for deploying containerized applications.
- Build and manage Docker images and registries for microservices.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitor and troubleshoot Kubernetes workloads and cluster health.
- Support CI/CD pipelines for containerized applications.
- Collaborate with development and DevOps teams to ensure seamless application delivery.
- Ensure security best practices are followed in container orchestration and cloud environments.
- Optimize performance and cost of cloud infrastructure.

The skills you'll need
You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST APIs, API Gateway, Kafka and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience in writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- Solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred Qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., ELK stack, Fluentd).
Posted 3 days ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Grow with us

About this opportunity: It will be practically impossible for human brains to understand how to run and optimize the next generation of wireless networks, i.e., 5G networks with distributed edge compute, that will drive economic and social transformation for all aspects of society. Machine Learning (ML) and other Artificial Intelligence (AI) technologies will be vital for us to handle this opportunity. We are expanding the Ericsson MLOps Engineering Unit with tech-savvy developers with the right attitude and skill set.

MLOps is an iterative process for building, deploying, operationalizing, and observing AI/ML systems. The aim of MLOps is to manage the end-to-end life cycle of AI/ML models through all phases: experimentation, development, deployment, model performance monitoring, re-training and, when needed, re-design and re-architecture, to keep models operating in optimal conditions in production environments. An MLOps platform provides services and components to assist and guide organizations through this iterative process. MLOps platform components are designed to overcome the challenges of developing and operating AI/ML systems at industrial scale in production environments.

Role Summary
As a Software Engineer, you will build and deploy MLOps services and components, enabling AI use-case development and production deployment with a focus on scaling, monitoring and performance, re-using the MLOps Platform. The MLOps Platform unit designs, engineers, operates, and maintains a cloud-native, Kubernetes-based microservice architecture: services and components that aim to deliver end-to-end MLOps features and functionality, e.g. CI/CD, data exploration notebooks (Jupyter), ML model development and deployment, workflow engines, and ML frameworks (e.g. TensorFlow), for easy consumption by Ericsson products and services. The AI MLOps Platform covers infrastructure capacity and tools for all AI/ML project and system needs across different Ericsson products.
The main approach is to integrate and extend existing private and public cloud infrastructures, and to base the toolbox components on open-source software. The deployed environments are heterogeneous, so multi-cloud, hybrid-cloud, and WAN networking are also key technology areas. In this role, you are expected to be a very hands-on developer, functioning as an individual contributor, as well as working within a cross-functional team that is responsible for the study, design, implementation, test, delivery and maintenance phases of the projects/products.

Key Responsibilities
- Develop/integrate/automate a core AI/ML software environment, in close collaboration with data scientists and product developers
- Operationalize and extend open-source software components, covering the entire ML model life cycle, including e.g. data transformation, model development, deployment, monitoring, re-training, and security
- Collaborate with product development teams and partners in Ericsson businesses to industrialize a platform for machine learning models and solutions as part of Ericsson offerings, including providing code, workflows and documents
- Work with MLOps projects and development teams to identify needs and requirements for AI/ML tools and infrastructure resources
- Evaluate and plan capacity of CPU, GPU, memory, storage, and networking resources to balance cost versus desired productivity and performance
- Develop essential automation scripts and tooling to help quality assurance, maintenance, migration, and cost control of infrastructure deployments
- Manage communication, planning, collaboration and feedback loops with business stakeholders
- Model the business problem statement as an AI/ML problem
- Contribute to IPR creation for Ericsson in AI/ML
- Lead functional and technical analysis within Ericsson businesses and for strategic customers to understand MI-driven business needs and opportunities
- Lead studies and creative usage of new and/or existing data sources
- Work with Data Architects to leverage existing data models and build new ones as needed
- Provide MI competence build-up in Ericsson Businesses and Customer Serving Units
- Develop new, and apply/extend existing, concepts, methodologies and techniques for cross-functional initiatives

Key Qualifications
- Bachelor's/Master's in Computer Science, Electrical Engineering or related disciplines from a reputed institute. First Class, preferably with Distinction.
- Applied experience: 2+ years of experience with infrastructure, platforms, networking, and software systems, and overall industry experience of about 4+ years
- Strong software engineering experience with one or more of Golang, Java, Scala, Python, JavaScript, using container-based development practices
- Experience with data analytics and AI/ML systems, for example Spark, Jupyter, TensorFlow
- Experience with large-scale systems, for example reliability/HA, deployment, operations, testing, and troubleshooting
- Experience with delivering software products, for example release management and documentation
- Experience with usage/integration of public cloud services, for example identity and access management, key management, storage systems, CPU/GPU, private/virtual networking, and Kubernetes services
- Experience with modern distributed systems and tooling, for example Prometheus, Terraform, Kubernetes, Helm, Vault, and CI/CD systems
- Experience with WAN networking solutions, redundancy/fail-over, QoS, and VPN technologies
- Experience with infrastructure-as-code and SRE ways of working
- Strong system administration skills, Linux and Windows
- Awareness of ITIL/ITSM methodologies for operations and service delivery

Soft Skills
- Good communication skills in written and spoken English
- Great team worker and collaborator
- Creativity and the ability to formulate problems and solve them independently
- Self-driven, with the ability to work through abstraction
- Ability to build and nurture internal and external communities
- Ability to work independently with high energy, enthusiasm and persistence
- Experience in partnering and collaborative co-creation, i.e., working with complex multiple-stakeholder business units, global customers, technology and other ecosystem partners in a multi-culture, global matrix organization with sensitivity and persistence

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Chennai
Req ID: 766515
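The life-cycle phases this posting describes (development, deployment, monitoring, re-training) are commonly enforced as an explicit stage machine in a model registry. A minimal Python sketch; the stage names and transition rules below are illustrative assumptions, not Ericsson's actual platform:

```python
# Allowed model life-cycle transitions (illustrative; real registries
# such as MLflow define their own stage names and rules).
TRANSITIONS = {
    "experimentation": {"development"},
    "development": {"staging", "experimentation"},
    "staging": {"production", "development"},
    "production": {"retraining", "archived"},
    "retraining": {"staging"},   # a re-trained model is re-validated first
    "archived": set(),           # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """True if a model may move directly from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("staging", "production"))         # True: promotion allowed
print(can_transition("experimentation", "production")) # False: no direct jump
```

Encoding the rules as data rather than scattered if-statements is what lets platform services (CI/CD gates, audit logs, re-training triggers) all consult the same source of truth.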
Posted 3 days ago
6.0 years
0 Lacs
Ganganagar, Rajasthan, India
On-site
33128BR Noida

Job Description: Charging Profile Engineer
Experience Required: 4–6 Years
Location: Chennai, Bangalore, Hyderabad, Noida, Gurugram
Notice Period: Immediate / Early Joiner Preferred – 30 Days

Job Description
We are looking for a highly skilled Charging Profile Engineer with strong technical expertise and telecom domain knowledge to join our team. The ideal candidate should be experienced in deployment, configuration, troubleshooting, and automation within cloud-native environments, with an emphasis on 4G/5G charging, provisioning, and migration.

Key Responsibilities
- Perform installation, deployment, and upgrades of telecom charging solutions.
- Handle CIQ sheet preparation, Helm chart customization, and YAML file generation.
- Work on AWS, OpenShift (OCP), CNS, and Kubernetes-based environments.
- Configure tariff plans, bundle creation, CL creation, and write RSVs for slicing profiles.
- Execute Diameter protocol configuration, SS7 protocol support, and handle BSS provisioning tasks.
- Maintain a deep understanding of SPS charging, 4G/5G call flows, and overall telecom architecture.
- Ensure smooth migration planning and tool development for legacy-to-cloud transformations.
- Conduct functional (FUT) and non-functional (load/performance) testing.
- Automate use-case testing and validate service-based charging bundles.
- Interface with customers for requirement gathering, implementation plans, and issue resolution.
- Prepare documentation and contribute to customer demos and discussions.

Must-Have Skills
- Deployment, upgrade, and troubleshooting of telecom applications.
- Helm charts, YAML, CIQ, Kubernetes, CNS, AWS, OCP.
- Strong command of the Diameter protocol, 4G/5G call flows, and telecom charging flows.
- Experience in tariff configuration, bundle and service creation, RSV writing, and SS7.
- Exposure to automation testing frameworks, FUT, and load testing tools.
- Excellent communication skills with proven client handling experience.
Good to Have
- Experience in migration tool development and scripting.
- Familiarity with slicing profiles and dynamic service configurations.
- Working knowledge of the BSS/OSS ecosystem.

Qualifications
- Bachelor's degree (B.Tech) in Engineering, Telecommunications, or a related field.
- 4–6 years of relevant industry experience in telecom charging or network integration roles.

Joining Requirement
Candidates with immediate availability or short notice will be given priority.
Posted 3 days ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do
- Demonstrate a deep understanding of cloud-native, distributed, microservice-based architectures
- Deliver solutions for complex business problems through the standard software SDLC
- Build strong relationships with both internal and external stakeholders, including product, business and sales partners
- Demonstrate excellent communication skills, with the ability to both simplify complex problems and dive deeper if needed
- Build and manage strong technical teams that deliver complex software solutions that scale
- Manage teams with cross-functional skills that include software, quality and reliability engineers, project managers and scrum masters
- Provide deep troubleshooting skills, with the ability to lead and solve production and customer issues under pressure
- Leverage strong experience in full-stack software development and public cloud such as GCP and AWS
- Mentor, coach and develop junior and senior software, quality and reliability engineers
- Lead with a data/metrics-driven mindset and a maniacal focus on optimizing and creating efficient solutions
- Ensure compliance with EFX secure software development guidelines and best practices; be responsible for meeting and maintaining QE, DevSec, and FinOps KPIs
- Define, maintain and report SLAs, SLOs and SLIs meeting EFX engineering standards, in partnership with the product, engineering and architecture teams
- Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices
- Drive up-to-date technical documentation, including support and end-user documentation and runbooks
- Lead Sprint planning, Sprint retrospectives, and other team activities
- Be responsible for implementation architecture decision-making associated with product features/stories, refactoring work, and EOSL decisions
- Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience-appropriate

What Experience You Need
- Bachelor's degree or equivalent experience
- 7+ years of software engineering experience
- 7+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
- 7+ years of experience with cloud technology: GCP, AWS, or Azure
- 7+ years of experience designing and developing cloud-native solutions
- 7+ years of experience designing and developing microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
- 7+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm charts, and Terraform constructs

What could set you apart
- Self-starter who identifies and responds to priority shifts with minimal supervision
- Strong communication and presentation skills
- Strong leadership qualities
- Demonstrated problem-solving skills and the ability to resolve conflicts
- Experience creating and maintaining product and software roadmaps
- Experience overseeing yearly as well as product/project budgets
- Working in a highly regulated environment
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
- Experience with backend technologies such as Java/J2EE, Spring Boot, SOA and microservices
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven and Gradle
- Agile environments (e.g. Scrum, XP)
- Relational databases (e.g. SQL Server, MySQL)
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
- Developing with a modern JDK (v1.7+)
- Automated testing: JUnit, Selenium, LoadRunner, SoapUI
Posted 3 days ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Description

Senior Engineer, Cyber – Product Security (Penetration Testing)

NielsenIQ is maturing its Application Security programs and is recruiting an Application Security Engineer who will be responsible for supporting the rollout of DevSecOps capabilities and practices across all geographies and business units. As the Application Security Engineer, you will be responsible for the integration, maintenance and analysis of the tools and technologies used in securing NIQ products/applications throughout their development. You will oversee application security capabilities within a multi-national matrixed environment. The Application Security Engineer will have the opportunity to replace the current static and dynamic application security testing tools and advocate for the tech stack used for monitoring. This position will involve working closely with development/engineering teams, business units, and technical and non-technical stakeholders, educating them and driving the adoption and maturity of NIQ's Product & Application Security programs.
Responsibilities
- Collaborate within the Product Security Engineering and Cybersecurity teams to support delivery of their strategic initiatives
- Work with engineering teams (Developers, SREs & QAs) to ensure that products are secure on delivery and implement the provided security capabilities
- Actively contribute to building and maintaining Product Security team security tools and services, including integrating security tools into the CI/CD process
- Report on security key performance indicators (KPIs) to drive improvements across engineering teams' security posture
- Contribute to the Product Security Engineering team's security education program and become an advocate within the organization's DevSecOps and application security community of practice
- Review IaaS/PaaS architecture roadmaps for the cloud and recommend baseline security controls and hardening requirements, supporting threat modelling of NIQ's products

Qualifications
- 5+ years of experience working in a technical/hands-on application security, development, or DevOps professional environment
- Working knowledge of the web stack, web security and common vulnerabilities (e.g. SQLi, XSS, and beyond)
- Experience deploying containers using CI/CD pipeline tools like GitHub Actions, GitLab Pipelines, Jenkins, and Terraform or Helm
- Self-starter, technology and security hobbyist and enthusiast
- Lifelong learner with endless curiosity

Bonus Points if you:
- Have experience building serverless functions in cloud environments
- Have knowledge of Cloud Workload Protection
- Have experience using SAST and DAST tools
- Demonstrate engagement in security conferences, training, learning, and associations (highly desired and fully supported)
- Can think like a hacker

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
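As a toy illustration of the vulnerability classes the qualifications above mention (SQLi, XSS), a naive pattern scanner can flag obviously risky constructs. Real SAST tools use taint analysis over ASTs, not regexes; the two rules below are invented purely for the sketch:

```python
import re

# Toy detection rules for two common vulnerability classes:
# SQL built via string formatting, and direct innerHTML assignment.
# Invented for illustration -- nothing like a production SAST rule set.
RULES = {
    "possible-sqli": re.compile(r"execute\(\s*[\"'].*%s.*[\"']\s*%"),
    "possible-xss": re.compile(r"innerHTML\s*="),
}

def scan(source: str) -> list:
    """Return the sorted names of all rules that match `source`."""
    return sorted(rule for rule, pattern in RULES.items() if pattern.search(source))

snippet = 'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
print(scan(snippet))  # ['possible-sqli']
```

The safe versions of both patterns (parameterized queries, `textContent` instead of `innerHTML`) would pass the scan, which is the behavior a CI/CD security gate wants.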
Posted 3 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into industry-leading solutions. We are an infinitely curious team, always seeking new possibilities, and dedicated to creating the world's leading AI-powered, cloud-native software solutions. Our renowned legacy creates endless global opportunities for our network of IBMers. We are a team of deep product experts, ensuring exceptional client experiences, with a focus on delivery, excellence, and obsession over customer outcomes. This position involves contributing to HashiCorp's offerings, now part of IBM, which empower organizations to automate and secure multi-cloud and hybrid environments. You will join a team managing the lifecycle of infrastructure and security, enhancing IBM's cloud solutions to ensure enterprises achieve efficiency, security, and scalability in their cloud journey.

Your Role And Responsibilities
- Ensure platform reliability and performance: Monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS).
- Automate operations: Develop and maintain automation for infrastructure provisioning, scaling, and incident response.
- Incident response & on-call support: Participate in on-call rotations to quickly detect, mitigate, and resolve production incidents.
- Kubernetes upgrades & management: Own and drive Kubernetes version upgrades, node pool scaling, and security patches.
- Observability & monitoring: Implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting.
- Infrastructure as Code (IaC): Manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests.
- Cross-functional collaboration: Work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability.
- Performance tuning: Analyze and optimize cloud and containerized workloads for cost efficiency and high availability.
- Security & compliance: Ensure platform security best practices, incident response, and compliance adherence.

Preferred Education
Bachelor's Degree

Required Technical And Professional Expertise
- Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration.
- Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure.
- Proficiency in Terraform, Helm, and Infrastructure as Code (IaC).
- Strong understanding of Linux systems, networking, and security best practices.
- Experience with monitoring & logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.).
- Hands-on experience with automation & scripting (Python, Bash, or Go).

Preferred Technical And Professional Experience
- Experience in incident management and debugging complex distributed systems.
- Familiarity with CI/CD pipelines and release automation.
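Incident-response automation of the kind described above usually starts by codifying when an alert should actually page someone. A hedged sketch of a multi-window error-budget burn-rate check: the 14.4x threshold follows the commonly cited SRE-workbook example, and the 99.9% SLO target is an assumption, not part of this posting:

```python
def burn_rate(error_ratio: float, slo_target: float = 0.999) -> float:
    """How many times faster than budgeted the error budget is being consumed."""
    budget = 1.0 - slo_target
    return error_ratio / budget

def should_page(short_window_errors: float, long_window_errors: float) -> bool:
    # Page only when BOTH a short and a long window burn fast: the short
    # window gives low detection latency, the long window suppresses
    # pages for blips that have already recovered.
    return burn_rate(short_window_errors) >= 14.4 and burn_rate(long_window_errors) >= 14.4

print(should_page(0.02, 0.016))  # True: ~20x and ~16x burn in both windows
print(should_page(0.02, 0.005))  # False: long window has recovered
```

In practice the error ratios come from Prometheus/Datadog queries over e.g. 5-minute and 1-hour windows; this function is only the decision logic.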
Posted 3 days ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems: the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

The Platform and Product Team is shaping one of the key growth vector areas for ZS. Our engagements comprise clients from industries like quick-service restaurants, technology, food & beverage, hospitality, travel, insurance, consumer product goods and other such industries across the North America, Europe and South East Asia regions. The Platform and Product India team currently has a presence across the New Delhi, Pune and Bengaluru offices and is continuously expanding at a great pace.
The Platform and Product India team works with colleagues across clients and geographies to create and deliver real-world, pragmatic solutions leveraging AI SaaS products & platforms, Generative AI applications, and other advanced analytics solutions at scale.

What You'll Do:
- Work with cloud technologies: AWS, Azure or GCP.
- Create container images and maintain container registries.
- Create, update, and maintain production-grade applications on Kubernetes clusters and cloud.
- Adopt a GitOps approach to maintain deployments.
- Create YAML scripts and Helm charts for Kubernetes deployments as required.
- Take part in cloud design and architecture decisions and support lead architects in building cloud-agnostic applications.
- Create and maintain Infrastructure-as-Code templates to automate cloud infrastructure deployment.
- Create and manage CI/CD pipelines to automate containerized deployments to cloud and K8s.
- Maintain Git repositories, and establish a proper branching strategy and release management processes.
- Support and maintain source code management and build tools.
- Monitor applications on cloud and Kubernetes using tools like ELK, Grafana, Prometheus etc.
- Automate day-to-day activities using scripting.
- Work closely with the development team to implement new build processes and strategies to meet new product requirements.
- Perform troubleshooting, problem solving, root cause analysis, and documentation related to build, release, and deployments.
- Ensure that systems are secure and compliant with industry standards.

What You'll Bring
- A master's or bachelor's degree in computer science or a related field from a top university.
- 2-4+ years of hands-on experience in DevOps.
- Hands-on experience designing and deploying applications to cloud (AWS/Azure/GCP).
- Expertise in deploying and maintaining applications on Kubernetes.
- Technical expertise in release automation engineering, CI/CD or related roles.
- Hands-on experience in writing Terraform templates as IaC, Helm charts, and Kubernetes manifests.
- A strong hold on Linux commands and script automation.
- Technical understanding of development tools, source control, and continuous integration build systems, e.g. Azure DevOps, Jenkins, GitLab, TeamCity etc.
- Knowledge of deploying LLM models and toolchains.
- Configuration management of various environments.
- Experience working in agile teams with short release cycles.
- Good to have: programming experience in Python or Go.
- Characteristics of a forward thinker and self-starter who thrives on new challenges and adapts quickly to learning new knowledge.

Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel
Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.
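The release-management and branching duties listed in this posting usually involve automating version bumps in a CI/CD pipeline. A minimal sketch of a semantic-version bump helper (the function name and three-part scheme are assumptions for illustration, not ZS tooling):

```python
def bump(version: str, part: str) -> str:
    """Return the next semantic version for part in {'major','minor','patch'}.

    Per SemVer, bumping a part resets every lower-order part to zero.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.2", "minor"))  # 1.5.0
```

A pipeline would typically call something like this to compute the next image tag and Helm chart version before pushing to the registry.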
Posted 3 days ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
RabbitMQ Administrator - Prog Leasing1

Job Title: RabbitMQ Cluster Migration Engineer

Job Summary: We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new high-availability cluster environment on AWS. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime.

Key Responsibilities:
- Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup.
- Evaluate the current messaging architecture, performance bottlenecks, and limitations.
- Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed).
- Ensure high availability, fault tolerance, and disaster recovery configurations.
- Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans.
- Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes).
- Monitor message queues during migration to ensure message durability and delivery guarantees.
- Document all aspects of the architecture, configurations, and migration process.

Required Qualifications:
- Strong experience with RabbitMQ, especially in clustered and high-availability environments.
- Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues.
- Experience with RabbitMQ management plugins, monitoring, and performance tuning.
- Proficiency with scripting languages (e.g., Bash, Python) for automation.
- Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm).
- Familiarity with containerization and orchestration (e.g., Docker, Kubernetes).
- Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.).
- Experience with zero-downtime migration and rollback strategies.

Preferred Qualifications:
- Experience migrating RabbitMQ clusters in production environments.
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
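The delivery guarantees named above (at-least-once vs. exactly-once) are the crux of a migration cutover: under at-least-once delivery, a message may be redelivered and reach a consumer twice, so handlers must be idempotent. A minimal, library-free sketch of that pattern (the class and names are illustrative, not a RabbitMQ API):

```python
# Hedged sketch: deduplicate redeliveries by message ID so that processing
# the same message twice has no extra effect. In production the seen-ID set
# would live in a durable store (Redis, a database), not in memory.
class IdempotentConsumer:
    def __init__(self):
        self.seen_ids = set()   # IDs already processed
        self.processed = []     # side effects actually applied

    def handle(self, message_id, payload):
        if message_id in self.seen_ids:
            return False        # duplicate redelivery: acknowledge and skip
        self.seen_ids.add(message_id)
        self.processed.append(payload)
        return True

consumer = IdempotentConsumer()
consumer.handle("m1", "order-created")
consumer.handle("m1", "order-created")  # redelivered during cutover
consumer.handle("m2", "order-paid")
print(consumer.processed)  # → ['order-created', 'order-paid']
```

With this in place, an at-least-once broker setup behaves, from the application's point of view, like exactly-once processing.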
Posted 3 days ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At R3 our vision is a world where value moves freely, and business is done safely. Our mission is to enable an open, trusted, and enduring digital economy. We are a scale-up with a startup's grit. We encourage a workforce where no idea is too small, and no two days are the same. The DevOps Engineer is a pivotal role in the design, implementation and operation of the services R3 offers to its customers. Reporting to the Infrastructure and Release team manager, you will be part of a delivery team with the exciting objective of building, delivering and maintaining infrastructure that will host R3's main customer services. This is a challenging role which will include working alongside Software Engineers within R3 to deliver both internal and customer-facing services. You will have a strong technical background in financial services, telecoms, or critical infrastructure service providers. You embrace the DevOps culture. Working with distributed teams and carrying out peer code reviews is second nature. You have a "go-getter" attitude, taking a proactive approach to fixing issues, whether they arise from continuous-improvement changes or you detect them while carrying out day-to-day tasks. You believe strongly in delivering experiences, not just lines of code. You are focused on the end result for the customer, with a service-led approach, and have a strong background in operating, maintaining and improving services to the highest possible standards.

Responsibilities: Keep repositories up to date with software releases within the DevOps software tooling currently used.
Innovate and drive changes that benefit the entire team. Collaborate across different teams. Drive infrastructural improvements that help improve the high availability and scalability of existing and new service offerings. Own and advocate an automation-first approach. Deliver high-quality work that meets customer requirements on existing services for other Production department teams, such as Business Operations and Security, while still driving improvements to the current processes in place. Push for changes, in both Development and Operations, that are essential to running a high-standard Managed Service. Educate other team members on both simple and complex operational and administrative tasks.

Qualifications (the must-haves): First and foremost, we want you to love what you do. You'll need to be a DevOps evangelist within R3 and the community of R3 products, both current and future. You'll have a minimum of 5 years of experience in a DevOps-type role. We'd love to see evidence of other experience too; you might have been a sysadmin or a network operations person, or worked in support in previous roles. We believe that we work better as a team. You'll be working in a diverse team of people with a variety of skills and backgrounds, so a high level of emotional intelligence will be assumed. People skills are essential. You'll need excellent communication skills, both verbal and written. We operate under infra-as-code, which means you are able to pick up and understand code changes on a daily basis. Strong experience in DevOps tools like Terraform, Ansible, and Helm to automate infrastructure. You'll need working experience with at least one public cloud provider. You'll have solid Linux experience.
You'll need to have been deploying infrastructure as code in your previous role (with tools such as Terraform and Ansible) and to have used automated provisioning and configuration management tools. Working knowledge of at least one contemporary scripting language is essential, so you can automate things. You must demonstrate proven experience in software development tasks; a background in Kotlin, Java, or another JVM-based language is required for the role.

Qualifications (the nice-to-haves): Exposure to Kubernetes clusters in production systems/services would be a plus. Exposure to Cloud Native Landscape software. An engineering or science degree would be great, but appropriate career experience is just as important. Be prepared to tell us all about that experience.

R3 is a leading enterprise technology firm specialising in digital solutions for regulated financial markets. Our technology enables financial markets to operate with greater efficiency, transparency and enhanced connectivity. Our focus is on progressing markets and fostering an open, trusted and enduring digital economy. R3ers center around our core values – Collaborative, Ownership, Bold and Customer First – and as a result our flagship DLT platform, Corda, and the R3 Digital Markets product suite are trusted by the world's leading financial market infrastructures (FMIs), exchanges, central banks and commercial banks. R3 is proud to be an equal opportunity workplace. We are a diverse and inclusive team that supports all ethnicities, races, genders, sexual orientations, origins, disabilities, veteran statuses and cultures. At R3, we're committed to fostering an environment where individuality–not conformity–is embraced and valued because we believe our collective differences are what make us better together. For more information, visit www.r3.com or connect with us on Twitter or LinkedIn. If you don't meet all of the above criteria, but you think you'd be a great addition to R3, send us your CV/resume.
We're always interested in meeting bold, collaborative people who are excited to work with us.
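The infrastructure-as-code work described above rests on one idea: declare a desired state and let the tooling compute the difference against the current state, so re-running the same automation is safe. A toy sketch of that "plan" step (resource names are invented; real tools like Terraform do this against cloud-provider APIs):

```python
# Illustrative desired-state diff: returns what must be created, updated,
# and deleted to converge the current inventory to the desired one.
def plan(current: dict, desired: dict):
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = [k for k in current if k not in desired]
    return to_create, to_update, to_delete

current = {"vm-a": "small", "vm-b": "large"}   # what exists today
desired = {"vm-a": "medium", "vm-c": "small"}  # what the code declares
print(plan(current, desired))
# → ({'vm-c': 'small'}, {'vm-a': 'medium'}, ['vm-b'])
```

Because the plan is a pure function of the two states, applying it twice changes nothing the second time, which is exactly the idempotency these roles ask for.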
Posted 3 days ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At R3 our vision is a world where value moves freely, and business is done safely. Our mission is to enable an open, trusted, and enduring digital economy. We are a scale-up with a startup's grit. We encourage a workforce where no idea is too small, and no two days are the same. R3's Professional Services team works to bring Corda specialist expertise to our customers to make their adoption successful. We engage directly with our customers to design, build, deploy and advise them on their Corda journey to ensure long-term capabilities are sustained. Through business consulting, technical solutions and implementation, we help customers achieve their goals most effectively for their business, owning projects ranging from small design POCs to full end-to-end implementations addressing real-world Digital Currency and Digital Asset problems. Our customer-centric and innovative approach to providing services and solutions in the industry allows us to strategically assess our customers' needs and ensure they are set up for success from the beginning of their journey. In addition to working directly with our customers, we strive to collaborate internally with our Sales, Engineering and Product organizations to provide better tooling, services and products based on our customers' evolving needs. Build your path with us while having fun. Sharpen your skills and make us better. We work with the mantra that good work equals more work, constantly looking to provide world-class customer service and continuously looking at ways to improve.

Role of DevOps Solutions Engineer: Your mission as a DevOps Solutions Engineer is to help our customers design highly scalable architectures and build resilient infrastructure using distributed systems. With commercial experience and a strong understanding of infrastructure as code, you will have the opportunity to expand our containerisation story using Docker and Kubernetes.
Responsibilities: Understand customer requirements and provide guidance on DevOps deployment strategies. Build new scripts based on customer requirements whilst leveraging R3's deployment templates. Help our customers with the scalability, disaster recovery and maintainability of their solutions. Automate infrastructure provisioning, configuration, and management using tools like Terraform and Ansible, along with Helm charts. Build, test and implement continuous integration and continuous deployment (CI/CD) pipelines using industry-standard tools such as Jenkins, Docker, GitLab, or cloud-provider tools and frameworks from Azure / AWS / GCP. Troubleshoot and resolve issues related to build failures, deployment errors, and infrastructure problems in a timely manner. Deliver deployment-strategy workshops with R3 customers to understand their bespoke infrastructure / application hosting requirements, platforms and tools. Ensure security best practices are implemented throughout the DevOps processes, including code scanning, vulnerability management, and access control. Collaborate with cross-functional teams to drive innovation and continuous improvement in our DevOps practices. Stay up to date with the latest trends and technologies in DevOps and recommend new tools and methodologies to enhance our processes. Ensure all assigned work items are fully managed, taking accountability for delivery and scope and ensuring successful implementation in conjunction with the PS DevOps team leads.

Required Skills: Proven experience as a DevOps Engineer or in a similar role, with a minimum of 5 years of experience. Strong experience in DevOps tools like Terraform, Ansible, and Helm to automate infrastructure. Strong experience in microservices-driven deployment with Docker, Kubernetes and modern containerisation platforms. Cloud experience in Azure DevOps, AWS or Google Cloud.
Experience with microservices architecture and serverless computing deployment with AKS / EKS / GKE. Proven experience with monitoring systems like Datadog and Prometheus. You must demonstrate proven experience in software development tasks; Kotlin and Java development skills are required for the role. Basic understanding of configuring and running services in a Linux-based environment, including proficiency in related scripting languages such as Python/YAML/Bash and standard Linux file editors (vi, Vim, Emacs). Familiarity with DevOps practices in regulated industries such as healthcare or finance.

Desired Skills: Kubernetes experience in a commercial enterprise product and a thorough understanding of taking software to production. Understanding of DLT / blockchain technologies and their benefits. Willingness to learn new skills and the ability to solve complex problems.

R3 is a leading enterprise technology firm specialising in digital solutions for regulated financial markets. Our technology enables financial markets to operate with greater efficiency, transparency and enhanced connectivity. Our focus is on progressing markets and fostering an open, trusted and enduring digital economy. R3ers center around our core values – Collaborative, Ownership, Bold and Customer First – and as a result our flagship DLT platform, Corda, and the R3 Digital Markets product suite are trusted by the world's leading financial market infrastructures (FMIs), exchanges, central banks and commercial banks. R3 is proud to be an equal opportunity workplace. We are a diverse and inclusive team that supports all ethnicities, races, genders, sexual orientations, origins, disabilities, veteran statuses and cultures. At R3, we're committed to fostering an environment where individuality–not conformity–is embraced and valued because we believe our collective differences are what make us better together.
For more information, visit www.r3.com or connect with us on Twitter or LinkedIn. If you don't meet all of the above criteria, but you think you'd be a great addition to R3, send us your CV/resume. We're always interested in meeting bold, collaborative people who are excited to work with us.
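The CI/CD pipelines this role builds share one core behaviour: ordered stages with a fail-fast gate, so a failing test run blocks the deploy stage. A minimal illustrative sketch of that control flow (stage names are invented, not any vendor's API):

```python
# Hedged sketch of CI/CD stage semantics: run named stages in order and
# stop at the first failure, exactly like a pipeline halting on a red step.
def run_pipeline(stages):
    results = []
    for name, step in stages:
        ok = step()              # each step reports success or failure
        results.append((name, ok))
        if not ok:
            break                # fail fast: later stages never run
    return results

stages = [
    ("build", lambda: True),
    ("test", lambda: False),     # a failing test gate blocks deployment
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # → [('build', True), ('test', False)]
```

Real tools (Jenkins, GitLab CI, Azure Pipelines) add parallelism, artifacts, and retries on top, but the gating logic is the same.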
Posted 3 days ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Position Overview: We are looking for an experienced DevOps Engineer – ServiceNow who will be responsible for enabling automation, continuous integration, and seamless deployment within the ServiceNow ecosystem. This role demands strong technical knowledge of CI/CD pipelines, Infrastructure as Code (IaC), cloud platforms, and API integrations, as well as a working understanding of platform performance optimization and security basics.

Key Responsibilities: Set up and maintain CI/CD pipelines for ServiceNow applications. Automate deployments using Infrastructure as Code (IaC) tools such as Terraform or Ansible. Manage ServiceNow instances: performance tuning, system upgrades, and configuration. Integrate ServiceNow APIs with external tools for end-to-end automation. Handle cloud infrastructure optimization, scaling, and security configurations.

Technical Skills Required: ServiceNow Scripting & Configuration: JavaScript, Glide API, Flow Designer, UI Policies. CI/CD Tools: Jenkins, GitLab, Azure DevOps (any). Cloud Platforms (hands-on with any one): AWS, Azure, or GCP. Infrastructure as Code (IaC): Terraform, Ansible, CloudFormation. Containers & Orchestration (preferred but not mandatory): Docker, Kubernetes, Helm. API Integrations: REST, SOAP. Monitoring Tools (any exposure is beneficial): Splunk, ELK Stack, Prometheus. Security & Networking Basics: VPN, firewalls, access control.

Soft Skills: Strong troubleshooting and debugging abilities. Clear and effective communication skills. Ability to work collaboratively in Agile environments. High attention to detail and commitment to quality.
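On the API-integration point above: ServiceNow exposes a REST Table API with endpoints of the form /api/now/table/&lt;table&gt;. A small sketch of building such a request URL (the instance name and query values below are invented for illustration; real calls would also need authentication headers):

```python
from urllib.parse import urlencode

# Hedged sketch: construct a ServiceNow Table API URL. Only URL building is
# shown; no network call is made, and "dev12345" is a made-up instance.
def table_api_url(instance: str, table: str, **params) -> str:
    base = f"https://{instance}.service-now.com/api/now/table/{table}"
    return f"{base}?{urlencode(params)}" if params else base

url = table_api_url("dev12345", "incident",
                    sysparm_query="state=1", sysparm_limit=1)
print(url)
# → https://dev12345.service-now.com/api/now/table/incident?sysparm_query=state%3D1&sysparm_limit=1
```

From here, an integration script would issue the GET with a standard HTTP client and basic or OAuth credentials.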
Posted 3 days ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Job Title: DevOps Engineer

Candidate Specifications: Candidates should have 5-8 years of experience.

Job Description: Candidates should have an in-depth understanding of the tools used for specific DevOps functions and the roles they play, and should be able to design, develop and maintain CI/CD pipelines. Candidates should have good experience in AWS, Terraform, Jenkins and Kubernetes. Candidates should have good experience in consolidation (transferring instances into other accounts) and in implementing automated routines with Terraform. Candidates should have good knowledge of Linux, AIX, Java, Python, Helm and ArgoCD. Candidates should also have exposure to stakeholder management and team-handling skills. Candidates should have excellent written and verbal communication skills.

Skills Required

Role: DevOps Engineer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Bachelor's Degree
Employment Type: Full Time, Permanent

Key Skills: DEVOPS, KUBERNETES, TERRAFORM, JAVA, PYTHON

Other Information
Job Code: GO/JC/249/2025
Recruiter Name: Sheena Rakesh
Posted 3 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description Our data goes beyond data hub models to provide purpose-built data connection, transformation, storage, and integration with an organization’s Anaplan planning models. Requirements: Proven Expertise: Strong hands-on experience with CI/CD pipelines, Kubernetes, and major cloud platforms (AWS, Azure, GCP). Infrastructure as Code (IaC): Solid understanding of tools such as Terraform and Helm to define, deploy, and manage infrastructure. Programming Knowledge: Proficient in programming and scripting languages such as Python, Bash/Shell, or similar. Databases & Storage: Strong understanding of relational databases (Postgres, MySQL) and cloud-based storage solutions. Monitoring & Observability: Good knowledge of observability, logging, and monitoring tools such as Grafana, Prometheus, and Loki to ensure system reliability and performance. Collaboration: Excellent communication skills, with the ability to influence teams and stakeholders across the organization. Adaptability: Ability to quickly learn new tools, technologies, and practices to stay ahead of emerging trends in the industry. Technologies & Frameworks We Use: Programming/Scripting Languages: Python, Bash/Shell. CI/CD Tools: Jenkins, Terraform, Harness. Containerization & Orchestration: Kubernetes, Docker, Helm. Monitoring & Observability: Grafana, Loki, Prometheus. Cloud Platforms: AWS, Azure, GCP. Databases: Postgres, MySQL. Version Control & Artifact Repositories: GitHub, Artifactory. Operating Systems: Linux, macOS. Preferred Skills: Experience with microservices architecture and managing containerized applications. Familiarity with security practices in cloud-native environments. Knowledge of scaling and optimizing systems for high-volume traffic. Experience with modern cloud-native services (e.g., serverless, managed databases).
Job responsibilities Your Impact: As a Senior DevOps Engineer, you will play a crucial role in the design, deployment, and management of high-performance, scalable, and reliable software infrastructure. Working alongside a diverse, cross-functional, multi-geo team, you will have a significant impact on delivering high-quality software solutions that are both innovative and strategically critical to our business. Collaborate closely with stakeholders from design, product, and technical leads to drive the success of mission-critical projects. Work within an agile development process, thriving in an environment that promotes high levels of autonomy and accountability. Contribute to the success of multiple teams by leveraging your expertise across a broad range of technical domains. Maintain and optimize high-volume production infrastructure and applications, ensuring robustness and reliability. More About You: You are eager to learn and adapt to new technologies, frameworks, and methodologies to stay ahead of the curve. You have excellent problem-solving skills and the ability to architect simple, efficient, and scalable solutions from high-level direction and specifications. Your strong autonomy allows you to work with minimal direction while ensuring your work aligns with team goals and broader organizational objectives. You enjoy collaborating across a variety of domains, including databases, storage, networking, cloud infrastructure, continuous integration/deployment, and more. Your excellent communication skills allow you to influence and align teams across the organization, fostering collaboration and mutual understanding. Key Responsibilities: Infrastructure Management: Design, implement, and maintain scalable, secure, and high-availability infrastructure solutions across cloud environments (AWS, Azure, GCP). CI/CD & Automation: Build and maintain CI/CD pipelines that enable the seamless integration and delivery of applications. 
Automate manual processes to improve productivity and reliability. Cloud & Containerization: Work with Kubernetes, Helm, and Docker to build and maintain containerized environments and orchestrate cloud-based services. Observability & Monitoring: Implement observability, metrics collection, and logging solutions (Grafana, Loki, Prometheus) to ensure the performance and reliability of production systems. Collaboration & Leadership: Lead efforts to build a DevOps culture across teams, sharing knowledge and best practices, mentoring junior team members, and collaborating with stakeholders to drive delivery and innovation. Security & Compliance: Work to ensure that infrastructure and pipelines are secure, compliant, and meet industry best practices for data privacy and protection. Troubleshooting & Optimization: Take ownership of troubleshooting and resolving production issues, optimize system performance, and reduce downtime. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. 
Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
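On the observability stack this role implements (Grafana, Loki, Prometheus): Prometheus scrapes services over HTTP in a simple text exposition format, one sample per line. A toy renderer of that format (the metric name and labels are invented; real services would use an official client library rather than hand-formatting):

```python
# Hedged sketch of the Prometheus text exposition format:
#   metric_name{label="value",...} sample_value
# Labels are sorted for a stable, scrape-friendly output.
def render_metrics(metrics: dict) -> str:
    lines = []
    for name, (value, labels) in metrics.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

metrics = {
    "http_requests_total": (1027, {"method": "get", "code": "200"}),
}
print(render_metrics(metrics))
# → http_requests_total{code="200",method="get"} 1027
```

Exposing such a page at /metrics is all a scraper needs; dashboards and alerting rules in Grafana then query the stored samples.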
Posted 3 days ago
6.0 - 8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do BCG is looking for a Global IT Software Engineer Senior Manager to contribute to the development, deployment, and optimization of cutting-edge Generative AI (GenAI) tools and IT solutions. In this role, you will work closely with cross-functional teams, bringing technical expertise and hands-on problem-solving to ensure the successful delivery of innovative and scalable software solutions that support BCG’s business objectives. Lead the implementation and optimization of GenAI applications and IT tools to enhance productivity and operational efficiency. Collaborate with Product Owners, Tribe Leaders, and other stakeholders to align technology solutions with business requirements. Administer and configure AI-powered SaaS tools, ensuring secure deployment and smooth integration across the organization. Identify opportunities for enhancements to enterprise AI tools, focusing on improving efficiency and user satisfaction. Support proof-of-concept (POC) projects to explore and validate innovative technologies and solutions.
Continuously assess and optimize software architecture, focusing on scalability, reliability, and alignment with emerging trends. Document designs, development processes, and best practices to promote knowledge sharing and operational efficiency. Stay updated on emerging technologies such as LLMs, APIs, and cloud-based solutions, applying these innovations to drive impactful outcomes. What You'll Bring A bachelor’s degree in Computer Science, Engineering, or a related field. Advanced degrees are a plus. 6-8 years of professional experience in software development or IT operations, with increasing responsibility. Proven experience in implementing AI-driven applications and SaaS solutions. Strong technical proficiency in both frontend and backend development (e.g., React, Python, Java, Typescript). Experience with cloud technologies and infrastructure as code (e.g., AWS, Kubernetes, Terraform). Familiarity with software design patterns, architecture trade-offs, and integration best practices. Knowledge of DevOps practices, CI/CD pipelines, and automated testing frameworks. Experience And Skills (Nice To Have) Previous experience building a user-facing GenAI/LLM software application Previous experience with vectors and embeddings (pgvector, chromadb) Knowledge of LLM RAG/Agent core concepts and fundamentals Experience with Helm, Neo4J, GraphQL for efficient data querying for APIs, and CI/CD tools like Jenkins for automating deployments Other AWS Managed Services (RDS, Batch, Lambda, Fargate, Step Functions, SQS/SNS, etc.) FastAPI and NextJS experience (if we’re still using the latter) Websockets, Server-Side Events, Pub/Sub (RabbitMQ, Kafka, etc.) Who You'll Work With Squad members of a specific squad, led by a Product Owner. Tribe Leaders, Product Owners, and other Chapter Leads to align resources and priorities. Agile Coaches and Scrum Masters to embed Agile practices and principles into daily operations. 
Cross-functional IT teams to ensure alignment with BCG’s overall IT strategy and architecture. Additional info: You're good at driving the adoption and optimization of SaaS tools and AI-driven applications to meet organizational needs. Solving technical challenges and developing scalable, innovative solutions. Applying Change Management disciplines to ensure successful technology rollouts. Proactively identifying and implementing automation capabilities to reduce manual effort and errors. Collaborating effectively with diverse stakeholders, including technical teams and business leaders. Adapting to fast-paced environments and evolving priorities with high energy and autonomy. Leveraging expertise in GenAI, SaaS integrations, cloud technologies, and security to deliver impactful solutions. Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
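On the vectors-and-embeddings item above: stores like pgvector rank documents by similarity between embedding vectors, most commonly cosine similarity. A toy sketch with invented 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and a vector database would do this search with an index rather than a linear scan):

```python
import math

# Hedged sketch of the similarity search behind RAG retrieval: score each
# stored embedding against the query embedding and take the best match.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = [1.0, 0.0, 0.0]                      # invented query embedding
docs = {
    "doc-a": [0.9, 0.1, 0.0],                # nearly parallel to the query
    "doc-b": [0.0, 1.0, 0.0],                # orthogonal to the query
}
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # → doc-a
```

The retrieved document(s) would then be passed to the LLM as context, which is the core of the RAG pattern the posting mentions.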
Posted 3 days ago