
2769 Helm Jobs - Page 7

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior Full Stack Developer Position: Senior Full Stack Developer Location: Gurugram Relevant Experience Required: 8+ years Employment Type: Full-time About The Role We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization. Key Responsibilities Front-End Development Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React. Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI. Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly. Ensure cross-browser compatibility and optimize for performance and accessibility. Collaborate with designers to translate wireframes and prototypes into functional components. Back-End Development Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express. Design and implement microservices & event-driven architectures. Optimize server performance and ensure secure API integrations. Database & Data Management Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB). Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations. Implement sharding, clustering, caching, and replication strategies for scalability. Manage both transactional and analytical workloads efficiently. Real-Time Processing & Visualization Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams. Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE). Visualize large-scale data in real time for dashboards and BI applications. DevOps & Deployment Deploy applications on cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, Helm, and Terraform for scalable deployments. Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI. Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack. Good To Have AI & Advanced Capabilities Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search. Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings. Work on multimodal data processing (text, image, and video). Preferred Skills & Qualifications Core Stack Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI Back-End: Python (Django/DRF), Node.js/Express Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma) APIs: REST, GraphQL, gRPC State-of-the-Art & Advanced Tools Streaming: Apache Kafka, Apache Pulsar, Redis Streams Visualization: D3.js, Highcharts, Plotly, Deck.gl Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD Cloud: AWS Lambda, Azure Functions, Google Cloud Run Monitoring: Prometheus, Grafana, OpenTelemetry
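For a concrete flavor of the "live features using WebSockets & Server-Sent Events" responsibility above, here is a minimal sketch of an SSE endpoint, assuming a Django back end (one of the frameworks the posting names); the view name and payload are illustrative, not from the posting:

```python
# Hedged sketch: a minimal Server-Sent Events (SSE) stream in Django.
# The endpoint name and event payload are hypothetical.
import json
import time

from django.http import StreamingHttpResponse

def notification_stream(request):
    def event_source():
        while True:
            payload = {"ts": time.time(), "message": "heartbeat"}  # stand-in event
            # SSE wire format: "data: <json>\n\n"
            yield f"data: {json.dumps(payload)}\n\n"
            time.sleep(5)

    response = StreamingHttpResponse(event_source(), content_type="text/event-stream")
    response["Cache-Control"] = "no-cache"  # streams should not be cached
    return response
```

A production version would push real events (for example from Redis Streams or Kafka, which the posting also lists) instead of a timed heartbeat.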

Posted 5 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer Position: Senior MLOps Engineer Location: Gurugram Relevant Experience Required: 6+ years Employment Type: Full-time About The Role We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and Cloud-Native Deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads. Key Responsibilities MLOps & Machine Learning Deployment Design, implement, and maintain end-to-end ML pipelines from experimentation to production. Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks. Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD). Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection. Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs. Data Engineering & Integration Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data. Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster. Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation. Implement data clustering, partitioning, and sharding strategies for high availability and scalability. Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures). Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations). Cloud & Infrastructure Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run). Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow. Optimize for cost, latency, and scalability across distributed environments. Implement infrastructure as code (IaC) with Terraform or Pulumi. Real-Time ML & Advanced Capabilities Build real-time inference pipelines with low latency using gRPC, Triton Inference Server, or Ray Serve. Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search. Enable retrieval-augmented generation (RAG) pipelines for LLMs. Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization. Security, Monitoring & Observability Implement robust access control, encryption, and compliance with SOC2/GDPR/ISO27001. Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry. Ensure zero-downtime deployments with blue-green/canary release strategies. Manage audit trails and explainability for ML models. Preferred Skills & Qualifications Core Technical Skills Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus. MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC. Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt. Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB. Vector Databases: Pinecone, Weaviate, Milvus, Chroma. Visualization: Plotly Dash, Superset, Grafana. Tech Stack Orchestration: Kubernetes, Helm, Argo Workflows, Prefect. Infrastructure as Code: Terraform, Pulumi, Ansible. Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS). Model Optimization: ONNX, TensorRT, Hugging Face Optimum. Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams. Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
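To make the "model training, evaluation, versioning" duties concrete, a minimal MLflow tracking sketch (MLflow is one of the frameworks the posting names); the experiment name, parameter, and metric are illustrative:

```python
# Hedged sketch: recording one training run with MLflow.
# Experiment name, parameter, and metric values are hypothetical.
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)   # hyperparameter for this run
    mlflow.log_metric("val_auc", 0.91)      # evaluation result for this run
    # mlflow.sklearn.log_model(model, "model") would also version the artifact
```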

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform. Key Responsibilities Infrastructure Setup & Management Design and implement cloud-native architectures on AWS and be able to manage on-premise open source environments when required. Automate infrastructure provisioning using tools like Terraform or CloudFormation. Maintain scalable environments for dev, staging, and production. CI/CD & Release Management Build and maintain CI/CD pipelines for backend, frontend, and AI workloads. Enable automated testing, security scanning, and artifact deployments. Manage configuration and secret management across environments. Containerization & Orchestration Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s). Implement service mesh, auto-scaling, and rolling updates. Monitoring, Security, and Reliability Implement observability (logging, metrics, tracing) using open source or cloud tools. Ensure security best practices across infrastructure, pipelines, and deployed services. Troubleshoot incidents, manage disaster recovery, and support high availability. Model DevOps / MLOps Set up pipelines for AI/ML model deployment and monitoring (LLMOps). Support data pipelines, vector databases, and model hosting for AI applications. Required Skills and Qualifications Cloud & Infra Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc. Ability to set up and manage on-premise or hybrid environments using open source tools. DevOps & Automation Hands-on experience with Terraform / CloudFormation. Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD. Containerization & Orchestration Expertise with Docker and Kubernetes (EKS or self-hosted). Familiarity with Helm charts, service mesh (Istio/Linkerd). Monitoring / Observability Tools Experience with Prometheus, Grafana, ELK/EFK stack, CloudWatch. Knowledge of distributed tracing tools like Jaeger or OpenTelemetry. Security & Compliance Understanding of cloud security best practices. Familiarity with tools like Vault, AWS Secrets Manager. Model DevOps / MLOps Tools (Preferred) Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B). Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation. Preferred Qualifications Knowledge of cost optimization for cloud and hybrid infrastructures. Exposure to infrastructure as code (IaC) best practices and GitOps workflows. Familiarity with serverless and event-driven architectures. Education Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience). What We Offer Opportunity to work on modern cloud-native systems and AI-powered platforms. Exposure to hybrid environments (AWS and open source on-prem). Competitive salary, benefits, and growth-oriented culture.
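As one small illustration of the "configuration and secret management" duty above, a hedged boto3 sketch for reading a secret from AWS Secrets Manager (a tool the posting names); the secret ID and region are hypothetical:

```python
# Hedged sketch: fetching credentials from AWS Secrets Manager with boto3.
# Secret ID and region are hypothetical placeholders.
import json

import boto3

def get_db_credentials(secret_id: str, region: str = "ap-south-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_id)
    # SecretString holds the JSON document stored in the secret
    return json.loads(response["SecretString"])

creds = get_db_credentials("prod/app/db")  # hypothetical secret name
```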

Posted 5 days ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate to work on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional (performance, stress, soak) testing, security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions. About the role: Champion a quality-first mindset throughout the entire software development lifecycle. Develop and implement comprehensive test strategies and plans for a complex hybrid application, considering the unique challenges of both on-premise and cloud deployments. Collaborate with architects and development teams to understand system architecture, design, and new features to define optimal test approaches. Peruse the requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Design, develop, and maintain robust, scalable, and high-performance automated test frameworks and tools from scratch, utilizing industry-standard programming languages (e.g., Python, Java, Go). Manage and maintain test environments, including setting up and configuring both on-premise and cloud instances for testing. Execute new feature and regression cases manually, as needed for a product release. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is essential. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope, segregate cases with high ROI from low-impact areas to improve testing efficiency. Analyze test results, identify defects, and work closely with development teams to ensure timely resolution. Willing to explore and increase understanding of Cloud/on-prem infrastructure. About you: 1-4 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality. Solid fundamentals in any programming language (preferably Python or Go) and OOPS concepts. Also, hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must. RESTful API testing using tools such as Postman or similar is a must. Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required. Exposure to Docker, Helm, GitOps is an added advantage. Extensive experience designing, developing, and maintaining automated test frameworks (e.g., Playwright, Selenium, Cypress, TestNG, JUnit, Pytest). Experience with API testing tools and frameworks (e.g., Postman, Rest Assured, OpenAPI/Swagger). Good foundational knowledge in working on Linux-based systems. This includes setting up git repos, user management, network configurations, use of package managers, etc. Hands-on experience with functional and non-functional testing, such as performance and load, is desirable. Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have. Understanding of cyber security concepts would be helpful.
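Since the posting calls RESTful API testing a must, here is a minimal pytest-plus-requests sketch of the kind of smoke test it implies; the base URL and response shape are hypothetical:

```python
# Hedged sketch: an API smoke test with pytest and requests.
# The service URL and the expected JSON body are hypothetical.
import requests

BASE_URL = "https://api.example.internal"  # hypothetical service under test

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```

Run with `pytest` so the function is discovered and executed as a test.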

Posted 5 days ago

Apply

6.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Job Title: Senior SDET Role Overview: Trellix is looking for SDETs who are self-driven and passionate to work on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more. About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency. Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous. Execute, monitor, and debug automation runs. Author automation code to improve coverage across the board. Lead fellow team members and own aspects of the product end-to-end, while thinking of all aspects including enhancements, automation, performance, and others. Write automation code to reduce repetitive tasks and improve regression coverage. About you: 6-9 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, functionality, and nitty-gritty. Solid fundamentals in any programming language (preferably Python) and OOPS concepts. Also, hands-on experience with CI/CD using Jenkins or similar is a must. RESTful API testing using tools such as Postman or similar is desired. Knowledge of creating design patterns is desirable. Strong foundational knowledge in working on Linux-based systems and their administration is needed. This includes setting up git repos, user management, network configurations, use of package managers, etc. Proficiency with Kubernetes and AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is highly desired. Exposure to Docker, Helm, Argo CD is an added advantage. Exposure to non-functional testing, such as performance and load, is desired. Being hands-on with tools such as Locust and/or JMeter would be a huge advantage. Any level of proficiency with Prometheus, Grafana, service metrics, and such is desired. Understanding of Endpoint security concepts around Endpoint Detection and Response (EDR) and hands-on experience working on SaaS-based applications and platforms would be a plus. Proven track record of taking ownership and driving aspects of product enhancements end-to-end.
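Because the posting highlights Locust for load testing, a minimal Locust user class as a sketch; the target route is hypothetical:

```python
# Hedged sketch: a tiny Locust load test. The endpoint is hypothetical.
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time between requests

    @task
    def list_detections(self):
        # hypothetical EDR-style endpoint; replace with a real route
        self.client.get("/api/v1/detections")
```

Launched with `locust -f locustfile.py --host https://target.example`, this ramps up simulated users against the named route.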

Posted 5 days ago

Apply

9.0 - 14.0 years

27 - 32 Lacs

Gurugram, Bengaluru

Hybrid

Perform the analysis of the existing data platform solution and its interfacing customers. Perform the analysis of alternative technologies if needed. Conduct impact analysis on the product toolchain to integrate the deployment of each layer.

Posted 5 days ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Bengaluru

Work from Office

About the Role: Lead the design, development, and deployment of large-scale software systems in Python and Go. Understanding of data pipelines and event/log processing, e.g., syslog, JSON, Protobuf/MsgPack, gRPC, Apache Kafka, Pulsar, Redpanda, RabbitMQ, etc. Own end-to-end product features, from initial design through to production, with a focus on high-quality, maintainable code. Architect scalable, reliable, and secure software solutions with a focus on performance and usability. Contribute to system design decisions, optimizing for scalability, availability, and performance. Mentor and guide junior engineers, providing technical leadership and fostering a culture of excellence. Integrate with CI/CD pipelines, continuously improving and optimizing them for faster and more reliable software releases. Conduct code reviews to ensure best practices in coding, testing, and design patterns. Troubleshoot, debug, and resolve complex technical issues in production and development environments. About You: 8+ years of professional software development experience. Expertise in Golang and Python and design patterns. Hands-on experience with system design, architecture, and scaling of complex systems. Strong exposure to CI/CD practices and tools (e.g., ArgoCD, GitHub Actions). Deep knowledge of Kubernetes, e.g., CRDs, Helm, Kustomize, and the design and implementation of k8s Operators. Familiarity with infrastructure as code tools (e.g., Terraform, CloudFormation). Good understanding of networking and storage, e.g., load balancers, proxies. Experience working in cloud environments (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes). Proficient in database design and optimization, with experience in both SQL and NoSQL databases (e.g., OpenSearch, ClickHouse, Apache Iceberg). Proven experience in Agile methodologies and working in cross-functional teams. Excellent problem-solving skills with the ability to break down complex problems into manageable solutions.
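To ground the event/log-processing requirement, a hedged sketch of publishing JSON events to Kafka using the kafka-python client; the broker address, topic, and payload are illustrative, and the posting does not prescribe this particular client:

```python
# Hedged sketch: producing JSON events to Kafka with kafka-python.
# Broker address, topic, and payload are hypothetical.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("syslog-events", {"severity": "info", "msg": "service started"})
producer.flush()  # block until the broker has acknowledged the event
```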

Posted 5 days ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate to work on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional (performance, stress, soak) testing, security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions. About the role: Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas. Execute new feature and regression cases manually, as needed for a product release. Identify critical issues and communicate them effectively in a timely manner. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth and aid quick turnaround with bug fixing, is an essential trait for this job. Automate using popular frameworks suitable for backend code, APIs, and frontend. Hands-on experience with automation programming languages (Python, Go, Java, etc.). Execute, monitor, and debug automation runs. Willing to explore and increase understanding of Cloud/on-prem infrastructure. About you: 4-10 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required. Show ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality. Solid fundamentals in any programming language (preferably Python or Go). Sound knowledge of any of the popular automation frameworks such as Selenium, Playwright, Postman, pytest, etc. Hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must. RESTful API testing using tools such as Postman or similar is a must. Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is an added advantage. Exposure to Kubernetes, Docker, Helm, GitOps is desired. Strong foundational knowledge in working on Linux-based systems. Hands-on experience with non-functional testing, such as performance and load, is desirable. Some proficiency with Prometheus, Grafana, service metrics, and analysis is highly desirable. Understanding of cyber security concepts would be helpful.
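As a sketch of the UI-automation skills listed (Playwright among them), a minimal Playwright check; the URL and expected title are hypothetical:

```python
# Hedged sketch: a browser check with Playwright's sync API.
# The console URL and page title are hypothetical.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://console.example.internal/login")  # hypothetical app
    assert "Login" in page.title()
    browser.close()
```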

Posted 5 days ago

Apply

4.0 years

20 - 30 Lacs

Noida, Uttar Pradesh, India

On-site

About Us CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance. Our Values We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community. Equal Opportunity Statement CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Job Title: Senior DevOps Engineer Location: Noida (Hybrid) The Opportunity We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams. Mandatory Skills required: GCP, DevOps, Terraform, Kubernetes, Docker, CI/CD, GitHub Actions, Helm Charts. Secondary Skills required: Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD); knowledge of additional cloud platforms (e.g., AWS, Azure); certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer). Required Skills And Experience 4+ years of experience in DevOps, infrastructure automation, or related fields. Advanced expertise in Terraform for infrastructure as code. Solid experience with Helm for managing Kubernetes applications. Proficient with GitHub for version control, repository management, and workflows. Extensive experience with Kubernetes for container orchestration and management. In-depth understanding of Google Cloud Platform (GCP) services and architecture. Strong scripting and automation skills (e.g., Python, Bash, or equivalent). Excellent problem-solving skills and attention to detail. Strong communication and collaboration abilities in agile development environments. Preferred Qualifications Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD). Knowledge of additional cloud platforms (e.g., AWS, Azure). Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer). Skills:- DevOps, Google Cloud Platform (GCP), Terraform, Kubernetes, Docker, Helm, Jenkins, and GitHub
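One way the Helm-centred deployment duties could look in practice is scripting a release upgrade from Python; a hedged sketch, with release, chart, and namespace names invented for illustration:

```python
# Hedged sketch: driving `helm upgrade --install` from Python.
# Release, chart path, namespace, and values file are hypothetical.
import subprocess

def deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "--values", values_file,
            "--wait",  # block until resources are ready
        ],
        check=True,  # raise CalledProcessError if helm fails
    )

deploy("web-api", "./charts/web-api", "staging", "values-staging.yaml")
```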

Posted 5 days ago

Apply

6.0 - 9.0 years

10 - 11 Lacs

Bengaluru

Work from Office

Role & responsibilities Job Title: Senior DevOps Location: Marathalli, Bangalore Job Description: We are seeking a motivated DevOps Engineer with hands-on experience in cloud administration, container orchestration, and CI/CD pipeline automation. The ideal candidate will demonstrate a strong commitment to task ownership, a proactive approach to problem-solving, and the ability to learn and adapt quickly to evolving technologies. This role will play a key part in ensuring reliable, scalable infrastructure and efficient software delivery processes. Key Responsibilities: Manage and administer cloud environments using Azure or AWS, ensuring security, scalability, and performance. Build, deploy, and maintain containerized applications using Kubernetes and Docker. Design, implement, and optimize CI/CD pipelines to streamline software delivery and deployment. Collaborate with development and operations teams to clarify requirements and drive continuous improvement. Take accountability for assigned tasks, proactively identifying and resolving impediments to meet sprint goals. Apply conceptual and visual thinking to document workflows, system architectures, and process improvements. Adapt quickly to new tools, technologies, and methodologies to enhance operational efficiency. Contribute to sprint velocity by completing assigned story points and maintaining consistent productivity. Required Skills: Proven experience in Azure or AWS cloud administration. Strong hands-on skills with Kubernetes and Docker container orchestration and management. Experience in building and maintaining CI/CD pipelines. Ability to understand the broader impact of DevOps tasks on users, teams, and systems. Proactive mindset with strong clarity and initiative in task execution. Ability to quickly learn new technologies and adapt to changing requirements. Preferred attributes: Experience with PowerShell or Shell scripting (optional). Knowledge of incident response metrics such as Mean Time to Resolve (MTTR). Familiarity with best practices in code quality, documentation, and automated testing. Nice to have: Clear communication and effective collaboration within cross-functional teams. Accountability and integrity in owning work and outcomes. Proactive problem-solving and taking initiative without needing close supervision. Educational Qualification: Bachelor's degree (BE/B.Tech) in Computer Science or equivalent. Working mode: Working from office - Monday to Thursday; Friday is optional. Interested candidates, send an updated CV to nandini@sapienceminds.com
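Since the posting mentions MTTR as an incident-response metric, a small worked sketch of how it is computed; the incident timestamps are made up:

```python
# Hedged sketch: Mean Time to Resolve (MTTR) over a set of incidents.
# The incident data below is fabricated for illustration.
from datetime import datetime

incidents = [  # (opened, resolved) pairs
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 11, 30)),   # 150 min
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 14, 45)),  # 45 min
]

total_seconds = sum(
    (resolved - opened).total_seconds() for opened, resolved in incidents
)
mttr_minutes = total_seconds / len(incidents) / 60
print(f"MTTR: {mttr_minutes:.1f} minutes")  # (150 + 45) / 2 = 97.5
```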

Posted 5 days ago

Apply

6.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

6 to 9 years of experience, with a minimum of 2+ years of experience with Kubernetes in designing, building, and managing large-scale production application infrastructure. Kubernetes, GCP, Linux. Email: Mahalakshmi.a@livecjobs.com *JOB IN PAN INDIA* Required Candidate profile: Kubernetes, GKE, Helm, OpenShift, Terraform

Posted 5 days ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

Pune

Work from Office

About The Role: Job Title: DevOps Engineer, AVP Location: Pune, India Role Description As a DevOps Engineer you will work as part of a multi-skilled agile team, dedicated to improved automation and tooling to support continuous delivery. Your team will work hard to foster increased collaboration and create a DevOps culture. You will make a crucial contribution to our efforts to be able to release our software more frequently, efficiently and with less risk. What we'll offer you 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Accident and term life insurance Your Key Responsibilities Work with other engineers to support our adoption of continuous delivery, automating the building, packaging, testing and deployment of applications. Create the tools required to deploy and manage applications effectively in production, with minimal manual effort. Help teams to adopt modern delivery practices, such as extensive use of automated testing, continuous integration, more frequent releases, blue/green deployment, canary releases, etc. Configure and manage code repositories, continuous builds, artifact repositories, cloud platforms and other tools. Contribute towards a culture of learning and continuous improvement within your team and beyond. Share skills and knowledge in a wide range of topics relating to DevOps and software delivery. Your skills and experience Good knowledge of Spring Boot application build and deployment. Setting up applications on any cloud environment (GCP will be a plus). Extensive experience with configuration management tools: Ansible, Terraform, Docker, Helm, or similar tools. Hands-on experience with tools like uDeploy for automation. Extensive experience in understanding networking concepts, e.g., firewalls, load balancing, data transfer. Extensive experience building CI/CD pipelines using TeamCity or similar. Experience with a range of tools and techniques that can be used to make software delivery faster and more reliable, such as experience in creating and maintaining automated builds using tools such as TeamCity, Jenkins, and Bamboo, and using repositories such as Nexus and Artifactory to manage and distribute binary artefacts. Good knowledge of build and scripting tools, such as Maven, shell, or Python. Good understanding of git version control systems, branching and merging, etc. Good understanding of Release Management and Change Management concepts. Experience working in an agile team, practicing Scrum, Kanban, XP, or SAFe. How we'll support you

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Profile: Confluent Developer/Admin Experience: 3 to 5 Years Budget: As per Company Norms Location: Gurugram/ Noida/ New Delhi Job Summary: We are looking for a skilled Confluent Developer with hands-on experience in Apache Kafka and Confluent Platform to develop and maintain scalable, real-time data streaming solutions. The ideal candidate will be responsible for designing, implementing, and managing event-driven architecture and microservices that rely on high-throughput messaging systems. Key Responsibilities: Design and develop Kafka producers, consumers, and stream processing applications using Kafka Streams, KSQL, or Kafka Connect. Build and manage data pipelines using Confluent tools (Schema Registry, Kafka Connect, KSQL, etc.). Integrate Confluent Platform with external systems (databases, APIs, cloud storage). Ensure high availability, scalability, and performance of Kafka clusters. Monitor, troubleshoot, and optimize Kafka-based applications. Collaborate with software engineering, data engineering, and DevOps teams. Develop and maintain CI/CD pipelines for Kafka applications. Apply best practices in schema design, message serialization (Avro, Protobuf), and versioning. Required Skills & Qualifications: Strong experience with Apache Kafka and Confluent Platform. Proficiency in Java, Scala, or Python. Experience with Kafka Streams, KSQL, Kafka Connect, and Schema Registry. Solid understanding of distributed systems and event-driven architecture. Experience with message serialization formats like Avro, Protobuf, or JSON. Familiarity with DevOps tools (Docker, Kubernetes, Helm) and cloud platforms (AWS, Azure, GCP). Knowledge of monitoring tools (Prometheus, Grafana, Confluent Control Center). Bachelor’s degree in Computer Science or related field. Preferred Qualifications: Confluent Certified Developer for Apache Kafka (CCDAK) or similar certifications. Experience with data engineering tools (Spark, Flink, Hadoop). Familiarity with real-time analytics and data lakes. Contributions to open-source Kafka projects.
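To illustrate the consumer side of the Kafka work described, a hedged sketch using the confluent-kafka Python client (Python being one of the languages the posting accepts); broker, group, and topic names are illustrative:

```python
# Hedged sketch: a Kafka consumer poll loop with confluent-kafka.
# Broker address, group id, and topic are hypothetical.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "payments-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments"])

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1 second for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.key(), msg.value())  # hand off to real processing here
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```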

Posted 5 days ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role : Integration Engineer Project Role Description : Provide consultative Business and System Integration services to help clients implement effective solutions. Understand and translate customer needs into business and technology solutions. Drive discussions and consult on transformation, the customer journey, functional/application designs and ensure technology and business solutions represent business requirements. Must have skills : Infrastructure As Code (IaC) Good to have skills : Google Cloud Storage, Microsoft Azure Databricks, Ansible on Microsoft Azure. Minimum 5 year(s) of experience is required. Educational Qualification : 15 years full time education Summary: As an Integration Engineer, you will provide consultative Business and System Integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes. Roles & Responsibilities:- Expected to be an SME, collaborate and manage the team to perform.- Responsible for team decisions.- Engage with multiple teams and contribute on key decisions.- Provide solutions to problems for their immediate team and across multiple teams.- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.- Develop and maintain documentation related to integration processes and solutions.- Infrastructure as Code (IaC): Knowledge of tools like Terraform, Terraform linkage, Helm, Ansible, Ansible Tower dependency and package management.- Broad knowledge of operating systems.- Network management knowledge and understanding of network protocols, configuration, and troubleshooting. Proficiency in configuring and managing network settings within cloud platforms.- Security: Knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.- Expert proficiency in Linux CLI.- Monitoring of the environment from a technical perspective.- Monitoring the costs of the development environment. Professional & Technical Skills: - Must To Have Skills: Proficiency in Infrastructure As Code (IaC).- Good To Have Skills: Experience with Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks.- Strong understanding of cloud infrastructure and deployment strategies.- Experience with automation tools and frameworks for infrastructure management.- Familiarity with version control systems and CI/CD pipelines.- Solid understanding of Data Modelling, Data warehousing and Data platforms design.- Working knowledge of databases and SQL.- Proficient with version control such as Git, GitHub, or GitLab.- Experience supporting BAT teams and BAT test environments.- Experience with workflow and batch scheduling; Control-M and Informatica experience an added advantage.- Good know-how of Financial Markets. Know-how of Clearing, Trading and Risk business processes will be an added advantage.- Know-how of Java, Spark & BI reporting will be an added advantage.- Know-how of cloud platforms and affinity towards modern technology an added advantage.- Experience in CI/CD pipelines and exposure to DevOps methodologies will be considered an added advantage. Additional Information:- The candidate should have minimum 5 years of experience in Infrastructure As Code (IaC).- This position is based in Hyderabad.- A 15 years full time education is required. Qualification 15 years full time education

Posted 5 days ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: API & Services Engineer Experience: 5-10 years Type: Full-time Employment About the Role We're looking for a passionate and experienced API & Services Engineer to design and develop high-performance backend systems using Python (FastAPI/Flask), ensure scalable database interactions, and manage deployments on cloud platforms like Azure or GCP. This role also includes CI/CD automation, containerization, and exposure to frontend frameworks. Key Responsibilities Design, build, and maintain RESTful APIs using FastAPI or Flask. Manage data operations and schema design using MySQL or PostgreSQL. Deploy and manage services on Azure or Google Cloud Platform (GCP). Implement and maintain CI/CD pipelines for smooth automated deployments. Write unit and integration tests using standard Python test frameworks. Containerize applications with Docker and orchestrate them using Kubernetes and Helm. Collaborate with frontend developers (React.js) and contribute to API contracts. Must-Have Skills Python (FastAPI or Flask) SQL Databases (MySQL/PostgreSQL) Cloud Platforms: Azure or GCP CI/CD Pipelines: GitHub Actions, GitLab CI, etc. Unit Testing Frameworks: Pytest or equivalent Containers & Orchestration: Docker, Kubernetes, Helm Good-to-Have Skills Frontend Understanding: React.js for API integration Security Best Practices in API design and deployment Performance tuning and observability of microservices
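A minimal sketch of the FastAPI service shape the posting describes; the routes and model fields are illustrative:

```python
# Hedged sketch: a small FastAPI app with a health route and a typed payload.
# Routes and fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class OrderIn(BaseModel):
    sku: str
    quantity: int

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/orders", status_code=201)
def create_order(order: OrderIn) -> dict:
    # persistence against MySQL/PostgreSQL would happen here
    return {"sku": order.sku, "quantity": order.quantity}
```

Served with `uvicorn app:app`, the typed model gives request validation for free.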

Posted 5 days ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Infrastructure As Code (IaC) Good to have skills : Microsoft Azure Architecture, Google Cloud Platform Administration. Minimum 5 year(s) of experience is required. Educational Qualification : 15 years full time education Summary: As an Integration Engineer, you will provide consultative Business and System Integration services to assist clients in implementing effective solutions. Your typical day will involve engaging with clients to understand their needs, facilitating discussions on transformation, and ensuring that the technology and business solutions align with their requirements. You will work collaboratively with various teams to translate customer needs into actionable plans, driving the customer journey and application designs to achieve optimal outcomes. Roles & Responsibilities:- Expected to be an SME, collaborate and manage the team to perform.- Responsible for team decisions.- Engage with multiple teams and contribute on key decisions.- Provide solutions to problems for their immediate team and across multiple teams.- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.- Develop and maintain documentation related to integration processes and solutions.- Infrastructure as Code (IaC): Knowledge of tools like Terraform, Terraform linkage, Helm, Ansible, Ansible Tower dependency and package management.- Broad knowledge of operating systems.- Network management knowledge and understanding of network protocols, configuration, and troubleshooting. Proficiency in configuring and managing network settings within cloud platforms.- Security: Knowledge of cybersecurity principles and practices, implementing security frameworks that ensure secure workloads and data protection.- Expert proficiency in Linux CLI.- Monitoring of the environment from a technical perspective.- Monitoring the costs of the development environment. Professional & Technical Skills: - Must To Have Skills: Proficiency in Infrastructure As Code (IaC).- Good To Have Skills: Experience with Hitachi Data Systems (HDS), Google Cloud Storage, Microsoft Azure Databricks.- Strong understanding of cloud infrastructure and deployment strategies.- Experience with automation tools and frameworks for infrastructure management.- Familiarity with version control systems and CI/CD pipelines.- Solid understanding of Data Modelling, Data warehousing and Data platforms design.- Working knowledge of databases and SQL.- Proficient with version control such as Git, GitHub, or GitLab.- Experience supporting BAT teams and BAT test environments.- Experience with workflow and batch scheduling; Control-M and Informatica experience an added advantage.- Good know-how of Financial Markets. Know-how of Clearing, Trading and Risk business processes will be an added advantage.- Know-how of Java, Spark & BI reporting will be an added advantage.- Know-how of cloud platforms and affinity towards modern technology an added advantage.- Experience in CI/CD pipelines and exposure to DevOps methodologies will be considered an added advantage. Additional Information:- The candidate should have minimum 5 years of experience in Infrastructure As Code (IaC).- This position is based in Hyderabad.- A 15 years full time education is required. Qualification 15 years full time education

Posted 5 days ago

Apply

7.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Remote

Lead Golang Engineer (Go, AWS/GCP/Any cloud, Kubernetes, Microservices, Kafka, Docker) Location: WFH from India (EU Shift) Domain: IT Services (Not mandatory) Exp: 8+ yrs total including 6+ yrs relevant exp in Golang Mode: Permanent under us Annual Salary: INR 20-30 LPA Notice: Immediate to 30 days Interview rounds: 4 (all virtual) What will you do? Design, develop, and support backend systems serving endpoint security. Help us with initiatives around moving our services to a new deployment model. Tuning and adjusting our services to work with new CPU architectures. Improving the observability for our services. Responsibilities: Cross-Team Collaboration: Work closely with product, validation, and front-end engineering teams to deliver and maintain high-quality features. Code Quality and Maintenance: Write clean, maintainable code. Stay up-to-date with the latest advancements in backend technologies and security best practices. Innovation and Creativity: Bring creativity to the table. Explore new solutions and technologies to improve our product continuously. Lead code reviews, establish coding standards and best practices. Partner with Product, QA, DevOps, and Security teams to align on requirements, release timelines, and operational concerns. Requirements: Hands-on experience with Python and/or Go, or similar. Ability to quickly dive into new products and understand their inner workings. Self-driven individual. Experience with Docker, Helm & Kubernetes. Familiarity with AWS and/or other cloud platforms. Your main tools: Python (Flask, SQLAlchemy, Marshmallow) and Golang (which we're using for new development); AWS & GCP; PostgreSQL, Redis, Kafka; Kubernetes, Docker; GitHub, etc.

Posted 5 days ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

BMC is the leader in the AIOps domain and continuously transforms our customers' landscapes through the Autonomous Digital Enterprise (ADE). BMC was founded on a deeply felt principle: help customers maximize their technology and drive better business outcomes. We do that by connecting and optimizing digital operations to create an AI-powered engine for continuous innovation. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: As a Product Developer-II, you'll work on complex problems where analyzing situations or data requires an in-depth evaluation of various factors. You'll manage a varied workload of multiple ongoing tasks, including new function development and product maintenance. You'll communicate with various teams to help resolve customer issues. As part of the team, it will be your responsibility to develop and debug software products. In this role, you'll develop and maintain SaaS applications as part of the IA team to ensure seamless user experiences. To ensure you're set up for success, you will bring the following skillset & experience: 5-8 years of experience as a Full Stack Developer, preferably in enterprise software companies. Experience in designing, building, and maintaining complex microservices-based applications using NodeJS, Java, React/Angular, etc. In-depth proficiency in NodeJS and Java (including Spring Boot framework). Experience in front-end technologies, such as HTML, CSS, JavaScript, and TypeScript. Proficiency in modern frameworks like React & Angular. Collaborate with UI/UX designers to implement visually appealing web applications based on design concepts. Utilize Kubernetes and containerization tools (e.g., Docker, Helm) for microservices deployment, scaling, and management in a cloud-native environment. Design and optimize databases (e.g., MySQL, PostgreSQL) for efficient data storage and retrieval in a microservices environment. Conduct thorough testing and debugging to ensure high-quality, bug-free software. Use version control systems (e.g., Git) to manage the codebase and collaborate effectively with team members. Work in an agile fashion within a Scrum team to meet deadlines and deliver high-quality features. Foster effective collaboration with other teams for joint feature development. Strong analytical and problem-solving abilities to tackle complex technical challenges and provide effective solutions. Excellent communication and interpersonal skills.

Posted 5 days ago

Apply

7.0 years

0 Lacs

Thiruvananthapuram

On-site

Required Qualifications & Skills: 7+ years in DevOps, SRE, or Infrastructure Engineering. Strong expertise in Cloud (AWS/GCP/Azure) & Infrastructure-as-Code (Terraform, CloudFormation). Proficient in Docker & Kubernetes. Hands-on with CI/CD tools & scripting (Bash, Python, or Go). Strong knowledge of Linux, networking, and security best practices. Experience with monitoring & logging tools (ELK, Prometheus, Grafana). Familiarity with GitOps, Helm charts & automation. Job Types: Full-time, Permanent Application Question(s): How many years of experience do you have working with cloud platforms (AWS, GCP, or Azure)? Have you implemented monitoring/logging using Prometheus, Grafana, ELK Stack, or Datadog? Current monthly salary? Lowest expected monthly salary? How early can you join? Experience: DevOps: 5 years (Required) Azure: 5 years (Required) Kubernetes: 4 years (Required) Terraform: 4 years (Required) Location: Thiruvananthapuram, Kerala (Required) Work Location: In person Speak with the employer +91 9072049595
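To make the Prometheus/Grafana requirement concrete, a hedged sketch exposing a custom metric with the official Python client; the metric name and port are illustrative:

```python
# Hedged sketch: exposing a custom counter for Prometheus to scrape.
# Metric name, port, and the simulated work loop are hypothetical.
import time

from prometheus_client import Counter, start_http_server

DEPLOYS = Counter("deploys_total", "Number of deployments performed")

if __name__ == "__main__":
    start_http_server(8000)  # scrape target served at :8000/metrics
    while True:
        DEPLOYS.inc()        # stand-in for real deployment work
        time.sleep(60)
```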

Posted 5 days ago

Apply

7.0 - 10.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Company Description Watchyourhealth.com is a technology company that enables and empowers partners to engage their clients through smart technology. We use technology innovations designed to optimize savings and efficiency from the current insurance industry model. Our goal is to disrupt and innovate the insurance industry by offering ultra-customized tools. We use new streams of data from internet-enabled devices to dynamically price premiums according to observed behavior. http://presentation.watchyourhealth.com/ Head of Product 📍 Location: Thane West, Mumbai (On-site) 🕒 Experience: 7 to 10 years 💼 Department: Technology Job Summary: We are looking for a highly driven and strategic Head of Product to lead the design, development, and delivery of scalable digital products. The ideal candidate must have strong full-stack technical knowledge, excellent English communication skills, and proven experience in external client interaction and project delivery. This role is crucial in shaping our product roadmap, ensuring cross-functional collaboration, and delivering high-quality solutions that meet both client expectations and business goals. Key Responsibilities: Define and own the product vision, strategy, and roadmap. Lead the entire product development lifecycle – from ideation and planning to deployment and post-launch optimization. Communicate directly with external clients to understand requirements, manage expectations, and ensure successful product delivery. Guide and mentor cross-functional teams including engineering, design, QA, and business functions. Design and review full-stack architectures using Angular, .NET Core, and Node.js, with a working understanding of Azure. Drive Agile processes using tools like JIRA, ensuring on-time and high-quality releases. Analyze product performance, user feedback, and industry trends to drive continuous improvement. Conduct demos, product walkthroughs, and stakeholder updates, both internally and with clients. Must-Have Skills and Qualifications: 8-9 years of experience in product management or technology leadership roles. Strong hands-on understanding of full-stack technologies: Angular, .NET Core, Node.js. Working knowledge of the Azure cloud platform (basics acceptable). Proven success in external client communication and delivery management. Excellent English communication skills – written and verbal. In-depth experience with Agile/Scrum methodologies and JIRA. Ability to lead cross-functional teams, manage multiple projects, and drive outcomes. Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Good to Have: Familiarity with tools like Figma. Experience in SaaS or mobile-first product development. Strong UI/UX collaboration background. Why Join Us? Be at the helm of product innovation and delivery. Work directly with global clients and make a visible impact. Lead a passionate, collaborative, and growing product team. Opportunity to own and shape products from the ground up. 📩 Ready to lead, deliver, and innovate? Apply now! #Hiring #HeadOfProduct #ProductLeadership #ClientDelivery #ProductJobs #MumbaiJobs #Agile #FullStackProduct #NowHiring #DigitalProducts

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer In this role you will: Design, develop, and maintain scalable backend services using Python and frameworks like Django, Flask, or FastAPI. Build responsive and interactive UIs using React.js, Vue.js, or Angular. Develop and consume RESTful APIs, and contribute to API contract definitions, including Gen AI/Open AI integration where applicable. Collaborate closely with UI/UX designers, product managers, and fellow engineers to translate business requirements into technical solutions. Ensure performance, security, and responsiveness of web applications across platforms and devices. Write clean, modular, and testable code following industry best practices and participate in code reviews. Architect, build, and maintain distributed systems and microservices, ensuring maintainability and scalability. Implement and manage CI/CD pipelines using tools such as Docker, Kubernetes (Helm), Jenkins, and Ansible. Use observability tools such as Grafana and Prometheus to monitor application performance and troubleshoot production issues. Requirements To be successful in this role, you should meet the following requirements: 5+ years of experience in full-stack development. Strong proficiency in Python, with hands-on experience using Django, Flask, or FastAPI. Solid front-end development skills in HTML5, CSS3, and JavaScript, with working knowledge of frameworks like React, Vue, or Angular. Proven experience designing and implementing RESTful APIs and integrating third-party APIs/services. Experience working with Kubernetes, Docker, Jenkins, and Ansible for containerization and deployment. Familiarity with both SQL and NoSQL databases, such as PostgreSQL, MySQL, or MongoDB. Comfortable with unit testing, debugging, and using logging tools for observability. Experience with monitoring tools such as Grafana and Prometheus. Basic experience in data handling, including managing, processing, and integrating data within full-stack applications to ensure seamless backend and frontend functionality. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
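For the "logging tools for observability" requirement, a minimal structured-logging sketch in plain Python; the logger name and fields are illustrative:

```python
# Hedged sketch: JSON-formatted logs, easy to ship to an ELK-style stack.
# Logger name and message are hypothetical.
import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("payments").info("service started")
```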

Posted 5 days ago

Apply

7.0 - 14.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Type Full time Type Of Hire Experienced (relevant combo of work and education) Education Desired Bachelor of Computer Engineering Travel Percentage 0% Are you curious, motivated, and forward-thinking? At FIS you’ll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and above all fun. About The Team This role is a part of our OPF team. FIS Open Payment Framework (OPF) is a set of reusable and extensible components, frameworks, and technical services which can be assembled in different configurations to build a personalized Payment Processing System. From the Open Payment Framework, FIS has created predefined solutions around the bank payment hub, including Domestic & International payments (XCT), SEPA Direct Debits & Credit Transfers (SEPA), SCT INST, UK Faster Payments, Immediate Payments, eBanking (EBK), Business Payments (BP), NPP, BACS, US ACH. What You Will Be Doing Develop application code for Java programs. Design, implement and maintain Java application phases. Design, code, debug, and maintain Java/J2EE application systems. Object-oriented Design and Analysis (OOA and OOD). Evaluate and identify new technologies for implementation. Ability to convert business requirements into executable code solutions. Provide leadership to the technical team. What You Bring Must have 7 to 14 years of experience in Java technologies. Must have experience in the Banking domain. Proficiency in Core Java, J2EE, ANSI SQL, XML, Struts, Hibernate, Spring and Spring Boot. Good experience in Database concepts (Oracle/DB2), Docker (Helm), Kubernetes, Core Java Language (Collections, Concurrency/Multi-Threading, Localization, JDBC), microservices. Hands-on experience in Web Technologies (either Spring or Struts, Hibernate, JSP, HTML/DHTML, REST web services, JavaScript). Must have knowledge of one J2EE application server, e.g., WebSphere Process Server, WebLogic, JBoss. Working knowledge of JIRA or equivalent. What We Offer You An exciting opportunity to be a part of the world’s leading FinTech product MNC. To be a part of a vibrant team and to build up a career in the core banking/payments domain. Competitive salary and attractive benefits including GHMI/hospitalization coverage for employee and direct dependents. A multifaceted job with a high degree of responsibility and a broad spectrum of opportunities. Privacy Statement FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice. Sourcing Model Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies. FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass

Posted 5 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hi, I’m reaching out regarding a Hybrid Work IT Support Engineer - Level 3 opportunity with one of our companies based in Bengaluru, Karnataka. Please let me know if you’re interested in discussing this further. Thank you! Title: IT Support Engineer - Level 3 Job Type: Full Time Employment Type of Work: Hybrid Work Company Location: Bengaluru, Karnataka Work Hours: 1st Shift & 2nd Shift Description: The majority of the work demands supporting the customer with production issues during automation of network deployment. This is a challenging role involving real-time debugging of issues and quick resolution for network automation. Job Responsibilities Actively involved in troubleshooting production defects in the customer environment. Escalate to next-level support based on the priority of the issue. Understand customer demands and communicate effectively on the nature of the issue and possible service impact. Competencies Required: Overall 5+ years of relevant IT experience in the area of software development and test automation. Should have end-to-end working experience in automation development in a cloud-native environment. Automation using Python, JavaScript. Awareness of SQL is an additional advantage. Networking knowledge is an added advantage. Knowledge of automation using TOSCA and the Mistral engine. OpenShift, Docker, Helm, and container technology experience is essential. Experience troubleshooting cloud-native applications. Prior experience working in a Linux environment is a must.

Posted 5 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies