Jobs
Interviews

36 Istio Jobs

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Ranchi, Jharkhand

On-site

You are a Middle Python Developer joining our international team to contribute to the development of our products. We are seeking individuals with high energy and a passion for continuous learning, who balance work and personal life while delivering the kind of productivity that directly impacts the success of our clients.

Your responsibilities will include writing well-designed, testable, and efficient code, along with creating unit tests for each module. You will have hands-on experience coding full applications, handling errors effectively, and working with RESTful and gRPC-based microservices and pub/sub messaging. You will also implement self-contained user stories deployable via Kubernetes, participate in code reviews, explore new technologies, and suggest technical improvements. Designing and developing messaging-based applications for URLLC services through event queues, event audit, and caching will also be part of your duties.

To be successful in this role, you should have at least 2 years of development experience using current frameworks, fluency in Python (2.6+ and 3.3+), proficiency in Linux, and experience with frameworks and tools such as Flask/Django, Bottle, uWSGI, Nginx, and Jenkins/CI. Knowledge of rapid prototyping, RESTful and gRPC-based microservices and pub/sub messaging, plus familiarity with technologies such as the Kong API gateway, Apigee, Firebase, OAuth, two-factor authentication (MFA), and JWT, is essential. Experience with data storage tools including RDBMS (Oracle, MySQL Server, MariaDB), Kafka, Pulsar, Redis Streams, and ORMs (SQLAlchemy, Mongoose, JPA, etc.), as well as intermediate English skills, are required.

Experience with containers and cloud PaaS (K8s, Istio, Envoy, Helm, Azure, etc.), Docker, CI/CD, developing instrumentation, async (asyncio) functions with try/except error handling and error tracepoints, expertise in building microservice architectures, a software engineering degree or equivalent, and familiarity with Agile methodologies will be considered a plus. In return, we offer a competitive salary based on your experience, opportunities for career growth, a flexible work schedule, minimal bureaucracy, professional skills development programs, paid sick leave, vacation days, and corporate events. You will also have the possibility to work remotely.
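The listing asks for asyncio functions wrapped in "TRY/CATCH" blocks with error tracepoints; in Python the construct is try/except. A minimal sketch of that pattern, with hypothetical function and service names (a real worker would await an HTTP or gRPC client instead of sleeping):

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("worker")

async def fetch_order(order_id: int) -> dict:
    # Simulated I/O call standing in for a downstream service.
    await asyncio.sleep(0)
    if order_id < 0:
        raise ValueError(f"invalid order id: {order_id}")
    return {"id": order_id, "status": "confirmed"}

async def handle(order_id: int):
    # try/except is Python's equivalent of try/catch; the log call acts as
    # an error tracepoint that an error-tracking collector could hook into.
    try:
        return await fetch_order(order_id)
    except ValueError:
        log.exception("tracepoint: fetch_order failed for id=%s", order_id)
        return None

async def main():
    # gather runs both handlers concurrently and preserves argument order.
    return await asyncio.gather(handle(1), handle(-1))

results = asyncio.run(main())
print(results)  # [{'id': 1, 'status': 'confirmed'}, None]
```

The failed call is logged with its traceback rather than crashing the whole gather, which is the point of wrapping each coroutine individually.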

Posted 1 day ago

Apply

0.0 - 4.0 years

0 Lacs

Delhi

On-site

As a DevOps Intern at LiaPlus AI, you will play a crucial role in building, automating, and securing our AI-driven infrastructure. You will work closely with our engineering team to optimize cloud operations, enhance security and compliance, and streamline deployments using DevOps and MLOps best practices.

Your primary responsibilities will include:
- Infrastructure management: deploying and managing cloud resources on Azure as the primary platform.
- CI/CD: setting up robust CI/CD pipelines for seamless deployments.
- Security & compliance: ensuring systems align with ISO, GDPR, and SOC 2 requirements.
- Observability: setting up monitoring dashboards and logging mechanisms.
- Databases: managing and optimizing PostgreSQL, MongoDB, and Redis for performance.
- Automation & scripting: writing automation scripts using Terraform, Ansible, and Bash.
- Network & API gateway management: managing API gateways like Kong and Istio.
- Disaster recovery & HA: implementing failover strategies to ensure system reliability.
- AI model deployment & MLOps: deploying and monitoring AI models using Kubernetes and Docker.

Requirements:
- Currently pursuing or recently completed a Bachelor's/Master's in Computer Science, IT, or a related field.
- Hands-on experience with Azure, CI/CD tools, and scripting languages (Python, Bash).
- Understanding of security best practices and cloud compliance (ISO, GDPR, SOC 2).
- Knowledge of database optimization techniques (PostgreSQL, MongoDB, Redis).
- Familiarity with containerization, orchestration, and AI model deployment (Docker, Kubernetes).
- Passion for automation, DevOps, and cloud infrastructure.

Benefits:
- Competitive compensation package.
- Opportunity to work with cutting-edge technologies in AI-driven infrastructure.
- Hands-on experience in a fast-paced and collaborative environment.
- Potential for growth and learning opportunities in the field of DevOps and MLOps.

Join us at LiaPlus AI to be part of a dynamic team that is reshaping the future of AI infrastructure through innovative DevOps practices.
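The listing above mentions automation scripting and failover strategies. A common building block in such scripts is retrying a flaky operation with exponential backoff before declaring failure; a minimal sketch (the flaky operation here is simulated, and the delays are shortened for illustration):

```python
import time

def retry(fn, attempts=4, base_delay=0.01, exc=(ConnectionError,)):
    """Call fn, retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except exc:
            if attempt == attempts - 1:
                raise  # out of attempts: let the caller handle failover
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Demo: a simulated operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = retry(flaky)
print(result, calls["n"])  # ok 3
```

Real automation would typically add jitter to the delay and cap the total wait, but the shape of the loop is the same.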

Posted 1 day ago

Apply

12.0 - 16.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Principal Site Reliability Engineer, you will be responsible for leading all infrastructure aspects of a new cloud-native, microservice-based security platform. This platform is fully multi-tenant, operates on Kubernetes, and utilizes the latest cloud-native CNCF technologies such as Istio, Envoy, NATS, Fluent, Jaeger, and Prometheus. Your role will involve technically leading an SRE team to ensure a high-quality SLA for a global solution running in multiple regions.

Your responsibilities will include building tools and frameworks to enhance developer efficiency on the platform and abstract away infrastructure complexities. You will develop automation and utilities to streamline service operation and monitoring. The platform handles large amounts of machine-generated data daily and is designed to manage terabytes of data from numerous customers. You will actively participate in platform design discussions with development teams, providing infrastructure insights and managing technology and business tradeoffs. Collaboration with global engineering teams will be crucial as you contribute to shaping the future of cybersecurity.

At GlobalLogic, we prioritize a culture of caring, where people come first. You will experience an inclusive environment promoting acceptance, belonging, and meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Continuous learning and development are essential at GlobalLogic: you will have access to numerous opportunities to expand your skills, advance your career, and grow personally and professionally, supported by programs, training curricula, and hands-on experiences.

GlobalLogic is recognized for engineering impactful solutions worldwide. Joining our team means working on projects that make a difference, stimulating your curiosity and problem-solving skills as you engage with cutting-edge solutions that shape the world today. We value balance and flexibility, offering various career paths, roles, and work arrangements to help you achieve a harmonious work-life balance. At GlobalLogic, integrity is key, and we uphold a high-trust environment focused on ethics and reliability: you can trust us to provide a safe, honest, and ethical workplace dedicated to both employees and clients.

GlobalLogic, a Hitachi Group Company, is a leading digital engineering partner to top global companies. With a history of digital innovation since 2000, we collaborate with clients to create innovative digital products and experiences, driving business transformation and industry redefinition through intelligent solutions.
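Owning a high-quality SLA, as the role above describes, usually starts with error-budget arithmetic: an availability target implies a fixed amount of allowed downtime per window. A small sketch (the SLO values are illustrative, not from the listing):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for an availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# Common targets and their 30-day budgets.
for slo in (0.999, 0.9995, 0.9999):
    print(f"{slo:.4%} -> {error_budget_minutes(slo):.1f} min / 30 days")
```

A 99.9% target leaves roughly 43 minutes of downtime per 30 days; 99.99% leaves about 4.3, which is why the tooling and automation the listing describes matter at that tier.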

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate is a highly experienced full-stack software developer in the field of web applications, with several years of practical experience in frontend and backend technologies and a solid grounding in software development processes, software architectures, build procedures, and CI/CD.

You will take the initiative to architect technical solutions and drive implementation; align with the respective product owners and team members to understand the business requirements of our customers; take ownership of and responsibility for features from development through to production; help shape an engineering culture by applying best practices like TDD and CI/CD; and be an integral part of the agile development team while actively sponsoring the agile mindset and change culture throughout the whole company.

Your skills should include excellent knowledge of and experience with Java and common Java open-source frameworks such as Spring and Spring Boot (OSGi and the Eclipse extension-point concept will be considered a plus); complete ease with topics like Maven, JUnit, APIs, and document and relational databases such as MongoDB and PostgreSQL; experience with microservice architecture and its established patterns; an understanding of event-driven architectures and data streaming with Kafka; and experience with AWS, CI/CD (Jenkins), SonarQube, Docker, Kubernetes, CDK8S, and Istio (considered a plus).

Your benefits include a hybrid work model that recognizes the value of striking a balance between in-person collaboration and remote working, including up to 25 days per year working from abroad; a compensation and benefits package that includes a company bonus scheme, pension, an employee shares program, and multiple employee discounts (details vary by location); career development and digital learning programs, international career mobility, and lifelong learning opportunities for employees worldwide; an environment where innovation, delivery, and empowerment are fostered; and flexible working, health, and wellbeing offers (including healthcare and parental leave benefits) that support balancing family and career and help people return from career breaks with experience that nothing else can teach.

About Allianz Technology: Allianz Technology is the global IT service provider for Allianz and delivers IT solutions that drive the digitalization of the Group. With more than 13,000 employees located in 22 countries around the globe, Allianz Technology works together with other Allianz entities in pioneering the digitalization of the financial services industry. They oversee the full digitalization spectrum, from one of the industry's largest IT infrastructure projects (including data centers, networking, and security) to application platforms spanning workplace services to digital interaction, delivering full-scale, end-to-end IT solutions for Allianz in the digital age.

Join us. Let's care for tomorrow.

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a key member of the Cloud and Productivity Engineering Organisation at London Stock Exchange Group, you will be responsible for owning and delivering modern application solutions using containerization. Your role will be pivotal in driving innovation to meet business changes, enhance security measures, and align with the digital strategy. You will demonstrate leadership by defining and implementing the container strategy, standards, processes, methodologies, and architecture, and you will collaborate closely with various teams, including Security, Engineering, and Identity, to develop the solutions that best suit each project's requirements.

Key Responsibilities:
- Drive the acceleration, adoption, and migration of applications to the public cloud, using containerization as the core technology.
- Analyze, design, and implement container infrastructure solutions in alignment with LSEG standards and procedures.
- Design and implement infrastructure processes, such as service requests and capacity management, for container platforms.
- Monitor resource utilization rates, identify potential bottlenecks, and implement improvements to enhance efficiency and savings.
- Support knowledge management through the creation, maintenance, and improvement of solution design documents, knowledge articles, wikis, and other artifacts; manage the lifecycle of all container platforms.
- Develop long-term technical design and architecture for LSEG services, creating roadmaps for container platforms and peripherals.
- Collaborate with the Group CISO and IT security teams to enhance security controls.
- Define the container strategy in collaboration with the container product team, establishing standards, blueprints, processes, and patterns.
- Establish consistent architecture across all digital platforms in collaboration with the engineering community to meet LSEG's future technology needs.
- Build relationships with cloud platform customers and engage with senior stakeholders up to C level.
- Act as an Agile "Product Owner" for the container product, ensuring feedback and learning are incorporated effectively.

Candidate Profile / Key Skills:
- Demonstrated technical expertise in infrastructure technologies.
- Experience in SDLC, continuous integration and delivery, application security, quality assurance, Istio, serverless, Kubernetes, Agile, Lean, product development, DevSecOps, and continuous change, plus software engineering exposure to high-performance computing, big data analytics, and machine learning.
- Proficiency in multiple programming languages such as C, C++, C#, Java, Rust, Go, and Python.
- Strong background in a senior technology role within a public cloud environment, ideally AWS or Azure.
- Ability to drive technological and cultural change towards rapid technology adoption and absorption.
- Team player with a track record of delivering successful business outcomes.
- Excellent planning and communication skills, capable of leading conversations with development and product teams.
- Thrives in a fast-paced environment, with strong influencing and negotiation skills.
- Experience in team building, coaching, and motivating global teams.
- Exposure to modern programming languages, PaaS/SaaS/IaaS, and public cloud best practices.
- Proficiency in operating systems, network infrastructure, RDBMS, infrastructure-as-code software, and CI/CD pipelines.
- Deep knowledge of Azure, AWS, and GCP services.

Join London Stock Exchange Group, a trusted expert in global financial markets, and play a vital role in driving financial stability and sustainable growth through innovative technology solutions. Be part of a diverse and collaborative culture that values individuality and encourages new ideas while committing to sustainability. Together, we aim to support sustainable economic growth and the just transition to net zero, creating inclusive economic opportunities for all.

Posted 2 days ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role

We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments both on AWS cloud and on-premise (open-source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open-source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.

Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open-source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications

Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open-source tools.

DevOps & Automation
- Hands-on experience with Terraform / CloudFormation.
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.

Containerization & Orchestration
- Expertise with Docker and Kubernetes (EKS or self-hosted).
- Familiarity with Helm charts and service meshes (Istio/Linkerd).

Monitoring / Observability Tools
- Experience with Prometheus, Grafana, the ELK/EFK stack, and CloudWatch.
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.

Security & Compliance
- Understanding of cloud security best practices.
- Familiarity with tools like Vault and AWS Secrets Manager.

Model DevOps / MLOps Tools (Preferred)
- Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B).
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- Opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open source on-prem).
- Competitive salary, benefits, and a growth-oriented culture.
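The responsibilities above include implementing rolling updates. Kubernetes bounds a rollout with `maxSurge` (rounded up when given as a percentage) and `maxUnavailable` (rounded down), which together determine how many pods can exist and how many must stay ready mid-rollout. A small sketch of that arithmetic:

```python
import math

def rollout_bounds(replicas: int, max_surge: str, max_unavailable: str):
    """Pod-count bounds during a Kubernetes rolling update.
    Follows the documented rounding rules: maxSurge rounds up,
    maxUnavailable rounds down (for percentage values)."""
    def pct(value: str, round_up: bool) -> int:
        n = replicas * int(value.rstrip("%")) / 100
        return math.ceil(n) if round_up else math.floor(n)

    surge = pct(max_surge, round_up=True)
    unavailable = pct(max_unavailable, round_up=False)
    # (minimum ready pods, maximum total pods) during the rollout
    return replicas - unavailable, replicas + surge

print(rollout_bounds(10, "25%", "25%"))  # (8, 13)
```

With the default 25%/25% strategy on a 10-replica Deployment, at least 8 pods stay ready and at most 13 exist at any point during the update.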

Posted 3 days ago

Apply

15.0 - 19.0 years

0 Lacs

Pune, Maharashtra

On-site

At Tarana, you will play a crucial role in the development of a cutting-edge cloud product: a management system for wireless networks designed to scale to millions of devices. The project utilizes modern cloud-native architecture and open-source technologies. Your primary responsibility will be designing and implementing distributed software within a microservices architecture, encompassing requirements gathering, high-level design, implementation, integrations, operations, troubleshooting, performance tuning, and scaling.

As a key member of the team, you will provide technical and engineering leadership to an R&D team responsible for multiple microservices end-to-end. Your role will involve working on proofs of concept (PoCs), customer pilots, and production releases within an agile engineering environment. Expect daily challenges that will push you to continuously enhance your skills and meet high standards of quality and performance; rest assured, we will provide the necessary mentoring to support your success. This position is based in Pune and requires in-person presence in the office to collaborate effectively with team members.

**Qualifications:**
- Bachelor's degree (or higher) in Computer Science or a closely related field from a reputable university; a Master's or Ph.D. is preferred.
- At least 15 years of software development experience, with a minimum of 5 years on large-scale distributed software projects.
- Expertise in product architecture and design, and in providing technical leadership to engineering teams.
- Experience in developing SaaS product offerings or IoT applications.
- Experience operating and managing complex systems, in addition to developing them, is advantageous.

**Required Skills & Experience:**
- Proficiency in software design and development using Java and its associated ecosystem (e.g., Spring Boot, Hibernate).
- Strong knowledge of microservices and RESTful APIs, including design, implementation, and consumption.
- Comprehensive understanding of distributed systems: clustering, asynchronous messaging, scalability, performance, data consistency, and high availability.
- Familiarity with distributed messaging systems such as Kafka/Confluent, Kinesis, or Google Pub/Sub.
- Mastery of databases (relational, NoSQL, search engines), caching mechanisms, and distributed persistence technologies; experience with Elasticsearch or time-series databases is beneficial.
- Experience with cloud-native platforms like Kubernetes and service-mesh technologies like Istio.
- Proficiency in network protocols (TCP/IP, HTTP), standard network architectures, and RPC mechanisms (e.g., gRPC).
- Knowledge of secure coding practices, network security, and application security best practices.

Join us at Tarana, a company founded in 2009 with a mission to accelerate global access to fast and affordable internet services. With over a decade of research and significant investment, we have developed a groundbreaking fixed wireless access technology, exemplified by our commercial platform, Gigabit 1 (G1). G1 has revolutionized broadband economics and has been adopted by over 160 service providers worldwide. Headquartered in Milpitas, California, with additional R&D operations in Pune, India, we are seeking talented individuals to contribute to our innovative solutions and uphold our commitment to customer satisfaction and innovation. If you are a problem solver passionate about making a meaningful impact on the world, apply now and be part of our dynamic, rapidly growing team at Tarana.
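The skills above center on asynchronous messaging and pub/sub systems such as Kafka or Google Pub/Sub. The core idea can be sketched as a toy in-process event bus; real brokers add persistence, partitioning, and consumer groups on top of this shape (the topic and event names are illustrative):

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy in-process pub/sub bus: handlers subscribe to a topic,
    publishers fan events out to every subscriber of that topic."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Synchronous delivery; a real broker decouples this in time.
        for handler in self._subs[topic]:
            handler(event)

bus = Bus()
seen = []
bus.subscribe("device.status", seen.append)
bus.publish("device.status", {"device": "radio-7", "state": "up"})
print(seen)  # [{'device': 'radio-7', 'state': 'up'}]
```

The decoupling of publishers from subscribers is what makes the pattern scale: the publisher never knows how many handlers (or services) consume the event.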

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

You lead the way. We've got your back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally.

At American Express, you'll be recognized for your contributions, leadership, and impact. Every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together.

About Enterprise Architecture: Enterprise Architecture is an organization within the Chief Technology Office at American Express and a key enabler of the company's technology strategy. The four pillars of Enterprise Architecture are:
- Architecture as Code: owns and operates foundational technologies leveraged by engineering teams across the enterprise.
- Architecture as Design: includes the solution and technical design for transformation programs and business-critical projects requiring architectural guidance and support.
- Governance: responsible for defining technical standards and developing innovative tools that automate controls to ensure compliance.
- Colleague Enablement: focused on colleague development, recognition, training, and enterprise outreach.

What you will be working on: We are looking for a Senior Engineer to join our Enterprise Architecture team. In this role, you will design and implement highly scalable real-time systems following best practices and using cutting-edge technology. This role is best suited for experienced engineers with a broad skill set who are open, curious, and willing to learn.

Qualifications - what you will bring:
- Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience.
- 10+ years of progressive experience demonstrating strong architecture, programming, and engineering skills.
- Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Kotlin, and Go.
- Ability to lead, partner, and collaborate cross-functionally across engineering organizations.
- Experience building real-time, large-scale, high-volume distributed data pipelines on top of data buses (Kafka).
- Hands-on experience with large-scale distributed NoSQL databases like Elasticsearch.
- Knowledge of and/or experience with containerized environments, Kubernetes, and Docker.
- Knowledge of and/or experience with public cloud platforms like AWS and GCP.
- Experience implementing and maintaining highly scalable microservices in REST and gRPC.
- Experience working with infrastructure layers such as service mesh (Istio, Envoy).
- Appetite for trying new things and building rapid POCs.

Preferred Qualifications:
- Knowledge of observability concepts such as tracing, metrics, monitoring, and logging.
- Knowledge of Prometheus.
- Knowledge of OpenTelemetry / OpenTracing.
- Knowledge of observability tools like Jaeger, Kibana, Grafana, etc.
- Open-source community involvement.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries.
- Bonus incentives.
- Support for financial well-being and retirement.
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location).
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need.
- Generous paid parental leave policies (depending on your location).
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location).
- Free and confidential counseling support through our Healthy Minds program.
- Career development and training opportunities.

Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will be responsible for API-related tasks such as defining quotas, implementing security measures, enforcing governance policies, and ensuring system resilience. Your expertise will be required in managing API gateways such as Kong, Apigee, Tyk, or Istio. Proficiency with cloud platforms such as GCP or Azure, and experience with Kubernetes clusters, will be essential for this role. Knowledge of and hands-on experience with technologies like JWT, OAuth2, and OpenID Connect are required, and familiarity with Redis, New Relic, IAM, and RBAC will help you fulfill your responsibilities effectively.

The role is based in Gurgaon. If you are passionate about API management, cloud technologies, and ensuring the security and performance of IT systems, this opportunity is a strong fit. Join our team and contribute to the success of our projects by leveraging your skills in these technologies.
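The role above involves JWT-based security at the gateway. To illustrate what a gateway verifies, here is a toy HS256 sign/verify round trip using only the standard library; production systems should use a vetted library (e.g. PyJWT) and also check claims like `exp` and `aud`, which this sketch omits:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("signature mismatch")
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))

token = sign_hs256({"sub": "svc-a", "scope": "read"}, b"demo-secret")
print(verify_hs256(token, b"demo-secret"))  # {'sub': 'svc-a', 'scope': 'read'}
```

Gateways like Kong or Apigee perform this verification (plus claim checks) as policy, so upstream services can trust the decoded claims.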

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Software Architect at our organization, you will own the software architecture vision, principles, and technology standards across the organization. Working closely with engineering leadership and product management, you will craft roadmaps and success criteria to ensure alignment with the wider target architecture.

Your primary responsibilities will include developing and leading the architectural model for a unit, directing and leading teams, and designing the interaction points between application components and applications. You will evaluate and recommend toolsets, standardize the use of third-party components and libraries, and help developers understand business and functional requirements. Additionally, you will periodically review the scalability and resiliency of application components, recommend steps for refinement and improvement, and enable reusable components to be shared across the enterprise.

In this role, you will devise technology and architecture solutions that propel engineering excellence across the organization, simplify complex problems, and address key aspects such as portability, usability, scalability, and security. You will also extend your influence across the organization, enabling distributed teams to make strong architecture decisions independently through documentation, mentorship, and training.

Moreover, you will drive engineering architecture definition using multi-disciplinary knowledge, including cloud engineering, middleware engineering, data engineering, and security engineering. Understanding how to apply Agile, Lean, and the principles of fast flow to drive engineering department efficiency and productivity will be essential. You will provide and oversee high-level estimates for scoping large features using Wideband Delphi, and actively participate in the engineering process to evolve an architecture practice that supports the department.

To excel in this role, you should be able to depict technical information conceptually, logically, and visually, and bring a strong customer and business focus. Your leadership, communication, and problem-solving skills will be crucial for influencing others and retaining composure under pressure in environments of rapid change, along with a forward-thinking mindset that keeps the technology modern for value delivery.

In terms of qualifications, you should have a minimum of 10 years of software engineering experience, primarily in back-end or full-stack development, and at least 5 years of experience as a Senior Software Architect or Principal Architect working with microservices. Experience in a Lean-Agile development environment, a deep understanding of event-driven architectures, and knowledge of REST, gRPC, and GraphQL architectures are required. An extensive background in public cloud platforms, modular JavaScript frameworks, databases, caching solutions, and search technologies is also essential. Additionally, strong skills in containerization, including Docker, Kubernetes, and service mesh, as well as the ability to articulate an architecture or technical design concept, are desired for this role.
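Reviewing the resiliency of microservice components, as the role above requires, often comes down to standard patterns such as the circuit breaker, which stops calling a failing dependency until it has had time to recover. A minimal sketch (thresholds and the simulated backend are illustrative):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive
    failures, then rejects calls until `cooldown` seconds have passed."""
    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold, self.cooldown, self.clock = threshold, cooldown, clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call

        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result

cb = CircuitBreaker(threshold=2, cooldown=60)

def failing_backend():
    raise IOError("backend down")

for _ in range(2):  # two failures trip the breaker
    try:
        cb.call(failing_backend)
    except IOError:
        pass

try:
    cb.call(failing_backend)
except RuntimeError as e:
    print(e)  # circuit open
```

Once open, calls fail fast instead of piling up on a struggling dependency, which is what protects the rest of the system from cascading failure.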

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

As a Kubernetes Administrator/DevOps Senior Consultant, you will design, provision, and manage Kubernetes clusters for applications built on microservices and event-driven architectures. You will ensure seamless integration of applications with Kubernetes-orchestrated environments, and configure and manage Kubernetes resources such as pods, services, deployments, and namespaces. Monitoring and troubleshooting Kubernetes clusters to identify and resolve performance issues, system errors, and other operational challenges will be a key part of your responsibilities. You will also implement infrastructure as code (IaC) using tools like Ansible and Terraform for configuration management.

Furthermore, you will design and implement cluster and application monitoring using tools like Prometheus, Grafana, OpenTelemetry, and Datadog. Managing and optimizing AWS cloud resources and infrastructure for managed containerized environments (ECR, EKS, Fargate, EC2) will be part of your daily tasks, as will ensuring high availability, scalability, and security of all infrastructure components, monitoring system performance, identifying bottlenecks, and implementing the necessary optimizations. Your role will involve troubleshooting and resolving complex issues in the DevOps stack, developing and maintaining documentation for DevOps processes and best practices, and staying current with industry trends and emerging technologies to drive continuous improvement. Creating and managing DevOps pipelines, IaC, CI/CD, and cloud platforms will also be part of your duties.

**Required Skills:**
- 4-5 years of extensive hands-on experience in Kubernetes administration, Docker, Ansible/Terraform, AWS, EKS, and the corresponding cloud environments.
- Hands-on experience designing and implementing service discovery, service meshes, and load balancers.
- Extensive experience defining and creating declarative YAML files for provisioning.
- Experience troubleshooting containerized environments using a combination of monitoring tools and logs.
- Scripting and automation skills (e.g., Bash, Python) for managing Kubernetes configurations and deployments.
- Hands-on experience with Helm charts, API gateways, ingress/egress gateways, and service meshes (Istio, etc.).
- Hands-on experience managing Kubernetes networking (Services, Endpoints, DNS, load balancers) and storage (PVs, PVCs, StorageClasses, provisioners).
- Ability to design, enhance, and implement services for centralized observability platforms, with efficient log management based on the Elastic Stack and monitoring and alerting powered by Prometheus.
- Experience designing and implementing CI/CD pipelines; hands-on experience with IaC, Git, and monitoring tools like Prometheus, Grafana, and Kibana.

**Good to Have Skills:**
- Relevant certifications (e.g., Certified Kubernetes Administrator (CKA) / CKAD) are a plus.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their managed Kubernetes services.
- Capacity planning for Kubernetes clusters and cost optimization in on-prem and cloud environments.

**Preferred Experience:**
- 4-5 years of experience with Kubernetes and Docker/containerization.
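The "declarative YAML files for provisioning" requirement above can be illustrated with a short, stdlib-only Python sketch that assembles a minimal `apps/v1` Deployment manifest as a plain dict. `kubectl apply -f` accepts JSON as well as YAML, so no YAML library is needed; the names used here (`web`, `nginx:1.27`, port 8080) are illustrative assumptions, not values from any listing.

```python
import json

def deployment_manifest(name, image, replicas=2, namespace="default"):
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict.

    The selector's matchLabels must equal the pod template's labels,
    or the API server rejects the Deployment.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "namespace": namespace, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": 8080}],
                    }]
                },
            },
        },
    }

if __name__ == "__main__":
    # Serialized to JSON, this could be fed to `kubectl apply -f web.json`.
    print(json.dumps(deployment_manifest("web", "nginx:1.27", replicas=3), indent=2))
```

In practice such manifests are usually templated via Helm or generated by Terraform rather than built by hand; the sketch only shows the declarative shape being discussed.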

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

We are looking for a DevOps + Kubernetes Engineer to join our team in Bengaluru or Hyderabad. Your main responsibility will be to build, maintain, and scale our infrastructure using Kubernetes and DevOps best practices. You will collaborate with development and operations teams to implement automation processes, manage CI/CD pipelines, and ensure efficient infrastructure management for scalable and reliable applications. Your key responsibilities will include designing, implementing, and maintaining Kubernetes clusters for production, staging, and development environments. You will also manage CI/CD pipelines for automated application deployment and infrastructure provisioning, utilizing tools such as Helm, Terraform, or Ansible. Monitoring and optimizing performance, scalability, and availability of applications and infrastructure will be part of your duties, as well as collaborating with software engineers to enhance system performance and optimize cloud infrastructure. Troubleshooting, debugging, and resolving production environment issues in a timely manner, implementing security best practices for managing containerized environments and DevOps workflows, and contributing to the continuous improvement of development and deployment processes using DevOps tools are also essential aspects of the role. The ideal candidate should have 6-8 years of experience in DevOps with a strong focus on Kubernetes and containerized environments. Expertise in Kubernetes cluster management and orchestration, proficiency in CI/CD pipeline tools like Jenkins, GitLab CI, or CircleCI, and strong experience with cloud platforms such as AWS, Azure, or GCP are required. Knowledge of Docker for containerization, Helm for managing Kubernetes applications, infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible, and familiarity with monitoring and logging tools such as Prometheus, Grafana, ELK stack, or Datadog are also important. 
Strong scripting skills in Bash, Python, or Groovy; experience with version control systems like Git; excellent problem-solving and troubleshooting skills, especially in distributed environments; and a good understanding of security best practices in cloud and containerized environments round out the qualifications for this role.

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

noida, uttar pradesh

On-site

The company is looking to hire 4 individuals for a full-time position in Gurugram/Noida, India. The ideal candidate should have 7-12 years of experience working with AWS/GCP, EKS/GKE, Terraform, Istio, Helm, and ArgoCD. If you meet the experience requirements and have expertise in these technologies, we encourage you to apply for this opportunity. For further information or to apply, please contact us at 450 Century Pkwy Suite 250, Allen, TX 75013, USA, or call +1 (469) 570-8638.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

ranchi, jharkhand

On-site

We are seeking a Senior Python Developer to join our international team and contribute to the development of our products. We are looking for individuals who are enthusiastic, lifelong learners, prioritize work-life balance, and are committed to delivering high productivity to ensure the success of our customers.

Responsibilities:
- Maintain high-quality development standards.
- Collaborate with the Customer Solutions department and Product Manager on the design and creation of User Stories.
- Implement self-contained User Stories that are ready for containerization and deployment via Kubernetes.
- Work with the QA team on test planning and automated test implementations, driving continuous improvement cycles.
- Prototype and explore new technology areas such as image recognition, robotic controls, high-speed messaging, and K8s orchestration.
- Develop full applications with robust error management, RESTful and gRPC based microservices, and pub/sub messaging.
- Participate in code reviews, test new concepts, suggest technical improvements, and lead by example in coding techniques.
- Design and develop messaging-based applications for URLLC services via event queues, event audit, and caching.
- Demonstrate expertise in state machines, workflows, refactoring strategies, error management, and software instrumentation.

Required experience and skills:
- 4+ years of development experience using the latest frameworks.
- Proficiency in Python (Python 2.6+, Python 3.3+).
- Experience with frameworks like Flask/Django, Bottle, uWSGI, Nginx, Jenkins/CI, etc.
- Hands-on experience with rapid prototyping, RESTful and gRPC based microservices, and pub/sub messaging.
- Ability to create Python backend microservices.
- Familiarity with technologies such as the Kong API Gateway, Apigee, Firebase, OAuth, 2-factor authentication (2FA), JWT, etc.
- Experience with data storage tools like RDBMS (Oracle, MySQL Server, MariaDB), Kafka, Pulsar, Redis Streams, etc.
- Knowledge of ORM tools (SQLAlchemy, Mongoose, JPA, etc.).
- Expertise in building microservice architectures.
- Upper-Intermediate English proficiency or above.

Additional qualifications that would be a plus:
- Experience with containers and cloud PaaS technologies like K8s, Istio, Envoy, Helm, Azure, etc.
- Proficiency in Docker and CI/CD.
- Experience developing instrumentation, using async functions in try/except blocks with error tracepoints.
- Experience as a Tech Lead.
- Familiarity with Agile methodologies.

We offer:
- Competitive salary based on your professional experience.
- Career growth opportunities.
- Flexible work schedule.
- Minimal bureaucracy.
- Professional skills development and training programs.
- Paid sick leaves, vacation days, and public holidays.
- Participation in corporate events.
- Remote work possibilities.
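The "async functions in try/except blocks with error tracepoints" item in the listing above can be sketched as a minimal, stdlib-only illustration. The service name, operation, and tracepoint format here are our own assumptions, not part of any product described in the listing.

```python
import asyncio
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")  # hypothetical service name

async def fetch_order(order_id: int) -> dict:
    """Simulated backend call; raises for unknown orders."""
    await asyncio.sleep(0.01)
    if order_id < 0:
        raise ValueError(f"unknown order {order_id}")
    return {"id": order_id, "status": "shipped"}

async def fetch_order_safe(order_id: int) -> dict:
    """Wrap the async call in try/except and emit an error tracepoint
    (a structured log line with timestamp and context) rather than
    letting the exception escape the event loop."""
    try:
        return await fetch_order(order_id)
    except ValueError as exc:
        log.error("tracepoint ts=%.3f op=fetch_order id=%s err=%s",
                  time.time(), order_id, exc)
        return {"id": order_id, "status": "error"}

async def main() -> list:
    # One call succeeds, one fails; the failure degrades gracefully.
    return await asyncio.gather(fetch_order_safe(1), fetch_order_safe(-1))

if __name__ == "__main__":
    print(asyncio.run(main()))
```

The point of the pattern is that a failed coroutine leaves a searchable trace in the logs and returns a well-formed fallback, instead of crashing the whole `gather`.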

Posted 1 week ago

Apply

3.0 - 6.0 years

4 - 6 Lacs

Hyderabad, Telangana, India

On-site

Key Responsibilities:

API Management & Development:
- Design, develop, and implement scalable APIs using the Apigee Edge platform for both public and internal applications.
- Configure API proxies, API products, and API versions in the Apigee platform.
- Develop and maintain API policies for tasks such as authentication, rate limiting, logging, error handling, and caching.

API Integration:
- Work closely with development and operations teams to integrate APIs with backend systems and third-party services.
- Ensure seamless integration of Apigee with other enterprise tools and systems, such as CI/CD pipelines, monitoring tools, and databases.

Security & Authentication:
- Implement API security protocols including OAuth2, JWT, API key management, and IP whitelisting to secure APIs and sensitive data.
- Ensure compliance with best practices in API security, data privacy, and protection against threats such as DDoS attacks and unauthorized access.

Monitoring & Optimization:
- Set up monitoring dashboards and analytics in Apigee to track API usage, performance, and error rates.
- Analyze API traffic and performance metrics to ensure API reliability and optimization.
- Identify and resolve performance bottlenecks and security vulnerabilities.

API Versioning & Lifecycle Management:
- Manage and coordinate the versioning and lifecycle of APIs, including handling deprecation, updates, and backward compatibility.
- Collaborate with stakeholders to ensure smooth transitions between API versions and minimize disruption to users.

Documentation & Support:
- Document the API development process, API designs, and policies within Apigee so that stakeholders have clear, up-to-date information.
- Provide technical support for developers using the APIs, helping them with issues related to the API gateway or API policies.

Governance & Policy Enforcement:
- Implement API governance practices to ensure compliance with organizational standards and policies.
- Enforce rate limiting, throttling, caching, and other policies to optimize API performance and prevent misuse.

Collaboration & Best Practices:
- Work closely with cross-functional teams, including backend developers, DevOps, security engineers, and business analysts, to design and deploy efficient APIs.
- Advocate for best practices in API design, security, and performance across the organization.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in API management, with at least 1-2 years of hands-on experience using Apigee.
- Proficiency in configuring and managing the Apigee Edge platform, including API proxies, API policies, and API analytics.
- Knowledge of API protocols (REST, SOAP), web services, and API security mechanisms (OAuth, JWT, API keys, etc.).
- Experience with API versioning, management, and monitoring tools (such as Apigee Analytics and Google Cloud Monitoring).
- Solid understanding of cloud infrastructure (AWS, Azure, GCP) and containerized applications using Docker and Kubernetes.
- Familiarity with scripting languages such as JavaScript, Python, or Java to write API policies and customize integrations.

Preferred Qualifications:
- Certification in Apigee API or Google Cloud is a plus.
- Experience with CI/CD pipeline integration for automating API deployments and testing.
- Familiarity with microservices architecture and containerized applications.
- Experience with API testing tools such as Postman or SoapUI.
- Experience working with cloud-native technologies (e.g., Kubernetes, Istio, Envoy).
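As an illustration of the JWT mechanism such API policies verify (this is a stdlib-only sketch, not Apigee's own policy engine, and all helper names are ours), HS256 signing and verification look like this:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: str) -> bytes:
    pad = "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(data + pad)

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Produce header.payload.signature, each part base64url-encoded."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + _b64url(sig)).decode()

def verify_hs256(token: str, secret: bytes):
    """Return the payload if the signature checks out and the token has
    not expired; otherwise return None."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # malformed token
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return None  # signature mismatch (wrong key or tampering)
    payload = json.loads(_b64url_decode(payload_b64))
    if payload.get("exp", float("inf")) < time.time():
        return None  # expired
    return payload
```

For example, `verify_hs256(sign_hs256({"sub": "client-42", "exp": time.time() + 300}, b"secret"), b"secret")` returns the payload, while verifying with the wrong key returns None. Production systems should use a maintained library (e.g., PyJWT) rather than hand-rolled crypto.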

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

ahmedabad, gujarat

On-site

You should have at least 4+ years of experience in DevOps technologies like Azure DevOps, CI/CD, infrastructure automation, infrastructure as code, containerization, and orchestration. You must possess strong hands-on experience with production systems in AWS/Azure and CI/CD process engineering for a minimum of 2 years. Additionally, you should have a minimum of 1 year of experience managing infrastructure as code using tools like Terraform, Ansible, and Git. Your responsibilities will include working with container platforms and cloud services, handling identity, storage, compute, automation, and disaster recovery using Docker, Kubernetes, Helm, and Istio. You will be expected to work with IaC tools like AWS CloudFormation and Azure ARM templates, as well as maintain strong hands-on experience with Linux. Moreover, you should be familiar with DevOps release and build management, orchestration, and automation scripting using tools like Jenkins, Azure DevOps, Bamboo, and SonarQube. Proficiency in scripting with Bash and Python is necessary. Strong communication and collaboration skills are essential for this role. Furthermore, you will need practical experience with web servers such as Apache, Nginx, and Tomcat, along with a good understanding of Java and PHP application deployment. Experience with databases like MySQL, PostgreSQL, and MongoDB is also expected. If you meet these qualifications and are interested in this position, please email your resume to career@tridhyain.com.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

As a Senior DevOps Engineer at TechBlocks, you will be responsible for designing and managing robust, scalable CI/CD pipelines, automating infrastructure with Terraform, and improving deployment efficiency across GCP-hosted environments. With 5-8 years of experience in DevOps engineering roles, your expertise in CI/CD, infrastructure automation, and Kubernetes will be crucial for the success of our projects. In this role, you will own the CI/CD strategy and configuration, implement DevSecOps practices, and drive an automation-first culture within the team. Your key responsibilities will include designing and implementing end-to-end CI/CD pipelines using tools like Jenkins, GitHub Actions, and Argo CD for production-grade deployments. You will also define branching strategies and workflow templates for development teams, automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests, and manage secrets lifecycle using Vault for secure deployments. Collaborating with engineering leads, you will review deployment readiness, ensure quality gates are met, and integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows. Monitoring infrastructure health and capacity planning using tools like Prometheus, Grafana, and Datadog, you will implement alerting rules, auto-scaling, self-healing, and resilience strategies in Kubernetes. Additionally, you will drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers. Your role will be pivotal in ensuring the reliability, scalability, and security of our systems while fostering a culture of innovation and continuous learning within the team. TechBlocks is a global digital product engineering company with 16+ years of experience, helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. 
We believe in the power of technology and the impact it can have when coupled with a talented team. Join us at TechBlocks and be part of a dynamic, fast-moving environment where big ideas turn into real impact, shaping the future of digital transformation.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

punjab

On-site

As a Senior Software Developer with 5-7 years of experience in software development, including at least 2 years in a senior or lead role, you will be responsible for designing, developing, and deploying full-stack web applications using the MEAN/MERN stack. Your expertise in Node.js with its asynchronous programming model, in-depth knowledge of Express.js for building RESTful APIs, and proficiency with MongoDB for database design and management, including experience with aggregation pipelines and indexing, will be crucial for success in this role. You should have a solid command of a modern front-end framework such as React.js (including Hooks, Context API, and Redux) or Angular (2+), along with a strong understanding of JavaScript/TypeScript, including ES6+ features. Experience with Git for version control and familiarity with containerization technologies like Docker will be expected. Your leadership skills will be put to the test as you lead a small team of developers (2-5 people), demonstrating strong written and verbal communication skills for technical and non-technical audiences. Your problem-solving and analytical skills will be essential, along with your ability to mentor and provide constructive feedback to junior developers, conduct code reviews, and enforce coding standards. Experience working in an Agile/Scrum development environment will be advantageous for this role. Desirable qualifications include experience with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure, as well as expertise in microservices architecture and familiarity with other databases like SQL (e.g., PostgreSQL, MySQL). Knowledge of state management libraries like Redux Toolkit (for React) or Ngrx (for Angular) and experience with real-time applications using WebSockets (e.g., Socket.IO) will be beneficial. 
Technical skills such as knowledge of server-side rendering (SSR) frameworks like Next.js (for React) or Nuxt.js (for Vue.js), experience with continuous integration/continuous deployment (CI/CD) pipelines (e.g., Jenkins, GitLab CI/CD, CircleCI), and familiarity with Kubernetes (K8s) and container orchestration are desirable. An understanding of related technologies like Helm, Istio, or other service meshes, knowledge of testing frameworks and methodologies (e.g., Jest, Mocha, Cypress), and experience with GraphQL will be advantageous. Your ability to handle multiple projects and priorities simultaneously, along with knowledge of project management tools like Jira or Asana, will further contribute to your success in this role.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

The Senior DevOps, Platform, and Infra Security Engineer opportunity on FICO's highly modern and innovative analytics and decision platform involves shaping the next generation of security for FICO's Platform. As the VP of Engineering puts it, you will address cutting-edge security challenges in a highly automated, complex, cloud- and microservices-driven environment, spanning design challenges and the continuous delivery of security functionality and features to the FICO Platform as well as the AI/ML capabilities built on top of it.

In this role, you will secure the design of the next-generation FICO Platform, its capabilities, and services. You will provide full-stack security architecture design, from cloud infrastructure to application features, for FICO customers. Collaborating closely with product managers, architects, and developers, you will implement security controls within products. Your responsibilities will also include developing and maintaining Kyverno policies for enforcing security controls in Kubernetes environments and defining and implementing policy-as-code best practices in collaboration with platform, DevOps, and application teams.

As a Senior DevOps, Platform, and Infra Security Engineer, you will stay updated on emerging threats, Kubernetes security features, and cloud-native security tools. You will define the controls and capabilities required to protect FICO products and environments, build and validate declarative threat models in a continuous and automated manner, and prepare the product for compliance attestations while ensuring adherence to security best practices.

The ideal candidate should have 10+ years of experience in architecture, security reviews, and requirement definition for complex product environments. Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper are preferred.

Familiarity with industry regulations, frameworks, and practices (e.g., PCI, ISO 27001, NIST) is required. Experience in threat modeling, code reviews, security testing, vulnerability detection, and remediation methods is essential, as is hands-on experience with programming languages such as Java and Python and with securing cloud environments, preferably AWS. Moreover, experience deploying and securing containers, container orchestration, and mesh technologies (e.g., EKS, K8s, Istio), experience using Crossplane to manage cloud infrastructure declaratively via Kubernetes, and certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable. Proficiency with CI/CD tools (e.g., GitHub Actions, GitLab CI, Jenkins, Crossplane) is important. The ability to independently drive transformational security projects across teams and organizations and experience securing event streaming platforms like Kafka or Pulsar are valued. Hands-on experience with ML/AI model security, IaC (e.g., Terraform, CloudFormation, Helm), and CI/CD pipelines (e.g., GitHub, Jenkins, JFrog) will be beneficial.

Joining FICO as a Senior DevOps, Platform, and Infra Security Engineer offers an inclusive culture reflecting core values, the opportunity to make an impact and develop professionally, highly competitive compensation and benefits programs, and an engaging, people-first work environment promoting work/life balance, employee resource groups, and social events to foster interaction and camaraderie.
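Kyverno policies are YAML resources evaluated in-cluster by an admission controller. As an illustration of the kind of rule they express (the label name, messages, and helper are our own assumptions, not FICO's actual policies), the sketch below mirrors a "require a team label" validation check in plain Python, with the corresponding Kyverno pattern shown in a comment:

```python
# A Kyverno "require labels" validation rule is, in spirit:
#   validate:
#     message: "label 'team' is required"
#     pattern:
#       metadata:
#         labels:
#           team: "?*"   # any non-empty value
# Below is a stdlib-only sketch of the same check; in a real cluster,
# Kyverno's admission webhook performs this at resource-create time.

def check_required_label(manifest: dict, label: str = "team"):
    """Return (allowed, message) for a pod-like manifest dict."""
    labels = manifest.get("metadata", {}).get("labels", {}) or {}
    value = labels.get(label)
    if not value:  # missing or empty, like Kyverno's "?*" wildcard
        return False, f"label '{label}' is required"
    return True, "ok"

if __name__ == "__main__":
    good = {"metadata": {"name": "web", "labels": {"team": "payments"}}}
    bad = {"metadata": {"name": "web"}}
    print(check_required_label(good))
    print(check_required_label(bad))
```

Policy-as-code keeps such checks versioned and reviewable alongside the manifests they govern, which is what the "declarative threat models" responsibility above refers to.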

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Project Manager in the Production IT Digital Manufacturing team at EDAG Production Solutions India, you will have the opportunity to play a key role in the implementation, extension, and rollout of Manufacturing Operations Management/Manufacturing Execution System (MES/MOM) projects. Your responsibilities will include technical conceptual design, development, and implementation of global systems used in digital manufacturing, as well as collaboration on large agile IT projects within the Digital Manufacturing environment. To succeed in this role, you should have completed university studies in technical computer science or general computer science, or possess a comparable qualification. With 5-7 years of experience, you should be well-versed in the development of manufacturing operations management systems, designing IT solutions for the process industry, and requirements engineering. Knowledge of IATF standards, automotive or chemical industry experience, and familiarity with modern IT delivery processes and agile requirements engineering will be beneficial. Experience with cloud platforms, cloud-native technologies, technical management and rollout of MES/MOM projects, and integration and project management between automation levels, process control levels, MES, and SAP are desirable skills for this role. Your ability to work collaboratively in interdisciplinary projects and your networked end-to-end mindset will be crucial for success in this position. At EDAG Production Solutions India, we value diversity and believe that gender, age, nationality, and religion are not relevant factors. What matters most to us is your passion, expertise, and commitment to driving digital manufacturing forward. If you are ready for the next challenge in your career and possess the necessary qualifications and skills, we encourage you to apply by sending your documents via email, marked "Production IT Digital Manufacturing."
Join us in shaping the future of digital manufacturing and be part of a dynamic and multicultural team that is dedicated to creating innovative solutions and driving growth together.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Project Manager in Production IT Digital Manufacturing at EDAG Production Solutions India in Gurgaon, you will play a key role in the implementation, extension, and rollout of MES/MOM systems from conception to go-live. Your responsibilities will include technical conceptual design, development, and further enhancement of global systems used in digital manufacturing, as well as collaboration on large agile IT projects. You will also contribute to the strategic design of cloud-based system architecture and the development of cross-service concepts. To excel in this role, you should hold a university degree in technical computer science or a related field, along with 5-7 years of experience. Your experience in developing manufacturing execution/operations management systems, designing IT solutions for the process industry, and knowledge of IATF standards will be valuable. Proficiency in requirements engineering, agile methodologies, and cloud platforms such as Azure is essential. Additionally, familiarity with modern IT delivery processes like DevOps and technologies like Kubernetes and microservices architecture will be beneficial. Your role will involve technical management and rollout of MES/MOM systems, coordination between automation levels, process control, and SAP, and integration of the various systems. At EDAG, we value diversity and focus on your skills and expertise above all else. If you are seeking a challenging opportunity to drive digital manufacturing initiatives forward, we encourage you to apply by sending your documents via email, marked "Production IT Digital Manufacturing." Join us at EDAG Production Solutions India and be part of a dynamic team passionate about innovation and excellence.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

12 - 17 Lacs

Bengaluru

Work from Office

We are looking for a Sr. DevOps Engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/service mesh/Tetrate, Terraform, Helm charts, the Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role:
- Build and set up new development tools and infrastructure.
- Understand the needs of stakeholders and convey them to developers.
- Work on ways to automate and improve development and release processes.
- Test and examine code written by others and analyze results.
- Identify technical problems and develop software updates and fixes.
- Work with software developers and software engineers to ensure that development follows established processes and works as intended.
- Monitor the systems and set up the required tools.

Daily and Monthly Responsibilities:
- Deploy updates and fixes.
- Provide Level 3 technical support.
- Build tools to reduce occurrences of errors and improve customer experience.
- Develop software to integrate with internal back-end systems.
- Perform root cause analysis for production errors.
- Investigate and resolve technical issues.
- Develop scripts to automate visualization.
- Design procedures for system troubleshooting and maintenance.

Skills and Qualifications:
- B.Tech in Computer Science, Engineering, or a relevant field.
- A minimum of 5-8 years of experience as a DevOps Engineer or in a similar software engineering role.
- Proficiency with Git and Git workflows.
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS.
- A problem-solving attitude.
- A collaborative team spirit.

Posted 1 month ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity - VP of Engineering

What You'll Contribute:
- Secure the design of the next-generation FICO Platform, its capabilities, and services.
- Provide full-stack security architecture design from cloud infrastructure to application features for FICO customers.
- Work closely with product managers, architects, and developers on implementing the security controls within products.
- Develop and maintain Kyverno policies for enforcing security controls in Kubernetes environments.
- Collaborate with platform, DevOps, and application teams to define and implement policy-as-code best practices.
- Contribute to automation efforts for policy deployment, validation, and reporting.
- Stay current with emerging threats, Kubernetes security features, and cloud-native security tools.
- Define required controls and capabilities for the protection of FICO products and environments.
- Build and validate declarative threat models in a continuous and automated manner.
- Prepare the product for compliance attestations and ensure adherence to best security practices.

What We're Seeking:
- 10+ years of experience in architecture, security reviews, and requirement definition for complex product environments.
- Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper (optional but a plus).
- Familiarity with industry regulations, frameworks, and practices, for example PCI, ISO 27001, and NIST.
- Experience in threat modeling, code reviews, security testing, vulnerability detection, attacker exploit techniques, and methods for their remediation.
- Hands-on experience with programming languages such as Java and Python.
- Experience deploying services and securing cloud environments, preferably AWS.
- Experience deploying and securing containers, container orchestration, and mesh technologies (such as EKS, K8s, Istio).
- Experience with Crossplane to manage cloud infrastructure declaratively via Kubernetes.
- Certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable.
- Proficiency with CI/CD tools (e.g., GitHub Actions, GitLab CI, Jenkins, Crossplane).
- The ability to independently drive transformational security projects across teams and organizations.
- Experience with securing event streaming platforms like Kafka or Pulsar.
- Experience with ML/AI model security and adversarial techniques within the analytics domains.
- Hands-on experience with IaC (such as Terraform, CloudFormation, Helm) and with CI/CD pipelines (such as GitHub, Jenkins, JFrog).

Our Offer to You:
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO

At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today - Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
- Credit Scoring - FICO Scores are used by 90 of the top 100 US lenders.
- Fraud Detection and Security - 4 billion payment cards globally are protected by FICO fraud systems.
- Lending - 3/4 of US mortgages are approved using the FICO Score.

Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success is dependent on really talented people - just like you - who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer, and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy policy at

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Navi Mumbai

Work from Office

Job Summary:
We are looking for a skilled DevSecOps Engineer to join our team and help integrate security practices into our DevOps processes. You will work closely with development, operations, and security teams to build secure, scalable, and automated systems. Your role will be crucial in ensuring that security is embedded in every stage of the software development lifecycle.

Key Responsibilities:
- Collaborate with development, QA, and operations teams to integrate security best practices into CI/CD pipelines.
- Design, implement, and maintain automated security testing tools such as SAST, DAST, and vulnerability scanning.
- Manage and configure security tools for container security, cloud security, and infrastructure security.
- Conduct threat modeling, risk assessments, and security audits for applications and infrastructure.
- Monitor security alerts and respond to incidents, performing root cause analysis and remediation.
- Implement and manage identity and access management (IAM) policies.
- Maintain infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible, ensuring compliance with security standards.
- Stay updated on the latest security vulnerabilities, threats, and mitigation techniques.
- Educate and mentor development and operations teams on security best practices and secure coding principles.
- Participate in compliance audits and contribute to documentation of security policies and procedures.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience).
- Proven experience in DevSecOps, DevOps, or Security Engineering roles.
- Strong knowledge of CI/CD tools such as Jenkins, GitLab CI, CircleCI, or others.
- Hands-on experience with security tools: SAST (e.g., SonarQube, Checkmarx), DAST (e.g., OWASP ZAP, Burp Suite), and vulnerability scanners (e.g., Nessus, Qualys).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Experience with container security tools (e.g., Aqua Security, Twistlock, Clair).
- Deep understanding of cloud platforms (AWS, Azure, GCP) and their security features.
- Experience with infrastructure as code (Terraform, CloudFormation, Ansible).
- Familiarity with compliance frameworks such as PCI-DSS, SOC 2, ISO 27001, and GDPR.
- Strong problem-solving skills and the ability to work independently or within a team.
- Excellent communication and collaboration skills.
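To make the "automated security testing in CI/CD" responsibility concrete, here is a minimal sketch of a pipeline security gate. The report format (a list of findings with severity labels) is a hypothetical simplification; real scanners such as Trivy or Nessus emit richer JSON, but the gating logic is the same idea: fail the build (non-zero exit) when findings at or above a severity threshold are present.

```python
# Minimal CI security gate sketch, assuming a hypothetical scanner report
# format: a list of {"id": ..., "severity": ...} findings. The IDs below
# are placeholders, not real CVEs.

SEVERITY_RANK = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(findings: list, threshold: str = "HIGH"):
    """Return (exit_code, offending_ids); exit code 1 means block the build."""
    floor = SEVERITY_RANK[threshold]
    offending = [f["id"] for f in findings
                 if SEVERITY_RANK.get(f.get("severity", "LOW"), 1) >= floor]
    return (1 if offending else 0, offending)

if __name__ == "__main__":
    report = [
        {"id": "FINDING-0001", "severity": "CRITICAL"},
        {"id": "FINDING-0002", "severity": "MEDIUM"},
    ]
    code, ids = gate(report, threshold="HIGH")
    print(code, ids)  # 1 ['FINDING-0001']
```

In a pipeline, a script like this would parse the scanner's JSON output and call `sys.exit(code)`, causing the CI stage to fail whenever high-severity findings slip through.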

Posted 1 month ago

Apply

12.0 - 18.0 years

15 - 30 Lacs

Thane, Navi Mumbai

Work from Office

Job Summary:
We are looking for a highly experienced DevOps Architect with hands-on expertise in containerization, Kubernetes/OpenShift, and CI/CD pipeline design. The ideal candidate will have a deep understanding of microservices architecture and messaging queues, along with a proven track record of deploying scalable and resilient cloud-native solutions.

Key Responsibilities:
- Design and implement a robust DevOps architecture to support microservices-based solutions.
- Deploy, manage, and optimize workloads using OpenShift, Istio, and Helm.
- Develop and maintain CI/CD pipelines using GitLab.
- Integrate and manage microservices deployments, ensuring scalability and reliability.
- Implement monitoring and observability solutions using the Loki stack.
- Configure and manage messaging queue services as part of distributed systems.
- Implement automation scripts and Infrastructure-as-Code tools.
- Lead the design and optimization of cloud-native DevOps processes and tools.
- Provide guidance and leadership to engineering teams on best practices for automation, security, and scaling.

Must-Have Skills & Keywords: OpenShift, Istio, Helm, CI/CD, GitLab, microservices architecture, messaging queue (any), Loki stack
Good-to-Have Skills: Kafka, GemFire, ArgoCD

Qualifications & Experience:
- Bachelor's/Master's degree in Computer Science, IT, or a related field.
- 12-18 years of hands-on experience in DevOps, with a focus on Kubernetes/OpenShift.
- Proven expertise in automation, containerization, and continuous delivery.
- Experience working in agile and fast-paced environments.

Work Location: Mumbai
Notice Period: Immediate to 7 days (immediate joiners only)
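The Istio/Helm deployment work described above commonly involves weighted traffic splitting for canary rollouts. As an illustrative sketch (field names follow Istio's VirtualService weight model, but the helper and hostname are assumptions), the snippet below builds the two weighted route destinations you would render into YAML, e.g. via a Helm template:

```python
# Sketch of canary traffic shifting in the Istio VirtualService style:
# split traffic between a 'stable' and a 'canary' subset by weight.
# The helper and the example hostname are illustrative assumptions.

def canary_routes(host: str, canary_percent: int) -> list:
    """Return weighted route destinations; weights always sum to 100."""
    if not 0 <= canary_percent <= 100:
        raise ValueError("canary_percent must be between 0 and 100")
    return [
        {"destination": {"host": host, "subset": "stable"},
         "weight": 100 - canary_percent},
        {"destination": {"host": host, "subset": "canary"},
         "weight": canary_percent},
    ]

if __name__ == "__main__":
    routes = canary_routes("payments.svc.cluster.local", 10)
    print(routes)
```

A progressive rollout would regenerate this structure with increasing canary percentages (e.g. 10 → 50 → 100), applying each revision through the CI/CD pipeline rather than by hand.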

Posted 1 month ago

Apply