
2805 Helm Jobs - Page 8

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 - 9.0 years

5 - 5 Lacs

Bengaluru

Work from Office

Role Proficiency: Acts under DevOps guidance; leads more than one Agile team.

Outcomes:
- Interprets the DevOps tool/feature/component design to develop and support it in accordance with specifications
- Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts
- Codes, debugs, tests, documents, and communicates DevOps development stages and the status of develop/support issues
- Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Optimises the efficiency, cost, and quality of DevOps processes, tools, and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs
- Designs, installs, and troubleshoots CI/CD pipelines and software
- Automates infrastructure provisioning on cloud/on-premises with the guidance of architects
- Provides guidance to DevOps engineers so that they can support existing components
- Good understanding of Agile methodologies; able to work with diverse teams
- Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source)

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of the SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA/KPI for onboarding projects or applications
- Stakeholder management
- Percentage achievement of specification/completeness/on-time delivery

Outputs Expected:
- Automated components: deliver components that automate installation/configuration of software and tools on premises and in the cloud, and components that automate parts of the build/deploy for applications
- Configured components: configure tools and the automation framework into the overall DevOps design
- Scripts: develop/support scripts (PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks
- Training/SOPs: create training plans/SOPs to help DevOps engineers with DevOps activities and in onboarding users
- Measure process efficiency/effectiveness: deployment frequency, innovation, and technology changes.
Operations:
- Change lead time/volume
- Failed deployments
- Defect volume and escape rate
- Mean time to detection and recovery

Skill Examples:
- Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
- Integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell)
- Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
- Repository management/migration automation: Git, Bitbucket, GitHub, ClearCase
- Build automation scripts: Maven, Ant
- Artefact repository management: Nexus/Artifactory
- Dashboard management and automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS, Azure, Google)
- Migration of applications from on-premises to cloud infrastructures
- Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration); strong debugging skills in C# and .NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Skilled in containerization tools like Docker and Kubernetes

Knowledge Examples:
- Installation/config/build/deploy processes and tools
- IaaS cloud providers (AWS, Azure, Google, etc.) and their tool sets
- The application development lifecycle
- Quality assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build and release; branching/merging
- Containerization
- Agile methodologies
- Software security compliance (GDPR/OWASP) and tools (Black Duck/Veracode/Checkmarx)

Additional Comments:
Must Have
- DevOps: Git, Jenkins Pipeline, test automation
- Orchestration: Docker, Kubernetes
- Scripting: Python, Linux shell
- OS: Linux (in-depth), Windows
- Networking: network configuration and debugging
- Cloud: AWS/Azure
- Cloud: Azure/OpenShift, public cloud management
- Security practices: knowledge of critical cyber security controls
Would Be Nice
- Cloud: M365, AWS, etc.
- Orchestration: Terraform, Cloudify, other IaC
- SRE: Flux, Grafana, Splunk
- Agile: Jira
- Products: BigID

Professional Skills
- Experience working within Agile teams
- Experience in product integration, e.g., taking open-source products/tools and deploying/integrating them into an enterprise environment using DevOps methodologies
- Knowledge of IT Service Management (ITIL)
- Ability to quickly learn and understand proprietary technologies in a complex regulated environment
- Self-starter with proven, demonstrable experience in technical and application support of enterprise systems
- Excellent verbal and written communication skills coupled with a collaborative approach
- An automation/orchestration mindset: enable the product squads to spend more time coding and less time on manual processes

Attributes
- Passionate about automation, DevOps, and SRE
- Comfortable with frequent, incremental code testing and deployment
- Takes a hands-on approach to implementing DevOps processes, from requirements analysis through test design, automation, and analysis
- Owns the quality and timeliness of delivery
- Communicates key issues and progress updates in a regular, accurate, timely fashion

Additional Notes:
- 6-12 years; one position (at lead level, able to guide juniors); more than 12 years is not needed
- Docker and image handling work experience (3 months - 2 years hands-on)
- Kubernetes is a must: very strong fundamentals, basics, and real hands-on experience
- Native Kubernetes concepts: AKS, AWS, on-premise, Red Hat OpenShift - any is fine
- Linux and shell scripting: moderate-level experience required
- Observability: Grafana, Elasticsearch (either is good); minimum experience needed
- Automated pipelines (Jenkins to deploy, real hands-on) required
- Should be aware of basic networking concepts
- Awareness of certifications is needed, though certification is not mandatory (for Kubernetes and cloud)
- Helm/Helm chart experience is preferred (see the deployment sketch after this listing)
- Orchestration-wise, Kubernetes is needed and a must
- Deployment of microservices: hands-on
- Okay with any cloud (AWS/Azure)
- SSL concepts; moderate security knowledge
- ITSM concepts (incident, problem, change, and release) desirable

Required Skills: Kubernetes, DevOps, Docker
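For candidates gauging the hands-on bar here, a minimal Python sketch of the kind of deployment automation this listing describes: an idempotent Helm install-or-upgrade step such as a Jenkins pipeline might call. It assumes the helm CLI is on PATH and a kubeconfig is already configured; the release, chart, and values names are placeholders.

```python
import subprocess

def deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    """Install the release if absent, upgrade it if present, and wait for rollout."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "--values", values_file,
            "--wait", "--timeout", "5m",
        ],
        check=True,  # raises CalledProcessError if the rollout fails
    )

if __name__ == "__main__":
    # All names below are placeholders for a real chart and environment.
    deploy("payments-api", "./charts/payments-api", "staging", "values-staging.yaml")
```

Because `helm upgrade --install` is idempotent, the same step serves first deploys and updates alike, which is what makes it suitable as a pipeline stage.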

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Job Requirements
Job Description:
- Development experience in Java and Node.js/TypeScript.
- Must have hands-on experience in Kubernetes or Helm.
- Basic knowledge of AWS cloud infrastructure and services (optional).
- Develop and maintain CI/CD pipelines using tools like Jenkins.
- Automate deployment and configuration management using tools such as Terraform and Ansible.
- Monitor system performance and troubleshoot issues to ensure high availability and reliability (see the readiness-check sketch after this listing).
- Collaborate with development teams to integrate DevOps practices into the software development lifecycle.
- Implement security best practices for cloud environments.
- Maintain documentation of systems, processes, and procedures.
- Stay updated on industry trends and emerging technologies in DevOps and cloud computing.

Work Experience - Required Skills and Experience:
- Strong development background in Java and Node.js or TypeScript.
- Must have hands-on experience in Kubernetes or Helm.
- Expertise in cloud technologies like AWS (optional).
- Expertise in shell scripting (optional).
- Expertise in developing and managing pipelines.
- Expertise in source code management using Git/Bitbucket.
- Experience with container technologies like Docker.
- Experience in build/release management.
- Good understanding of Agile processes.
- Cloud platforms: extensive experience on the AWS platform - EC2, EKS, ECS, S3, RDS, IAM, Kubernetes, Helm (desirable).
- Monitoring tools: knowledge of monitoring and observability tools such as Grafana (optional).
- Automation: should be comfortable with infrastructure-as-code (e.g., Terraform, Ansible) (optional).
- Problem-solving: strong analytical skills to troubleshoot complex issues.
- Deployment know-how: CI/CD, pod management, SonarQube, Git, etc.
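As a small illustration of the monitoring duty above, a sketch using the official Kubernetes Python client to verify that a Helm-deployed workload is fully rolled out; the deployment name and namespace are placeholders.

```python
from kubernetes import client, config

def deployment_ready(name: str, namespace: str) -> bool:
    """True when every desired replica of the Deployment reports ready."""
    config.load_kube_config()  # use config.load_incluster_config() when run in-cluster
    apps = client.AppsV1Api()
    dep = apps.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    return desired > 0 and ready == desired

if __name__ == "__main__":
    # Placeholder workload; point at a real Deployment to use this.
    print(deployment_ready("orders-service", "staging"))
```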

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Project Role: Engineering Services Practitioner
Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management.
Must have skills: 5G Wireless Networks & Technologies
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Job Title: 5G Core Network Ops Senior Engineer

Summary: We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.

Roles and Responsibilities:
1. Fault Handling & Troubleshooting:
- Provide Level 2 (L2) support for 5G Core SA network functions in a production environment.
- Nokia EDR operations and support: monitor and maintain the health of Nokia EDR systems.
- Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery.
- Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.).
- Validate EDR formats and schemas against 3GPP and Nokia specifications.
- NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform; manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments.
- Analyze alarms from NetAct/Mantaray or external monitoring tools; correlate events using Netscout, Mantaray, and PM/CM data.
- Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPsec, and handover.
- Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload).
- Perform packet tracing (Wireshark) or core traces (PCAP, logs), plus Nokia PCMD trace capturing and analysis.
- Perform root cause analysis (RCA) and implement corrective actions.
- Handle escalations from Tier-1 support and provide timely resolution.
2. Automation & Orchestration:
- Automate deployment, scaling, healing, and termination of network functions using NCOM.
- Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD).
- Integrate NCOM with third-party systems using open APIs and custom plugins.
3. Session & Service Investigation:
- Trace subscriber issues (5G attach, PDU session, QoS).
- Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM).
- Correlate user-plane drops, abnormal releases, and bearer QoS mismatches.
- Work on preventive measures with the L1 team for health checks and backups.
4. Configuration and Change Management:
- Create MOPs for required changes; validate MOPs with Ops teams and stakeholders before rollout/implementation.
- Maintain detailed documentation of network configurations, incident reports, and operational procedures.
- Support software upgrades, patch management, and configuration changes.
- Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs).
- Audit NRF/PCF/UDM etc. configuration and databases.
- Validate policy rules, slicing parameters, and DNN/APN settings.
- Support integration of new 5G Core nodes and features into the live network.
5. Performance Monitoring:
- Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization (see the KPI query sketch after this listing).
- Proactively detect degrading KPI trends.
6. Security & Access Support:
- Application support for Nokia EDR and CrowdStrike.
- Assist with certificate renewal, firewall/NAT issues, and access failures.
7. Escalation & Coordination:
- Escalate unresolved issues to L3 teams, Nokia TAC, and OSS/Core engineering.
- Work with L3 and care teams for issue resolution.
- Ensure compliance with SLAs and contribute to continuous service improvement.
8. Reporting:
- Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance.

Technical Experience and Professional Attributes:
- 5–9 years of experience in the telecom industry with hands-on experience.
- Mandatory experience with the Nokia 5G Core SA platform.
- Hands-on experience with Nokia EDR operations and support: monitoring and maintaining the health of Nokia EDR systems; log analysis and troubleshooting of issues related to EDR generation, parsing, and delivery.
- Experience in NCOM platform operations: operating and maintaining the Nokia Cloud Operations Manager (NCOM) platform; NF deployment and troubleshooting experience covering deployment, scaling, healing, and termination of network functions using NCOM.
- Solid understanding of 5G packet core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues.
- Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning, and Flowone.
- In-depth understanding of 3GPP call flows for 5G SA and 5G NSA; call routing, number analysis, system configuration, call flow, data roaming, and configuration; knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI.
- Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based).
- Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces.
- Strong analytical and troubleshooting skills.
- Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana).
- Knowledge of network protocols and security (TLS, IPsec).
- Excellent communication and documentation skills.

Educational Qualification: BE/BTech; 15 years full-time education

Additional Information:
- Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes).
- Experience with the Nokia 5G Core platform, NCOM, NCS, Nokia private cloud and public cloud (AWS preferred), and cloud-native environments (Kubernetes, Docker, CI/CD pipelines).
- Cloud certifications (AWS) / experience on AWS Cloud.
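The KPI-monitoring duties above usually reduce to querying a metrics backend and alerting on thresholds. A sketch against the standard Prometheus HTTP API; the metric names are illustrative, not actual Nokia/NetAct counters.

```python
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder address

def registration_success_rate(window: str = "5m") -> float:
    """Query Prometheus for a registration-success-rate KPI over a window."""
    # Metric names below are hypothetical stand-ins for real 5GC counters.
    query = (
        f"sum(rate(amf_registration_success_total[{window}])) / "
        f"sum(rate(amf_registration_attempts_total[{window}]))"
    )
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    rate = registration_success_rate()
    if rate < 0.99:  # example SLO threshold, not a standards value
        print(f"ALERT: registration success rate degraded: {rate:.4f}")
```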

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

#Work from office in Gurgaon #Immediate joiners only

We are seeking a skilled DevSecOps Engineer with 3–5 years of hands-on experience to join our growing team. The ideal candidate will be responsible for embedding security into every phase of the development lifecycle, automating infrastructure, and ensuring observability and performance across cloud-native environments.

Key Responsibilities:
- Security Integration: Integrate security controls into CI/CD pipelines using tools like Jenkins to enable secure delivery of applications.
- Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform, Ansible, or similar tools.
- Monitoring & Observability: Deploy and manage monitoring and logging tools like Prometheus, Grafana, CloudWatch, and Azure Application Insights.
- Containerization & Orchestration: Build and manage containerized applications using Docker and Kubernetes, including Helm chart creation (see the values-lint sketch after this listing).
- Scripting & Automation: Write automation scripts using Bash, Shell, or similar to streamline operational tasks.
- Security Audits & Compliance: Perform regular audits and assessments to ensure systems meet internal and external security standards.
- Collaboration & Knowledge Sharing: Work closely with development and operations teams to advocate secure coding practices and support incident response readiness.
- Telemetry & Dashboards: Configure telemetry in Azure for diagnostics and usage insights, build proactive dashboards, and create alerts to detect anomalies and bottlenecks.

Qualifications & Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3–5 years of experience in DevSecOps, DevOps, or cloud infrastructure roles.
- Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions).
- Hands-on expertise in Terraform, Ansible, or other IaC tools.
- Proficiency in Docker, Kubernetes, and Helm.
- Familiarity with monitoring tools such as Prometheus, Grafana, and Azure Application Insights.
- Solid understanding of security frameworks and compliance standards.
- Excellent scripting skills in Bash/Shell.
- Good communication and cross-functional collaboration skills.
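For a taste of the "security checks in the pipeline" idea, a sketch that lints a Helm values file for common Kubernetes securityContext hardening flags before deploy. The expected values layout is an assumption about a typical chart, not a universal schema; requires PyYAML.

```python
import sys
import yaml  # PyYAML

# These keys mirror standard Kubernetes securityContext settings; the values
# file layout is assumed, since every chart defines its own schema.
REQUIRED = {
    ("securityContext", "runAsNonRoot"): True,
    ("securityContext", "readOnlyRootFilesystem"): True,
    ("securityContext", "allowPrivilegeEscalation"): False,
}

def lint(values_path: str) -> list:
    with open(values_path) as fh:
        values = yaml.safe_load(fh) or {}
    problems = []
    for (section, key), expected in REQUIRED.items():
        actual = values.get(section, {}).get(key)
        if actual != expected:
            problems.append(f"{section}.{key} should be {expected}, found {actual}")
    return problems

if __name__ == "__main__":
    issues = lint(sys.argv[1])  # e.g. python lint_values.py values-prod.yaml
    for issue in issues:
        print(f"FAIL: {issue}")
    sys.exit(1 if issues else 0)
```

Wired into a Jenkins stage, a non-zero exit blocks the deploy, which is the essence of shifting security left.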

Posted 1 week ago

Apply

9.0 - 14.0 years

6 - 9 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate about working on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions.

About the role:
- Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
- Mentor and lead junior engineers, and drive the team to meet and beat expected outcomes.
- Continuously look for optimizations in automation cycles and come up with solutions for gap areas.
- Work closely with developers/architects and set the quality bar for the team.
- Identify critical issues and communicate them effectively in a timely manner.
- Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is a must. Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixing, is an essential trait for this job.
- Automate using popular frameworks suitable for backend code, APIs, and frontends; hands-on experience with automation programming languages (Python, Go, Java, etc.) is a must (see the API-test sketch after this listing).
- Execute, monitor, and debug automation runs.
- Be willing to explore and deepen understanding of cloud/on-prem infrastructure.

About you:
- 9-15 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
- Ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality.
- Solid fundamentals in any programming language (preferably Python or Go).
- Sound knowledge of popular automation frameworks such as Selenium, Playwright, Postman, PyTest, etc.
- Hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must.
- RESTful API testing using tools such as Postman or similar is a must.
- Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is an added advantage.
- Exposure to Kubernetes, Docker, Helm, and GitOps is a must.
- Strong foundational knowledge of working on Linux-based systems.
- Hands-on experience with non-functional testing, such as performance and load, is desirable.
- Some proficiency with Prometheus, Grafana, service metrics, and analysis is highly desirable.
- Understanding of cyber security concepts would be helpful.

Company Benefits and Perks:
We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours, and family-friendly benefits to all of our employees.
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
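The RESTful API testing requirement typically looks something like the PyTest sketch below, using requests; the base URL and endpoints are hypothetical, not a real Trellix API.

```python
import requests

BASE_URL = "https://api.example.test"  # placeholder; point at the service under test

def test_create_and_fetch_alert():
    """Round-trip check: create a resource, then read it back."""
    payload = {"severity": "high", "source": "edr-sensor-01"}  # illustrative fields
    created = requests.post(f"{BASE_URL}/v1/alerts", json=payload, timeout=10)
    assert created.status_code == 201

    alert_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/v1/alerts/{alert_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["severity"] == "high"
```

Run with `pytest`; the same pattern drops straight into a TeamCity or Jenkins stage.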

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Lead Software Engineer

Overview
We are looking for a Lead Software Engineer to join an award-winning team with a proven track record of combining data science techniques with an intimate knowledge of payments data to aid financial institutions in their fight against money laundering and fraud. We craft bespoke services that help our clients gain an understanding of the underlying criminal behaviour that drives financial crime, empowering them to take action. As part of the application development team, your role will focus on creating and maintaining products across the whole lifecycle.

Role
- Establish and enforce best practices for microservices architecture, ensuring scalability, reliability, and maintainability of our solutions.
- Collaborate with cross-functional teams to define project requirements and deliver scalable solutions.
- Mentor team members on microservices design principles, patterns, and technologies.
- Take personal responsibility for creating and maintaining microservices, primarily in Golang.
- Iterate design and build to solve bugs, improve performance, and add new features.
- Containerise your services and make them ready for deployment onto a k8s environment using Helm charts.
- Ensure resilience and reliability of services.
- Develop a complete understanding of the end-to-end technical architecture and dependency systems, and apply that understanding in code.
- Write tests with high coverage, including unit, contract, e2e, and integration tests.
- Version-control code with git, and build, test, and deploy using CI/CD pipelines.
- Build and test remotely on your own machine and deploy to low-level environments.
- Review team members' code, identifying errors and improving performance and readability.
- Drive code design and process trade-off discussions within the team when required.
- Report status and manage risks within your primary application/service.
- Perform demos and join acceptance discussions with analysts, developers, and product owners.
- Assist in task planning and review as part of a sprint-based workflow.
- Estimate and own delivery tasks (design, dev, test, deployment, configuration, documentation) to meet the business requirements.

The role is hybrid, and the expectation is that you attend the office according to Mastercard policy.

All About You
First and foremost, you enjoy building products to solve real, pressing problems for your customers. You enjoy working in a team and have an interest in data science and how advanced algorithms may be deployed as product offerings. You are detail-oriented and enjoy writing and reviewing code to a high standard, with tests to prove it. You have demonstrable ability to write Golang, Python and SQL in a production context, and you are happy to learn new programming languages and frameworks as necessary. You have experience with large volumes of data and high-throughput, low-latency solutions.

You have experience with, and are interested in, contemporary approaches to service design, including the use of containers and container orchestration technologies, streaming data platforms, APIs, and in-memory/NoSQL stores. You have experience evaluating different solutions and approaches to problems and can choose between pragmatic and rigorous solutions depending on the situation. You are comfortable working in a devops-based software development workflow, including building, testing, and continuous integration/deployment, and you are happy to evolve along with the development process and contribute to its success. You are comfortable communicating with a range of stakeholders, including subject matter experts, data scientists, software engineers, devops and security professionals. You have the ability to engage with best practices for code review, version control, and change control, balancing the need for a quality codebase with the unique and particular demands of scale-up stage software engineering. You have experience optimising solution performance with a constrained set of technologies, and you have experience with, or are keen to engage with, productionising machine learning technologies.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization, and it is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

ClockHash Technologies is looking for an experienced Senior Backend Developer with strong expertise in Python or Node.js. You will be part of a dedicated R&D team from France, working on cutting-edge solutions that drive innovation in network management systems. Our dynamic team thrives on collaboration, autonomy, and continuous growth.

Education: Bachelor's degree in a relevant field (ICT, Computer Engineering, Computer Science, or Information Systems preferred).
Experience: Minimum 5+ years working with modern web GUI technologies based on Python, Node.js, or MVC frameworks.
Work Location: Bangalore
Work mode: Hybrid, 2 days per week in the office

Preferred Skills:
- Primary Technologies: Strong expertise in Python or Node.js, with a deep understanding of backend development, API design, and system architecture (see the API sketch after this listing).
- Microservices & Cloud: Hands-on experience with microservices architecture, container-based deployments, and RESTful APIs.
- Deployment & Orchestration: Proficiency in using Helm for Kubernetes deployments.
- Operating Systems: Strong knowledge of Linux concepts.
- Database: Experience with MySQL databases.
- Soft Skills: Autonomous, proactive, and curious personality; strong communication and collaboration skills.
- Language: Fluency in English, both oral and written.

Key Responsibilities:
- Design, develop, and maintain web applications for large-scale data handling.
- Ensure application performance, security, and reliability.
- Develop and deploy microservices-based applications using containerization technologies.
- Ensure proper deployment of container-based applications with Docker Swarm or Kubernetes, providing the necessary artifacts and documentation, and manage Kubernetes deployments using Helm.
- Work with RESTful APIs for seamless system integrations.
- Maintain and optimize MySQL database solutions.
- Participate in Agile processes, including sprint planning, code reviews, and daily stand-ups.
- Troubleshoot, debug, and enhance existing applications.

Nice to Have:
- Experience with modern web UI frameworks and libraries.
- Familiarity with Laravel and other MVC frameworks for structured web development.
- Exposure to CI/CD pipelines and DevOps practices.
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of message queue systems like RabbitMQ or Kafka.
- Knowledge of front-end technologies such as React or Vue.js.
- Networking: Familiarity with networking technologies is appreciated.

What We Offer:
- Friendly environment with good work-life balance.
- Opportunity to grow and visibility for your work.
- Health insurance.
- Work-from-home support (covering internet bill, gym, or recreational activity costs).
- Educational allowances (certification fee reimbursement).
- Rich engagement culture with regular team events.

ClockHash Technologies is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, age, marital status, disability, or status as a protected veteran.

Please note: The initial screening call will be conducted by our AI assistant.
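Since the posting leaves the backend stack open (Python or Node.js), here is a minimal Python sketch of the kind of REST endpoint work it describes, using FastAPI as one plausible framework choice; the service name, routes, and data are illustrative only.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="device-inventory")  # service name is a placeholder

# In-memory stand-in for the MySQL-backed store the posting mentions.
DEVICES = {"r1": {"id": "r1", "model": "edge-router", "status": "up"}}

@app.get("/health")
def health() -> dict:
    """Liveness probe endpoint for Kubernetes/Helm deployments."""
    return {"status": "ok"}

@app.get("/devices/{device_id}")
def get_device(device_id: str) -> dict:
    device = DEVICES.get(device_id)
    if device is None:
        raise HTTPException(status_code=404, detail="device not found")
    return device
```

Run locally with `uvicorn main:app`; the same container image would then be shipped via the Helm workflow the posting describes.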

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Surat, Gujarat

On-site

As a Senior Full Stack Developer at Sakrat, you will be an integral part of our team, contributing your expertise to both the frontend and backend development of robust and scalable platforms. Your responsibilities will include working with React/TypeScript on the frontend and Python/FastAPI/Celery on the backend, ensuring seamless integration between the two layers. You will play a key role in developing the visual editor using react-flow, enhancing the backend logic with Celery for asynchronous job queues (see the sketch after this listing), and optimizing REST APIs for efficient performance. Additionally, you will be involved in containerizing the system for streamlined multi-user deployments using Docker and Kubernetes.

For the backend aspect of the role, you should have strong proficiency in Python (3.10+), hands-on experience building REST APIs with FastAPI/Uvicorn, and familiarity with tools like Celery, Redis, and RabbitMQ for job queues and worker scaling. Your database skills should encompass PostgreSQL, SQLite, and ORMs like SQLAlchemy or Tortoise. Knowledge of Docker, Docker Compose, and container-based deployments, along with exposure to Kubernetes or Helm, will be advantageous. Moreover, you should be well-versed in environment-based configuration, .env patterns, and secure secrets management.

On the frontend side, you should demonstrate expertise in React (functional components, hooks) and TypeScript, with experience using react-flow to build visual editors. A good understanding of frontend build tools like npm and webpack, and of CSS frameworks such as Tailwind CSS, is essential. You should also have the ability to create responsive, accessible, and dynamic UI components that contribute to an intuitive user experience.

In addition to your technical skills, you will be expected to architect clean frontend-backend integrations, deploy full-stack applications in production environments, and understand CI/CD pipelines, versioning, and testing. Experience with multi-user architecture, session handling, and security best practices will be beneficial in this role.

If you possess bonus skills such as familiarity with LangChain, RAG, or agent-based LLM pipelines, have contributed to open-source projects, or have prior experience with flow-based editors or chat widgets, it will be considered a plus.

At Sakrat, we are a product engineering and digital transformation partner focused on building high-performance software systems for startups, scaleups, and enterprises. You will collaborate closely with founders, CTOs, and product leaders to deliver clean MVPs, modernize legacy platforms, and optimize cloud infrastructure. Our projects are led by experts and senior engineers with extensive experience in platform development, SaaS, AI, and enterprise systems. We prioritize secure, scalable, and well-documented systems that avoid technical debt by following clean architecture, agile practices, and automated pipelines.
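A minimal sketch of the Celery-backed job queue this role mentions, with Redis as one of the brokers named in the posting; the task body, app name, and broker URLs are placeholders.

```python
from celery import Celery

# Broker/backend URLs are illustrative; RabbitMQ would work equally well here.
app = Celery(
    "sakrat",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def execute_flow(self, flow_id: str) -> dict:
    """Run one visual-editor flow asynchronously; retry on transient failure."""
    try:
        # Placeholder: load the react-flow graph from the DB and run each node.
        return {"flow_id": flow_id, "status": "completed"}
    except ConnectionError as exc:
        raise self.retry(exc=exc)

# From a FastAPI handler, enqueue without blocking the request:
#   execute_flow.delay("flow-123")
```

The `.delay()` call returns immediately with an AsyncResult, which is what keeps the REST API responsive while workers scale independently.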

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior DevOps Engineer (SRE2)
Location: Gurugram
Experience: 3+ Years

About HaaNaa
HaaNaa is a skill-based opinion trading platform that lets users trade their opinions on diverse topics using simple Yes/No choices. From politics, crypto, and finance to sports, entertainment, and current affairs, HaaNaa transforms opinions into assets. With a gamified interface, users get rewarded for informed predictions while tracking real-time trends, analyzing insights, and engaging with a vibrant community.

Role Overview
We are looking for a Senior DevOps Engineer (SRE2) to lead and scale our infrastructure as we grow our real-time trading platform. This role demands a mix of hands-on DevOps skills and strong ownership of system reliability, scalability, and observability.

Key Responsibilities
- Design, deploy, and manage scalable, secure, and resilient infrastructure on AWS, focusing on EKS (Elastic Kubernetes Service) for container orchestration.
- Implement and manage a service mesh using Istio, enabling traffic control, observability, and security across microservices.
- Drive Infrastructure-as-Code (IaC) using Terraform for consistent and repeatable provisioning of cloud resources.
- Build and maintain robust CI/CD pipelines (GitHub Actions, Jenkins, or CircleCI) to ensure efficient and automated delivery workflows.
- Ensure high system availability, performance, and reliability, taking ownership of SLIs/SLOs/SLAs, alerts, and dashboards (see the alarm sketch after this listing).
- Implement observability practices using tools like Prometheus, Grafana, ELK/EFK, or OpenTelemetry.
- Manage incident response and root cause analysis (RCA), and drive a postmortem culture.
- Collaborate with cross-functional teams (engineering, QA, product) to ensure DevOps and SRE best practices are followed.
- Harden the platform against security threats (including DDoS) using Cloudflare, Akamai, or equivalent.
- Automate repetitive tasks using scripting (Python, Bash) and tools like Ansible.
- Contribute to platform cost optimization, auto-scaling, and multi-region failover strategies.

Requirements
- 3+ years of hands-on DevOps/SRE experience, including team mentorship or leadership.
- Proven expertise in managing AWS cloud-native architecture, especially EKS, IAM, VPC, ALB/NLB, S3, RDS, and CloudWatch.
- Hands-on with Istio for service mesh and microservice observability/security.
- Deep experience with Terraform for managing cloud infrastructure.
- Proficiency in CI/CD and automation tools (GitHub Actions, Jenkins, CircleCI, Ansible).
- Strong scripting skills in Python, Bash, or equivalent.
- Familiarity with Kubernetes administration, Helm charts, and container orchestration.
- Strong understanding of monitoring, alerting, and logging systems.
- Experience handling DDoS mitigation, WAF rules, and CDN configuration.
- Excellent problem-solving and incident management skills with a proactive mindset.
- Strong collaboration and communication skills.

Nice to Have
- Experience in high-growth startups or gaming platforms.
- Understanding of security best practices, IAM policies, and compliance frameworks (SOC 2, ISO, etc.).
- Experience in backend performance tuning, horizontal scaling, and chaos engineering.
- Familiarity with progressive delivery techniques such as canary deployments or blue/green strategies.

Why Join HaaNaa?
- Ownership: Play a key role in shaping the platform's infrastructure and reliability.
- Innovation: Work on scalable, low-latency systems powering real-time gamified trading.
- Teamwork: Join a dynamic, talented team solving complex engineering challenges.
- Growth: Be part of a rapidly expanding company with leadership growth opportunities.
- Perks & Benefits: Competitive salary, health insurance, and the freedom to experiment with the latest cloud-native tools.

Skills: DevOps, Terraform, CI/CD, CloudFormation, Go, networking, Datadog, AWS, Grafana, SRE, Kubernetes, Azure, security, Prometheus, Infrastructure-as-Code, GCP, Bash, Docker, Python, Linux system administration, ELK stack
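As one concrete example of "ownership of SLIs/SLOs/SLAs, alerts", a sketch that codifies a latency alarm with boto3 against CloudWatch; the alarm name, load balancer dimension, threshold, and SNS topic ARN are all placeholders for a real SLO.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Names and values below are illustrative; a real SLO would target the
# platform's own latency metric and paging topic.
cloudwatch.put_metric_alarm(
    AlarmName="api-p99-latency-slo",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-alb/placeholder"}],
    ExtendedStatistic="p99",
    Period=60,                 # evaluate every minute
    EvaluationPeriods=5,       # 5 consecutive breaches before alarming
    Threshold=0.5,             # seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall"],  # placeholder ARN
)
```

Defining alarms in code (here, or equivalently in Terraform) keeps the alerting surface reviewable and repeatable across environments.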

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Your journey at Crowe starts here:
At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you're trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for career growth and leadership. Over our 80-year history, delivering excellent service through innovation has been a core part of our DNA across our audit, tax, and consulting groups. That's why we continuously invest in innovative ideas, such as AI-enabled insights and technology-powered solutions, to enhance our services. Join us at Crowe and embark on a career where you can help shape the future of our industry.

About the Role:
As an ML Release Engineer, you will be responsible for managing the release process for machine learning solutions, ensuring that updates are deployed seamlessly to both test and production environments. Your role will focus on automating processes, improving deployment methodologies, and ensuring compliance with security and regulatory standards.
- Own and manage release checklists for deploying version updates of ML solutions, ensuring compliance with SOC standards and conducting thorough security checks.
- Deploy updated model versions through CI/CD pipelines in a GitLab environment, ensuring smooth transitions and minimal downtime (see the pipeline-trigger sketch after this listing).
- Manage documentation for the Change Review Board (CRB) and represent the Applied AI and Machine Learning team at CRB meetings to ensure visibility, alignment, and approval for releases.
- Oversee CI/CD pipelines and the deployment process, identifying opportunities for automation and process improvements to enhance efficiency and reliability.
- Collaborate with partner teams to coordinate release timing and manage dependencies, ensuring effective communication and synchronization across projects.

Required Skills:
- Proficiency in managing containers and understanding containerization as it relates to deployment processes (Kubernetes, Helm, Docker).
- Strong knowledge of compliance requirements and experience implementing compliance checks within the release process.
- Experience with build tooling, including Git and package management systems, to manage version control and dependencies.
- Experience working in GitHub or a similar development platform (we use GitLab).

Preferred Skills:
- Experience with automation tools and scripting to streamline deployment processes.
- Solid communication skills, capable of effectively coordinating with multiple teams and stakeholders.
- Proactive problem-solving attitude, with a focus on continuous improvement and innovation in release management practices.
- You enjoy machine learning and have working knowledge of common machine learning models beyond ChatGPT.

We expect the candidate to uphold Crowe's values of Care, Trust, Courage, and Stewardship. These values define who we are. We expect all of our people to act ethically and with integrity at all times.

Our Benefits:
At Crowe, we know that great people are what makes a great firm. We value our people and offer employees a comprehensive benefits package. Learn more about what working at Crowe can mean for you!

How You Can Grow:
We will nurture your talent in an inclusive culture that values diversity. You will have the chance to meet on a consistent basis with your Career Coach that will guide you in your career goals and aspirations. Learn more about where talent can prosper!
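The GitLab deployment duty above often boils down to triggering a release pipeline programmatically. A sketch using the python-gitlab library; the server URL, token, project path, and variable names are placeholders.

```python
import gitlab  # python-gitlab

# URL, token, and project path are placeholders for a real GitLab instance.
gl = gitlab.Gitlab("https://gitlab.example.com", private_token="REDACTED")
project = gl.projects.get("ml-platform/model-serving")

# Kick off the release pipeline for a tagged model version, passing the
# version as a CI variable the pipeline can act on.
pipeline = project.pipelines.create({
    "ref": "main",
    "variables": [{"key": "MODEL_VERSION", "value": "2.4.1"}],
})
print(pipeline.id, pipeline.status)
```

Wrapping this in a release script lets the checklist (approvals, SOC checks) gate the trigger rather than a manual button press.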
More about Crowe:
Crowe Horwath IT Services Private Ltd. is a wholly owned subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting and technology firm with offices around the world. Crowe LLP is an independent member firm of Crowe Global, one of the largest global accounting networks in the world. The network consists of more than 200 independent accounting and advisory firms in more than 130 countries around the world.

Crowe does not accept unsolicited candidates, referrals or resumes from any staffing agency, recruiting service, sourcing entity or any other third-party paid service at any time. Any referrals, resumes or candidates submitted to Crowe, or any employee or owner of Crowe without a pre-existing agreement signed by both parties covering the submission will be considered the property of Crowe, and free of charge.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

As a Senior Infrastructure Specialist in the IT department, you will be responsible for leading and managing scalable infrastructure and container-based environments. Your focus will be on Kubernetes orchestration, automation, and ensuring the security, reliability, and efficiency of platform services. Your role is crucial in modernizing infrastructure systems through DevOps practices and promoting the adoption of containerization and cloud-native technologies within the organization.

Your key responsibilities will include designing, automating, and maintaining CI/CD pipelines, managing hypervisor templates, scaling container platforms, administering Kubernetes clusters, and enhancing solutions related to container platforms, edge computing, and virtualization. You will lead the transition from VMware OVAs to a Kubernetes-based virtualization architecture and prioritize platform automation using Infrastructure as Code to minimize manual tasks. Ensuring security hardening and compliance for all infrastructure components is also a key aspect of your role, along with collaborating closely with development, DevOps, and security teams to drive container adoption and lifecycle management.

To be successful in this role, you should have at least 8 years of infrastructure engineering experience, deep expertise in Kubernetes architecture, strong Linux systems administration skills, proficiency in cloud platforms like AWS, Azure, and GCP, hands-on experience with Infrastructure as Code tools such as Terraform and Ansible, and familiarity with CI/CD tools like GitLab, Jenkins, and ArgoCD.

Key skills required include Kubernetes management, containerization, cloud-native infrastructure, Linux system engineering, Infrastructure as Code, DevOps, automation tools, and security and compliance in container platforms. Soft skills such as a proactive and solution-oriented mindset, strong communication and collaboration abilities, analytical thinking, and time management are also essential for this role.

Preferred qualifications include CKA/CKAD certification, cloud certifications, experience with container security and compliance tools, exposure to GitOps tools, and monitoring and alerting experience.

Your success in this role will be measured by the uptime and reliability of container platforms, reduction in manual deployment tasks, successful Kubernetes migration, cluster performance, security compliance, and team enablement and automation adoption. Your alignment with the competency framework includes mastery of Kubernetes, infrastructure automation, containerization leadership, strategic execution, and collaboration with various teams and external partners.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About The Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications, with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and vector databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities
Front-End Development
- Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
- Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
- Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
- Ensure cross-browser compatibility and optimize for performance and accessibility.
- Collaborate with designers to translate wireframes and prototypes into functional components.
Back-End Development
- Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express.
- Design and implement microservices & event-driven architectures.
- Optimize server performance and ensure secure API integrations.
Database & Data Management
- Work with structured (PostgreSQL, MySQL) and unstructured (MongoDB, Cassandra, DynamoDB) databases.
- Integrate and manage vector databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
- Implement sharding, clustering, caching, and replication strategies for scalability.
- Manage both transactional and analytical workloads efficiently.
Real-Time Processing & Visualization
- Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams (see the producer sketch after this listing).
- Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE).
- Visualize large-scale data in real time for dashboards and BI applications.
DevOps & Deployment
- Deploy applications on cloud platforms (AWS, Azure, GCP).
- Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
- Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
- Monitor, log, and ensure high availability with Prometheus, Grafana, and the ELK/EFK stack.
Good To Have: AI & Advanced Capabilities
- Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
- Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
- Work on multimodal data processing (text, image, and video).

Preferred Skills & Qualifications
Core Stack
- Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
- Back-End: Python (Django/DRF), Node.js/Express
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, vector databases (Pinecone, Milvus, Weaviate, Chroma)
- APIs: REST, GraphQL, gRPC
State-of-the-Art & Advanced Tools
- Streaming: Apache Kafka, Apache Pulsar, Redis Streams
- Visualization: D3.js, Highcharts, Plotly, Deck.gl
- Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
- Cloud: AWS Lambda, Azure Functions, Google Cloud Run
- Monitoring: Prometheus, Grafana, OpenTelemetry
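A minimal sketch of the real-time streaming bullet above, publishing a dashboard event with the kafka-python client; the broker address, topic name, and event shape are placeholders.

```python
import json
from kafka import KafkaProducer  # kafka-python

# Broker address and topic name are illustrative.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_event(user_id: str, action: str) -> None:
    """Emit one analytics event for a live dashboard consumer to pick up."""
    producer.send("user-activity", {"user_id": user_id, "action": action})

publish_event("u-42", "viewed_dashboard")
producer.flush()  # block until the buffered event is actually delivered
```

On the consuming side, a WebSocket/SSE service would read the same topic and push updates to the browser, which is the usual shape of the "live dashboards" requirement.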

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer
Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in machine learning operations, data engineering, and cloud-native deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities
MLOps & Machine Learning Deployment
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production (see the tracking sketch after this listing).
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor ML systems in production for drift detection, bias, performance degradation, and anomalies.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.
Data Engineering & Integration
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).
Cloud & Infrastructure
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, and Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.
Real-Time ML & Advanced Capabilities
- Build low-latency, real-time inference pipelines using gRPC, Triton Inference Server, or Ray Serve.
- Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.
Security, Monitoring & Observability
- Implement robust access control and encryption, and ensure compliance with SOC 2/GDPR/ISO 27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications
Core Technical Skills
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala is a plus.
- MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.
Tech Stack
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as Code: Terraform, Pulumi, Ansible.
- Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
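A minimal sketch of experiment tracking with MLflow, one of the frameworks the posting names; the tracking URI, experiment name, and logged values are placeholders.

```python
import mlflow

# Tracking server address and experiment name are illustrative.
mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="candidate-v3"):
    mlflow.log_param("max_depth", 8)
    mlflow.log_param("learning_rate", 0.1)
    # ... train the model here; metrics below are placeholder values ...
    mlflow.log_metric("auc", 0.91)
    mlflow.log_artifact("model_card.md")  # any local file to version alongside the run
```

Logging parameters, metrics, and artifacts per run is the backbone of the "training, evaluation, versioning" automation the role describes, since the registry and CI/CD stages key off these records.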

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments, both on the AWS cloud and on-premise (open source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities
Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation (see the plan-gating sketch after this listing).
- Maintain scalable environments for dev, staging, and production.
CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.
Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.
Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.
Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications
Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open source tools.
DevOps & Automation
- Hands-on experience with Terraform/CloudFormation.
- Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD.
Containerization & Orchestration
- Expertise with Docker and Kubernetes (EKS or self-hosted).
- Familiarity with Helm charts and service meshes (Istio/Linkerd).
Monitoring / Observability Tools
- Experience with Prometheus, Grafana, the ELK/EFK stack, and CloudWatch.
- Knowledge of distributed tracing tools like Jaeger or OpenTelemetry.
Security & Compliance
- Understanding of cloud security best practices.
- Familiarity with tools like Vault and AWS Secrets Manager.
Model DevOps / MLOps Tools (Preferred)
- Experience with MLflow, Kubeflow, BentoML, or Weights & Biases (W&B).
- Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation.

Preferred Qualifications
- Knowledge of cost optimization for cloud and hybrid infrastructures.
- Exposure to infrastructure-as-code (IaC) best practices and GitOps workflows.
- Familiarity with serverless and event-driven architectures.

Education
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

What We Offer
- Opportunity to work on modern cloud-native systems and AI-powered platforms.
- Exposure to hybrid environments (AWS and open source on-prem).
- Competitive salary, benefits, and a growth-oriented culture.
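One way the Terraform automation above shows up in practice is a plan-gated apply. A sketch using Terraform's documented -detailed-exitcode convention (exit 0 = no changes, 2 = changes pending, 1 = error); the working directory is a placeholder for an environment's Terraform root.

```python
import subprocess
import sys

def plan_has_changes(workdir: str) -> bool:
    """Run terraform plan and report whether any changes are pending."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-out=tfplan"],
        cwd=workdir,
    )
    if result.returncode == 1:
        sys.exit("terraform plan failed")
    return result.returncode == 2  # 2 means the plan contains changes

if __name__ == "__main__":
    env_dir = "./envs/staging"  # placeholder path
    if plan_has_changes(env_dir):
        # Apply exactly the reviewed plan file, never a fresh plan.
        subprocess.run(["terraform", "apply", "-input=false", "tfplan"],
                       cwd=env_dir, check=True)
```

Applying the saved tfplan file rather than re-planning guarantees that what was reviewed is what gets applied, a common GitOps-style safeguard.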

Posted 1 week ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Pune, Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate about working on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions.

About the role:
- Champion a quality-first mindset throughout the entire software development lifecycle.
- Develop and implement comprehensive test strategies and plans for a complex hybrid application, considering the unique challenges of both on-premise and cloud deployments.
- Collaborate with architects and development teams to understand system architecture, design, and new features to define optimal test approaches.
- Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
- Design, develop, and maintain robust, scalable, and high-performance automated test frameworks and tools from scratch, utilizing industry-standard programming languages (e.g., Python, Java, Go).
- Manage and maintain test environments, including setting up and configuring both on-premise and cloud instances for testing.
- Execute new feature and regression cases manually, as needed for a product release.
- Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is essential. Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixing, is an essential trait for this job.
- Identify cases that are automatable, and within this scope segregate cases with high ROI from low-impact areas to improve testing efficiency.
- Analyze test results, identify defects, and work closely with development teams to ensure timely resolution.
- Be willing to explore and deepen understanding of cloud/on-prem infrastructure.

About you:
- 1-4 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
- Ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality.
- Solid fundamentals in any programming language (preferably Python or Go) and OOP concepts. Hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must.
- RESTful API testing using tools such as Postman or similar is a must.
- Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is required.
- Exposure to Docker, Helm, and GitOps is an added advantage.
- Extensive experience designing, developing, and maintaining automated test frameworks (e.g., Playwright, Selenium, Cypress, TestNG, JUnit, Pytest).
- Experience with API testing tools and frameworks (e.g., Postman, Rest Assured, OpenAPI/Swagger).
- Good foundational knowledge of working on Linux-based systems, including setting up git repos, user management, network configurations, and use of package managers.
- Hands-on experience with functional and non-functional testing, such as performance and load, is desirable.
- Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have.
- Understanding of cyber security concepts would be helpful.

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Job Title: Senior SDET
Role Overview: Trellix is looking for SDETs who are self-driven and passionate about working on the Endpoint Detection and Response (EDR) line of products. Tasks range from manual and automated testing (including automation development) to non-functional (performance, stress, soak), solution, and security testing, and much more.

About the role:
Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
Execute new feature and regression cases manually, as needed for a product release.
Identify critical issues and communicate them effectively in a timely manner.
Familiarity with bug-tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixes, is an essential trait for this job.
Identify cases that are automatable and, within this scope, segregate high-ROI cases from low-impact areas to improve testing efficiency.
Hands-on experience with automation programming languages such as Python, Java, etc. is advantageous. Execute, monitor, and debug automation runs.
Author automation code to improve coverage across the board.
Lead fellow team members and own aspects of the product end-to-end, considering enhancements, automation, performance, and more.
Write automation code to reduce repetitive tasks and improve regression coverage.

About you:
6-9 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
Ability to quickly learn a product or concept, viz. its feature set, capabilities, functionality, and nitty-gritty.
Solid fundamentals in any programming language (preferably Python) and OOP concepts. Hands-on experience with CI/CD using Jenkins or similar is a must.
RESTful API testing using tools such as Postman or similar is desired.
Knowledge of design patterns is desirable.
Strong foundational knowledge of working on Linux-based systems and their administration is needed, including setting up git repos, user management, network configuration, and use of package managers.
Proficiency with Kubernetes and AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is highly desired. Exposure to Docker, Helm, and Argo CD is an added advantage.
Exposure to non-functional testing, such as performance and load, is desired. Hands-on experience with tools such as Locust and/or JMeter would be a huge advantage (see the Locust sketch after this posting).
Any level of proficiency with Prometheus, Grafana, service metrics, and the like is desired.
Understanding of endpoint security concepts around Endpoint Detection and Response (EDR) and hands-on experience working on SaaS-based applications and platforms would be a plus.
Proven track record of taking ownership and driving aspects of product enhancements end-to-end.
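Since the posting calls out Locust for performance and load testing, a minimal sketch of a Locust user class; the endpoints and task weights are hypothetical (run with `locust -f <file>` against the target host):

```python
"""Minimal sketch: load test with Locust."""
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between tasks

    @task(3)  # weighted: runs three times as often as the search task
    def list_events(self):
        self.client.get("/api/v1/events")  # hypothetical endpoint

    @task(1)
    def search(self):
        self.client.get("/api/v1/search", params={"q": "malware"})
```

Locust then reports request rates, latencies, and failure counts per endpoint, which maps directly to the soak/stress scenarios the role describes.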

Posted 1 week ago

Apply

9.0 - 14.0 years

27 - 32 Lacs

Gurugram, Bengaluru

Hybrid

Perform analysis of the existing solution across data platforms and its customer-facing interfaces. Evaluate alternative technologies where needed. Conduct impact analysis on the product toolchain to integrate the deployment of each layer.

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Bengaluru

Work from Office

About the Role:
Lead the design, development, and deployment of large-scale software systems in Python and Go.
Understanding of data pipelines and event/log processing, e.g., syslog, JSON, Protobuf/MsgPack, gRPC, Apache Kafka, Pulsar, Redpanda, RabbitMQ, etc. (see the normalization sketch after this posting).
Own end-to-end product features, from initial design through to production, with a focus on high-quality, maintainable code.
Architect scalable, reliable, and secure software solutions with a focus on performance and usability.
Contribute to system design decisions, optimizing for scalability, availability, and performance.
Mentor and guide junior engineers, providing technical leadership and fostering a culture of excellence.
Integrate with CI/CD pipelines, continuously improving and optimizing them for faster and more reliable software releases.
Conduct code reviews to ensure best practices in coding, testing, and design patterns.
Troubleshoot, debug, and resolve complex technical issues in production and development environments.

About You:
8+ years of professional software development experience.
Expertise in Golang and Python, and in design patterns.
Hands-on experience with system design, architecture, and scaling of complex systems.
Strong exposure to CI/CD practices and tools (e.g., ArgoCD, GitHub Actions).
Deep knowledge of Kubernetes, e.g., CRDs, Helm, Kustomize, and the design and implementation of Kubernetes Operators.
Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
Good understanding of networking and storage, e.g., load balancers and proxies.
Experience working in cloud environments (e.g., AWS, Azure, GCP) and with containerization technologies (e.g., Docker, Kubernetes).
Proficiency in database design and optimization, with experience in both SQL and NoSQL databases (e.g., OpenSearch, ClickHouse, Apache Iceberg).
Proven experience with Agile methodologies and working in cross-functional teams.
Excellent problem-solving skills with the ability to break down complex problems into manageable solutions.
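A small illustrative sketch of the event/log processing mentioned above: normalizing mixed syslog and JSON lines into one flat schema. The output field names and the simplified RFC 3164-style regex are assumptions made for the example:

```python
"""Minimal sketch: normalizing mixed syslog/JSON log events into one schema."""
import json
import re

# Simplified RFC 3164-style prefix, e.g. "<34>Oct 11 22:14:15 host message"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d+)>(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)$"
)

def normalize(raw: str) -> dict:
    """Return a flat event dict from either a JSON or a syslog line."""
    raw = raw.strip()
    if raw.startswith("{"):
        event = json.loads(raw)
        return {"source": "json", "host": event.get("host"), "message": event.get("message")}
    m = SYSLOG_RE.match(raw)
    if m:
        return {"source": "syslog", "host": m.group("host"), "message": m.group("msg")}
    return {"source": "unknown", "host": None, "message": raw}  # pass through unparsed lines

if __name__ == "__main__":
    print(normalize('{"host": "web-1", "message": "login ok"}'))
    print(normalize("<34>Oct 11 22:14:15 web-2 sshd: accepted publickey"))
```

In a production pipeline the same normalize step would sit behind a Kafka consumer or syslog listener rather than a script, but the shape of the transformation is the same.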

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Role Overview: Trellix is looking for quality engineers who are self-driven and passionate about working on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. The job involves manual and automated testing (including automation development), non-functional testing (performance, stress, soak), security testing, and much more. Work smartly by using cutting-edge technologies and AI-driven solutions.

About the role:
Peruse requirements documents thoroughly and design relevant test cases that cover new product functionality and the impacted areas.
Execute new feature and regression cases manually, as needed for a product release.
Identify critical issues and communicate them effectively in a timely manner.
Familiarity with bug-tracking platforms such as JIRA, Bugzilla, etc. is helpful. Filing defects effectively, i.e., noting all the relevant details that reduce back-and-forth and aid quick turnaround on bug fixes, is an essential trait for this job.
Automate using popular frameworks suitable for backend code, APIs, and frontend. Hands-on experience with automation programming languages (Python, Go, Java, etc.). Execute, monitor, and debug automation runs.
Be willing to explore and deepen your understanding of cloud/on-prem infrastructure.

About you:
4-10 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required.
Ability to quickly learn a product or concept, viz. its feature set, capabilities, and functionality.
Solid fundamentals in any programming language (preferably Python or Go). Sound knowledge of any of the popular automation frameworks such as Selenium, Playwright, Postman, Pytest, etc. (see the Playwright sketch after this posting).
Hands-on experience with any of the popular CI/CD tools such as TeamCity, Jenkins, or similar is a must.
RESTful API testing using tools such as Postman or similar is a must.
Familiarity and exposure to AWS and its offerings, such as S3, EC2, EBS, EKS, IAM, etc., is an added advantage. Exposure to Kubernetes, Docker, Helm, and GitOps is desired.
Strong foundational knowledge of working on Linux-based systems.
Hands-on experience with non-functional testing, such as performance and load, is desirable.
Some proficiency with Prometheus, Grafana, service metrics, and analysis is highly desirable.
Understanding of cybersecurity concepts would be helpful.
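As an illustration of the UI automation frameworks listed above, a minimal Playwright (sync API) smoke test; it requires `pip install playwright` and `playwright install chromium`, and the URL and expected title are hypothetical:

```python
"""Minimal sketch: UI smoke test with Playwright's sync API."""
from playwright.sync_api import sync_playwright

def test_login_page_loads():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)  # headless for CI runs
        page = browser.new_page()
        page.goto("https://console.example.internal/login")  # hypothetical URL
        assert "Sign in" in page.title()  # assumed page title
        browser.close()
```

The same test can be collected and run by pytest alongside API tests, keeping one runner for the whole suite.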

Posted 1 week ago

Apply

4.0 years

20 - 30 Lacs

Noida, Uttar Pradesh, India

On-site

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.

Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners, and the community.

Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.

Job Title: Senior DevOps Engineer
Location: Noida (Hybrid)

The Opportunity
We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.

Mandatory skills: GCP, DevOps, Terraform, Kubernetes, Docker, CI/CD, GitHub Actions, Helm charts
Secondary skills: experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD); knowledge of additional cloud platforms (e.g., AWS, Azure); certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer)

Required Skills and Experience
4+ years of experience in DevOps, infrastructure automation, or related fields.
Advanced expertise in Terraform for infrastructure as code.
Solid experience with Helm for managing Kubernetes applications (see the Helm scripting sketch after this posting).
Proficient with GitHub for version control, repository management, and workflows.
Extensive experience with Kubernetes for container orchestration and management.
In-depth understanding of Google Cloud Platform (GCP) services and architecture.
Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration abilities in agile development environments.

Preferred Qualifications
Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).
Knowledge of additional cloud platforms (e.g., AWS, Azure).
Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).

Skills: DevOps, Google Cloud Platform (GCP), Terraform, Kubernetes, Docker, Helm, Jenkins, and GitHub
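A hedged sketch of scripting a Helm deployment step in Python, the kind of automation this role would wire into a GitHub Actions job; the release name, chart path, namespace, and values file are hypothetical:

```python
"""Minimal sketch: driving `helm upgrade --install` from a deploy script."""
import subprocess

def helm_deploy(release: str, chart: str, namespace: str, values: str) -> None:
    """Upgrade-or-install so the same command works for first and repeat deploys."""
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--values", values,
        "--wait",  # block until the rollout completes
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    helm_deploy("myapp", "./charts/myapp", "staging", "values-staging.yaml")
```

With `check=True`, a failed rollout fails the CI job, which is usually the behavior a pipeline wants.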

Posted 1 week ago

Apply

6.0 - 9.0 years

10 - 11 Lacs

Bengaluru

Work from Office

Role & Responsibilities
Job Title: Senior DevOps
Location: Marathalli, Bangalore

Job Description:
We are seeking a motivated DevOps Engineer with hands-on experience in cloud administration, container orchestration, and CI/CD pipeline automation. The ideal candidate will demonstrate a strong commitment to task ownership, a proactive approach to problem-solving, and the ability to learn and adapt quickly to evolving technologies. This role will play a key part in ensuring reliable, scalable infrastructure and efficient software delivery processes.

Key Responsibilities:
Manage and administer cloud environments using Azure or AWS, ensuring security, scalability, and performance.
Build, deploy, and maintain containerized applications using Kubernetes and Docker.
Design, implement, and optimize CI/CD pipelines to streamline software delivery and deployment.
Collaborate with development and operations teams to clarify requirements and drive continuous improvement.
Take accountability for assigned tasks, proactively identifying and resolving impediments to meet sprint goals.
Apply conceptual and visual thinking to document workflows, system architectures, and process improvements.
Adapt quickly to new tools, technologies, and methodologies to enhance operational efficiency.
Contribute to sprint velocity by completing assigned story points and maintaining consistent productivity.

Required Skills:
Proven experience in Azure or AWS cloud administration.
Strong hands-on skills with Kubernetes and Docker for container orchestration and management.
Experience in building and maintaining CI/CD pipelines.
Ability to understand the broader impact of DevOps tasks on users, teams, and systems.
Proactive mindset with strong clarity and initiative in task execution.
Ability to quickly learn new technologies and adapt to changing requirements.

Preferred Attributes:
Experience with PowerShell or shell scripting (optional).
Knowledge of incident response metrics such as Mean Time to Resolve (MTTR); a small MTTR calculation sketch follows this posting.
Familiarity with best practices in code quality, documentation, and automated testing.

Nice to Have:
Clear communication and effective collaboration within cross-functional teams.
Accountability and integrity in owning work and outcomes.
Proactive problem-solving and taking initiative without needing close supervision.

Educational Qualification: Bachelor's degree (BE/B.Tech) in Computer Science or equivalent.
Working Mode: Work from office, Monday to Thursday; Friday is optional.
Interested candidates, please send an updated CV to nandini@sapienceminds.com
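Since the posting mentions MTTR, a minimal sketch of how the metric is computed from incident records; the sample timestamps are illustrative only:

```python
"""Minimal sketch: computing Mean Time to Resolve (MTTR) from incidents."""
from datetime import datetime, timedelta

incidents = [  # (detected_at, resolved_at) -- hypothetical sample incidents
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 10, 30)),   # 1 h 30 m
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 14, 45)),  # 45 m
    (datetime(2024, 5, 7, 22, 15), datetime(2024, 5, 8, 0, 15)),  # 2 h
]

def mttr(records) -> timedelta:
    """MTTR = total resolution time / number of incidents."""
    total = sum(((end - start) for start, end in records), timedelta())
    return total / len(records)

print(f"MTTR: {mttr(incidents)}")  # -> 1:25:00 for the sample above
```

The same calculation applied over a rolling window (e.g., per sprint or per month) is what dashboards typically chart.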

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

6 to 9 years of experience, with a minimum of 2+ years of experience with Kubernetes in designing, building, and managing large-scale production application infrastructure. Kubernetes, GCP, Linux. Email: Mahalakshmi.a@livecjobs.com *JOB IN PAN INDIA*
Required Candidate profile: Kubernetes, GKE, Helm, OpenShift, Terraform

Posted 1 week ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

Pune

Work from Office

About The Role:
Job Title: DevOps Engineer, AVP
Location: Pune, India

Role Description
As a DevOps Engineer you will work as part of a multi-skilled agile team, dedicated to improved automation and tooling to support continuous delivery. Your team will work hard to foster increased collaboration and create a DevOps culture. You will make a crucial contribution to our efforts to release our software more frequently, more efficiently, and with less risk.

What we'll offer you
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Accident and term life insurance

Your Key Responsibilities
Work with other engineers to support our adoption of continuous delivery, automating the building, packaging, testing, and deployment of applications.
Create the tools required to deploy and manage applications effectively in production, with minimal manual effort.
Help teams adopt modern delivery practices, such as extensive use of automated testing, continuous integration, more frequent releases, blue/green deployments, and canary releases (see the promotion-gate sketch after this posting).
Configure and manage code repositories, continuous builds, artifact repositories, cloud platforms, and other tools.
Contribute to a culture of learning and continuous improvement within your team and beyond. Share skills and knowledge across a wide range of topics relating to DevOps and software delivery.

Your Skills and Experience
Good knowledge of Spring Boot applications (build and deployment).
Experience setting up applications on any cloud environment (GCP is a plus).
Extensive experience with configuration management tools: Ansible, Terraform, Docker, Helm, or similar.
Hands-on experience with tools like uDeploy for automation.
Extensive experience with networking concepts, e.g., firewalls, load balancing, and data transfer.
Extensive experience building CI/CD pipelines using TeamCity or similar.
Experience with a range of tools and techniques that make software delivery faster and more reliable, such as creating and maintaining automated builds using tools like TeamCity, Jenkins, and Bamboo, and using repositories such as Nexus and Artifactory to manage and distribute binary artifacts.
Good knowledge of build and scripting tools such as Maven, shell, or Python.
Good understanding of git version control systems, branching and merging, etc.
Good understanding of release management and change management concepts.
Experience working in an agile team, practicing Scrum, Kanban, XP, or SAFe.

How we'll support you
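As a sketch of the canary-release practice mentioned above, a simple promotion gate that polls the new deployment's health endpoint before traffic is shifted; the URL, probe count, and pass threshold are assumptions:

```python
"""Minimal sketch: a canary promotion gate based on health-probe success rate."""
import time
import requests

def canary_is_healthy(url: str, checks: int = 10, threshold: float = 0.9) -> bool:
    """Return True if at least `threshold` of `checks` health probes succeed."""
    ok = 0
    for _ in range(checks):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                ok += 1
        except requests.RequestException:
            pass  # network errors count as failed probes
        time.sleep(1)  # space the probes out
    return ok / checks >= threshold

if __name__ == "__main__":
    healthy = canary_is_healthy("https://canary.example.internal/health")  # hypothetical
    print("promote" if healthy else "roll back")
```

In practice the gate's verdict would drive the pipeline step that either shifts traffic to the canary or rolls it back.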

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Profile: Confluent Developer/Admin
Experience: 3 to 5 years
Budget: As per company norms
Location: Gurugram / Noida / New Delhi

Job Summary:
We are looking for a skilled Confluent Developer with hands-on experience in Apache Kafka and the Confluent Platform to develop and maintain scalable, real-time data streaming solutions. The ideal candidate will be responsible for designing, implementing, and managing event-driven architectures and microservices that rely on high-throughput messaging systems.

Key Responsibilities:
Design and develop Kafka producers, consumers, and stream processing applications using Kafka Streams, KSQL, or Kafka Connect (see the producer/consumer sketch after this posting).
Build and manage data pipelines using Confluent tools (Schema Registry, Kafka Connect, KSQL, etc.).
Integrate the Confluent Platform with external systems (databases, APIs, cloud storage).
Ensure high availability, scalability, and performance of Kafka clusters.
Monitor, troubleshoot, and optimize Kafka-based applications.
Collaborate with software engineering, data engineering, and DevOps teams.
Develop and maintain CI/CD pipelines for Kafka applications.
Apply best practices in schema design, message serialization (Avro, Protobuf), and versioning.

Required Skills & Qualifications:
Strong experience with Apache Kafka and the Confluent Platform.
Proficiency in Java, Scala, or Python.
Experience with Kafka Streams, KSQL, Kafka Connect, and Schema Registry.
Solid understanding of distributed systems and event-driven architecture.
Experience with message serialization formats like Avro, Protobuf, or JSON.
Familiarity with DevOps tools (Docker, Kubernetes, Helm) and cloud platforms (AWS, Azure, GCP).
Knowledge of monitoring tools (Prometheus, Grafana, Confluent Control Center).
Bachelor's degree in Computer Science or a related field.

Preferred Qualifications:
Confluent Certified Developer for Apache Kafka (CCDAK) or similar certifications.
Experience with data engineering tools (Spark, Flink, Hadoop).
Familiarity with real-time analytics and data lakes.
Contributions to open-source Kafka projects.
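A minimal sketch of the Kafka producer/consumer work this role centres on, using the confluent-kafka Python client; the broker address, topic, and consumer group are hypothetical:

```python
"""Minimal sketch: produce and consume one JSON event with confluent-kafka."""
import json
from confluent_kafka import Producer, Consumer

BOOTSTRAP = "localhost:9092"  # hypothetical broker
TOPIC = "orders"              # hypothetical topic

# --- produce one JSON-serialized event ---
producer = Producer({"bootstrap.servers": BOOTSTRAP})
producer.produce(TOPIC, key="order-1", value=json.dumps({"id": 1, "amount": 250}))
producer.flush()  # block until delivery is confirmed

# --- consume events from the same topic ---
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": "demo-group",         # hypothetical consumer group
    "auto.offset.reset": "earliest",  # start from the beginning if no committed offset
})
consumer.subscribe([TOPIC])
try:
    msg = consumer.poll(5.0)  # wait up to 5 s for a message
    if msg is not None and msg.error() is None:
        print(json.loads(msg.value()))
finally:
    consumer.close()
```

A production version would add a delivery callback on the producer and an Avro/Protobuf serializer backed by Schema Registry, per the serialization practices the posting lists.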

Posted 1 week ago

Apply