5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Lead / Staff Software Engineer on the Black Duck SRE team, you will play a key role in transforming our R&D products through the adoption of advanced cloud, containerization, microservices, modern software delivery, and other cutting-edge technologies. You will be a key member of the team, working independently to develop tools and scripts for automated provisioning, deployment, and monitoring. The position is based in Bangalore (near Dairy Circle Flyover) with a hybrid work mode.

Key Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-7 years of experience in Site Reliability Engineering / DevOps Engineering.
- Strong hands-on experience with containerization and orchestration using Docker, Kubernetes (K8s), and Helm - secure, optimize, and scale K8s.
- Deep understanding of cloud platforms and services on AWS / GCP / Azure (preferably GCP) - optimize cost, security, and performance.
- Solid experience with Infrastructure as Code (IaC) using Terraform / CloudFormation / Pulumi (preferably Terraform) - write modules, manage state.
- Proficient in scripting and automation using Bash and Python / Golang - automate tasks, handle errors.
- Experienced in CI/CD pipelines and GitOps using Git / GitHub / GitLab / Bitbucket / ArgoCD / Harness.io - implement GitOps for deployments.
- Strong background in monitoring and observability using Prometheus / Grafana / ELK Stack / Datadog / New Relic - configure alerts, analyze trends.
- Good understanding of networking and security: firewalls, VPN, IAM, RBAC, TLS, SSO, Zero Trust - implement IAM, TLS, logging.
- Experience with backup and disaster recovery using Velero, snapshots, and DR planning - implement backup solutions.
- Basic understanding of messaging concepts using RabbitMQ / Kafka / Pub/Sub / SQS.
- Familiarity with configuration management using Ansible / Chef / Puppet / SaltStack - run existing playbooks.

Key Responsibilities:
- Design and develop scalable, modular solutions that promote reuse and integrate easily into our diverse product suite.
- Collaborate with cross-functional teams to understand their needs and incorporate user feedback into development.
- Establish best practices for modern software architecture, including microservices, serverless computing, and API-first strategies.
- Drive the strategy for containerization and orchestration using Docker, Kubernetes, or equivalent technologies.
- Ensure the platform's infrastructure is robust, secure, and compliant with industry standards.

What We Offer:
- An opportunity to be part of a dynamic and innovative team committed to making a difference in the technology landscape.
- Competitive compensation package, including benefits and flexible work arrangements.
- A collaborative, inclusive, and diverse work environment where creativity and innovation are valued.
- Continuous learning and professional development opportunities to grow your expertise within the industry.
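The IaC requirement above ("write modules, manage state") maps naturally onto reusable components plus per-stack configuration. Below is a minimal, hedged sketch in Pulumi with TypeScript (the posting prefers Terraform, but the module/state ideas carry over); the component name, region, and config key are illustrative assumptions, not details from the posting.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

// A reusable "module" in Pulumi terms: a ComponentResource wrapping a GCS bucket
// with uniform settings. The component name and region are illustrative only.
class ArtifactBucket extends pulumi.ComponentResource {
    public readonly url: pulumi.Output<string>;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("example:storage:ArtifactBucket", name, {}, opts);
        const bucket = new gcp.storage.Bucket(name, {
            location: "ASIA-SOUTH1",            // Mumbai region, chosen for the example
            uniformBucketLevelAccess: true,     // avoid per-object ACLs
        }, { parent: this });
        this.url = bucket.url;
        this.registerOutputs({ url: this.url });
    }
}

// Per-stack configuration takes the place of Terraform variable files; state is
// tracked per stack by the Pulumi backend (e.g. `pulumi stack init dev`).
const cfg = new pulumi.Config();
const env = cfg.get("environment") ?? "dev";

const artifacts = new ArtifactBucket(`artifacts-${env}`);
export const artifactsUrl = artifacts.url;
```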
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
indore, madhya pradesh
On-site
Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines.

You will develop and maintain ETL pipelines using tools such as Apache Airflow, Spark, and Hadoop, and design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for data streaming and real-time processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats, collaborating with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions, optimizing and tuning data systems for performance and scalability, and implementing best practices for data security and compliance are all key aspects of the role.

Preferred skills include experience with infrastructure-as-code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker.

Qualifications include a Bachelor's degree in Computer Science, Information Technology, or a related field; 2-4 years of experience in data engineering or a similar role; proficiency in AWS cloud services and big data technologies; strong programming skills in Python and Scala; knowledge of data warehousing concepts and tools; and excellent problem-solving and communication skills.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
navi mumbai, maharashtra
On-site
At Kodo, we believe that managing a fast-growing company's finances and operations shouldn't feel like a juggling act. That's why we offer a single platform to streamline all purchase processes for businesses, providing them with everything they need to save time, cut costs, and scale easily. Trusted by companies such as Cars24, Mensa Brands, Zetwerk, and many more, Kodo transforms financial chaos into clarity. Our teams are empowered with flexible corporate processes while integrating effortlessly with their ERPs for real-time insights. We have raised $14M from investors such as Y Combinator, Brex, and other global investors. Our mission is to simplify the CFO stack for fast-growing businesses. We believe in creating exceptional products for our customers, an enriching environment for our team, and a solid business that grows profitably.

What you'll be doing:
- Ensure that our applications and environments are stable, scalable, secure, and performing as expected.
- Proactively engage and work in alignment with cross-functional colleagues to understand their requirements, contributing to and providing suitable supporting solutions.
- Develop and introduce systems to aid and facilitate rapid growth, including implementation of deployment policies, designing and implementing new procedures, configuration management, and planning of patches and capacity upgrades.
- Ensure suitable levels of monitoring and alerting are in place to keep engineers aware of issues.
- Establish runbooks and procedures to minimize outages, jumping in before users notice issues and automating processes for the future.
- Automate tasks to ensure nothing is done manually in production.
- Identify and mitigate reliability and security risks, preparing for peak times, DDoS attacks, and potential errors.
- Troubleshoot issues across the entire stack: hardware, software, applications, and network.
- Manage individual project priorities, deadlines, and deliverables as part of a self-organizing team.
- Continuously learn and exchange knowledge by conducting code reviews, participating in retrospectives, and staying updated with new insights.

You must have:
- 2+ years of extensive experience in Linux server administration, including patching, packaging, performance tuning, networking, user management, and security.
- 2+ years of implementing systems that are highly available, secure, scalable, and self-healing on the Azure cloud platform.
- Strong understanding of networking, especially in cloud environments.
- Prior experience implementing industry-standard security best practices, including those recommended by Azure.
- Proficiency in Bash and any high-level scripting language.
- Proficiency in Infrastructure as Code and infrastructure testing, preferably using Pulumi.
- Hands-on experience in building and administering VMs and containers using tools such as Docker and Kubernetes.
- Excellent communication skills, both spoken and written, with the ability to articulate technical problems and projects to stakeholders.

Extra credit for:
- Experience with serverless infrastructure and Azure infra.
- Experience in governance processes and compliance validation, especially for financial services.
- Experience working in product startups.
- Experience in administering and scaling PostgreSQL.
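Since this posting calls out Infrastructure as Code on Azure, preferably with Pulumi, here is a minimal hedged sketch of what that can look like in Pulumi with TypeScript; the resource names, region, and settings are illustrative assumptions only, not Kodo's actual setup.

```typescript
import * as azure_native from "@pulumi/azure-native";

// Resource group for an environment (name and region are placeholders).
const rg = new azure_native.resources.ResourceGroup("platform-rg", {
    location: "centralindia",
});

// A storage account with TLS 1.2 enforced and public blob access disabled,
// reflecting the posting's emphasis on security best practices; enum-typed
// fields in azure-native also accept plain strings.
const logs = new azure_native.storage.StorageAccount("platformlogs", {
    resourceGroupName: rg.name,
    kind: "StorageV2",
    sku: { name: "Standard_LRS" },
    minimumTlsVersion: "TLS1_2",
    allowBlobPublicAccess: false,
});

export const storageAccountName = logs.name;
```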
Posted 4 days ago
5.0 - 8.0 years
10 - 15 Lacs
Ahmedabad
Work from Office
We are seeking an experienced and highly skilled Senior DevOps Engineer to join our dynamic team. The ideal candidate will have over 5 years of hands-on experience in designing, implementing, and managing CI/CD pipelines, cloud infrastructure, and Infrastructure as Code (IaC).
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Veeam, the global market leader in data resilience, envisions that businesses should have full control over their data whenever and wherever they require it. Veeam specializes in providing data resilience solutions encompassing data backup, recovery, portability, security, and intelligence. Headquartered in Seattle, Veeam serves over 550,000 customers worldwide, who rely on Veeam to ensure the continuity of their operations. As we progress together, learning, growing, and creating a significant impact for some of the world's most renowned brands, we present you with an opportunity to be part of this journey.

We are in search of a Platform Engineer to join the Veeam Data Cloud team. The primary goal of the Platform Engineering team is to furnish a secure, dependable, and user-friendly platform that facilitates the development, testing, deployment, and monitoring of the VDC product. This role offers an exceptional opportunity for an individual with expertise in cloud infrastructure and software development to contribute to the development of the most successful and advanced data protection platform globally.

Your responsibilities will include:
- Developing and maintaining code to automate our public cloud infrastructure, software delivery pipeline, enablement tools, and internally consumed platform services
- Documenting system design, configurations, processes, and decisions to support our asynchronous, distributed team culture
- Collaborating with a team of remote engineers to construct the VDC platform
- Utilizing a modern technology stack comprising containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain
- Participating in an on-call rotation for product operations

Technologies you will work with include Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, and more.

Qualifications we seek from you:
- Minimum of 3 years of experience in production operations for a SaaS or cloud service provider
- Proficiency in automating infrastructure through code using tools like Pulumi or Terraform
- Familiarity with GitHub Actions and a variety of public cloud services
- Background in building and supporting enterprise SaaS products
- Understanding of operational excellence principles in a SaaS environment
- Proficiency in scripting languages such as Bash or Python
- Knowledge and experience in implementing secure design principles in the cloud
- Demonstrated ability to quickly learn new technologies and implement them effectively
- Strong inclination towards taking action and maintaining direct, frequent communication
- A technical degree from a university

Desirable qualifications:
- Experience with Azure
- Proficiency in high-level programming languages like Go, Java, C/C++, etc.

In return, we provide:
- Family medical insurance
- Annual flexible spending allowance for health and well-being
- Life insurance and personal accident insurance
- Employee Assistance Program
- Comprehensive leave package, including parental leave
- Meal Benefit Pass, Transportation Allowance, and Monthly Daycare Allowance
- Veeam Care Days: an additional 24 hours for volunteering activities
- Professional training and education opportunities, including courses, workshops, internal meetups, and access to online learning platforms
- Mentorship through our MentorLab program

Please note: Veeam reserves the right to decline applications from candidates permanently located outside India.

Veeam Software is dedicated to promoting diversity and equal opportunities and prohibits discrimination based on various factors. All personal data collected during the recruitment process will be handled in accordance with our Recruiting Privacy Notice. By applying for this position, you consent to the processing of your personal data as described in our Recruiting Privacy Notice. Your application and supporting documents should accurately represent your qualifications and experience. Any misrepresentation may lead to disqualification from employment consideration or termination if discovered after employment commences.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for a DevOps Technical Lead to lead the development of an Infrastructure Agent powered by Generative AI (GenAI). In this role, you will be responsible for designing and implementing an intelligent Infra Agent that can handle provisioning, configuration, observability, and self-healing autonomously.

Your key responsibilities will include leading the architecture and design of the Infra Agent, integrating automation frameworks to enhance DevOps workflows, automating infrastructure provisioning and incident remediation, developing reusable components and frameworks using Infrastructure as Code (IaC) tools, and collaborating with AI/ML engineers and SREs to create intelligent infrastructure decision-making logic. You will also be expected to implement secure and scalable infrastructure on cloud platforms such as AWS, Azure, and GCP; continuously improve agent performance through feedback loops, telemetry, and model fine-tuning; drive DevSecOps best practices, compliance, and observability; and mentor DevOps engineers while working closely with cross-functional teams.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 8 years of experience in DevOps, SRE, or Infrastructure Engineering. You must have proven experience leading infrastructure automation projects, expertise with cloud platforms such as AWS, Azure, and GCP, and deep knowledge of tools such as Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps. Hands-on experience with LLM/GenAI APIs, familiarity with automation frameworks, and proficiency in programming/scripting languages like Python, Go, or Bash are also required.

Preferred qualifications include experience building or fine-tuning LLM-based agents, contributions to open-source GenAI or DevOps projects, an understanding of MLOps pipelines and AI infrastructure, and certifications in DevOps, cloud, or AI technologies.
Posted 6 days ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Join our digital revolution at NatWest Digital X. In everything we do, we work towards making digital experiences effortless and secure. Our organization revolves around three core principles: engineer, protect, and operate. We engineer simple solutions, protect our customers, and operate smarter. Our people have the flexibility to work differently based on their roles and requirements, with options like hybrid working and flexible hours that promote their well-being. This role is based in India, requiring all normal working days to be carried out in the country.

As a Principal Engineer at NatWest Digital X, you will be responsible for driving the development of software and tools to achieve project and departmental objectives. In addition to managing the technical delivery of software engineering teams, you will lead participation in internal and industry-wide events, conferences, and other activities. Your role will involve planning, specifying, developing, and deploying high-performance, robust, and resilient systems that adhere to excellent architectural and engineering principles.

Key Responsibilities:
- Overseeing the productivity of software engineering teams
- Ensuring consistent use of shared platform components and technologies
- Leading engagements with senior stakeholders to propose technical solutions
- Monitoring technical progress and providing updates to stakeholders
- Delivering software components for platforms, applications, and services
- Designing high-volume, high-performance applications and reusable libraries and APIs
- Writing unit and integration tests within automated test environments

Key Skills:
- Background in software engineering, software design, or database design and architecture
- Experience developing software in an SOA or microservices paradigm
- Experience leading software development teams and executing technical strategies
- Proficiency in Node.js, Java, Python, React, TypeScript, Next.js, or similar technologies
- Hands-on experience automating engineering workflows and working with cloud platforms like AWS, Azure, or GCP
- Knowledge of Kubernetes, CI/CD pipelines, and Infrastructure as Code tools
- Experience architecting scalable, secure APIs and implementing high-performance distributed systems
- Focus on code quality, security, and automation
- Ability to lead technical discussions, mentor engineers, and collaborate across teams

Join us at NatWest Digital X and be part of shaping the future of digital experiences.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As an Engineering Manager for the OSS Platform & Infrastructure team, you will lead and manage a team of engineers to ensure the successful development and maintenance of the organization's platform. The role requires a deep understanding and practical experience across several technical domains: hands-on expertise in Infrastructure as Code (IaC), cloud platforms, Continuous Integration/Continuous Deployment (CI/CD) pipelines, containerization and orchestration, and Site Reliability Engineering (SRE) principles. Your experience should include working in a product-oriented environment with leadership responsibilities in engineering.

You must demonstrate strong proficiency and practical experience with tools such as Ansible, Terraform, CloudFormation, and Pulumi. Knowledge of resource management frameworks like Apache Mesos, Kubernetes, and YARN is essential, as is expertise in Linux operating systems and experience in monitoring, logging, and observability using tools like Prometheus, Grafana, and the ELK stack. Your programming skills should cover at least one high-level language such as Python, Java, or Golang, along with a solid understanding of architectural and systems design, including scalability and resilience patterns, various databases (RDBMS and NoSQL), and familiarity with multi-cloud and hybrid-cloud architectures.

Highly valued skills for this position include expertise in network and infrastructure operational product engineering; knowledge of network protocols such as TCP/IP, UDP, HTTP/HTTPS, DNS, BGP, OSPF, VXLAN, and IPsec (a CCNA or equivalent certification would be advantageous); experience in network security, network automation, zero-trust concepts, TLS/SSL, VPNs, and protocols like gNMI, gRPC, and RESTCONF; and proficiency in Agile methodologies such as Scrum and Kanban, backlog and workflow management, and SRE-specific reporting metrics (MTTR, deployment frequency, SLOs, etc.).
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Join our digital revolution at NatWest Digital X. In everything we do, we work towards one aim: to create digital experiences that are effortless and secure. Our organization is built around three core principles: engineer, protect, and operate. We engineer simple solutions, protect our customers, and operate smarter. Our approach varies based on job roles and needs, offering options like hybrid working and flexible hours to support our people's growth. This role is based in India, requiring all normal working days to be carried out in the country.

Join us as a Principal Engineer at Vice President level. Your primary responsibilities will include driving the development of software and tools to achieve project and departmental goals by translating functional and non-functional requirements into a suitable design. Additionally, you will manage the technical delivery of one or more software engineering teams and lead participation in internal and industry-wide events, conferences, and other activities. Leading the planning, specification, development, and deployment of high-performance, robust, and resilient systems will be crucial, ensuring they adhere to excellent architectural and engineering principles and are fit for purpose.

As a Principal Engineer, you will oversee the productivity of software engineering teams and ensure consistent use of shared platform components and technologies. You will engage with senior stakeholders to explore and propose technical solutions that meet product feature requirements. Monitoring technical progress against plans, safeguarding functionality, scalability, and performance, and providing progress updates to stakeholders will also be part of your responsibilities. Additionally, you will deliver software components to support platforms, applications, and services for the organization. Designing and developing high-volume, high-performance, high-availability applications using proven frameworks and technologies, as well as designing reusable libraries and APIs for organizational use, are key tasks, along with writing unit and integration tests within automated test environments to ensure code quality.

To excel in this role, you should have a background in software engineering, software design, or database design and architecture. Significant experience developing software in an SOA or microservices paradigm is required, along with experience leading software development teams and executing technical strategies, and proficiency in one or more programming languages. Expertise in backend development (e.g., Node.js, Java, Python) and frontend development (e.g., React, TypeScript, Next.js) is essential. Hands-on experience automating engineering workflows via portals, strong knowledge of AWS, Azure, or GCP, Kubernetes, CI/CD pipelines, and Infrastructure as Code tools like Terraform or Pulumi are desired. Experience architecting scalable, secure APIs, designing and implementing high-performance distributed systems, and a focus on code quality, security, and automation are also expected, as is the demonstrated ability to lead technical discussions, mentor engineers, and collaborate across teams.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Lead DevOps Engineer at GrowExx, you will collaborate with cross-functional teams to define, design, and implement DevOps infrastructure while adhering to Infrastructure as Code (IaC) best practices. Your primary goal will be to ensure a robust and stable CI/CD process that maximizes efficiency and achieves 100% automation. You will be responsible for analyzing system requirements comprehensively to develop effective test automation strategies for applications. Additionally, your role will involve designing infrastructure on cloud platforms such as AWS, GCP, Azure, or others. You will also manage code repositories like GitHub, GitLab, or Bitbucket, and automate software quality gateways using SonarQube.

In this position, you will design branching and merging strategies, create CI pipelines using tools like Jenkins, CircleCI, or Bitbucket, and establish automated build and deployment processes with rollback mechanisms. Identifying and mitigating infrastructure security and performance risks will be crucial, along with designing disaster recovery and backup policies and infrastructure/application monitoring processes. Your role will also involve formulating DevOps strategies for projects with a focus on quality, performance, and cost considerations. Conducting cost/benefit analysis for proposed infrastructure, automating software delivery processes for distributed development teams, and promoting software craftsmanship will be key responsibilities. You will be expected to identify new tools and processes and train teams on their adoption.

Key Skills:
- Hands-on experience with LLM models and evaluation metrics for LLMs.
- Proficiency in managing infrastructure on cloud platforms like AWS, GCP, or Azure.
- Expertise in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Pulumi.
- Managing code repositories using GitHub, GitLab, or Bitbucket, and implementing effective branching and merging strategies.
- Designing and maintaining robust CI/CD pipelines with tools like Jenkins, CircleCI, or Bitbucket Pipelines.
- Automating software quality checks using SonarQube.
- Understanding of automated build and deployment processes, including rollback mechanisms.
- Knowledge of infrastructure security best practices and risk mitigation.
- Designing disaster recovery and backup strategies.
- Experience with monitoring tools like Prometheus, Grafana, ELK, Datadog, or New Relic.
- Defining DevOps strategies aligned with project goals.
- Conducting cost-benefit analyses for optimal infrastructure solutions.
- Automating software delivery processes for distributed teams.
- Passion for software craftsmanship and evangelizing DevOps best practices.
- Strong leadership, communication, and training skills.

Education and Experience:
- B.Tech or B.E./BCA/MCA/M.E. degree.
- 8+ years of relevant experience, including team-leading experience.
- Experience in Agile methodologies (Scrum and Kanban), project management, planning, and risk identification and mitigation.

Analytical and Personal Skills:
- Strong logical reasoning and analytical skills.
- Effective communication in English (written and verbal).
- Ownership and accountability in work.
- Interest in new technologies and trends.
- Multi-tasking and team management abilities.
- Coaching and mentoring skills.
- Managing multiple stakeholders and resolving conflicts diplomatically.
- Forward-thinking mindset.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You should have a minimum of 5 years of experience in DevOps, SRE, or Infrastructure Engineering. Your expertise should include a strong command of Azure Cloud and Infrastructure as Code using tools such as Terraform and CloudFormation, along with proficiency in Docker and Kubernetes. You should be hands-on with CI/CD tools and scripting languages like Bash, Python, or Go, and have solid knowledge of Linux, networking, and security best practices. Experience with monitoring and logging tools such as ELK, Prometheus, and Grafana is expected, and familiarity with GitOps, Helm charts, and automation will be an advantage.

Your key responsibilities will involve designing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions, and automating infrastructure provisioning with tools like Terraform, Ansible, and Pulumi. Monitoring and optimizing cloud environments, implementing containerization and orchestration with Docker and Kubernetes (EKS/GKE/AKS), and maintaining logging, monitoring, and alerting systems (ELK, Prometheus, Grafana, Datadog) are crucial aspects of the role. Ensuring system security, availability, and performance tuning; managing secrets and credentials using tools like Vault and Secrets Manager; troubleshooting infrastructure and deployment issues; and implementing blue-green and canary deployments will also be part of your responsibilities. Collaboration with developers to enhance system reliability and productivity is key.

Preferred skills include certification as an Azure DevOps Engineer, experience with multi-cloud environments, microservices, and event-driven systems, and exposure to AI/ML pipelines and data engineering workflows.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
The role of DevOps Specialist at our organization requires a seasoned professional with over 8 years of experience in DevOps and infrastructure practices, with a focus on AWS, Azure, and GCP cloud environments. As a DevOps Specialist, you will be responsible for orchestrating and managing Docker and Kubernetes environments to ensure scalable application deployment. Your key responsibilities will include designing and implementing microservices-based architecture, supporting and optimizing CI/CD pipeline architecture, and managing infrastructure using Infrastructure-as-Code tools like Terraform, Pulumi, or CloudFormation. You will also maintain version control practices using Git and related tools.

In this role, you will build, monitor, and maintain development, staging, and production environments; develop automation scripts using Python, Bash, or similar scripting languages; and implement and support CI/CD pipelines using tools like Jenkins, GitHub Actions, or AWS CodePipeline. Monitoring and logging tools such as Prometheus and Grafana will be used to ensure system performance and reliability. Incident response, root cause analysis, and preventive measures will also be part of your responsibilities as you collaborate with Agile teams and follow DevOps best practices to drive continuous improvement.

The primary skills required for this role include strong hands-on experience with Docker and Kubernetes, proficiency in configuring Kubernetes resources using YAML or GitOps, a solid understanding of microservices architecture and the SDLC, experience with Ansible for configuration management, proficiency with Infrastructure-as-Code tools, and working knowledge of Git and version control systems. Secondary skills that will be beneficial include experience with CI/CD tools, strong scripting skills, familiarity with monitoring and alerting tools, experience managing multiple environments and deployment pipelines, excellent analytical and problem-solving skills, and familiarity with Agile and DevOps methodologies.

To qualify for this position, you should hold a Bachelor's degree in computer science, information technology, or a related field. If you are looking to be part of a dynamic team and lead the development and implementation of robust DevOps and infrastructure practices, this role could be the perfect fit for you.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior Full Stack Engineer at Genloop, you will be responsible for building secure, elegant, and high-performance systems. You will own both the frontend experience and backend infrastructure, as well as cloud deployment, for Genloop's enterprise platform. This platform is trusted to deliver reliable insights over structured data without delays or dependency bottlenecks. We value engineers who prioritize sub-second load times, eliminate edge-case failures, and create systems that are both safe and seamless.

Your key responsibilities will include architecting and deploying secure full-stack applications using technologies such as React/Next.js and Python/Node. You will define infrastructure as code (IaC), automate deployments, and ensure system uptime through robust monitoring and alerting. Additionally, you will set up access control, logging, and compliance pipelines, and ensure audit readiness for standards like SOC 2 and HIPAA. Leading efforts on performance, observability, and platform reliability will also be part of your role, along with translating research and user feedback into fast and intuitive user experiences. You will play a key role in shaping the engineering culture through code reviews, testing, CI/CD, and documentation standards.

To qualify for this role, you should have at least 4 years of experience building and scaling full-stack, cloud-native applications. Deep expertise in modern web stacks and cloud tools such as Next.js, Node/Python, Docker, Kubernetes, and AWS/GCP is essential. Hands-on experience with IaC tools like Terraform or Pulumi is required, along with experience building for regulated environments like SOC 2, HIPAA, and ISO 27001. Strong product intuition and a focus on the user journey are important qualities for this position. Candidates from Tier 1 engineering colleges or those with a strong open-source/systems background are preferred.

Genloop is a research-first AI company focused on building customized, continuously learning AI systems. Our team comprises talented individuals from institutions like Stanford, Apple, IITs, and leading tech firms, united by a shared mission to make enterprise AI reliable, secure, and truly valuable.

In terms of compensation and benefits, we offer competitive salaries, meaningful equity, and world-class benefits. Final compensation will be based on your experience, expertise, and location. Genloop is proud to be an Equal Opportunity Employer; we believe that diversity leads to better products and a stronger team, and we are committed to fostering a culture where everyone feels they belong.
Posted 2 weeks ago
12.0 - 22.0 years
30 - 45 Lacs
Chennai
Work from Office
Kubernetes, Docker & multi-cloud orchestration tools. AWS, Azure, GCP, and private cloud environments, ensuring compatibility & interoperability. Code tools & cloud-neutral deployments. Front-end frameworks & back-end development. SQL & NoSQL, CI/CD.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
Sportz Interactive is dedicated to leveraging cloud technologies for improved efficiency, scalability, and security. We are looking for a Senior DevOps Engineer to lay the technical foundation for our cloud infrastructure and DevOps practices. As the first dedicated DevOps Leader in our fast-growing team, you will have the opportunity to build critical capabilities from scratch, including CI/CD pipelines, observability, IAM, and containerized deployments. A key mandate for this role is to steer the organization towards a modern container-based architecture, with robust, zero-downtime deployment strategies like canary and blue-green rollouts. You will play a hands-on role in designing, implementing, and scaling a secure and highly automated AWS infrastructure.

Responsibilities:
- Containerization Strategy: Lead the adoption of containerization using Docker and Kubernetes (or ECS/EKS), including image build, storage, and runtime orchestration.
- Progressive Delivery: Implement advanced deployment strategies like canary, blue-green, and feature flags to enable safe, controlled releases.
- CI/CD Pipelines: Build and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or similar to support rapid, reliable deployments.
- AWS Infrastructure: Own and optimize our AWS infrastructure, including VPCs, Subnets, NAT Gateways, Load Balancers (ALB/NLB), ECS/EKS/Fargate, EC2, S3, IAM, Route53, and CloudWatch.
- Observability: Set up and integrate logging, metrics, and tracing to provide full-stack visibility using tools like Datadog, Prometheus/Grafana, and CloudWatch.
- Security & IAM: Design and enforce IAM policies, secrets management, and access controls across environments.
- Infra-as-Code: Define infrastructure using tools like Terraform or Pulumi to ensure consistency and repeatability.
- Culture Building: Mentor engineers on DevOps practices, automate manual processes, and help foster a culture of operational excellence.

Qualifications:
- 5+ years of DevOps or Infrastructure Engineering experience.
- Strong proficiency in AWS, especially with core services like VPCs, ALB/NLB, IAM, ECS/EKS, CloudWatch, and Route53.
- Solid hands-on experience with Docker and container orchestration (Kubernetes, ECS, or Fargate).
- Deep understanding of modern deployment strategies, including canary, blue-green, and rolling updates.
- Proficiency with CI/CD tools like GitHub Actions, GitLab CI, and ArgoCD.
- Experience with infrastructure-as-code, with Terraform preferred.
- Strong scripting skills in Bash, Python, or similar languages.
- Clear track record of building DevOps capabilities in early-stage or fast-scaling teams.

Good to have:
- Experience with service mesh (e.g., Istio or AWS App Mesh) for traffic shaping in canary deployments.
- Familiarity with policy-as-code (e.g., OPA, AWS SCPs).
- Exposure to cost-optimization strategies on AWS.
- Prior experience with compliance-ready environments (SOC2, ISO27001).
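One common way to realize the canary and blue-green rollouts described above on AWS is weighted forwarding on an ALB listener. Below is a minimal hedged sketch in Pulumi with TypeScript; the config keys, resource names, and the 90/10 split are illustrative assumptions rather than details from the posting.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const cfg = new pulumi.Config();
const vpcId = cfg.require("vpcId");    // existing VPC id, supplied per stack (hypothetical key)
const albArn = cfg.require("albArn");  // existing Application Load Balancer (hypothetical key)

// Two target groups: "blue" serves the stable release, "green" the canary.
const blue = new aws.lb.TargetGroup("blue", { port: 80, protocol: "HTTP", targetType: "ip", vpcId });
const green = new aws.lb.TargetGroup("green", { port: 80, protocol: "HTTP", targetType: "ip", vpcId });

// Weighted forwarding sends 10% of traffic to the canary; shifting the weights
// (and eventually swapping them) completes a blue-green cutover.
const listener = new aws.lb.Listener("web", {
    loadBalancerArn: albArn,
    port: 80,
    defaultActions: [{
        type: "forward",
        forward: {
            targetGroups: [
                { arn: blue.arn, weight: 90 },
                { arn: green.arn, weight: 10 },
            ],
        },
    }],
});

export const listenerArn = listener.arn;
```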
Posted 2 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Remote
Seeking a DevOps Engineer with expertise in Pulumi and TypeScript to design, deploy, and manage scalable cloud infrastructure. Must have experience with Pulumi CLI, stack management, TypeScript testing, and CI/CD integration.
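For context on the stack named in this listing, this is roughly what a minimal Pulumi + TypeScript program and its CLI workflow look like; the project name, config key, and the S3 resource are placeholders chosen for illustration, not requirements from the listing.

```typescript
// index.ts of a Pulumi TypeScript project (project name assumed to be "myproj").
// Typical flow with the standard Pulumi CLI:
//   pulumi stack init dev
//   pulumi config set myproj:bucketCount 2
//   pulumi preview && pulumi up
//   pulumi stack output bucketNames
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const cfg = new pulumi.Config();                         // reads the project's config namespace
const bucketCount = cfg.getNumber("bucketCount") ?? 1;   // per-stack value with a default

const buckets: aws.s3.Bucket[] = [];
for (let i = 0; i < bucketCount; i++) {
    // Resource names include the stack name so dev/stage/prod stacks don't collide.
    buckets.push(new aws.s3.Bucket(`data-${pulumi.getStack()}-${i}`));
}

// Stack outputs, visible via `pulumi stack output`.
export const bucketNames = buckets.map(b => b.bucket);
```

On the testing side, Pulumi TypeScript programs are commonly unit-tested by mocking the deployment engine with `pulumi.runtime.setMocks` and asserting on resource inputs under a regular Node.js test runner.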
Posted 4 weeks ago
7.0 - 11.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Role: Tech Lead/Developer
Work Environment: Hybrid (4 days office + 1 day remote)
Engagement: Full-time, permanent
Location: Madiwala (Bangalore)

Required: Senior Node.js developer with hands-on experience building serverless applications using AWS Lambda, API Gateway, RDS, SQS, SNS, and Timestream.
Database: Proficiency with Sequelize ORM for database modelling, queries, and migrations in production environments.
Infrastructure: Must have experience with either SST (Serverless Stack Framework) or Pulumi for AWS infrastructure-as-code deployment and management.
Architecture (good to have): Understanding of event-driven serverless architecture, message queues, pub/sub patterns, and time-series data processing.
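To make the Sequelize-in-a-serverless-context requirement concrete, here is a minimal hedged sketch in TypeScript; the model, fields, connection-string variable, and handler shape are illustrative assumptions (a Postgres driver such as `pg` would also be needed at runtime), not details taken from the listing.

```typescript
import { Sequelize, DataTypes } from "sequelize";

// Connection string comes from the environment (e.g. an RDS Postgres instance);
// a small pool keeps connection counts low inside a Lambda container.
const sequelize = new Sequelize(process.env.DATABASE_URL as string, {
    dialect: "postgres",
    pool: { max: 2, min: 0, idle: 5000 },
});

// A made-up model for illustration only.
const Device = sequelize.define("Device", {
    id: { type: DataTypes.UUID, defaultValue: DataTypes.UUIDV4, primaryKey: true },
    name: { type: DataTypes.STRING, allowNull: false },
    lastSeenAt: { type: DataTypes.DATE },
});

// Minimal Lambda-style handler: look up a device by id from the path parameters.
export const handler = async (event: { pathParameters?: { id?: string } }) => {
    const device = await Device.findByPk(event.pathParameters?.id ?? "");
    return {
        statusCode: device ? 200 : 404,
        body: JSON.stringify(device ?? { message: "not found" }),
    };
};
```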
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
What You'll Do
We're hiring a Site Reliability Engineer to help build and maintain the backbone of Avalara's SaaS platforms. As part of our global Reliability Engineering team, you'll play a key role in ensuring the performance, availability, and observability of critical systems used by millions of users. This role combines hands-on infrastructure expertise with modern SRE practices and the opportunity to contribute to the evolution of AI-powered operations. You'll work closely with engineering and operations teams across regions to drive automation, improve incident response, and proactively detect issues using data and machine learning.

What Your Responsibilities Will Be
- Own the reliability and performance of production systems across multiple environments and multiple clouds (AWS, GCP, OCI).
- Use AI/ML-driven tools and automation to improve observability and incident response.
- Collaborate with development teams on CI/CD pipelines, infrastructure deployments, and secure practices.
- Perform root cause analysis, drive postmortems, and reduce recurring incidents.
- Contribute to compliance and security initiatives (SOX, SOC 2, ISO 27001, access and controls).
- Participate in a global on-call rotation and a knowledge-sharing culture.

What You'll Need to Be Successful
- 5+ years in SRE, DevOps, or infrastructure engineering roles.
- Expertise with AWS (GCP or OCI is a plus); AWS Certified Solutions Architect Associate or equivalent.
- Strong scripting/programming skills (Python, Go, Bash, or similar).
- Experience with infrastructure as code (Terraform, CloudFormation, Pulumi).
- Proficiency in Linux environments, containers (Docker/Kubernetes), and CI/CD workflows.
- Strong written and verbal communication skills to support worldwide collaboration.
Posted 1 month ago
4.0 - 9.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Location: Bangalore

At Practo, we are on a mission to simplify healthcare and ensure that every individual has access to quality care. As a leading digital healthcare platform, we connect millions of patients with healthcare providers, making healthcare services more accessible and efficient. Join our dynamic team and contribute to transforming the future of healthcare.

Job Overview: Practo is looking for a skilled Site Reliability Engineer (SRE) to join our team. The SRE will play a critical role in maintaining the reliability, performance, and scalability of our services. This role involves working with cloud platforms such as AWS, Azure, and Oracle, managing Ubuntu-based systems, and ensuring seamless operation of our infrastructure. The ideal candidate will have a strong background in system administration, cloud technologies, and modern DevOps practices.

Key Responsibilities:
- Infrastructure Management: Design, implement, and manage scalable, resilient, and secure infrastructure on cloud providers such as AWS, Azure, and Oracle. Oversee the administration of Ubuntu servers, ensuring optimal performance and uptime.
- Automation and Monitoring: Implement monitoring and alerting systems to proactively identify and resolve issues before they impact users. Automate repetitive tasks to improve system reliability and operational efficiency.
- Containerization and Orchestration: Deploy and manage containerized applications using Docker. Utilize Kubernetes for container orchestration, ensuring efficient and reliable application deployment and scaling.
- Performance Optimization: Analyze system performance metrics and optimize infrastructure to meet performance targets. Troubleshoot and resolve issues related to server performance, network latency, and other system bottlenecks.
- Collaboration and Support: Work closely with development teams to ensure new applications and features are designed with reliability and scalability in mind. Provide guidance and mentorship to junior engineers on best practices for system reliability and cloud management. Participate in on-call rotations to provide 24/7 support for critical issues.
- Security and Compliance: Implement security best practices across all infrastructure components, including firewalls, VPNs, and access controls. Ensure compliance with industry standards and internal policies for data protection and privacy.

Technical Skills:
- Proven experience with cloud providers: AWS, Azure, and Oracle.
- Strong proficiency in managing and troubleshooting Ubuntu operating systems.
- Hands-on experience with Nginx, Kubernetes, and Docker.
- Familiarity with scripting languages (e.g., Bash, Python) for automation tasks.
- Experience with CI/CD pipelines and tools like Jenkins, GitLab CI, or equivalent.
- Knowledge of networking fundamentals and security best practices.

Professional Experience:
- 2+ years of experience in a Site Reliability Engineer or similar role.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- Self-motivated with the ability to work independently and as part of a team.
Posted 1 month ago
10.0 - 12.0 years
30 - 35 Lacs
Chennai
Work from Office
Kubernetes, Docker & multi-cloud orchestration tools. AWS, Azure, GCP, and private cloud environments, ensuring compatibility & interoperability. Code tools & cloud-neutral deployments. Front-end frameworks & back-end development. SQL & NoSQL, CI/CD.
Posted 2 months ago
3 - 7 years
8 - 12 Lacs
Ahmedabad
Remote
Design, implement, and maintain secure, scalable infrastructure using Azure and AWS services. Implement and manage Zero Trust Security architectures, access controls, and policy enforcement.

Required candidate profile:
- Proven experience with both AWS and Azure cloud platforms
- Strong understanding and practical implementation of Zero Trust Security frameworks
- Experience with IaC tools like Terraform, Ansible, Pulumi, etc.
Posted 2 months ago