8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as blue-green and canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using configuration management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services such as CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Bash (Linux), Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows systems engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency with AWS storage services: S3, EFS, EBS, and archival options such as Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools such as AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools such as Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
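The blue-green and canary strategies named in this listing both shift traffic gradually from a current ("blue") fleet to a new ("green") one, rolling back if the new version misbehaves. A minimal sketch of that decision logic follows; the step weights, error threshold, and function names are illustrative assumptions, not any particular AWS API:

```python
import random

def choose_target(green_weight: float) -> str:
    """Weighted routing: send a request to 'green' with probability
    green_weight, otherwise to the current 'blue' fleet."""
    return "green" if random.random() < green_weight else "blue"

def canary_rollout(green_error_rate: float,
                   steps=(0.05, 0.25, 0.5, 1.0),
                   max_error_rate=0.01) -> float:
    """Step traffic toward green; abort and return all traffic to blue
    if green's observed error rate exceeds the threshold."""
    weight = 0.0
    for step in steps:
        weight = step
        if green_error_rate > max_error_rate:
            return 0.0  # roll back: 100% of traffic stays on blue
    return weight       # full cutover to green

print(canary_rollout(green_error_rate=0.001))  # 1.0 (healthy, full cutover)
print(canary_rollout(green_error_rate=0.05))   # 0.0 (rolled back)
```

In practice the same shape appears in Route 53 weighted records, ALB weighted target groups, or CodeDeploy traffic-shifting configs; only the weight source and health signal change.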
Posted 4 weeks ago
8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as blue-green and canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using configuration management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services such as CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Bash (Linux), Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows systems engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency with AWS storage services: S3, EFS, EBS, and archival options such as Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools such as AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools such as Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
Posted 4 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as blue-green and canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using configuration management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services such as CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Bash (Linux), Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows systems engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency with AWS storage services: S3, EFS, EBS, and archival options such as Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools such as AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools such as Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Anchor performance engineering and testing efforts across banking engineering disciplines and provide ongoing input into the overall process improvement of the performance engineering discipline within Digital/Channels Transformation
- Build performance assurance procedures with the latest feasible tools and techniques, and establish a performance test automation process to improve testing productivity
- Support multiple cloud (AWS) migration and production initiatives using a wide range of tools and utilities; identify performance-related issues in applications and systems and present your findings to other teams in the organisation to ensure system reliability
- Represent testing at Scrum meetings and all other key project meetings, and provide a single point of accountability and escalation for testing within the scrum teams
- Advise on needed infrastructure and performance engineering and testing guidelines, and be responsible for performance risk assessment of various platform features
- Work cross-functionally with software product, development and support teams to accelerate testing delivery and improve application quality at HSBC
- Work across all global activities and support the performance engineering team in ensuring any testing-related dependencies and touchpoints are in place
- Act as a performance engineering SME, gaining exposure to a broader set of problems: understanding customer experience, migrations, new cloud initiatives, improving platform performance, and optimising environments
- Establish effective working relationships across other areas of HSBC, e.g. Business Product Owner, Digital Delivery Team, Transformation, and IT
- Provide recommendations to the Product Owner and/or other project stakeholders on product readiness to go live

Requirements

To be successful in this role, you should meet the following requirements:
- Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g. AWS)
- Working knowledge of cloud platforms such as AWS and key AWS services (preferably with an AWS Solutions Architect certification), and DevOps tools such as CloudFormation and Terraform
- Performance engineering and testing of web applications (Linux), including performance testing and tuning of web-based applications
- Performance engineering toolsets such as JMeter, LoadRunner, Micro Focus Performance Center, BrowserStack, Taurus and Lighthouse, plus monitoring/logging tools such as AppDynamics, New Relic, Splunk and Datadog
- Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
- Docker, Kubernetes, cloud-native development and container orchestration frameworks: Kubernetes clusters, pods and nodes, vertical/horizontal pod autoscaling concepts, and high availability
- Planning, estimating, designing, executing and analysing output from performance tests
- Working in an agile environment, a DevOps team or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Programming and scripting skills in Java, Shell, Scala, Groovy or Python, and knowledge of security mechanisms such as OAuth
- Tools such as GitHub, Jira and Confluence
- Assisting resiliency and production support teams with performance incident root cause analysis
- Ability to prioritise work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgement and decision making, being able to rationalise and present the background and reasoning for direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
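Most of the load-testing toolsets this role names (JMeter, LoadRunner, Taurus) ultimately reduce to the same reporting step: collecting response-time samples and summarising them as percentiles rather than averages, because tail latency is what users feel. A small stdlib sketch of that summary (the nearest-rank percentile method here is one common convention; tools differ in which estimator they use):

```python
import statistics

def summarize_latencies(samples_ms):
    """Summarise a list of response times (ms) the way a load-test
    report would: mean plus p50/p90/p99 tail percentiles."""
    ordered = sorted(samples_ms)

    def pct(p):
        # nearest-rank percentile: index of the ceil(p% * n)-th sample
        idx = max(0, round(p / 100 * len(ordered)) - 1)
        return ordered[idx]

    return {
        "mean": statistics.fmean(ordered),
        "p50": pct(50),
        "p90": pct(90),
        "p99": pct(99),
        "max": ordered[-1],
    }

# 100 simulated response times, 1..100 ms
print(summarize_latencies(list(range(1, 101))))
# {'mean': 50.5, 'p50': 50, 'p90': 90, 'p99': 99, 'max': 100}
```

The gap between p50 and p99 is usually the first thing a performance engineer inspects: a healthy service has a narrow spread, while lock contention or GC pauses show up as a long tail.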
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.

In this role, you will:
- Anchor performance engineering and testing efforts across banking engineering disciplines and provide ongoing input into the overall process improvement of the performance engineering discipline within Digital/Channels Transformation
- Build performance assurance procedures with the latest feasible tools and techniques, and establish a performance test automation process to improve testing productivity
- Support multiple cloud (AWS) migration and production initiatives using a wide range of tools and utilities; identify performance-related issues in applications and systems and present your findings to other teams in the organisation to ensure system reliability
- Represent testing at Scrum meetings and all other key project meetings, and provide a single point of accountability and escalation for testing within the scrum teams
- Advise on needed infrastructure and performance engineering and testing guidelines, and be responsible for performance risk assessment of various platform features
- Work cross-functionally with software product, development and support teams to accelerate testing delivery and improve application quality at HSBC
- Work across all global activities and support the performance engineering team in ensuring any testing-related dependencies and touchpoints are in place
- Act as a performance engineering SME, gaining exposure to a broader set of problems: understanding customer experience, migrations, new cloud initiatives, improving platform performance, and optimising environments
- Establish effective working relationships across other areas of HSBC, e.g. Business Product Owner, Digital Delivery Team, Transformation, and IT
- Provide recommendations to the Product Owner and/or other project stakeholders on product readiness to go live
- Take ownership of and accountability for quality deliverables and performance engineering-related activities within the agile development lifecycle
- Design and implement solutions to evaluate and improve performance and scalability of web apps/platforms and platform-level applications
- Represent performance engineering across the project and be accountable for defining, shaping and agreeing on testing schedules in the context of the whole project schedule
- Be accountable for the successful launch of scalable platform products and engineering initiatives, and for alignment on cloud deployments
- Resolve performance-testing impediments together with the scrum teams/pods
- Provide technical expertise in performance requirements analysis, design, effort estimation, testing and delivery of scalable solutions
- Engage with senior stakeholders and build effective relationships, trust and understanding through the management of testing and the related risks
- Participate in design and architectural reviews of the engineering ecosystem to voice performance and scalability concerns
- Actively contribute to evolving the overall Digital/Channels Transformation performance engineering and test strategy
- Develop tools and processes to performance test software applications, using industry-standard tools to automate simulation of expected user workloads and identify performance bottlenecks with monitoring tools
- Execute and analyse test results, and establish reliable statistical models for response time, throughput, network utilisation and other application performance metrics
- Build relationships and successful teams located in other geographies and deliver maximum productivity
- Identify bottlenecks in the hardware and software platform, application code stack and network, and document reliable predictions of potential bottlenecks as computing platforms and workloads change
- Participate in and support the design and evaluation of new tools, frameworks and techniques to enhance system performance, scalability and stability
- Bring a product-focused mindset and perform detailed root cause analysis of test failures and of performance and scalability issues; identify gaps, issues or other areas of concern, and proactively define, propose and enact process and workflow improvements to mitigate them
- Collaborate with development, SRE and production support teams in evaluating performance issues and solutions for the infrastructure of the entire HSBC digital stack

Requirements

To be successful in this role, you should meet the following requirements:
- Performance engineering, testing and tuning of cloud-hosted digital platforms (e.g. AWS)
- Working knowledge of cloud platforms such as AWS and key AWS services (preferably with an AWS Solutions Architect certification), and DevOps tools such as CloudFormation and Terraform
- Performance engineering and testing of web applications (Linux), including performance testing and tuning of web-based applications
- Performance engineering toolsets such as JMeter, LoadRunner, Micro Focus Performance Center, BrowserStack, Taurus and Lighthouse, plus monitoring/logging tools such as AppDynamics, New Relic, Splunk and Datadog
- Windows/UNIX/Linux/web/database/network performance monitors to diagnose performance issues, along with JVM tuning and heap analysis skills
- Docker, Kubernetes, cloud-native development and container orchestration frameworks: Kubernetes clusters, pods and nodes, vertical/horizontal pod autoscaling concepts, and high availability
- Planning, estimating, designing, executing and analysing output from performance tests
- Working in an agile environment, a DevOps team or a similar multi-skilled team in a technically demanding function
- Jenkins and CI/CD pipelines, including pipeline scripting
- Programming and scripting skills in Java, Shell, Scala, Groovy or Python, and knowledge of security mechanisms such as OAuth
- Tools such as GitHub, Jira and Confluence
- Assisting resiliency and production support teams with performance incident root cause analysis
- Ability to prioritise work effectively and deliver within agreed service levels in a diverse and ever-changing environment
- High levels of judgement and decision making, being able to rationalise and present the background and reasoning for direction taken
- Strong stakeholder management and excellent communication skills
- Extensive knowledge of risk management and mitigation
- Strong analytical and problem-solving skills

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
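The "statistical models for response time, throughput" responsibility above usually starts from Little's Law, which ties the three core load-test quantities together: average concurrency L equals arrival rate λ times average time in system W. A minimal sketch (the thread-sizing helper and its overhead factor are illustrative assumptions, not a standard formula):

```python
import math

def littles_law_concurrency(throughput_rps: float, avg_response_s: float) -> float:
    """Little's Law, L = lambda * W: the average number of requests
    in flight equals arrival rate times average time in the system."""
    return throughput_rps * avg_response_s

def required_vusers(target_rps: float, avg_response_s: float,
                    headroom: float = 1.0) -> int:
    """Rough sizing of virtual users for a load generator; the headroom
    multiplier is a planning fudge factor, not part of the law."""
    return math.ceil(littles_law_concurrency(target_rps, avg_response_s) * headroom)

# To sustain 200 req/s at a 250 ms average response time,
# roughly 50 requests must be in flight at once:
print(littles_law_concurrency(200, 0.25))  # 50.0
print(required_vusers(200, 0.25))          # 50
```

The same identity works in reverse during analysis: if a JMeter run shows 50 active threads but only 100 req/s, the implied 500 ms response time flags a bottleneck even before inspecting the percentile report.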
Posted 4 weeks ago
8.0 years
6 - 8 Lacs
Hyderabad
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking skilled and dedicated Lead AWS Cloud & DevOps Engineers to join our teams across multiple locations in India. In this role, you will contribute to critical client applications and internal projects, serving as an AWS Systems Engineer specializing in Infrastructure as Code, containers, or DevOps solutions.

Responsibilities
- Provide fault tolerance, high availability, scalability, and security on AWS infrastructure and platforms
- Build and implement CI/CD pipelines with automated systems for testing and deployment
- Facilitate production deployments using strategies such as blue-green and canary
- Automate AWS infrastructure and platform provisioning using Infrastructure as Code solutions
- Configure systems using configuration management tools for automation
- Optimize AWS network architecture, including VPCs, subnets, routers, and transit gateways
- Troubleshoot and monitor cloud performance with observability services such as CloudWatch and VPC Flow Logs
- Establish security frameworks within AWS environments using IAM roles, WAFs, and CloudTrail
- Develop scripts in Bash (Linux), Python, or PowerShell to streamline operational tasks

Requirements
- Minimum 8 years of experience, including 5 years in cloud and DevOps roles
- Production experience in Linux/Windows systems engineering
- Proficiency with AWS compute services: EC2, Lambda, Auto Scaling, and load balancers
- Competency with AWS storage services: S3, EFS, EBS, and archival options such as Glacier
- Expertise in AWS security services: IAM, KMS encryption, and monitoring tools such as AWS Config
- Familiarity with AWS networking services: VPC setups, VPN usage, and endpoints
- Hands-on experience with observability tools such as CloudWatch Alarms, ECS/EKS monitoring, and VPC Flow Logs
- Skills in automation with orchestration tools such as Terraform or AWS CloudFormation
- Expertise in container orchestration using Docker and platforms such as Kubernetes on EKS/ECS
- Flexibility to implement various deployment strategies, including in-place modifications and blue-green approaches

We offer
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
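The Auto Scaling requirement above boils down to a simple control loop: compare an observed metric against a target and resize the fleet proportionally. The sketch below mimics the shape of target-tracking scaling; it is a simplified illustration, not the actual AWS algorithm, and the default target and bounds are assumptions:

```python
import math

def desired_capacity(current: int, cpu_utilization: float,
                     target: float = 50.0,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Target-tracking style decision: scale the fleet so that average
    CPU would land near `target`, clamped to the group's size bounds."""
    if cpu_utilization <= 0:
        return min_size
    # If 4 instances run at 80% CPU, ~6.4 instances would run at 50%.
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_size, min(max_size, desired))

print(desired_capacity(current=4, cpu_utilization=80.0))  # 7 -> scale out
print(desired_capacity(current=4, cpu_utilization=20.0))  # 2 -> scale in
```

Real target tracking adds cooldowns and warm-up periods so the loop does not oscillate; the proportional core, however, is exactly this ratio.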
Posted 4 weeks ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description

Key roles and responsibilities:
- Thorough understanding of OCI cloud concepts, environment and services
- Good hands-on experience with OCI architecture and design
- Implementation of OCI IaaS and PaaS services
- Conduct business process analysis/design, needs assessments and cost-benefit analysis related to the impact on the business
- Understand business needs and translate them into system requirements and an architecture aligned with the scope of the solution

Technical skills:
- Hands-on administration skills in the OCI cloud environment
- Good understanding of OCI cloud, network operations, and private and hybrid cloud administration
- Good understanding of IaaS, PaaS, SaaS and cloud design
- Expertise in designing and planning cloud environments in an enterprise setting, including application dependencies, client presentation mechanisms, network connectivity and overall virtualization strategies
- Good understanding of virtualization management and configuration
- Knowledge of autoscaling concepts (scale up and scale down) for VMs, VM upgrades, and configuring availability domains/fault domains; building a technical and security infrastructure in OCI cloud for selected apps/workloads
- Understanding of OCI services: VCN, subnets, route tables, Dynamic Routing Gateway, Service Gateway, security lists, NSGs, Load Balancer, storage buckets, logging, auditing, monitoring, provisioning, security services (Cloud Guard, Network Firewall), and IAM
- Ability to promptly diagnose and remedy cloud-related problems and failures
- Hands-on experience with OCI backend infrastructure, troubleshooting, and root cause analysis
- Manage server build, commission and decommission processes
- Logging and monitoring for IaaS/PaaS resources
- FastConnect setup and traffic-flow experience, with working knowledge of IPSec/VPN tunneling
- Knowledge of VCN peering and managing the Dynamic Routing Gateway and security lists on OCI
- Implement and maintain all OCI infrastructure and services: VMs, OCI Functions, monitoring, and notifications
- Experienced in deploying OCI VMs and managing cloud workloads through OS Management Hub
- Experienced in running DR drills on a regular basis as needed or requested

Mandatory skills (must have):
- OCI certification: Oracle Cloud Infrastructure Architect, at least L2 or L2+

Good to have:
- Knowledge of other clouds: AWS/Azure
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform
- Knowledge of tools such as ServiceNow, BMC Helix, Ansible, Jenkins, Splunk
- Cloud automation using Python and PowerShell scripts
- Knowledge of DevOps and Kubernetes

Behavioural skills (must have):
- Good communication skills: effective written and oral
- Ability to lead a team of junior architects
- Eagerness to learn new cloud services and technologies
- Team collaboration
- Creative thinking in implementing new solutions

(ref:hirist.tech)
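Designing the VCN, subnets and route tables this role lists starts with carving a CIDR block into per-availability-domain subnets. That arithmetic is the same on any cloud and can be sketched with the stdlib (illustrative only; actual OCI subnets are created via the console, CLI, or Terraform):

```python
import ipaddress

def plan_subnets(vcn_cidr: str, new_prefix: int):
    """Split a VCN CIDR into equal-sized subnets, e.g. one per
    availability domain plus spares for future growth."""
    vcn = ipaddress.ip_network(vcn_cidr)
    return [str(subnet) for subnet in vcn.subnets(new_prefix=new_prefix)]

# A 10.0.0.0/16 VCN split into four /18 subnets:
print(plan_subnets("10.0.0.0/16", 18))
# ['10.0.0.0/18', '10.0.64.0/18', '10.0.128.0/18', '10.0.192.0/18']
```

Planning the split up front matters because subnet CIDRs cannot overlap within a VCN, and resizing a live subnet later typically means recreating it.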
Posted 4 weeks ago
0 years
0 Lacs
India
On-site
*Who you are*

You’re the person whose fingertips know the difference between spinning up a GPU cluster and spinning down a stale inference node. You love the "infrastructure behind the magic" of LLMs. You've built CI/CD pipelines that automatically version models, log inference metrics, and alert on drift. You’ve containerized GenAI services in Docker, deployed them on Kubernetes clusters (AKS or EKS), and used Terraform or ARM templates to manage infrastructure as code. You monitor cloud costs like a hawk, optimize GPU workloads, and sometimes sacrifice cost for performance, but never vice versa. You’re fluent in Python and Bash, can script tests for REST endpoints, and build automated feedback loops for model retraining. You’re comfortable working in Azure (OpenAI, Azure ML, Azure DevOps Pipelines) but cloud-agnostic enough to cover AWS or GCP if needed. You read MLOps/LLMOps blog posts or arXiv summaries on the weekend and implement improvements on Monday. You think of yourself as a self-driven engineer: no playbooks, no spoon-feeding, just solid automation, reliability, and a hunger to scale GenAI from prototype to production.

*What you will actually do*

- Architect and build deployment platforms for internal LLM services, starting from containerizing models and building CI/CD pipelines for inference microservices
- Write IaC (Terraform or ARM) to spin up clusters, endpoints, GPUs, storage, and logging infrastructure
- Integrate Azure OpenAI and Azure ML endpoints, pushing models via pipelines, versioning them, and enabling automatic retraining triggers
- Build monitoring and observability around latency, cost, error rates, drift, and prompt health metrics
- Optimize deployments (autoscaling, spot/GPU nodes, invalidation policies) to balance cost and performance
- Set up automated QA pipelines that validate model outputs (e.g. semantic similarity, hallucination detection) before merging
- Collaborate with ML, backend, and frontend teams to package components into release-ready backend services
- Manage alerts and rollbacks on failure, and ensure 99% uptime
- Create reusable tooling (CI templates, deployment scripts, infra modules) to make future projects plug-and-play

*Skills and knowledge*

- Strong scripting skills in Python and Bash for automation and pipelines
- Fluency with Docker and Kubernetes (especially AKS), containerizing LLM workloads
- Infrastructure-as-code expertise: Terraform (Azure provider) or ARM templates
- Experience with Azure DevOps or GitHub Actions for CI/CD of models and services
- Knowledge of Azure OpenAI, Azure ML, or equivalent cloud LLM endpoints
- Familiarity with monitoring setups (Azure Monitor, Prometheus/Grafana) to track latency, errors, drift, and costs
- Cost-optimization tactics: spot nodes, autoscaling, GPU utilization tracking
- Basic LLM understanding: inference latency/cost, deployment patterns, model versioning
- Ability to build lightweight QA checks or integrate with QA pipelines
- Cloud-agnostic awareness: experience with AWS or GCP backup systems
- Comfort establishing production-grade ops pipelines, automating deployments end to end
- Self-starter mentality: no playbooks required, with the ability to pick up new tools and drive infrastructure independently
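"Alert on drift", mentioned twice above, usually means comparing the distribution of a live metric (prompt length, embedding score, confidence) against a training-time baseline. One common heuristic is the Population Stability Index; the sketch below is a minimal stdlib version with the usual rule-of-thumb thresholds (PSI < 0.1 stable, 0.1–0.25 moderate, > 0.25 significant drift). The binning and smoothing choices here are assumptions of this sketch:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (`expected`)
    and a live sample (`actual`) of a numeric metric."""
    lo, hi = min(expected), max(expected)

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # clamp out-of-range live values into the edge bins
            idx = min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))
            counts[idx] += 1
        # tiny smoothing term avoids log(0) for empty bins
        return [(c + 1e-6) / len(sample) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = list(range(100))
assert psi(baseline, baseline) < 0.01            # identical -> no drift
assert psi(baseline, [x + 50 for x in baseline]) > 0.25  # shifted -> drift
```

In a pipeline, this check runs on a schedule against recent inference logs and fires the retraining trigger or an alert when the score crosses the chosen threshold.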
Posted 4 weeks ago
2.0 - 31.0 years
2 - 4 Lacs
Amrapali Dream Valley, Greater Noida
On-site
Key Responsibilities:
- Diagnose and fix performance bottlenecks across backend services, WebSocket connections, and API response times
- Investigate issues related to high memory usage, CPU spikes, and slow query execution
- Debug and optimize database queries (PostgreSQL) and ORM (Prisma) performance
- Implement and fine-tune connection pooling strategies for PostgreSQL and Redis
- Configure and maintain Kafka brokers, producers, and consumers to ensure high throughput
- Monitor and debug WebSocket issues such as connection drops, latency, and reconnection strategies
- Optimize Redis usage and troubleshoot memory leaks or blocking commands
- Set up and maintain Prometheus + Grafana for service and infrastructure monitoring
- Work on containerized infrastructure using Docker and Kubernetes, including load balancing and scaling services
- Collaborate with developers to fix memory leaks, inefficient queries, and slow endpoints
- Maintain high availability and fault tolerance across all backend components

🧠 Requirements:

Technical skills:
- Strong proficiency in Node.js and TypeScript
- Deep knowledge of Prisma ORM and PostgreSQL optimization
- Hands-on experience with Redis (pub/sub, caching, memory tuning)
- Solid understanding of WebSocket performance and reconnection handling
- Experience working with Kafka (event streaming, partitions, consumer groups)
- Familiarity with Docker, the container lifecycle, and multi-service orchestration
- Experience with Kubernetes (deployments, pods, autoscaling, resource limits)
- Familiarity with connection pooling strategies for databases and services
- Comfort with performance monitoring tools such as Prometheus, Grafana, and UptimeRobot

Soft skills:
- Excellent debugging and analytical skills
- Ability to work independently and solve complex issues
- Strong communication and documentation habits

✅ Preferred qualifications:
- 3+ years of experience in backend development
- Experience with CI/CD pipelines and production deployments
- Prior work with large-scale distributed systems is a plus
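The connection pooling called out above, whether for PostgreSQL, Redis, or Prisma's built-in pool, is at its core a bounded queue of reusable connections with blocking checkout. A minimal sketch of that core idea (in Python for illustration, though this role's stack is Node.js/TypeScript; real pools such as pgbouncer add health checks, idle timeouts, and connection recycling):

```python
import queue

class SimplePool:
    """Minimal connection pool: a fixed set of connections handed out
    and returned via a bounded FIFO queue."""

    def __init__(self, factory, size: int):
        self._q = queue.Queue(maxsize=size)
        for _ in range(size):
            self._q.put(factory())  # pre-open all connections

    def acquire(self, timeout=None):
        # Blocks when the pool is exhausted, which naturally applies
        # backpressure instead of overwhelming the database.
        return self._q.get(timeout=timeout)

    def release(self, conn):
        self._q.put(conn)

# Stand-in "connections"; a real factory would open a socket to the DB.
pool = SimplePool(factory=object, size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()  # reuses the connection that was just released
print(c3 is c1)      # True
```

Sizing is the tuning knob: too small a pool serializes requests, while too large a pool pushes contention into PostgreSQL itself (each backend connection costs server memory), which is why pool exhaustion and slow queries are usually debugged together.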
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.
Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS CDK for Java, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while resolving technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit tests with JUnit or an equivalent testing framework
- Oversee source code management on platforms such as GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency with Docker and container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools such as AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications following microservices principles
Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools such as Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools such as Jira and Confluence
- Knowledge of deployment strategies, including in-place, Blue-Green, and Canary deployment
- Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack
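The role stresses handling JSON payloads with mixed data types (strings, numbers, arrays) at API boundaries. As a minimal sketch of that normalization step (in Python rather than Java, with a hypothetical order event whose field names are made up for illustration):

```python
import json

def normalize_event(raw: str) -> dict:
    """Parse a JSON event payload and coerce the fields a
    hypothetical order-processing handler might expect."""
    event = json.loads(raw)
    return {
        "order_id": str(event["order_id"]),         # force string
        "quantity": int(event.get("quantity", 1)),  # force number
        "tags": list(event.get("tags", [])),        # force array
    }

payload = '{"order_id": 42, "quantity": "3", "tags": ["priority"]}'
print(normalize_event(payload))
```

Coercing types explicitly at the edge keeps downstream services from having to guess whether an upstream caller sent `42` or `"42"`.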
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, Java/J2EE, and hands-on experience implementing CI/CD pipelines.
Responsibilities
- Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
- Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
- Use the AWS Java SDK to interact with various AWS services effectively
- Drive deployment automation through the AWS CDK for Java, CloudFormation, or Terraform
- Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or ECS
- Apply advanced microservices concepts and adhere to best practices during development
- Build, test, and debug code while resolving technical setbacks effectively
- Expose application functionality via APIs using Lambda and Spring Boot
- Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
- Implement robust unit tests with JUnit or an equivalent testing framework
- Oversee source code management on platforms such as GitLab, GitHub, or Bitbucket
- Ensure efficient application builds using Maven or Gradle
- Coordinate development requirements, schedules, and other dependencies with multiple stakeholders
Requirements
- 5 to 12 years of experience in Java development and AWS services
- Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
- Proficiency with Docker and container orchestration through Kubernetes on AWS EKS or ECS
- Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
- Competency in deployment tools such as AWS CDK, Terraform, or CloudFormation
- Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
- Understanding of AWS orchestration tools for automation and data processing
- Ability to handle production workloads, automate tasks, and manage logs effectively
- Experience writing scalable applications following microservices principles
Nice to have
- Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
- Scripting skills in Linux shell, Python, or Windows PowerShell, or experience with Ansible/Chef/Puppet
- Experience with build automation tools such as Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
- Familiarity with collaboration tools such as Jira and Confluence
- Knowledge of deployment strategies, including in-place, Blue-Green, and Canary deployment
- Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack
Posted 1 month ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We are looking for a Senior Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.
What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases
- Evolve our infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate, large-scale applications on AWS
- Give back to the open-source community through contributions to code and blog posts
- This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
- At least 4 years of development experience
- Prior experience developing and working on consumer-facing web/app products
- Hands-on experience in JavaScript; exceptions can be made if you’re really good at any other language, with experience in building web/app-based tech products
- Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/)
- Good knowledge of async programming using callbacks, Promises, and async/await
- Hands-on experience with frontend codebases using HTML, CSS, and AJAX
- Working knowledge of MongoDB, Redis, MySQL
- Good understanding of data structures, algorithms, and operating systems
- Experience with AWS services such as EC2, ELB, Auto Scaling, CloudFront, S3
- Experience with the frontend stack (HTML, CSS) is an added advantage
- Experience in Vue.js would be a plus
- You might not have experience with all the tools we use, but you can learn them given guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities. We host weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids. An experienced therapist for better mental health and improved productivity and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
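The posting asks for async programming with callbacks, Promises, and async/await. A rough analogue of JavaScript's `Promise.all` fan-out pattern, sketched here in Python's asyncio (the service names and delays are invented for illustration):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for a real third-party API call; sleep simulates network latency
    await asyncio.sleep(delay)
    return f"{name}:ok"

async def main() -> list:
    # Fan out both calls concurrently, like Promise.all in JavaScript;
    # total wall time is roughly the slowest call, not the sum
    return await asyncio.gather(fetch("inventory", 0.01), fetch("pricing", 0.02))

results = asyncio.run(main())
print(results)
```

The key point the requirement is probing for: awaiting calls one by one serializes them, while gathering them runs the I/O waits concurrently.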
Posted 1 month ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We're looking for a Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate over our core platform, which handles the Fynd platform from onboarding a seller through serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.
What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases
- Evolve our infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate, large-scale applications on AWS
- Give back to the open-source community through contributions to code and blog posts
- This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
- At least 4 years of development experience
- Prior experience developing and working on consumer-facing web/app products
- Solid experience in Python, with experience in building web/app-based tech products
- Experience in at least one of the following frameworks: Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado
- Working knowledge of MySQL, MongoDB, Redis, Aerospike
- Good understanding of data structures, algorithms, and operating systems
- Experience with core AWS services such as EC2, ELB, Auto Scaling, CloudFront, S3, ElastiCache
- Understanding of Kafka, Docker, Kubernetes
- Knowledge of Solr, Elasticsearch
- Attention to detail
- You can dabble in frontend codebases using HTML, CSS, and JavaScript
- You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves
- You might not have experience with all the tools we use, but you can learn them given guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities. We host weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids. An experienced therapist for better mental health and improved productivity and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
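The requirements mention Redis and ElastiCache, whose bread-and-butter use in a commerce platform is TTL-based caching of hot reads. A tiny in-memory stand-in for that pattern (a real service would use a Redis client; the key names and the injectable clock are illustrative only):

```python
import time

class TTLCache:
    """Minimal in-memory sketch of the Redis/ElastiCache TTL pattern.
    The clock is injectable so expiry can be tested deterministically."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]   # lazy eviction on read
            return default
        return value

now = [0.0]  # fake clock we advance by hand
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
cache.set("seller:42", {"name": "Acme"})
print(cache.get("seller:42"))  # fresh entry
now[0] = 31.0                  # jump past the TTL
print(cache.get("seller:42"))  # expired, falls back to default
```

Expiry plus a cache-aside read path (try cache, fall back to the database, repopulate) is usually what interviewers mean by "working knowledge of Redis".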
Posted 1 month ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We are looking for a Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.
What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases
- Evolve our infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate, large-scale applications on AWS
- Give back to the open-source community through contributions to code and blog posts
- This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
- At least 3 years of development experience
- Prior experience developing and working on consumer-facing web/app products
- Hands-on experience in JavaScript; exceptions can be made if you’re really good at any other language, with experience in building web/app-based tech products
- Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/)
- Good knowledge of async programming using callbacks, Promises, and async/await
- Hands-on experience with frontend codebases using HTML, CSS, and AJAX
- Working knowledge of MongoDB, Redis, MySQL
- Good understanding of data structures, algorithms, and operating systems
- Experience with AWS services such as EC2, ELB, Auto Scaling, CloudFront, S3
- Experience with the frontend stack (HTML, CSS) is an added advantage
- Experience in Vue.js would be a plus
- You might not have experience with all the tools we use, but you can learn them given guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities. We host weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids. An experienced therapist for better mental health and improved productivity and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
Posted 1 month ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
We're looking for an Engineer/Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate over our core platform, which handles the Fynd platform from onboarding a seller through serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.
What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform
- Build bulletproof API integrations with third-party APIs for various use cases
- Evolve our infrastructure and add a few more nines to our overall availability
- Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate, large-scale applications on AWS
- Give back to the open-source community through contributions to code and blog posts
- This is a startup, so everything can change as we experiment with more product improvements
Some Specific Requirements
- At least 2 years of development experience
- Prior experience developing and working on consumer-facing web/app products
- Solid experience in Python, with experience in building web/app-based tech products
- Experience in at least one of the following frameworks: Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado
- Working knowledge of MySQL, MongoDB, Redis, Aerospike
- Good understanding of data structures, algorithms, and operating systems
- Experience with core AWS services such as EC2, ELB, Auto Scaling, CloudFront, S3, ElastiCache
- Understanding of Kafka, Docker, Kubernetes
- Knowledge of Solr, Elasticsearch
- Attention to detail
- You can dabble in frontend codebases using HTML, CSS, and JavaScript
- You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves
- You might not have experience with all the tools we use, but you can learn them given guidance and resources
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: You can also do an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities. We host weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids. An experienced therapist for better mental health and improved productivity and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
Posted 1 month ago
1.0 years
0 Lacs
Kochi, Kerala, India
On-site
A proactive and detail-oriented DevOps Engineer with 1 year of hands-on experience in cloud infrastructure automation, container orchestration, and CI/CD implementation. Strong practical knowledge of Linux environments, cloud-native technologies, and network security. Adept at leveraging tools like Jenkins, Ansible, and Docker to streamline deployment workflows and ensure system reliability.
Key Skills & Experience
- Kubernetes (EKS): Experience deploying, managing, and troubleshooting applications on AWS Elastic Kubernetes Service (EKS). Familiar with Helm, autoscaling, and monitoring within Kubernetes environments.
- Docker: Proficient in creating, managing, and optimizing Docker containers. Skilled in writing custom Dockerfiles and troubleshooting container issues.
- Linux & Shell Scripting: Strong expertise in Linux system administration and daily use of shell commands for automation, monitoring, and system diagnostics.
- Network Security: Hands-on experience configuring and managing security groups, firewalls, and IAM policies, and ensuring secure communication between services in AWS and Kubernetes environments.
- Ansible: Experience writing and executing playbooks for configuration management and automated provisioning of infrastructure.
- Jenkins: Skilled in designing and managing Jenkins pipelines for CI/CD workflows, integrating with Git, Docker, and AWS services.
AWS Cloud
- EKS: Core competency in managing containerized workloads.
- EC2 & S3: Provisioning, securing, and managing compute and storage resources.
- Security Groups & IAM: Implementing secure access policies and managing service-to-service communication.
- Lambda: Working knowledge of setting up serverless functions for event-driven automation.
Mandatory Hands-On Experience
- Linux systems and advanced shell command usage
- Jenkins pipeline configuration and maintenance
- AWS services including EKS, EC2, S3, and Security Groups
- Network security concepts and practical enforcement
A quick learner and effective problem solver, passionate about automation, scalability, and secure DevOps practices.
ON-SITE: KOCHI-INFOPARK. IMMEDIATE JOINER.
Maximum CTC: ₹3 LPA (Three Lakhs per Annum)
Experience Required: Up to 2 years (candidates with more than 2 years of experience need not apply)
SEND YOUR RESUME TO: hrteam@touchworldtechnology.com
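The monitoring and diagnostics duties above boil down to one recurring task: compare observed metrics against thresholds and raise alerts. A minimal sketch of that check (the metric names and limits are invented, not a real CloudWatch or Jenkins configuration):

```python
def evaluate_health(metrics: dict, limits: dict) -> list:
    """Return an alert message for every metric that exceeds its limit.
    Metrics without a configured limit are ignored."""
    alerts = []
    for name, value in metrics.items():
        limit = limits.get(name)
        if limit is not None and value > limit:
            alerts.append(f"{name}={value} exceeds limit {limit}")
    return alerts

# Hypothetical sample from a host agent
alerts = evaluate_health(
    {"cpu_pct": 93, "mem_pct": 61},
    {"cpu_pct": 85, "mem_pct": 90},
)
print(alerts)
```

In practice the same comparison would run on a schedule (cron, a Jenkins job, or an alerting rule engine) and route its output to a pager or chat channel.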
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad
On-site
Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind the exciting world of continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.
Job Title: Sr. DevOps Engineer
Location: Ahmedabad/Pune
Experience: 5+ Years
Educational Qualification: UG: BS/MS in Computer Science, or another engineering/technical degree
Responsibilities:
Deployments to Development, Staging, and Production. Take charge of managing deployments to each environment with ease:
- Skillfully use GitHub workflows to identify and resolve the root causes of merge conflicts and version mismatches
- Deploy hotfixes promptly by leveraging deployment automation and scripts
- Provide guidance on, and approval of, Ruby on Rails (Ruby) scripting performed by junior engineers, ensuring smooth code deployment across development environments
- Review and approve CI/CD scripting pull requests from engineers, offering valuable feedback to enhance code quality
Ensure the smooth operation of each environment on a daily basis, promptly addressing any issues that arise:
- Leverage Datadog monitoring to maintain a remarkable 99.999% uptime for each development environment
- Plan Bash and Ruby scripting strategically to automate health checks and enable auto-healing when errors occur
- Implement effective auto-scaling strategies to handle higher-than-usual traffic on these development environments
- Evaluate historical loads and implement autoscaling mechanisms to provide additional resources and computing power, optimizing workload performance
- Collaborate with DevOps to plan capacity and monitoring using Datadog
- Analyze developer workflows in close collaboration with team leads, and attend squad standup meetings to provide valuable suggestions for improvement
- Harness the power of Ruby and Bash to create tools that enhance engineers' development workflow
- Script infrastructure using Terraform to facilitate the creation of infrastructure
- Leverage CI/CD to add security scanning to code pipelines
- Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities
- Enhance our CI/CD pipeline by building canary stages with CircleCI, GitHub Actions, YAML, and Bash scripting
- Integrate stress-testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages
- Look for ways to reduce engineering toil and replace manual processes with automation!
Required: Terraform
Nice to have: GitHub and AWS tooling (though the pipeline runs outside of AWS), Rails (other scripting languages okay)
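The autoscaling duties above reduce to a simple capacity calculation: estimate how many replicas the current load needs, then clamp the result to a configured floor and ceiling. A sketch of that decision function (the request rates and per-replica capacity are made-up numbers, not Zymr's actual values):

```python
import math

def desired_replicas(current_rps: float, rps_per_replica: float,
                     minimum: int = 2, maximum: int = 20) -> int:
    """Naive target-capacity estimate: round up the replicas needed for
    the current request rate, clamped to [minimum, maximum]."""
    needed = math.ceil(current_rps / rps_per_replica)
    return max(minimum, min(maximum, needed))

print(desired_replicas(4500, rps_per_replica=300))    # busy period
print(desired_replicas(100, rps_per_replica=300))     # quiet period, floor applies
print(desired_replicas(100000, rps_per_replica=300))  # spike, ceiling applies
```

Real autoscalers (Kubernetes HPA, AWS target tracking) add smoothing and cooldowns on top of this arithmetic so the fleet does not flap between sizes.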
Posted 1 month ago
5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Position: Technical Lead
Location: Surat, Gujarat (Onsite)
✅ Key Responsibilities
🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing).
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.
💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).
📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.
👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.
🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tooling (linters, static analysis) and process optimization.
📌 What You’ll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficiency in: React.js, React Native, Node.js, Express, AWS, Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience a bonus
🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
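Among the high-availability patterns the role lists, the circuit breaker is the one most often probed in interviews. A deliberately minimal, count-based sketch (no half-open state or reset timer, which a production breaker would need; the failure threshold is an assumed parameter):

```python
class CircuitBreaker:
    """Minimal count-based circuit breaker: after N consecutive failures
    the breaker opens and subsequent calls fail fast instead of hitting
    the (presumably broken) downstream dependency."""
    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.failures = 0

    @property
    def is_open(self) -> bool:
        return self.failures >= self.failure_threshold

    def call(self, fn, *args):
        if self.is_open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1  # count consecutive failures
            raise
        self.failures = 0       # any success resets the count
        return result

def flaky():
    # Stand-in for a call to an unhealthy downstream service
    raise ConnectionError("downstream unavailable")

breaker = CircuitBreaker(failure_threshold=2)
for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.is_open)  # breaker has tripped; further calls fail fast
```

Failing fast protects callers from piling up timeouts on a dependency that is already down; a real implementation adds a half-open probe so the breaker can close again once the dependency recovers.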
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Who We Are
AssetPlus is a pioneering B2B2C wealth management platform based in India. Since 2016, we have empowered mutual fund distributors to seamlessly manage retail investments across diverse financial products like mutual funds, fixed deposits, and NPS. Our innovative approach simplifies wealth management and helps our partners grow with ease.
Who We Are Looking For
We are seeking a skilled and proactive DevOps Engineer to join our growing engineering team. You will play a critical role in ensuring smooth, secure, and efficient operations across our platform. As a DevOps Engineer at AssetPlus, you will work closely with developers, product managers, and leadership to streamline deployments, manage resources, and optimize infrastructure. If you thrive on solving complex operational challenges and enabling teams to deliver at scale, this role is for you!
What Success Looks Like
You will be successful in this role if you:
- Build and maintain robust monitoring and alerting systems, ensuring uptime and early detection of potential issues
- Manage developer permissions securely while balancing ease of access with compliance
- Drive cost and resource optimization initiatives, reducing unnecessary expenses while maintaining performance
- Establish and maintain secure, scalable networking and cloud infrastructure
- Create efficient, reliable deployment pipelines that minimize downtime and errors
- Take ownership of MongoDB database management, ensuring optimal performance, backups, and security
Key Responsibilities
Monitoring and Alerts Setup
- Design and implement monitoring systems to track application performance, health, and availability
- Configure automated alerts to proactively address potential problems
Permission Management for Developers
- Set up and manage IAM roles, policies, and access control for developers and teams
- Regularly review and audit permissions to maintain compliance and security
Cost and Resource Optimization
- Analyze cloud infrastructure costs and identify areas for optimization
- Implement autoscaling and right-sizing strategies for efficient resource usage
Networking and Security
- Configure and maintain secure networking setups, including VPCs, subnets, and firewalls
- Ensure compliance with security best practices and implement robust systems for vulnerability management
Deployment Pipelines
- Build and maintain CI/CD pipelines for smooth, automated deployments
- Optimize pipelines for speed, reliability, and rollback capabilities
MongoDB Management
- Monitor and optimize MongoDB performance, ensuring high availability and minimal latency
- Manage backups, restoration, and version upgrades securely
What We Value In You
- Experience: Hands-on experience in DevOps roles, with a proven track record in cloud platforms (AWS preferred)
- Technical Expertise: Proficiency in infrastructure as code (e.g., Terraform), CI/CD tools (e.g., Jenkins, GitHub Actions), and monitoring tools (e.g., Prometheus, Grafana)
- Problem-Solving: A proactive and resourceful mindset to tackle challenges in infrastructure, deployments, and optimization
- Collaboration: A team player who communicates effectively and works well with cross-functional teams
- Attention to Detail: A strong commitment to security, compliance, and system integrity
What We Offer
- A collaborative, innovative work environment where your contributions directly impact the success of our platform
- Opportunities for growth and professional development in a fast-paced startup
- Flexible work culture with the tools and resources you need to succeed
Ready to join us? If you are excited about building and maintaining the backbone of a high-impact wealth management platform, apply now and be a part of our journey to revolutionize the financial landscape in India.
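The right-sizing responsibility above usually starts as a simple bucketing exercise: group instances by average utilisation, then downsize the idle ones and upsize the saturated ones. A sketch of that first pass (the instance IDs, the input shape, and the 20%/80% thresholds are illustrative assumptions, not an AWS API):

```python
def rightsize(instances: list, low: float = 20.0, high: float = 80.0) -> dict:
    """Bucket instances by average CPU utilisation into a simple
    downsize / keep / upsize plan."""
    plan = {"downsize": [], "keep": [], "upsize": []}
    for inst in instances:
        if inst["avg_cpu_pct"] < low:
            plan["downsize"].append(inst["id"])
        elif inst["avg_cpu_pct"] > high:
            plan["upsize"].append(inst["id"])
        else:
            plan["keep"].append(inst["id"])
    return plan

# Hypothetical fleet summary, e.g. exported from a monitoring tool
fleet = [
    {"id": "i-01", "avg_cpu_pct": 7.5},
    {"id": "i-02", "avg_cpu_pct": 55.0},
    {"id": "i-03", "avg_cpu_pct": 91.0},
]
print(rightsize(fleet))
```

A production version would look at memory, network, and peak (not just average) utilisation before recommending a change, but the report structure is the same.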
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
🚀 What We’re Building
CodeRound AI matches the top 5% of tech talent with the fastest-growing VC-funded AI startups in Silicon Valley and India. Candidates apply once and, if selected, get up to 20 remote and onsite interview opportunities. Top-tier product startups in the US, UAE, and India have hired top engineers and ML specialists through CodeRound.

🧩 What You’ll Do
- Build and optimize our cloud infrastructure — scalable, secure, and cost-effective (mostly AWS).
- Set up and manage CI/CD pipelines to ensure smooth deployment across backend, AI services, and mobile.
- Containerize backend services (FastAPI, Rails) and optimize them for performance.
- Implement monitoring, alerting, and logging to catch issues before users do.
- Optimize database performance (Postgres, Redis) and manage backups and scaling.
- Collaborate with backend, AI, and product teams to deploy new features safely and quickly.
- Champion infra-as-code and automation wherever possible.

💥 Why This Is Exciting
- You'll own DevOps for a high-usage, real-world AI platform — not just internal tools.
- You’ll work on real-time, high-stakes flows — interviews, scoring, hiring decisions.
- You’ll work closely with founders, ship weekly, and see the direct impact of your work.

✅ You’ll Be Great At This If You
- Have 2–5 years of experience as a DevOps engineer, SRE, or infrastructure engineer.
- Are strong with AWS services (EC2, RDS, ECS/EKS, S3, CloudWatch).
- Can write clean, reusable Terraform or CloudFormation code.
- Have experience setting up CI/CD pipelines and optimizing build/release flows.
- Are comfortable with Docker, Linux servers, and basic networking (VPCs, security groups).
- Understand application and database scaling (horizontal/vertical).

⚡ Bonus If You
- Have experience supporting AI/ML pipelines in production (fine-tuning infra, vector DBs, etc.).
- Know cost-optimization tricks for cloud infra (spot instances, autoscaling groups, etc.).
- Are excited to eventually build a small infra team.
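The monitoring-and-alerting responsibility above often boils down to a latency SLO check. A minimal sketch, assuming a nearest-rank p95 computation and a hypothetical 500 ms SLO (neither is from the posting):

```python
# Minimal alerting sketch: compute p95 latency from request samples and
# decide whether to fire an alert. The nearest-rank percentile method and
# the 500 ms SLO threshold are assumptions for illustration.
import math

def p95(latencies_ms: list[float]) -> float:
    """Nearest-rank 95th percentile of a non-empty sample list."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

def should_alert(latencies_ms: list[float], slo_ms: float = 500.0) -> bool:
    return p95(latencies_ms) > slo_ms

samples = [120.0] * 94 + [900.0] * 6  # 6% of requests are slow
print(p95(samples), should_alert(samples))  # → 900.0 True
```

A production setup would feed this from Prometheus or CloudWatch rather than in-memory samples, but the decision logic is the same shape.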
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a highly skilled and motivated Senior Kafka Infrastructure Engineer to join our platform engineering team. This role is ideal for someone who is deeply experienced with the Apache Kafka ecosystem and passionate about building scalable, reliable, and secure streaming infrastructure.

Key Responsibilities:
- Design, deploy, manage, and scale highly available Kafka clusters in production environments.
- Administer Kafka components, including Brokers, ZooKeeper/KRaft, Topics, Partitions, Schema Registry, Kafka Connect, and Kafka Streams.
- Deploy Kafka on Kubernetes clusters using Strimzi operators, Helm charts, and Terraform.
- Implement autoscaling, resource optimisation, network policies, and persistent storage (PVC) configurations.
- Monitor Kafka health and performance using Prometheus, Grafana, JMX Exporter, and custom metrics.
- Secure Kafka infrastructure with TLS, SASL, ACLs, Kubernetes secrets, and RBAC.
- Automate Kafka provisioning and deployment in AWS, GCP, or Azure (preferably with EKS, GKE, or AKS).
- Integrate Kafka infrastructure management into CI/CD pipelines using ArgoCD, Jenkins, etc.
- Build and maintain containerised Kafka deployment workflows and release pipelines.

Required Skills and Experience:
- Deep understanding of Kafka architecture and internals.
- Extensive hands-on experience managing Kafka in cloud-native environments.
- Proficiency with Kubernetes and container orchestration concepts.
- Experience with Infrastructure as Code tools like Helm and Terraform.
- Solid grasp of cloud-native security practices and authentication mechanisms.
- Proven track record in automation and incident resolution.
- Strong debugging, analytical, and problem-solving skills.

Soft Skills:
- Proactive, ownership-driven, and automation-first mindset.
- Strong verbal and written communication skills.
- Comfortable working collaboratively with SREs, developers, and other cross-functional teams.
- Detail-oriented and documentation-focused.
- Willingness to mentor and share knowledge with peers.

Location: Hybrid (Gurgaon)
Work Hours: Aligned with USA time zones
Urgency: Must be able to join within 1 month
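Scaling Kafka clusters like those described above starts with partition sizing: a topic needs enough partitions to satisfy both producer and consumer throughput targets. A back-of-envelope sketch; the per-partition throughput figures below are assumed placeholders, not measured numbers, and real sizing should be based on benchmarks of your own cluster.

```python
# Partition-count sizing sketch for a Kafka topic. Per-partition throughput
# defaults (10 MB/s produce, 20 MB/s consume) are illustrative assumptions.
import math

def partition_count(target_mb_s: float,
                    producer_mb_s_per_partition: float = 10.0,
                    consumer_mb_s_per_partition: float = 20.0) -> int:
    """Smallest partition count covering both produce and consume targets."""
    need_produce = math.ceil(target_mb_s / producer_mb_s_per_partition)
    need_consume = math.ceil(target_mb_s / consumer_mb_s_per_partition)
    return max(need_produce, need_consume, 1)

print(partition_count(85.0))  # producer-bound: ceil(85/10) = 9
```

Note that partitions are also the unit of consumer parallelism, so a slow consumer group may push the count higher than raw throughput suggests.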
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
Join our dynamic team in Bangalore as a Backend + DevOps Engineer. You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.

What You'll Do
- Build scalable async processing pipelines for document classification, extraction, and validation
- Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows
- Design and implement APIs for document upload, processing status, and results retrieval
- Manage Kubernetes deployments with autoscaling based on document processing load
- Implement monitoring and observability for complex multistage document workflows
- Optimize database performance for high-volume document metadata and processing results
- Build CI/CD pipelines for safe deployment of processing algorithms and business rules

Technical Requirements
Must Have:
- 5+ years backend development (Python or Go)
- Strong experience with async processing (Celery, Temporal, or similar)
- Docker containerization and orchestration
- Cloud platforms (AWS/GCP/Azure) with cost optimization experience
- API design and development (REST/GraphQL)
- Database optimization (MongoDB, PostgreSQL)
- Production monitoring and debugging

Nice to Have:
- Kubernetes experience
- Experience with document processing or ML pipelines
- Infrastructure as Code (Terraform/CloudFormation)
- Message queues (SQS, RabbitMQ, Kafka)
- Performance optimization for high-throughput systems
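The async document pipeline described above can be sketched with a semaphore-bounded worker pattern. Everything here is hypothetical scaffolding: `process_document` is a stand-in for the real classification/extraction step, and the concurrency cap of 4 is arbitrary.

```python
# Sketch of an async document-processing pipeline with bounded concurrency.
# process_document is a placeholder for real classification/extraction work.
import asyncio

async def process_document(doc_id: str) -> str:
    await asyncio.sleep(0)  # placeholder for I/O-bound work (S3, OCR, DB)
    return f"{doc_id}:done"

async def run_pipeline(doc_ids, max_concurrency: int = 4):
    sem = asyncio.Semaphore(max_concurrency)  # cap in-flight documents

    async def guarded(doc_id):
        async with sem:
            return await process_document(doc_id)

    # gather preserves input order even though work completes out of order
    return await asyncio.gather(*(guarded(d) for d in doc_ids))

results = asyncio.run(run_pipeline([f"doc-{i}" for i in range(8)]))
print(results[0], len(results))  # → doc-0:done 8
```

The bounded semaphore is the same back-pressure idea that Celery worker concurrency or Kubernetes HPA limits provide at larger scale.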
Posted 1 month ago
1.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for Backend Engineers with 1-3 years of production experience shipping and supporting backend code. You will be part of our team, owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools, and research.

Responsibilities
- Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
- Guarantee end-to-end reliability by owning performance, fault-tolerance, and cost efficiency from source to sink.
- Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it's doing.
- Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
- Partner with DevOps to containerize workloads and automate deployments.
- Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to maintain pristine datasets.

Requirements
- Proficient in Rust - comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
- Stream processing - have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
- Deep systems engineering skills - you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; can instrument code with tracing, metrics, and logs.
- ClickHouse (or similar OLAP) - able to design MergeTree tables, reason about partitions / order-by keys, and optimise bulk inserts.
- Cloud - have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
- Nice-to-have - exposure to blockchain or high-volume financial data streams.

This job was posted by Akshay Singh from Yugen.ai.
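The reconciliation responsibility above reduces to a set comparison between record IDs observed at the source and at the sink. A minimal sketch (Python rather than the Rust the role uses, and with hypothetical record IDs); a real pipeline would reconcile per time window against the OLAP store:

```python
# Reconciliation sketch: compare record IDs at the source vs. the sink and
# flag gaps for replay. IDs and structure are illustrative only.

def reconcile(source_ids: set[str], sink_ids: set[str]) -> dict[str, set[str]]:
    return {
        "missing_in_sink": source_ids - sink_ids,    # candidates for replay
        "unexpected_in_sink": sink_ids - source_ids, # duplicates or corruption
    }

report = reconcile({"t1", "t2", "t3"}, {"t1", "t3", "t9"})
print(sorted(report["missing_in_sink"]))     # → ['t2']
print(sorted(report["unexpected_in_sink"]))  # → ['t9']
```

Records in `missing_in_sink` would be re-fetched and replayed; anything unexpected in the sink warrants a dedup or corruption investigation.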
Posted 1 month ago
2.0 years
3 - 5 Lacs
India
On-site
AWS Cloud Engineer with 2 years of experience designing highly available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS, with exposure to AWS deployment and management services. Monitors deployments across environments, debugging and resolving deployment issues in a timely manner to reduce downtime. Experienced with AWS Cloud and DevOps tools.
- Experienced working with AWS infrastructure and its services, including IAM, VPC, EC2, EBS, S3, ALB, NACLs, Security Groups, Auto Scaling, RDS, SNS, EFS, CloudWatch, and CloudFront.
- Good hands-on experience with IaC tools such as Terraform and CloudFormation.
- Good experience with the source code management tools Git and GitHub, and source control concepts such as branches and merges.
- Good experience automating CI/CD pipelines using Jenkins.
- Good hands-on experience with configuration management tools such as Ansible.
- Experience creating custom Docker images using Dockerfiles and pushing Docker images to Docker Hub.
- Set up Kubernetes clusters using EKS and kubeadm; wrote manifest files to create Deployments and Services for microservice applications.
- Configured Persistent Volumes (PVs) and PVCs for persistent database environments.
- Managed Deployments, ReplicaSets, StatefulSets, and autoscaling for Kubernetes clusters.
- Good experience with ELK for log aggregation and log monitoring.
- Implemented, maintained, and monitored alarms and notifications for AWS services using CloudWatch and SNS.
- Experienced in deploying and monitoring applications on various platforms and setting up lifecycle policies to back up data from AWS S3.
- Configured CloudWatch alarm rules for operational and performance metrics for AWS resources and applications.
- Provisioned AWS resources using the AWS Management Console and Command Line Interface (CLI).
- Planned, built, and configured network infrastructure within VPCs and other components.
- Responsible for implementing and supporting cloud-based infrastructure and its solutions.
- Launched and configured EC2 instances using AMIs (Linux).
- Created IAM users and policies for application access.
- Installed and configured Apache web server on Windows and Linux.
- Initiated alarms in CloudWatch for monitoring server performance (CPU utilization, disk usage, etc.) to take recommended actions for better performance.
- Created and managed instance images/snapshots and managed volumes.
- Set up and managed VPCs and subnets, and made connections between different availability zones.
- Monitored access logs and error logs in AWS CloudWatch.
- Configured EFS for EC2 instances.
- Created and configured Elastic Load Balancers to distribute traffic.
- Administered the Jenkins server, including setup, parameterized builds, and deployment automation.
- Experience creating Jenkins jobs, installing plug-ins, setting up distributed builds, and other Jenkins administration activities.
- Experience managing microservice applications using Docker and Kubernetes.
- Increased EBS volume storage capacity using AWS EBS volume features.
- Created and managed buckets on S3 and assigned access permissions.
- Performed software installations, troubleshooting, and updates.
- Built and released Amazon Linux EC2 instances for development and production environments.
- Moved EC2 logs into S3.
- Experience with S3 versioning, server access logging, and lifecycle policies on S3 buckets.
- Created and maintained user accounts, groups, and permissions.
- Created SNS notifications for multiple services in AWS.
- Created and attached Elastic IPs to EC2 instances.
- Assigned access permissions for files and directories to users and groups.
- Created and managed user accounts/groups, assigning roles and policies using IAM.
- Experience with AWS Cloud services including IAM, S3, VPC, EC2, CloudWatch, CloudFront, CloudTrail, Route 53, EFS, AWS Auto Scaling, EBS, SNS, SES, SQS, KMS, RDS, Security Groups, Lambda, ECS, EKS, Tag Editor, and more.
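The S3 lifecycle policies mentioned above implement age-based tiering. A minimal sketch of the decision logic, assuming hypothetical 30/90/365-day cut-offs for the Standard → Standard-IA → Glacier → expiry progression (the actual policy days are not specified in the posting):

```python
# Age-based S3 lifecycle tiering sketch. The 30/90/365-day cut-offs and
# tier names follow common S3 storage classes but are example values.

def storage_tier(age_days: int) -> str:
    """Return the storage tier an object of the given age would occupy."""
    if age_days >= 365:
        return "EXPIRED"       # object deleted by the expiration rule
    if age_days >= 90:
        return "GLACIER"       # archival tier for cold data
    if age_days >= 30:
        return "STANDARD_IA"   # infrequent-access tier
    return "STANDARD"          # hot tier for recent objects

for age in (5, 45, 200, 400):
    print(age, storage_tier(age))
```

In a real bucket this logic lives in the lifecycle configuration itself; S3 applies the transitions automatically, so code like this is only useful for modeling expected storage costs.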
- Involved in designing and developing with Amazon EC2, Amazon S3, Amazon RDS, Lambda, and other services.
- Created containers in Docker and pulled images for deployment.
- Created networks, nodes, and pods in Kubernetes.
- Performed deployments through Jenkins CI/CD pipelines.
- Created infrastructure using Terraform.
- Responsible for designing and deploying best-practice SCM processes and procedures.
- Responsible for branching, merging, and resolving various conflicts arising in Git.
- Set up and created CI/CD pipelines in Jenkins and scheduled jobs.
- Established a complete Jenkins CI/CD pipeline and the full workflow of build and delivery pipelines.
- Involved in writing Dockerfiles to build customized Docker images for creating Docker containers, and pushing Docker images to Docker Hub.
- Created and managed multiple containers using Kubernetes, and created deployments using YAML manifests.
- Used Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
- Experience with monitoring tools such as Prometheus and Grafana.
- Responsible for establishing the complete pipeline workflow, from pulling source code from the Git repository to deploying the end product into a Kubernetes cluster.
- Managed client infrastructure on both Windows and Linux: creating files and directories, creating users and groups, and assigning access permissions for files and directories to users and groups.
- Installed and managed web servers; installed packages using YUM (HTTP, HTTPS).
- Monitored system performance, including disk utilization and CPU utilization.

Technical Skills
Operating Systems: Linux, CentOS, Ubuntu, and Windows
AWS: EC2, VPC, S3, EBS, IAM, Load Balancing, Auto Scaling, CloudFormation, CloudWatch, CloudFront, SNS, EFS, Route 53
DevOps Tools: Git, Ansible, Chef, Docker, Jenkins, Kubernetes, Terraform
Scripting Languages: Shell, Python
Monitoring Tools: CloudWatch, Grafana, Prometheus
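Kubernetes deployments like those described above roll out updates under maxSurge/maxUnavailable bounds; when those are given as percentages, Kubernetes rounds maxSurge up and maxUnavailable down. A small sketch of that arithmetic (the replica count and percentages are example inputs):

```python
# Rolling-update bounds sketch: Kubernetes resolves percentage values of
# maxSurge by rounding up and maxUnavailable by rounding down.
import math

def rolling_update_bounds(replicas: int, max_surge_pct: int,
                          max_unavailable_pct: int) -> tuple[int, int]:
    """Return (extra pods allowed, pods allowed down) during a rollout."""
    surge = math.ceil(replicas * max_surge_pct / 100)             # rounds up
    unavailable = math.floor(replicas * max_unavailable_pct / 100)  # rounds down
    return surge, unavailable

print(rolling_update_bounds(10, 25, 25))  # → (3, 2)
```

For a 10-replica Deployment with 25%/25%, up to 13 pods may exist and at least 8 must stay ready at any point in the rollout.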
Job Types: Full-time, Permanent, Fresher
Pay: ₹345,405.87 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Morning shift, Rotational shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Speak with the employer: +91 8668118196
Posted 1 month ago