
425 Autoscaling Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 years

0 Lacs

Amritsar, Punjab, India

Remote

Experience: 1+ years
Salary: Confidential (based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 12 months (40 hrs/week, 160 hrs/month)

(Note: This is a requirement for one of Uplers' clients, a funded, fast-growing InsurTech platform building digital solutions for the insurance industry.)

Must-have skills: Git workflow, EKS, Grafana, MySQL, AWS, Jenkins, Kubernetes

Job Overview
As a Junior DevOps Engineer, you'll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You'll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is ideal for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you actively seek out new tools, stay current with industry trends, and enjoy learning independently, this role is for you.
Key Responsibilities
- Support CI/CD pipelines using Jenkins and Argo CD
- Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues
- Provision and optimize production environments in AWS using performance benchmarks
- Finalize infrastructure components, including security groups, autoscaling, and high availability
- Monitor systems using Grafana and validate observability (logging, tracing, alerting)
- Coordinate releases, validate rollback plans, and support change-request processes
- Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS
- Participate in daily scrums and assist the Development Director during launch phases
- Ensure backup, recovery, and restore procedures are tested and documented

Qualifications
- 1-2 years of experience in DevOps or related roles
- Hands-on experience with AWS EKS/Kubernetes, CI/CD tools (Jenkins, Argo CD), Grafana, and MySQL
- Familiarity with Git workflows and agile development practices
- Strong problem-solving and communication skills
- Self-driven learner with a passion for exploring new technologies and improving processes
- Ability to adapt quickly and take ownership of tasks in a dynamic environment
- Availability during EST hours (with overlap until noon or 1 PM EST)
- Strong English communication skills for collaboration

Benefits
- Excellent work culture
- Focus on work-life balance
- Focus on professional development

How to apply for this opportunity
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement.
(Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago


1.0 years

0 Lacs

surat, gujarat, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

ahmedabad, gujarat, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

jaipur, rajasthan, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

greater lucknow area

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

thane, maharashtra, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

kanpur, uttar pradesh, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

nashik, maharashtra, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

nagpur, maharashtra, india

Remote

Experience : 1.00 + years Salary : Confidential (based on experience) Expected Notice Period : 7 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Contract for 12 Months(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - A funded, fast-growing InsurTech platform building digital solutions for the insurance industry) What do you need for this opportunity? Must have skills required: Git workflow, EKS, Grafana, My sql, AWS, Jenkins, Kubernetes A funded, fast-growing InsurTech platform building digital solutions for the insurance industry is Looking for: Job Overview As a Junior DevOps Engineer, you’ll be instrumental in ensuring our applications are production-ready, smoothly deployed, and reliably maintained. You’ll collaborate with developers and QA to automate environments, streamline deployments, and monitor system health across multiple stages. This role is perfect for someone who thrives in fast-paced environments and is eager to grow through hands-on experience and continuous learning. We value curiosity, initiative, and a growth mindset. If you are someone who actively seeks out new tools, stays updated with industry trends, and enjoys learning independently—this role is for you. 
Key Responsibilities Support CI/CD pipelines using Jenkins and Argo CD Manage containerized applications on Kubernetes (EKS) and troubleshoot deployment issues Provision and optimize production environments in AWS using performance benchmarks Finalize infrastructure components including security groups, autoscaling, and high availability Monitor systems using Grafana and validate observability (logging, tracing, alerting) Coordinate releases, validate rollback plans, and support change request processes Deploy Java microservices, Angular applications, and MySQL databases on AWS RDS Participate in daily scrums and assist the Development Director during launch phases Ensure backup, recovery, and restore procedures are tested and documented Qualifications 1–2 years of experience in DevOps or related roles Hands-on experience with: AWS EKS / Kubernetes CI/CD tools (Jenkins, Argo CD) Grafana MySQL Familiarity with Git workflows and agile development practices Strong problem-solving and communication skills Self-driven learner with a passion for exploring new technologies and improving processes Ability to adapt quickly and take ownership of tasks in a dynamic environment Are available during EST hours (with overlap until noon or 1 PM EST). Have strong English communication skills for collaboration. Benefits: Excellent work culture Focus on work life balance Focus on Professional Development How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. 
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
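The autoscaling work mentioned in the posting above centers on Kubernetes, whose Horizontal Pod Autoscaler follows a simple documented scaling rule. A minimal sketch in Python (a simplified illustration of the formula only; the real HPA also applies tolerances, min/max bounds, and stabilization windows):

```python
import math

def desired_replicas(current_replicas: int, current_metric: float, target_metric: float) -> int:
    """Simplified form of the Kubernetes HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# e.g. 4 pods averaging 90% CPU against a 60% target scale out to 6 pods
print(desired_replicas(4, 90.0, 60.0))  # → 6
```

The same formula also scales in: 10 pods at 30% average against a 60% target would shrink to 5.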

Posted 3 weeks ago

Apply


5.0 years

0 Lacs

pune, maharashtra, india

On-site

Experience: 5+ years
Location: Pune, Bangalore

DevOps Engineer skills
- Must be well versed with the Linux OS and have complete knowledge of the Linux command line.
- Must have good hands-on experience with version control systems like Git (GitHub, GitLab), covering both the command line and the UI.
- Must have knowledge of containerization tools like Docker.
- Must have at least intermediate knowledge of container orchestration tools like Kubernetes.
- Must have knowledge of AWS services like EC2, VPC, IAM, S3, RDS, EKS, load balancers and autoscaling, and cache services (Redis and OpenSearch).
- Must have knowledge of monitoring tools like AWS CloudWatch, Grafana, and others.

DBA skills
- Must have knowledge of MySQL, PostgreSQL, and MongoDB.
- Must have hands-on experience with AWS DB services like RDS, Aurora RDS, DocumentDB, and Data Migration Service (DMS).
- Must know backup and recovery: manual and automated backups, point-in-time recovery, and snapshot strategies.
- Performance tuning, query optimization, indexing, and slow-query analysis.
- High availability and DR: failover strategies, Multi-AZ, and replication.
- Schema design and management: normalization, constraints, and version control for schema changes (Flyway and Liquibase).
- Troubleshooting and resolving database-related issues.
- Planning for DB maintenance, capacity, and growth requirements.

Secondary (good to have)
- At least basic knowledge of IaC tools like Terraform and/or Ansible.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

4 - 6 Lacs

india

On-site

Are you ready to be a game-changer in the world of fintech? Arukus Technologies is at the forefront of transforming the financial landscape with cutting-edge digital solutions. As we spearhead the digital revolution in the microfinance and NBFC sectors across India and Africa, we are on the lookout for a dynamic AWS Engineer to be a vital part of our innovative team.

About Arukus Technologies: At Arukus, we're not just building web applications; we're crafting the future of financial technology. Our commitment to excellence has positioned us as leaders in the industry, working hand-in-hand with some of the largest microfinance institutions and NBFCs to bring about unparalleled digital transformation. We pride ourselves on driving operational excellence and fostering growth through groundbreaking solutions.

Role Overview
Position: AWS Engineer
Location: Kolkata (Work from Office)
Experience: 4 - 9 years (minimum 4 years of core AWS experience)

Job Responsibilities
Experience with and understanding of AWS components: EC2, Autoscaling, ELB, S3, EFS, etc.
1. Cloud: AWS, Azure, GCP
2. OS: RedHat Linux, CentOS, Ubuntu
3. DevOps tools: Jenkins, CI/CD administration, Jenkinsfile
4. Code base: GitHub, GitLab (with CI/CD)
5. CI/CD administration
6. Developing CI/CD pipelines
7. Automating recurring manual tasks using scripting
8. Tweaking code whenever required for upgrades and fixing vulnerabilities
9. Troubleshooting pipeline execution failures
10. Monitoring pipelines
11. Experience with any pipeline using GitLab
12. Configuration tools: Puppet/Chef/Ansible
13. Scripting in any language (Bash, Python, etc.)

Required Skills and Qualifications: DevOps, AWS (CI/CD, CentOS, Jenkins). Immediate joiners are preferred. Interested candidates, please send your CV to hr@arukustech.com.

Why Arukus? Joining Arukus Technologies means being part of a team that is revolutionizing fintech, making a tangible impact on the financial landscape. As you collaborate with industry leaders and work on projects that drive digital transformation, you'll find a supportive environment that fosters creativity, growth, and unparalleled professional development. If you're ready to be a trailblazer in the fintech sector and contribute to the success story of Arukus Technologies, apply now! Let's shape the future of finance together.

Job Types: Full-time, Permanent
Pay: ₹400,000.00 - ₹650,000.00 per year
Benefits: Leave encashment, Provident Fund
Education: Bachelor's (Preferred)
Experience: AWS: 4 years (Required)
Location: Salt Lake, Kolkata, West Bengal (Required)
Work Location: In person
Expected Start Date: 26/08/2025

Posted 3 weeks ago

Apply

0 years

0 Lacs

mumbai metropolitan region

Remote

Own and scale our AWS-based platform with Kubernetes (EKS), Terraform IaC, and GitHub Actions–driven CI/CD. You’ll streamline container build/deploy, observability, security, and developer workflows (including Slack integrations) to deliver reliable, cost-efficient, and secure infrastructure.

Responsibilities
- Manage and optimize Kubernetes (EKS) on AWS, including deployments, scaling, ingress, networking, and security.
- Maintain and extend Terraform-based IaC for consistent, repeatable, multi-environment deployments (modules, remote state).
- Build, maintain, and optimize GitHub Actions pipelines for backend, frontend, and infrastructure.
- Manage Docker images and AWS ECR (build, tag/version, cache, and vulnerability scanning).
- Monitor health, performance, and costs across AWS; recommend and implement improvements.
- Implement alerting, logging, and observability using CloudWatch, Prometheus, Grafana (or equivalents).
- Automate operational tasks via scripting and internal tooling to reduce manual work.
- Partner with developers to ensure environment parity, smooth deployments, and fast incident resolution.
- Integrate Slack with CI/CD, monitoring, and incident workflows.
- Enforce security best practices across AWS, Kubernetes, CI/CD pipelines, and container images.

Requirements (must-have skills & experience)
- Cloud (AWS): IAM, VPC, EC2, S3, RDS, CloudWatch, EKS, ECR.
- Kubernetes (EKS): deployments, autoscaling, ingress, networking, secrets, ConfigMaps, Helm.
- IaC (Terraform): modular code, remote state, environment patterns.
- Containers: Docker image building/optimization and vulnerability scanning.
- CI/CD (GitHub Actions): workflows, matrix builds, caching, secrets management.
- Monitoring & logging: hands-on with CloudWatch, Prometheus, Grafana, ELK/EFK, or Loki.
- Security: practical knowledge of IAM policies, K8s RBAC, and hardening practices.
- Scripting: proficiency in Bash.
- Collaboration: experience wiring Slack for deployments, alerts, and on-call workflows.

Nice to have
- AWS cost optimization experience.
- Service mesh / advanced Kubernetes networking.
- Secrets management (AWS Secrets Manager, HashiCorp Vault).
- Familiarity with incident response processes and on-call rotations.

What You Can Expect In Return
- ESOPs
- Health insurance
- Statutory benefits like PF & Gratuity
- Flexible working structure
- Professional development opportunities
- Collaborative and inclusive work culture

EduFund is India’s first dedicated education-focused fintech platform, built to help Indian families plan, save and secure their child’s education. Founded in 2020, our mission is to remove financial stress from education planning. We offer a full suite of solutions, including investments, education loans, visa and immigration support, international remittance, and expert counselling, making it India’s only end-to-end education financial planning platform. Whether it's saving early or funding a degree abroad, EduFund helps parents make smarter financial decisions for their child’s future. With 2.5 lakh+ families, 40+ AMC partners, 15+ lending partners, and a growing presence across Tier 1 and Tier 2 cities, EduFund is becoming the go-to platform for education financing in India. In July 2025, EduFund raised $6 million in Series A funding, led by Cercano Management and MassMutual Ventures, bringing the total capital raised to $12 million. Explore more at www.edufund.in

Skills: kubernetes, aws, github, cd, ci, networking, terraform, security
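The alerting work described in the posting above usually means firing only after a metric stays above a threshold for a sustained period (the "for" duration on a typical Grafana or Prometheus alert rule). A minimal, hypothetical Python sketch of that evaluation logic; the function name and sample values are illustrative, not any tool's actual API:

```python
def breaches(samples: list[float], threshold: float, consecutive: int) -> bool:
    """Fire only if `threshold` is exceeded for `consecutive` samples in a row,
    mimicking the sustained-duration condition on a typical alert rule."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

# Three consecutive samples above 90 trigger the alert; a lone spike would not.
print(breaches([70, 95, 96, 97, 80], threshold=90, consecutive=3))  # → True
```

Requiring consecutive breaches is what keeps a single noisy sample from paging anyone at 3 a.m.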

Posted 3 weeks ago

Apply


5.0 years

0 Lacs

jaipur, rajasthan, india

On-site

Job Summary
We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting

Computer Vision Development
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production

MLOps & Deployment
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
- LLM frameworks & tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & retrieval tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference serving: Triton Inference Server; FastAPI or Flask
- Computer vision frameworks & libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & observability: Prometheus; Grafana
- Cloud platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming languages: Python (required); C++ or Go (preferred)

Additionally
- Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3–5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You’ll Solve
- LLM hallucination & safety: implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB scaling: maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference latency: balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & data drift: automate drift detection and retraining triggers in vision and language pipelines
- Multi-modal coordination: seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights.
From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; the inception of Auriga was based solely on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in! Our website: https://aurigait.com/
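The inference-API responsibilities in the posting above mention rate-limiting. A minimal token-bucket sketch in Python, purely illustrative (class and parameter names are assumptions; a production service would typically use gateway middleware or an established library rather than hand-rolled code):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows roughly `rate` requests
    per second, with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third denied
```

The same idea extends naturally to per-client buckets keyed by API token, which is how a FastAPI or Flask endpoint would typically apply it.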

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

chennai, tamil nadu, india

On-site

We are looking for a DevOps Engineer cum Linux System Administrator in Amazon AWS, who can design, build, deploy, maintain, monitor, and cost-effectively scale our web applications' LAMP environments as needed. The person should also be able to maintain and extend our AWS, Puppet, Git, and Jenkins CI environment and our containerized local dev and QA environments. The person must be a team player, working closely with the DevOps team members and the developers.

Experience: 7 years as a Linux System Administrator, with the most recent 3 years in Amazon AWS, Apache HTTP Server, Puppet, Jenkins, Docker, and building automation scripts using Bash and the AWS CLI.

Responsibilities
- Designing, configuring, deploying, administering, monitoring, analysing, and supporting cloud-based (IaaS/PaaS) application services and systems.
- Working alongside developers to deploy our software and systems in the QA and production environments.
- Continuously improving and automating the environment setup using Puppet code.
- Managing and extending our Jenkins-based CI/CD environment and maintaining our Git repository.
- Continuously monitoring the servers for load assessment and security risks, suggesting appropriate and timely recourse and rectification.
- Supporting the developers in running Docker on their Ubuntu workstations; building and managing images for them to use.
- Analysing logs such as alert and trace files, syslog, auth.log, mail.log, Apache logs, and AWS logs (ELB, ALB, CloudTrail, RDS, VPC Flow Logs, etc.).
- Linux updates and upgrades, security patches, etc.
- Backup and restore, log rotation and purging, snapshots and purging, etc.
- Automating and documenting server maintenance tasks.
- Designing and implementing new monitoring checks; strong knowledge of monitoring tools (Nagios, in particular) is expected.
- Strong troubleshooting and analytical skills; ability to comprehend, review, and analyse application logs.

Requirements
- 7 years of experience working as a Linux system administrator managing the LAMP stack, with the ability to configure and maintain networks with subnets, load balancers, mail service, users, groups, sudoers, file and directory permissions, port access, firewalls, log files for all services, secure socket layer (SSL), secure shell (SSH) and key-based access, role-based access, crontab, Apache/vhosts configuration, etc.
- 3 years of experience designing and building LAMP web application environments on AWS services.
- 2 years of experience with Puppet. Experience with other open-source configuration management utilities such as Chef, Salt, etc. will be a plus. Puppet certification is preferred.
- Design, develop, and maintain a DevOps process comprising several stages: plan, code, build, test, release, deploy, operate, and monitor.
- Experience setting up and maintaining a Git and Jenkins CI environment.
- Experience with Linux/Unix OS system administration, configuration, troubleshooting, performance tuning, preventative maintenance, and security procedures.
- Hands-on experience building VMs and containers using Docker and Docker Compose.
- Experience with New Relic setup and administration.
- Experience with MySQL database backup and restore.
- Experience setting up opcache, Varnish, memcache, and AWS ElastiCache.
- Experience with Bash scripting for system maintenance tasks.
- Must have a flair for automation and continuous performance improvement.
- Must have strong oral and written communication skills and presentation skills, and the ability to self-prioritise tasks.
- Must be able to maintain a balanced composure in high-stress situations.
- Must possess the ability to anticipate and mitigate problems proactively.
- Cloud migration experience will be a plus.
- Experience with Terraform will be a plus.
- Knowledge of basic Windows PC maintenance will be a plus.

Expected AWS services setup and configuration skills: well versed with AWS CLI commands, EC2, VPC, VPC Peering, NAT Gateway, RDS, Route 53, ALB, ELB, Security Groups, IAM permission policies, S3, S3 Lifecycle, Glacier, SNS, SES, SQS, EFS, CloudFront, ElastiCache (Memcached), CloudWatch, CloudTrail, CloudFormation, Autoscaling, Athena, ECS, Trusted Advisor, Certificate Manager. Experience with or knowledge of additional AWS services is a plus. Certification preferred.

Expected software and services installation and configuration skills: LAMP, Puppet, HAProxy, Docker (writing a Dockerfile, building Docker images, setting up docker-compose, YAML, etc.), Jenkins, MySQL 5.5/5.6/5.7, MySQL backup, Apache 2.4, PHP 5.5/5.6/7.2/7.4/8.1, PHP-FPM, AWS CLI, NFS, Varnish, SOLR, ZooKeeper, Linux system crons, Postfix, pfSense, s6-svscan, OpenSSL, NetBeans, Eclipse, DokuWiki, New Relic, Nagios, OpenVPN, PHP opcache, Memcache, node.js, npm, JS frameworks, dnsmasq, Git.

Experience with Linux flavours: Ubuntu/Lubuntu 14.04, 16.04, 18.04 & 20.04. Experience with other Linux flavours will be a plus.

CM tools & programming experience: Bash, Puppet, ERB templates, YAML, Ruby (basic knowledge), SQL (basic knowledge), PHP (basic knowledge), JSON.

Minimum education: Graduate in Computer Science, MCA, or equivalent.
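The maintenance duties in the posting above include log rotation and purging. A small, hypothetical sketch of that kind of automation in Python (the directory, glob pattern, and retention window are illustrative, not a real server layout):

```python
import time
from pathlib import Path

def purge_old_logs(log_dir: str, days: int = 30, pattern: str = "*.log.*") -> list[str]:
    """Delete rotated log files older than `days` days; return the removed paths."""
    cutoff = time.time() - days * 86400
    removed = []
    for f in Path(log_dir).glob(pattern):
        # Only touch regular files whose last modification predates the cutoff.
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(str(f))
    return sorted(removed)

# Illustrative call: purge_old_logs("/var/log/myapp", days=14)
```

In practice a script like this would run from cron and log what it removed, alongside (not instead of) logrotate.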

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

hyderabad, telangana, india

On-site

About the Role
We are looking for a skilled and hands-on Azure Infrastructure Engineer based in Kuwait, with strong expertise in cloud-native infrastructure architecture and deployment on Microsoft Azure. The ideal candidate will have deep experience in setting up and managing Azure Landing Zones, designing enterprise-grade networks, implementing Kubernetes (AKS) clusters, configuring ExpressRoute, and working with Azure Stack, the Azure CLI, and infrastructure-as-code tooling. This is a pure infrastructure engineering role requiring direct technical execution, system design, troubleshooting, and secure provisioning across hybrid and multi-region environments.

Key Responsibilities
- Design, configure, and manage Azure Landing Zones, including identity, policy, subscription structure, network topology, and security controls
- Deploy and manage Azure Kubernetes Service (AKS) clusters, including node pools, autoscaling, ingress controllers, and identity integrations
- Implement and optimize networking infrastructure, including vNets, subnets, NSGs, Azure Firewall, Application Gateways, and Private Endpoints
- Set up hybrid connectivity via ExpressRoute, VPN gateways, vNet peering, and Site-to-Site/IPsec tunnels
- Work with Azure AI Foundry infrastructure requirements, model hosting environments, and access/security isolation
- Use Azure CLI, PowerShell, and Infrastructure-as-Code (ARM templates, Bicep) to automate provisioning and configuration
- Configure and manage Azure Stack Hub and hybrid cloud deployments
- Implement monitoring and diagnostics using Azure Monitor, Log Analytics, and Network Watcher
- Ensure systems meet high-availability, backup, DR, and security standards
- Collaborate with architects and security teams to validate designs and enforce guardrails
- Maintain detailed technical documentation (HLDs, LLDs, runbooks)

Required Skills & Qualifications
- Minimum 3 years of hands-on experience in Azure infrastructure engineering roles
- Proficient with: Azure Landing Zones & CAF; Azure networking (vNet, NSG, UDR, VPN Gateway, ExpressRoute); Kubernetes (AKS) design, scaling, policies, and deployment; Azure Stack Hub/Edge; Azure CLI and PowerShell automation; Azure Monitor, Log Analytics, and diagnostics tooling
- Strong understanding of hybrid cloud, RBAC, and secure connectivity
- Experience with IaC (Bicep, ARM, or equivalent)
- Ability to troubleshoot complex routing/connectivity issues
- Familiarity with Managed Identities, Key Vault, RBAC
- Excellent documentation skills

Nice to Have
- Microsoft certifications (AZ-104, AZ-305, AZ-700, AZ-400)
- Experience with Terraform or GitHub Actions pipelines
- Exposure to multi-region governance models
- Knowledge of CIS/NIST compliance standards

What We Offer
- Mission-critical cloud modernization projects in Kuwait
- Access to the latest Microsoft engineering tools, partner support, and certification tracks
- Flexible and collaborative work culture focused on excellence
- Competitive salary, certification sponsorship, and career growth opportunities
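The vNet and subnet design work described above usually begins with address-space planning. A small sketch using Python's standard `ipaddress` module (the address range and subnet names are illustrative assumptions, not a prescribed layout):

```python
import ipaddress

# Split an illustrative vNet address space into four equally sized /24 subnets.
vnet = ipaddress.ip_network("10.10.0.0/22")
subnets = list(vnet.subnets(new_prefix=24))

for name, net in zip(["aks-nodes", "app-gateway", "private-endpoints", "bastion"], subnets):
    print(f"{name}: {net} ({net.num_addresses} addresses)")
```

Sizing the AKS node subnet up front matters because some CNI configurations consume one IP per pod, so the node subnet is often made larger than the others rather than split evenly as in this sketch.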

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

delhi, india

Remote

About HighLevel:
HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names.

Our People
With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact
As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

About the Role:
We’re building the most powerful automation system on the planet — one platform to connect any app, trigger any action, and drive real business impact for millions of SMBs. That’s the mission. That’s the team. And you’ll lead it. This is not a team for someone looking to manage Jira/ClickUp tickets and talk process.
This is for someone who wants to own the heartbeat of a system that sends real emails, fires real webhooks, and drives real dollars — in real time. If the system lags, businesses lose leads. If it goes down, people lose income. Your job is to make sure that doesn’t happen.

Requirements:
- 8+ years in software engineering, with at least 2 years managing teams building high-performance, high-availability systems
- Experience scaling distributed backend systems handling billions of transactions
- Demonstrated success leading teams in fast-paced, product-driven environments
- Proven track record of mentoring high-potential engineers and leaders
- Experience contributing to architecture and design decisions of mission-critical platforms

Good to have: solid experience with our tech stack, including GCP, Node.js, Go, Firestore, MongoDB, Redis, Elasticsearch, and ClickHouse

What You’ll Lead:
- A team of 5-10 engineers who build and operate the core Workflow Builder — from orchestration engine to frontend logic, and everything in between
- A system that handles 12B+ actions/month, with sub-second latency and zero room for failure
- The future of how tools plug in and out of our automation graph — modular, dynamic, and seamless
- Real-time, event-driven infrastructure built on:
  - Google Cloud Platform: Pub/Sub, Cloud Tasks, GKE
  - Databases: Firestore, MongoDB, Redis, Elasticsearch, ClickHouse
  - Languages: Node.js (TypeScript), Go
  - Architecture: event-driven execution, distributed workloads, autoscaling across thousands of pods

What You’ll Drive:
- Architecture: redesign the orchestration engine into something modular, robust, and extensible. Add a node? One click. Swap a source? Seamless.
- Execution: bring stability without sacrificing velocity. This system has to fly and land safely — every single time.
- People: hire smart. Set standards. Give autonomy. Push growth.
- Collaboration: work tightly with Product and Design — but you own the technical outcomes.
What Success Looks Like:
- Your team is shipping fast and clean. Delivery is predictable. Failures are rare.
- The system architecture feels boring (that’s a compliment).
- The orchestration engine becomes a plug-and-play automation layer anyone can build on top of.
- You’ve earned the trust of engineers, PMs, and leadership — because you deliver.

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions. #NJ1

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

delhi, india

Remote

About HighLevel: HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprising agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million domain names.

Our People: With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact: As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

About the Role: The Contacts Team is at the heart of our CRM platform, managing the core infrastructure for contact records, views, filters, segmentation, and more. Every product surface—whether it's automation, campaigns, pipelines, or reporting—interacts with Contacts. We're building scalable, reliable, and flexible systems that power millions of interactions daily. In this role, you will drive technical outcomes as a hands-on Individual Contributor.
You will own high-impact initiatives across the Contacts domain.

Responsibilities:
- Architect & Scale: Design and build highly scalable and reliable backend services using Node.js, MongoDB, and Elasticsearch, ensuring optimal indexing, sharding, and query performance
- Frontend Development: Develop and optimize user interfaces using Vue.js (or React/Angular) for an exceptional customer experience
- Event-Driven Systems: Design and implement real-time data processing pipelines using Kafka, RabbitMQ, or ActiveMQ
- Optimize Performance: Work on autoscaling, database sharding, and indexing strategies to handle millions of transactions efficiently
- Cross-Functional Collaboration: Work closely with Product Managers, Data Engineers, and DevOps teams to align on vision, execution, and business goals
- Quality & Security: Implement secure, maintainable, and scalable codebases while adhering to industry best practices
- Code Reviews & Standards: Drive high engineering standards, perform code reviews, and enforce best practices across the development team
- Ownership & Delivery: Manage timelines, oversee deployments, and ensure smooth product releases with minimal downtime
- Mentor: Guide a team of developers, ensuring best practices in software development, clean architecture, and performance optimization

Requirements:
- 5+ years of hands-on software development experience
- Strong proficiency in Node.js, Vue.js (or React/Angular), MongoDB, and Elasticsearch
- Experience in real-time data processing, message queues (Kafka, RabbitMQ, or ActiveMQ), and event-driven architectures
- Scalability expertise: proven track record of scaling services to 200k+ MAUs and handling high-throughput systems
- Strong understanding of database sharding, indexing, and performance optimization
- Experience with distributed systems, microservices, and cloud infrastructure (AWS, GCP, or Azure)
- Proficiency in CI/CD pipelines, Git version control, and automated testing
- Strong problem-solving, analytical, and debugging skills
- Excellent communication and leadership abilities—able to guide engineers while collaborating with stakeholders

Tech Stack:
- Backend: Node.js, NestJS, REST APIs, Redis, Firestore, MongoDB, Elasticsearch
- Frontend: Vue 2 + Vue 3 (Composition API), Pinia, TypeScript, Vite
- Infrastructure: Docker, Kubernetes, GCP
- Accessibility & internationalization (i18n) best practices

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
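The sharding the posting asks about starts with a routing decision: which shard holds a given contact record. A minimal Python sketch of hash-based routing (the function and key naming are illustrative, not the platform's actual scheme). It deliberately avoids Python's built-in `hash()`, which is salted per process and would route the same ID to different shards on different workers:

```python
import hashlib

def shard_for(contact_id: str, num_shards: int) -> int:
    """Route a contact record to a shard by a stable hash of its ID."""
    digest = hashlib.md5(contact_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

# The same ID always lands on the same shard, in any process.
ids = [f"contact-{i}" for i in range(1000)]
placement = [shard_for(i, 8) for i in ids]
assert placement == [shard_for(i, 8) for i in ids]   # deterministic
print(len(set(placement)))   # a uniform hash spreads load across shards
```

The catch with plain modulo routing is resharding: changing `num_shards` remaps almost every key, which is why production systems prefer consistent hashing or pre-split chunk ranges (as MongoDB's sharded clusters do).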

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

delhi, india

Remote

About Us: HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprising agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates a network of over 250 microservices, and supports over 1 million domain names.

Our People: With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.

Our Impact: As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

About the Role: This role leads the evolution of our massive real-time messaging platform, powering billions of messages across SMS, Email, WhatsApp, and more. You’ll improve core delivery, retry, and storage systems across Node.js, Firestore, MongoDB, and ClickHouse, while shaping backend architecture and patterns.
Operating at large scale — 50+ workloads, thousands of pods, and event-driven pipelines — you’ll make our infrastructure easier to operate, safer to change, and built to scale.

Requirements:
- 7+ years in backend engineering; proven experience scaling systems with massive throughput
- Deep expertise in databases: Firestore, MongoDB, Elasticsearch, ClickHouse
- Hands-on experience with Kubernetes, GCP (or AWS), and event-driven infrastructure
- Clear understanding of distributed systems principles: CAP tradeoffs, consistency, rate limiting, retries, idempotency
- Strong API and system boundary design in microservices ecosystems
- Crisp communication — technical leadership without posturing

Responsibilities:
- Write clean, observable, and testable backend code
- Architect and scale backend systems for message ingestion, delivery, storage, and retries
- Lead re-architecture efforts to improve reliability, latency, and observability
- Drive infra decisions across GCP, Kubernetes, Pub/Sub, Cloud Tasks, and custom queueing models
- Own core services: delivery guarantees, message deduplication, state transitions, retry semantics
- Partner with platform and SRE teams to ensure autoscaling, fault tolerance, and production readiness
- Guide API boundaries, system interactions, and contracts across services
- Mentor engineers in distributed systems, data models, and real-world scale
- Own incidents, root cause, recovery, and prevention — not just alerts

EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
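Of the distributed systems principles listed above, rate limiting is the most self-contained to illustrate. A deterministic Python sketch of the classic token-bucket algorithm (time is injected as a parameter so the example is testable; real code would read `time.monotonic()`, and a shared limiter would live in Redis rather than process memory):

```python
class TokenBucket:
    """Token-bucket rate limiter: allows bursts up to `capacity`,
    sustained throughput of `rate` requests per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
print(bucket.allow(0.0))   # True  (burst token 1)
print(bucket.allow(0.0))   # True  (burst token 2)
print(bucket.allow(0.0))   # False (bucket empty)
print(bucket.allow(1.5))   # True  (1.5 tokens refilled after 1.5 s)
```

The same shape, with the decision inverted, gives retry pacing: a sender that drains tokens before dispatch naturally backs off when a downstream provider throttles it.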

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

pune, maharashtra, india

On-site

Role Description

Role Proficiency: Acts under minimum guidance of a DevOps Architect to set up and manage DevOps tools and pipelines.

Outcomes:
- Interpret the DevOps tool/feature/component design and develop/support it in accordance with specifications
- Follow and contribute to existing SOPs to troubleshoot issues
- Adapt existing DevOps solutions for new contexts
- Code, debug, test, and document; communicate the status of DevOps development/support issues
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Support users, onboarding them on existing tools with guidance from DevOps leads
- Work with diverse teams using Agile methodologies
- Facilitate cost-saving measures through automation
- Mentor A1 and A2 resources
- Participate in the team's code reviews

Measures of Outcomes:
- Schedule adherence
- Quality of the code
- Defect injection at various stages of the lifecycle
- SLA adherence for level 1 and level 2 support
- Number of domain/product certifications obtained
- Savings achieved through automation

Outputs Expected:
- Automated components: deliver components that automate installation/configuration of software/tools on-premises and in the cloud, and components that automate parts of the build/deploy process for applications
- Configured components: configure a CI/CD pipeline that can be used by application development/support teams
- Scripts: develop/support scripts (PowerShell/Shell/Python) that automate installation, configuration, build, and deployment tasks
- Onboarding: onboard and extend existing tools to new app dev/support teams
- Mentoring: mentor and provide guidance to peers
- Stakeholder management: guide the team in preparing status updates; keep management updated on status
- Database: data insertion, update, delete, and view creation

Skill Examples:
- Install, configure, and troubleshoot CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
- Integrate with code/test quality analysis tools like SonarQube/Cobertura/Clover
- Integrate build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
- Scripting skills (Python, Linux/Shell, Perl, Groovy, PowerShell)
- Repository management/migration automation: Git/Bitbucket/GitHub/ClearCase
- Build automation scripts: Maven/Ant
- Artefact repository management: Nexus/Artifactory
- Dashboard management & automation: ELK/Splunk
- Configuration of cloud infrastructure (AWS/Azure/Google)
- Migration of applications from on-premises to cloud infrastructure
- Working with Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration)
- Strong debugging skills in C#/.NET
- Basic working knowledge of databases

Knowledge Examples:
- Installation/config/build/deploy tools and DevOps processes
- IaaS cloud providers (AWS/Azure/Google etc.) and their toolsets
- The application development lifecycle
- Quality Assurance processes
- Quality automation processes & tools
- Agile methodologies
- Security policies and tools

Additional Comments: A DevOps engineer with 5-7 years of experience. Typical responsibilities of a DevOps Harness engineer include:
- Harness expertise: a strong understanding of the Harness platform and its capabilities, including pipelines, deployments, configurations, and security features
- CI/CD pipeline management: design, develop, and manage CI/CD pipelines using Harness, automating tasks such as code building, testing, deployment, and configuration management
- Automation playbook creation: create reusable automation scripts (playbooks) for deployments, configuration control, infrastructure provisioning, and other repetitive tasks
- Scalability and standards: ensure scalability of the CI/CD pipelines and adherence to organizational standards for deployment processes
- DevOps technologies: familiarity with Docker, Kubernetes, and Jenkins, especially in the context of cloud platforms
- Security: integrate security best practices into the CI/CD pipelines (SecDevOps)

The candidate must have strong working experience in:
1. Kubernetes core concepts such as autoscaling, RBAC, and pod placement, as well as advanced concepts like Karpenter and service mesh
2. AWS services such as CloudWatch, EKS, ECS, and DynamoDB
3. IaC, especially Terraform and Terragrunt, including authoring modules; must have experience in infrastructure provisioning on AWS
4. Scripting languages such as Shell, PowerShell, or Python
5. CI/CD concepts such as creating pipelines and automating deployments

Skills: Ansible, AWS, IaC, Kubernetes
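Kubernetes autoscaling, the first core concept on the list above, follows a documented rule: the Horizontal Pod Autoscaler computes desiredReplicas = ceil(currentReplicas x currentMetric / targetMetric), clamped to the configured replica bounds. A small Python sketch of that formula (the default bounds here are illustrative, not Kubernetes defaults):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10) -> int:
    """Horizontal Pod Autoscaler scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target: scale out to 6.
print(hpa_desired_replicas(4, 90, 60))
# Load drops to 20%: scale in, but never below minReplicas.
print(hpa_desired_replicas(6, 20, 60, min_replicas=2))
```

The real controller adds tolerance bands and stabilization windows to avoid flapping, but this ratio is the core of every scale-out and scale-in decision it makes.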

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

mumbai, maharashtra, india

On-site

Full Stack Python Developer | Software Developer – GenAI Productization

Experience: 4–6 years

Role Summary: We are seeking an experienced Python Developer to join our GenAI team, focused on transforming proof-of-concepts (POCs) into production-grade systems. You will build and scale backend services capable of handling high concurrency, asynchronous processing, queueing, and real-time streaming. The ideal candidate has a strong foundation in backend engineering, infrastructure, API design, security, and performance optimization, especially in GCP cloud environments.

Key Responsibilities:
- Convert GenAI POCs into robust, production-ready services
- Develop scalable, asynchronous microservices optimized for high throughput and low latency
- Handle concurrency, rate limiting, throttling, and queueing strategies for high-load systems
- Collaborate with AI/ML teams on agent orchestration and model-serving pipelines
- Implement telemetry (logs, metrics, tracing) to ensure debuggability and performance insights
- Manage the full API lifecycle, including security (OAuth, API keys), testing, and documentation
- Publish and maintain client SDKs, Postman collections, and internal developer portals
- Define and enforce engineering standards: CI/CD automation, testing strategies, environment promotion, and release workflows
- Integrate with message brokers like Kafka and Google Pub/Sub for event-driven architectures
- Prepare HLD/LLD and UML/sequence diagrams, and apply design patterns for resilient system design
- Design and implement reliable, versioned APIs with backward compatibility

Required Skills:
- Expert-level proficiency in Python, especially using FastAPI, and a strong understanding of asynchronous programming and multiprocessing
- Deep understanding of microservices, event-driven, and async system design
- Proficient in WebSockets, gRPC, REST, and OpenAPI/Swagger-based API contract design
- Proficient in OOP, dependency injection, and Pydantic-based validation in FastAPI for building modular, maintainable APIs
- Proficient in working with databases using ORMs like SQLAlchemy, along with a strong command of relational database design, queries, and performance optimization
- Hands-on experience with cloud-native development on GCP, AWS, or Azure, including API gateways, autoscaling, and serverless architecture
- Strong grasp of Docker, Git-based version control, and container orchestration workflows
- Deep understanding of network, authentication, and infosec aspects of API and app deployments
- Familiarity with CI/CD pipelines, infrastructure-as-code, and secure deployment practices
- Experienced in DevOps practices, including configuring an NGINX reverse proxy to enable secure and efficient communication between frontend and backend services deployed on GKE

Preferred:
- Experience with Kafka, Google Pub/Sub, or equivalent message brokers
- Working knowledge of React.js, HTML, and CSS for integration and debugging (not a core responsibility)
- Prior experience with GenAI-based systems, especially real-time chatbots or voicebots
- Exposure to model orchestration frameworks, LLM serving, or Vertex AI
- Knowledge of zero-downtime deployment and rollback strategies
- Exposure to LLM orchestration (LangChain, LangGraph)
- Experience with RAG architectures, vector DBs, and MLOps frameworks (GCP Vertex Pipelines)
- Understanding of Model Context Protocol (MCP) and agent-to-agent toolkits for advanced agent workflows
- Strong UX awareness to influence AI-driven product design and user journeys
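The concurrency and throttling skills above usually reduce to one idiom in async Python: fan out many I/O-bound calls with `asyncio.gather`, but cap how many run at once with a semaphore so a burst of requests cannot overwhelm a downstream model API. A self-contained sketch (the `call_model` stub is a hypothetical stand-in for a real HTTP or model-serving call):

```python
import asyncio

async def call_model(prompt: str, sem: asyncio.Semaphore) -> str:
    """Stand-in for an async model/API call; the semaphore caps
    how many requests are in flight at once."""
    async with sem:
        await asyncio.sleep(0)          # placeholder for real I/O
        return prompt.upper()

async def fan_out(prompts, max_concurrency=5):
    sem = asyncio.Semaphore(max_concurrency)
    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(call_model(p, sem) for p in prompts))

results = asyncio.run(fan_out(["hello", "genai"]))
print(results)
```

In FastAPI the same semaphore can live in application state and wrap every outbound call, giving a process-local throttle; a cluster-wide limit needs an external store such as Redis.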

Posted 3 weeks ago

Apply

15.0 years

3 - 6 Lacs

gurgaon

On-site

Project Role: Technology Support Engineer
Project Role Description: Resolve incidents and problems across multiple business system components and ensure operational stability. Create and implement Requests for Change (RFC) and update knowledge base articles to support effective troubleshooting. Collaborate with vendors and help service management teams with issue analysis and resolution.
Must have skills: Cloud Automation DevOps
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: We are looking for an experienced Multi-Cloud FinOps Engineer to lead cloud financial operations across Azure, AWS, and GCP platforms. The role requires a blend of cloud engineering, cost analysis, and governance skills to improve cost visibility, enforce budget policies, and align cloud spending with business value. You will work closely with finance, DevOps, platform engineering, and procurement teams to drive cost transparency, forecasting, and optimization strategies across the organization.

Roles and responsibilities:
- Analyze cloud usage patterns and identify cost-saving opportunities (e.g., right-sizing, idle resources, reserved instances, autoscaling).
- Configure and manage cost visibility tools (e.g., Azure Cost Management, AWS Cost Explorer, GCP Billing).
- Implement budget alerts, spend tracking dashboards, and anomaly detection workflows.
- Define and enforce cloud cost governance policies, tagging standards, and usage accountability frameworks.
- Collaborate with business units to build chargeback/showback models and cost allocation reporting.
- Partner with procurement to optimize contracting, discount models, and Enterprise Agreements (EA/RIs/SPs).
- Build or manage FinOps tools such as CloudHealth, CloudCheckr, Apptio Cloudability, Yotascale, or native cloud billing APIs.
- Develop scripts and automation pipelines (Python, PowerShell, or Terraform) to remediate cost inefficiencies and enforce policy-as-code.
- Integrate cloud billing data into enterprise reporting platforms like Power BI, Snowflake, or Tableau.
- Support finance and budgeting teams with cloud spend forecasts, budget variance analysis, and unit economics tracking.
- Prepare monthly/quarterly executive reports and cloud cost KPIs.
- Conduct trend analysis to predict future cloud spending and workload shifts.
- Work with security, compliance, and architecture teams to balance cost, performance, and compliance.
- Provide education and workshops for engineering teams on FinOps best practices.
- Serve as an SME during cloud architecture and migration planning to ensure cost-aware decisions.

Professional and Technical skills:
- 5+ years of experience in cloud operations, cloud engineering, or FinOps
- Expertise in multi-cloud environments: Azure, AWS, and/or GCP
- Strong hands-on knowledge of cloud billing consoles, APIs, and optimization features
- Experience with FinOps tools (e.g., CloudHealth, Cloudability, Apptio, ProsperOps)
- Scripting experience in Python, PowerShell, or Bash
- Proficiency with Excel, Power BI, or data visualization/reporting tools
- FinOps Certified Practitioner (from the FinOps Foundation)
- Cloud certifications such as Azure Administrator / Solutions Architect, AWS Certified Cloud Practitioner / Solutions Architect, or Google Professional Cloud Architect
- Familiarity with Infrastructure as Code (IaC) for cost enforcement (e.g., Terraform, Bicep)
- Strong analytical, communication, and stakeholder engagement skills
- Attention to detail with a data-driven mindset
- Ability to translate technical usage into financial impacts
- Comfortable working in agile, cross-functional teams

Additional information:
- The candidate should have a minimum of 3 years of experience.
- The position is at our Gurugram office.
- A 15 year full time education is required.
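The first responsibility above, spotting right-sizing and idle-resource opportunities, is at heart a filter over utilization metrics joined with billing data. A minimal Python sketch of that analysis (the fleet data, threshold, and field names are hypothetical; real inputs would come from CloudWatch/Azure Monitor metrics and the billing export):

```python
def rightsizing_savings(instances, cpu_threshold=0.10):
    """Flag instances whose average CPU sits below the threshold and
    total the monthly spend that right-sizing or retirement could recover.
    Each instance is a dict with name, avg_cpu (0-1), monthly_cost."""
    idle = [i for i in instances if i["avg_cpu"] < cpu_threshold]
    return idle, sum(i["monthly_cost"] for i in idle)

fleet = [
    {"name": "api-prod",   "avg_cpu": 0.62, "monthly_cost": 310.0},
    {"name": "batch-old",  "avg_cpu": 0.03, "monthly_cost": 120.0},
    {"name": "staging-db", "avg_cpu": 0.08, "monthly_cost": 95.0},
]
idle, savings = rightsizing_savings(fleet)
print([i["name"] for i in idle])   # candidates for downsizing
print(savings)                     # monthly spend at stake: 215.0
```

In practice the threshold is taken over a lookback window (e.g. 14 days of p95 CPU, the kind of window AWS Compute Optimizer uses) so a quiet weekend does not flag a healthy instance.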

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

india

Remote

About ZipNom Technologies: ZipNom Technologies is a fast-growing IT solutions company helping startups, SMEs, and enterprises launch scalable, investor-ready digital products. We specialize in:
- Web & Mobile App Development
- AI & ML Solutions
- Cloud Computing & DevOps
- Cybersecurity & IT Consultancy
- Salesforce & Enterprise IT

At ZipNom, we don’t just write code — we build the foundation for growth, speed, and scale. As part of our DevOps team, you'll help build the automation and infrastructure that power global technology products.

Job Description: We are looking for a DevOps Engineer who is passionate about automation, cloud infrastructure, and CI/CD processes. You will work closely with developers, testers, and product teams to optimize deployment pipelines, manage scalable cloud infrastructure, and ensure high system reliability.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for development and production.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Work with cloud services (AWS, Azure) to provision and manage infrastructure.
- Monitor system performance and availability using tools like Prometheus and Grafana.
- Automate infrastructure with tools like Terraform or Ansible (optional).
- Configure and manage web servers (NGINX), SSL certificates, and network-level security.
- Maintain Git repositories, version control workflows, and branch strategies.
- Troubleshoot infrastructure issues, perform root cause analysis, and ensure uptime.
- Collaborate with the development team for smoother deployments and rollback processes.
Requirements

Must-have:
- 1–3 years hands-on DevOps / SRE experience in a production environment
- Proficiency with AWS core services (EC2, ECS/EKS, VPC, IAM, CloudWatch)
- Strong Docker skills and working knowledge of Kubernetes fundamentals (deployments, services, config maps, ingress)
- Practical experience creating CI/CD pipelines in GitHub Actions, Jenkins, or GitLab CI
- Infrastructure-as-Code expertise with Terraform or CloudFormation
- Linux administration, networking (NGINX/Traefik), and scripting in Bash or Python
- Solid understanding of container security, IAM best practices, and automated secrets management
- Version-control fluency (Git, pull-request workflows) and familiarity with agile ceremonies

Nice-to-have:
- Helm, Argo CD, or Flux for GitOps
- Experience with desktop app release automation (NSIS, AppImage, DMG)
- GCP or Azure exposure, multi-cloud fail-over patterns
- Cost-monitoring tools (AWS Cost Explorer, CloudZero)
- Observability-as-Code, Karpenter/KEDA autoscaling, Sentry error tracking

Benefits:
- Remote work: work from anywhere in India
- Performance-based bonuses and growth tracks
- Fast-paced team with global product exposure
- Work directly with founders and senior product engineers
- Flexibility, autonomy, and a supportive work culture
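The rollback processes this role supports are, under the hood, a health gate: after a deploy, watch the error rate and trigger a rollback when it breaches a threshold. A deterministic Python sketch of that decision logic (thresholds and sample shape are illustrative assumptions; in production the samples would come from Prometheus queries against the new release):

```python
def should_rollback(samples, error_threshold=0.05, min_samples=100):
    """Decide whether to roll back a release based on request outcomes
    observed since the deploy. samples: list of bools, True = failed.
    Holds off until min_samples requests have been seen, so a single
    early failure cannot trigger a rollback on its own."""
    if len(samples) < min_samples:
        return False                      # not enough signal yet
    error_rate = sum(samples) / len(samples)
    return error_rate > error_threshold

healthy = [False] * 98 + [True] * 2       # 2% errors
broken = [False] * 90 + [True] * 10       # 10% errors
print(should_rollback(healthy))           # stays deployed
print(should_rollback(broken))            # roll back
```

Progressive-delivery tools like Argo Rollouts or Flagger run exactly this loop automatically, comparing the canary's metrics against a baseline before shifting more traffic.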

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

hyderabad, telangana, india

On-site

Required:
- 5+ years of experience building and operating AWS infrastructure at scale
- Strong expertise in Infrastructure-as-Code using Terraform, with experience in module design and state management
- Hands-on experience with containerization and orchestration (Docker, Kubernetes, ECS/EKS)
- Proficiency in at least one programming language (Python, Go, or similar) for automation and tooling
- Experience with CI/CD pipelines and GitOps practices
- Strong understanding of AWS networking, security, and IAM best practices
- Track record of building self-service platforms and developer tools
- Experience with monitoring, logging, and observability tools

Preferred:
- AWS certifications (Solutions Architect Professional, DevOps Engineer, or relevant specialty certifications)
- Experience with service mesh technologies (Istio, App Mesh)
- Knowledge of FinOps practices and cloud cost optimization
- Experience with compliance frameworks and security automation
- Contributions to open-source projects or internal tooling initiatives
- Experience mentoring engineers and leading technical initiatives

DevOps (Containers & Kubernetes) Focus

Additional Responsibilities:
- Design and operate Kubernetes clusters at scale using EKS, including multi-cluster strategies
- Implement service mesh architectures using Istio for traffic management, security, and observability
- Manage GitOps deployments using ArgoCD across multiple environments and clusters
- Optimize cluster autoscaling with Karpenter and implement workload-specific scaling strategies
- Build container security scanning pipelines and runtime protection mechanisms

Additional Qualifications:
- Expert-level Kubernetes knowledge including CRDs, operators, and cluster administration
- Deep experience with EKS including managed node groups, Fargate, and add-ons management
- Hands-on expertise with the Istio service mesh, including traffic management and security policies
- Proficiency with ArgoCD including ApplicationSets, sync strategies, and RBAC configuration
- Experience with Karpenter for intelligent workload provisioning and cost optimization
- Knowledge of container security best practices and tools (Falco, OPA, admission controllers)
- Certified Kubernetes Administrator (CKA) or similar certification preferred
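The Karpenter work mentioned above is, at its core, a bin-packing problem: given pending pods and candidate node sizes, provision the fewest nodes that fit them. A deliberately simplified, CPU-only Python sketch using first-fit-decreasing (Karpenter itself also weighs memory, instance pricing, and scheduling constraints, so this is only the flavor of the computation):

```python
def pack_pods(pod_cpus, node_capacity):
    """First-fit-decreasing bin packing: how many nodes of a given
    CPU capacity are needed for the pending pods."""
    nodes = []   # remaining capacity per provisioned node
    for cpu in sorted(pod_cpus, reverse=True):
        for i, free in enumerate(nodes):
            if free >= cpu:
                nodes[i] = free - cpu   # pod fits on an existing node
                break
        else:
            nodes.append(node_capacity - cpu)   # provision a new node
    return len(nodes)

# Pods requesting 3, 2, 2, and 1 vCPU on 4-vCPU nodes: 2 nodes suffice.
print(pack_pods([3, 2, 2, 1], node_capacity=4))
```

First-fit-decreasing is a heuristic, not an optimum: sorting large pods first avoids the worst fragmentation, but pathological request mixes can still waste a node, which is one reason Karpenter also considers consolidation after the fact.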

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies