
32 PyGAD Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

USD 2,222-2,592 / month

Cuttack, Odisha, India

Remote

Experience: 4+ years
Salary: USD 2,222-2,592 / month (based on experience)
Expected Notice Period: 7 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 6 months (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients, Hybrid AI.)

What do you need for this opportunity?
Must-have skills: Python (Django/FastAPI), ML libraries (PyTorch, TensorFlow, scikit-learn, pandas, PyGAD), data pipelines (Celery, PySpark), databases (Cassandra, PostgreSQL, Redis), Kafka, GPU/CUDA, Kubernetes, cloud (AWS/Azure/GCP), monitoring (Prometheus, Grafana, ELK), RecSys / decision optimization engines.

Hybrid AI is looking for:

About HybridAI and its SaaS product: HybridAI is a dynamic startup revolutionizing the way AI resources are consumed through its innovative B2B SaaS platform. Its mission is to simplify decision-making for CIOs by providing tailored recommendations that balance cost, power consumption, and compatibility with existing IT environments. The platform takes the complexity out of choosing the right infrastructure, optimizing both cost and power consumption while delivering high-performance AI and IT workloads. Designed by esteemed AI/ML professors and industry CIOs, and brought to market by experienced software industry executives, it addresses the critical challenges CIOs face in navigating the rapidly evolving AI landscape. By offering real-time, tailored recommendations, HybridAI enables CIOs to make informed decisions quickly, reducing complexity and driving immediate benefits. Join HybridAI and be part of a team shaping the future of AI infrastructure with innovation, expertise, and a commitment to excellence. You will build a system that balances cost, performance, and power consumption.

Location: Remote
Experience: 4-5 years
Type: 6-month contract

As an AI/ML Engineer at HybridAI, you will design, develop, and scale AI-powered algorithms to optimize Kubernetes clusters, VMware resource allocation, GPU resource allocation, and cloud infrastructure, all with an AI-first approach that embeds AI capabilities directly into the product architecture from day one.

Responsibilities
- Design, develop, and deploy ML models to optimize infrastructure selection (cloud/on-prem, hardware type, cost/power trade-offs).
- AI/ML development: build and tune genetic algorithms, predictive models, and optimization strategies for large-scale infrastructure datasets (a hedged PyGAD sketch follows this listing).
- GPU resource management: create intelligent GPU allocation and scheduling systems for Kubernetes pods/VMs, including multi-GPU and fractional-GPU scenarios.
- Kubernetes integration: implement AI-driven pod placement, node selection, and automated infrastructure decision-making, with real-time metric integration (Prometheus, Grafana).
- Research and innovation: explore emerging optimization techniques, analyze results, and document findings.
- Build scalable feature engineering and data preprocessing pipelines using pandas, scikit-learn, and PySpark to feed infrastructure and performance metrics into ML models.
- Build interval-based retraining workflows triggered by incoming metric streams, using Celery task queues and workers in Python (see the Celery sketch below).
- Collaborate with DevOps and backend teams to integrate model outputs into the core SaaS application via REST or gRPC APIs.
- Monitor, evaluate, and retrain models using MLflow, Weights & Biases, or similar MLOps tools.
- Continuously monitor model performance and retrain as needed using MLOps practices.
- Ensure low-latency decisioning while maintaining model accuracy and explainability.
- Store model outputs and inference results in Cassandra, optimizing for write throughput and fast queries (see the Cassandra sketch below).
- Scale and orchestrate model training and inference jobs across multiple tenants and environments.
- Ensure models provide explainable, auditable outputs to support enterprise-grade transparency.
- Help define best practices for AI/ML in the context of SaaS platforms.

What we're looking for
- 4-5 years of experience building and deploying machine learning models in production.
- Deep competence in AI-first architectures where ML models and algorithms are central to system workflows.
- Deep competence in Python, Django (including the Django admin), and FastAPI.
- Deep competence in PyTorch, TensorFlow, scikit-learn, and PyGAD.
- Experience with cloud platforms (AWS, Azure, GCP) and hybrid on-prem infrastructure.
- Experience with Kubernetes, NVIDIA GPUs, and CUDA architectures.
- Experience with Cassandra, PostgreSQL, Redis, Kafka, and Docker.
- Experience with Prometheus, Grafana, and ELK.
- Strong Python skills with hands-on experience using Celery for background processing and task scheduling.
- Hands-on experience with ML lifecycle tools such as MLflow, Kubeflow, or Airflow.

Nice to have
- Experience with recommendation systems or decision optimization engines.

Why join us?
- Innovate in a fast-growing AI SaaS startup shaping enterprise AI infrastructure.
- Lead AI/ML initiatives with an AI-first mindset.
- Influence long-term product architecture and AI strategy.
- Gain recognition in AI-driven infrastructure optimization research.
- Work on cutting-edge technologies with AI/ML-driven decision-making.
- Flexible remote work culture with a collaborative and dynamic team.
- Career growth opportunities in the rapidly evolving AI/GPU/data-center domain.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
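To illustrate the core skill this page is indexed under, here is a minimal, hypothetical PyGAD sketch of the cost/power/performance trade-off the role describes. It assumes PyGAD 3.x (three-argument fitness function); the hardware table, weights, and performance constraint are invented for illustration and are not from HybridAI.

```python
# Hypothetical sketch: a PyGAD genetic algorithm that picks an instance type
# and GPU count to minimize a weighted cost/power objective under a
# performance floor. All numbers below are illustrative assumptions.
import pygad

INSTANCE_TYPES = [
    {"name": "cpu-large", "cost": 1.2, "power": 300, "perf": 1.0},
    {"name": "gpu-a10",   "cost": 2.9, "power": 450, "perf": 4.0},
    {"name": "gpu-a100",  "cost": 7.5, "power": 700, "perf": 9.0},
]
MIN_PERF = 8.0                  # required aggregate performance score (assumed)
COST_W, POWER_W = 1.0, 0.002    # objective weights (assumed)

def fitness(ga_instance, solution, solution_idx):
    # Gene 0: index into INSTANCE_TYPES; gene 1: GPU/instance count.
    inst = INSTANCE_TYPES[int(solution[0])]
    count = int(solution[1])
    perf = inst["perf"] * count
    if perf < MIN_PERF:         # infeasible configuration: heavy penalty
        return -1e6
    cost = inst["cost"] * count
    power = inst["power"] * count
    # PyGAD maximizes fitness, so negate the weighted cost/power objective.
    return -(COST_W * cost + POWER_W * power)

ga = pygad.GA(
    num_generations=50,
    num_parents_mating=4,
    sol_per_pop=20,
    num_genes=2,
    gene_space=[range(len(INSTANCE_TYPES)), range(1, 9)],
    fitness_func=fitness,
    mutation_percent_genes=50,
)
ga.run()
best, best_fitness, _ = ga.best_solution()
print(INSTANCE_TYPES[int(best[0])]["name"], int(best[1]), best_fitness)
```

Because PyGAD maximizes fitness, the combined cost-and-power objective is negated and infeasible configurations are pushed out of the population with a large penalty.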
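The interval-based retraining responsibility can be pictured as a Celery beat schedule driving a periodic task that refits a model on recent metrics and persists it. This is a sketch under assumed names: the broker URL, schedule, metric columns, and file paths are placeholders, not HybridAI's actual pipeline.

```python
# Hypothetical sketch: periodic model retraining with Celery beat.
from celery import Celery
import pandas as pd
import joblib
from sklearn.ensemble import GradientBoostingRegressor

app = Celery("retraining", broker="redis://localhost:6379/0")

# Trigger retraining every hour (interval is illustrative).
app.conf.beat_schedule = {
    "retrain-placement-model": {
        "task": "tasks.retrain_model",
        "schedule": 3600.0,
    }
}

@app.task(name="tasks.retrain_model")
def retrain_model(metrics_path="metrics.parquet", model_path="model.joblib"):
    # Load recent infrastructure/performance metrics (source and columns assumed).
    df = pd.read_parquet(metrics_path)
    features = df[["cpu_util", "gpu_util", "memory_gb"]]
    target = df["cost_per_hour"]
    model = GradientBoostingRegressor().fit(features, target)
    joblib.dump(model, model_path)  # could instead be logged to MLflow
    return model_path
```

Run a worker with the beat scheduler (for example, `celery -A tasks worker -B`) and the task refits on whatever metrics have accumulated since the last interval.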
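For the Cassandra write path mentioned above, a prepared INSERT keeps per-write overhead low. The keyspace and table schema below are hypothetical; only the cassandra-driver calls themselves are standard.

```python
# Hypothetical sketch: storing an inference result in Cassandra with a
# prepared statement. Keyspace, table, and columns are invented for illustration.
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ml_results")  # assumed keyspace

insert = session.prepare(
    "INSERT INTO inference_results (tenant_id, ts, recommendation, score) "
    "VALUES (?, ?, ?, ?)"
)
session.execute(
    insert,
    ("tenant-42", datetime.now(timezone.utc), "gpu-a100 x2", 0.93),
)
```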

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Bhubaneswar, Odisha, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Guwahati, Assam, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Kolkata, West Bengal, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Ranchi, Jharkhand, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Raipur, Chhattisgarh, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Jamshedpur, Jharkhand, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Amritsar, Punjab, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

USD 2,222-2,592 / month

Surat, Gujarat, India

Remote

Same AI/ML Engineer (HybridAI via Uplers) role as the first listing above; see that listing for the full description.

Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

ahmedabad, gujarat, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

jaipur, rajasthan, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

greater lucknow area

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

thane, maharashtra, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

nashik, maharashtra, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

nagpur, maharashtra, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

kanpur, uttar pradesh, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

kochi, kerala, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

greater bhopal area

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

visakhapatnam, andhra pradesh, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

indore, madhya pradesh, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

chandigarh, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

dehradun, uttarakhand, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

mysore, karnataka, india

Remote


Posted 2 days ago

Apply

4.0 years

0 - 0 Lacs

vijayawada, andhra pradesh, india

Remote


Posted 2 days ago

Apply