3.0 years
0 - 0 Lacs
Nagpur, Maharashtra, India
Remote
Experience: 3.00+ years
Salary: GBP 1785-2500 / month (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+01:00) Europe/London (BST)
Opportunity Type: Remote
Placement Type: Full-time contract for 6 months (40 hrs a week / 160 hrs a month)
(Note: This is a requirement for one of Uplers' clients, a leading UK AgriTech company.)

What do you need for this opportunity?
Must-have skills: AgriTech industry, Large Language Models, NVIDIA Jetson, Raspberry Pi, Blender, Computer Vision, OpenCV, Python, PyTorch/TensorFlow, Segmentation, Extraction, Regression

The UK's leading AgriTech company is looking for:
Location: Remote
Type: 6-month contract
Experience Level: 3-5 years
Industry: Agritech | Sustainability | AI for Renewables

About Us
We're an AI-first company transforming the renewable and sustainable agriculture space. Our mission is to harness advanced computer vision and machine learning to enable smart, data-driven decisions in the livestock and agricultural ecosystem. We focus on practical applications such as automated cattle weight estimation, livestock monitoring, and resource optimization to drive a more sustainable food system.

Role Overview
We are hiring a Computer Vision Engineer to develop intelligent image-based systems for livestock management, focusing on cattle weight estimation from images and video feeds. You will be responsible for building scalable vision pipelines, working with deep learning models, and bringing AI to production in real-world farm settings.

Key Responsibilities
- Design and develop vision-based models to predict cattle weight from 2D/3D images, video, or depth data.
- Build image acquisition and preprocessing pipelines using multi-angle camera data.
- Implement classical and deep learning-based feature extraction techniques (e.g., body measurements, volume estimation).
- Conduct camera calibration, multi-view geometry analysis, and photogrammetry for size inference.
- Apply deep learning architectures (e.g., CNNs, ResNet, UNet, Mask R-CNN) for object detection, segmentation, and keypoint localization.
- Build 3D reconstruction pipelines using stereo imaging, depth sensors, or photogrammetry.
- Optimize and deploy models for edge devices (e.g., NVIDIA Jetson) or cloud environments.
- Collaborate with data scientists and product teams to analyze livestock datasets, refine prediction models, and validate outputs.
- Develop tools for automated annotation, model training pipelines, and continuous performance tracking.

Required Qualifications & Skills
- Computer Vision: object detection, keypoint estimation, semantic/instance segmentation, stereo imaging, and structure-from-motion.
- Weight Estimation Techniques: experience in livestock monitoring, body condition scoring, and volumetric analysis from images/videos.
- Image Processing: noise reduction, image normalization, contour extraction, 3D reconstruction, and camera calibration.
- Data Analysis & Modeling: statistical modeling, regression techniques, and feature engineering for biological data.

Technical Stack
- Programming Languages: Python (mandatory)
- Libraries & Frameworks: OpenCV, PyTorch, TensorFlow, Keras, scikit-learn
- 3D Processing: Open3D, PCL (Point Cloud Library), Blender (optional)
- Data Handling: NumPy, Pandas, DVC
- Annotation Tools: LabelImg, CVAT, Roboflow
- Cloud & DevOps: AWS/GCP, Docker, Git, CI/CD pipelines
- Deployment Tools: ONNX, TensorRT, FastAPI, Flask (for model serving)

Preferred Qualifications
- Prior experience in agritech, animal husbandry, or precision livestock farming.
- Familiarity with Large Language Models (LLMs) and integrating vision + language models for domain-specific insights.
- Knowledge of edge computing for on-farm device deployment (e.g., NVIDIA Jetson, Raspberry Pi).
- Contributions to open-source computer vision projects or relevant publications at CVPR, ECCV, or similar conferences.

Soft Skills
- Strong problem-solving and critical thinking skills
- Clear communication and documentation practices
- Ability to work independently and collaborate in a remote, cross-functional team

Why Join Us?
- Work at the intersection of AI and sustainability
- Be part of a dynamic, mission-driven team
- Opportunity to lead innovation in an emerging field of agritech
- Flexible remote work environment

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today. We are waiting for you!
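The responsibilities above pair classical feature extraction (contours, body measurements) with regression for weight estimation. As a rough, hedged illustration of that idea, the following self-contained Python sketch extracts silhouette features from a synthetic binary mask with OpenCV and fits a linear regression to a toy weight target; the synthetic data, feature set, and model choice are assumptions for the example, not the client's actual pipeline.

```python
import cv2
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

def silhouette_features(mask: np.ndarray) -> np.ndarray:
    """Extract simple shape features (area, bounding-box width/height) from a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return np.array([cv2.contourArea(largest), w, h], dtype=float)

def synthetic_animal_mask(scale: float) -> np.ndarray:
    """Draw a filled ellipse as a stand-in for a segmented animal silhouette."""
    mask = np.zeros((480, 640), dtype=np.uint8)
    cv2.ellipse(mask, (320, 240), (int(150 * scale), int(80 * scale)), 0, 0, 360, 255, -1)
    return mask

# Toy dataset: silhouette scale drives both the mask and the assumed "true" weight.
scales = rng.uniform(0.6, 1.4, size=50)
X = np.stack([silhouette_features(synthetic_animal_mask(s)) for s in scales])
y = 400 * scales + rng.normal(0, 10, size=50)   # weight in kg, with noise

model = LinearRegression().fit(X, y)
print("Predicted weights (kg):", model.predict(X[:3]).round(1))
```

In a real pipeline the mask would come from a segmentation model (e.g., Mask R-CNN) and the regressor would be trained on measured weights; the structure of the code, not its numbers, is the point.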
Posted 1 week ago
0 years
0 Lacs
India
On-site
Overview
We are seeking a highly skilled NLP Engineer with a PhD who has extensive experience in Large Language Models (LLMs), vision-language models (VLMs), and Mixture of Experts (MoE). The ideal candidate will have a strong background in medical AI and a passion for leveraging NLP techniques to advance healthcare technology.

Responsibilities
- Design, develop, and optimize LLM-based NLP models for medical applications.
- Research and implement cutting-edge VLM and MoE architectures for scalable and efficient NLP processing.
- Develop fine-tuned models tailored for medical and healthcare-specific language tasks.
- Work closely with medical professionals and domain experts to ensure models align with real-world healthcare needs.
- Optimize inference pipelines to improve performance, reduce latency, and enhance efficiency for deployment.
- Collaborate with cross-functional teams, including AI researchers, software engineers, and product teams, to integrate NLP solutions into our products.
- Stay updated with the latest advancements in LLMs, VLMs, MoE, and medical AI research.

Qualifications
- Master's or PhD in NLP, Machine Learning, AI, or a related field with a strong research background.
- Expertise in Large Language Models (LLMs), vision-language models (VLMs), and Mixture of Experts (MoE).
- Proven experience in building and fine-tuning transformer-based architectures (e.g., GPT, BERT, T5, Llama).
- Hands-on experience in PyTorch, TensorFlow, JAX, or similar ML frameworks.
- Strong programming skills in Python and experience with ML libraries such as Hugging Face, DeepSpeed, or Triton.
- Experience in medical AI or healthcare-related NLP applications is highly preferred.
- Solid understanding of data preprocessing, tokenization, and model training on large-scale datasets.
- Experience with distributed training, model optimization, and deployment on cloud infrastructures (AWS, GCP, or Azure).

Preferred Qualifications
- Publications in top-tier NLP, ML, or AI conferences/journals.
- Experience in multimodal learning (text, image, video) and its applications in healthcare.
- Familiarity with regulatory compliance and ethical considerations in medical AI.
- Knowledge of reinforcement learning from human feedback (RLHF) techniques for model fine-tuning.
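The posting names Mixture of Experts (MoE) among the required areas. For orientation, a minimal top-k gated MoE layer in PyTorch might look like the sketch below; the expert architecture, gating scheme, and dimensions are illustrative assumptions rather than any particular production design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k Mixture-of-Experts layer (illustrative only)."""
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, num_experts)          # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (batch, d_model)
        scores = self.gate(x)                                  # (batch, num_experts)
        topk_vals, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)                 # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]                            # expert chosen for this slot
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] = out[mask] + w[mask] * expert(x[mask])
        return out

x = torch.randn(8, 64)
layer = TopKMoE(d_model=64, d_hidden=128)
print(layer(x).shape)   # torch.Size([8, 64])
```

Production MoE implementations add load-balancing losses and capacity limits; this sketch only shows the routing idea.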
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements
Hybrid role.

"At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, and support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead, and we are relentless in the pursuit of innovation.

The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies monitor and manage their mainframe systems. Mainframe modernization is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, code quality, and application security, while reducing operational costs and risks. We have acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC is looking for a talented Python Developer to join our family, working on complex and distributed software: developing and debugging software products, implementing features, and assisting the firm in assuring product quality. Here is how, through this exciting role, YOU will contribute to BMC's and your own success.

Responsibilities
We are seeking a Python developer with AI/ML experience to join a highly motivated team responsible for developing and maintaining innovation for mainframe capacity and cost management. As an Application Developer at BMC, you will be responsible for:
- Developing and integrating AI/ML models with a focus on Generative AI (GenAI), Retrieval-Augmented Generation (RAG), and vector databases to enhance intelligent decision-making.
- Building scalable AI pipelines for real-time and batch inference, optimizing model performance, and deploying AI-driven applications.
- Implementing RAG-based architectures using LLMs (Large Language Models) for intelligent search, chatbot development, and knowledge management.
- Utilizing vector databases (e.g., FAISS, ChromaDB, Weaviate, Pinecone) to enable efficient similarity search and AI-driven recommendations.
- Developing modern web applications using Angular to create interactive and AI-powered user interfaces.

To ensure you're set up for success, you will bring the following skillset and experience:
- 7+ years of experience in designing and implementing AI/ML-driven applications.
- Strong proficiency in Python and AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, and LangChain.
- Experience with vector databases (FAISS, ChromaDB, Weaviate, Pinecone) for semantic search and embeddings.
- Hands-on expertise in LLMs (GPT, LLaMA, Mistral, Claude, etc.) and fine-tuning/customizing models.
- Proficiency in Retrieval-Augmented Generation (RAG) and prompt engineering for AI-driven applications.
- Experience with Angular for developing interactive web applications.
- Experience with RESTful APIs and FastAPI, Flask, or Django for AI model serving.
- Working knowledge of SQL and NoSQL databases for AI/ML applications.
- Hands-on experience with Git/GitHub, Docker, and Kubernetes for AI/ML model deployment.
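Several of the skills above (vector databases, embeddings, semantic search) revolve around nearest-neighbor retrieval. A minimal FAISS sketch of that pattern, using random vectors as stand-ins for real document embeddings (the dimension and data are assumptions for the example):

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 384                                 # embedding dimension (e.g., a small sentence encoder)
rng = np.random.default_rng(0)
doc_vecs = rng.standard_normal((1000, d)).astype("float32")  # stand-in document embeddings
faiss.normalize_L2(doc_vecs)            # cosine similarity via inner product on unit vectors

index = faiss.IndexFlatIP(d)            # exact inner-product index
index.add(doc_vecs)

query = rng.standard_normal((1, d)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)    # top-5 most similar documents
print(ids[0], scores[0])
```

In practice the vectors would come from an embedding model and the flat index might be swapped for an approximate one (IVF/HNSW) at scale.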
BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process.

At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 4,542,800 INR (minimum 3,407,100; maximum 5,678,500). Actual salaries depend on a wide range of factors considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package; other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and find out how to apply.

Our commitment to you! BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If, after reading the above, you're unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experience to ensure we face the world together with the best ideas.

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran status, or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us
Traya is an Indian direct-to-consumer hair care brand whose platform provides holistic treatment for consumers dealing with hair loss. The company provides personalized consultations that help determine the root cause of hair fall, along with a range of hair care products curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis: our platform diagnoses the patient's hair and health history to identify the root cause behind hair fall and delivers customized hair kits right to their doorstep. We have a strong adherence system in place via medically trained hair coaches and proprietary tech, through which we guide customers across their hair growth journey and help them stay on track. Traya was founded by Saloni Anand, a techie-turned-marketeer, and Altaf Saiyed, a Stanford Business School alumnus.

Our Vision
Traya was created with a global vision to create awareness around hair loss and de-stigmatise it, while empathizing with customers about its emotional and psychological impact. Most importantly, we combine three different sciences (Ayurveda, Allopathy, and Nutrition) to create a holistic solution for hair loss patients.

Responsibilities
Data Analysis and Exploration
- Conduct in-depth analysis of large and complex datasets to identify trends, patterns, and anomalies.
- Perform exploratory data analysis (EDA) to understand data distributions, relationships, and quality.

Machine Learning and Statistical Modeling
- Develop and implement machine learning models (e.g., regression, classification, clustering, time series analysis) to solve business problems.
- Evaluate and optimize model performance using appropriate metrics and techniques.
- Apply statistical methods to design and analyze experiments and A/B tests.
- Implement and maintain models in production environments.

Data Engineering and Infrastructure
- Collaborate with data engineers to ensure data quality and accessibility.
- Contribute to the development and maintenance of data pipelines and infrastructure.
- Work with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Spark, Hadoop).

Communication and Collaboration
- Effectively communicate technical findings and recommendations to both technical and non-technical audiences.
- Collaborate with product managers, engineers, and other stakeholders to define and prioritize projects.
- Document code, models, and processes for reproducibility and knowledge sharing.
- Present findings to leadership.

Research and Development
- Stay up to date with the latest advancements in data science and machine learning.
- Explore and evaluate new tools and techniques to improve data science capabilities.
- Contribute to internal research projects.

Qualifications
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- 3-5 years of experience as a Data Scientist or in a similar role.
- Experience leveraging SageMaker features, including SageMaker Studio, Autopilot, Experiments, Pipelines, and Inference, to optimize model development and deployment workflows.
- Proficiency in Python and relevant libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
- Solid understanding of statistical concepts and machine learning algorithms.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Experience deploying models to production.
- Experience with version control (Git).

Preferred Qualifications
- Experience with specific industry domains (e.g., e-commerce, finance, healthcare).
- Experience with natural language processing (NLP) or computer vision.
- Experience building recommendation engines.
- Experience with time series forecasting.
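As a compact illustration of the modeling workflow this role describes (train/test split, a scikit-learn pipeline, and evaluation), here is a hedged sketch on synthetic data; the generated features stand in for real customer or diagnosis data and are purely an assumption for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for tabular customer/diagnosis features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),          # normalize features before the linear model
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]
print(f"Test ROC-AUC: {roc_auc_score(y_test, probs):.3f}")
```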
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
🔍 AI Engineer — phablo.ai (Remote)
Location: Remote
Type: Full-time
Company: Phablo.ai
Team: Founding Engineering Team
Experience: 1-4 years (exceptional freshers welcome)

🌟 About Us
At Phablo.ai, we're transforming how compliance works for life sciences. No more clunky tools or manual processes — we're building an AI-powered platform that helps teams stay compliant faster, smarter, and with confidence. Co-founded by domain and tech experts from Germany and India, and headquartered in Singapore, our mission is global: to reimagine compliance workflows for the world's most regulated industries.

🚀 Perks & Culture
🧠 Founding Opportunity – Join as part of our early core engineering team and shape the product and tech stack from the ground up.
🌍 Remote-First, Global Team – Collaborate with co-founders and teammates across continents. Our key markets are the EU and US.
💰 Compensation – Competitive startup salary with strong growth upside. As the company grows, so will your salary and other perks. Equity opportunities are available for long-term, high-impact contributors at later stages.
📚 Limitless Learning – Work on cutting-edge LLM, RAG, and document AI systems with real-world impact.
⚙️ Culture of Innovation – We move fast, experiment often, and value initiative, autonomy, and team spirit.

🎯 Responsibilities
- Architect and implement components of our AI compliance engine using techniques like RAG, hybrid search, summarization, and entity extraction.
- Fine-tune and optimize foundation models (e.g., LLaMA, Mistral, GPT) for tasks such as regulation parsing, document comparison, and compliance Q&A.
- Build document ingestion, chunking, and embedding pipelines using LangChain, Haystack, or LlamaIndex.
- Integrate with vector databases (e.g., FAISS, Qdrant, Weaviate, Pinecone) to enable scalable semantic retrieval.
- Develop and deploy FastAPI/Flask-based APIs for real-time or batch AI inference services.
- Apply prompt engineering, memory techniques, and few-shot learning to improve response quality, accuracy, and explainability.
- Work closely with domain experts and product teams to build AI systems aligned with real regulatory workflows.
- Continuously monitor and evaluate AI model performance across metrics like latency, accuracy, hallucination rate, and compliance risk.

🛠️ Required Skills
- Strong Python skills and experience with ML frameworks like PyTorch, TensorFlow, or JAX.
- Deep understanding of LLMs and NLP workflows, including NER, summarization, semantic similarity, RAG, and hybrid search; embedding generation and vector search optimization.
- Familiarity with tools like LangChain, Haystack, and LlamaIndex; vector DBs such as FAISS, Weaviate, Qdrant, and Pinecone; and FastAPI, Flask, or similar frameworks for serving models.
- Hands-on experience with MLOps tooling like MLflow, Weights & Biases, Hugging Face Hub, or cloud platforms (AWS, GCP, Vertex AI).
- Awareness of AI safety, data privacy, and security best practices in regulated industries.
- Excellent problem-solving, communication, and collaboration skills.
- Bonus: experience in life sciences, healthcare, legal tech, or regulatory AI is highly desirable.

🎓 Qualifications
- 1-4 years of experience engineering AI systems in production settings.
- Or, if you're a fresher: a strong academic background in AI/ML/NLP/Data Science from a reputed institution, with demonstrated ability through research publications, open-source contributions, or top performance in AI competitions.

💬 Let's Build the Future of Compliance
If you're excited about building AI systems that solve real-world problems in healthcare, pharma, and life sciences — let's talk! Apply now or reach out directly to ts@phablo.ai
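One of the responsibilities above is building document ingestion and chunking pipelines. A framework-free sketch of a sliding-window chunker with overlap, the kind of preprocessing that typically feeds an embedding model and vector store, is shown below; the chunk size, overlap, and sample text are illustrative assumptions.

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> List[str]:
    """Split text into overlapping character windows for embedding and retrieval."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

sample = "Annex 11 addresses computerised systems used in GxP-regulated activities. " * 40
pieces = chunk_text(sample, chunk_size=300, overlap=60)
print(len(pieces), "chunks; first chunk:", pieces[0][:80], "...")
```

Libraries such as LangChain or LlamaIndex provide more sophisticated splitters (sentence- or token-aware), but the overlap idea is the same.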
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
- Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval-Augmented Generation), prompt engineering, model evaluation, and LLM integration.
- Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
- Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
- Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
- Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
- Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
- Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
- Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
- Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
- Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
- Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
- Proven experience with batch data pipelines and training/inference orchestration.
- Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
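Experiment and metric tracking is part of the pipeline work described above, and MLflow is named among the tools. A minimal, local-only tracking sketch follows; the experiment name, parameters, and metric values are made up for illustration.

```python
import mlflow

# Logs to a local ./mlruns directory by default; no tracking server required.
mlflow.set_experiment("demo-genai-retrieval")

with mlflow.start_run(run_name="baseline-rag"):
    mlflow.log_param("embedding_model", "example-embedder")  # hypothetical parameter values
    mlflow.log_param("top_k", 5)
    mlflow.log_metric("answer_accuracy", 0.81)                # invented metrics for illustration
    mlflow.log_metric("p95_latency_ms", 420)
```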
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Data Analyst - Product Analytics

Job Description
- 4+ years of experience in data analytics, with at least 2 years in product analytics.
- Proficiency in SQL and Python for data analysis; strong understanding of experimentation methods and causal inference.
- Experience working with Azure Data Services and Databricks (or equivalent cloud-based platforms).
- Expertise in building compelling, user-friendly dashboards in Power BI.
- Strong communication skills and ability to influence cross-functional stakeholders.
- Passionate about solving customer problems and improving products through data.
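Since the role calls for experimentation methods, here is a small worked example comparing conversion rates between a control and a treatment group with a two-proportion z-test via statsmodels; the counts are invented for illustration.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Invented experiment results: conversions and sample sizes for control vs. treatment.
conversions = np.array([310, 355])
exposures = np.array([5000, 5000])

stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift: {lift:.3%}, z = {stat:.2f}, p = {p_value:.4f}")
```

A low p-value would suggest the observed lift is unlikely under the null of equal conversion rates, though real product experiments also need power analysis and guardrail metrics.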
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
GlobalHealthX is a startup venture studio/innovation exchange working at the intersection of healthcare, life sciences, and technology, and is looking to hire a passionate R&D engineer with prior hands-on experience building scalable end-to-end (E2E) AI solutions.

Key requirements:
- Prior experience with neural networks and computer vision models.
- Working experience with LLMs and associated tooling; knowledge of LLM inference providers and associated integrations.
- Strong system design acumen with a focus on engineering fundamentals; able to design and build robust, scalable systems and translate problem statements into E2E design, development, and delivery.
- An eagerness and appetite to keep on top of the developing AI space.
- Hands-on experience with technologies such as:
  - Python, PyTorch, LangChain, LangGraph, AutoGen, DSPy, and tracing/eval tools.
  - Inference backends such as Ollama, llama.cpp, vLLM, or others.
  - Integrations with inference providers such as OpenAI, Anthropic, Vertex AI, DeepSeek, etc.
  - Training or fine-tuning models using techniques such as LoRA, SFT, DPO, etc.
  - Prototype building with Gradio or Streamlit.
  - Databases and schemas, including vector databases.
- Great if you have prior experience building use cases such as RAG or Graph RAG.
- MLOps: prompt management, deploying and maintaining AI models or workflows.

Experience: 3-5 years (at least 1 year working on GenAI-based projects).
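Prototype building with Gradio or Streamlit is listed among the requirements. A minimal Gradio prototype looks roughly like the sketch below, with the echo function standing in for a real model or LLM call (an assumption for the example).

```python
import gradio as gr

def answer(question: str) -> str:
    # Stand-in for a real model or LLM call.
    return f"(demo) You asked: {question}"

demo = gr.Interface(fn=answer, inputs="text", outputs="text", title="Clinical Q&A prototype")

if __name__ == "__main__":
    demo.launch()  # serves a local web UI
```

Swapping the echo function for a retrieval-plus-generation call is usually all it takes to turn this into a working demo.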
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Principal Software Engineer
Bangalore, Karnataka, India

Date posted: Jun 11, 2025
Job number: 1822819
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
We are building the next-generation real-time enforcement platform that protects users, advertisers, and the integrity of Microsoft's Ads and content ecosystems. This infrastructure processes and evaluates hundreds of billions of signals each day, applying safety and policy decisions with millisecond latency and global reliability.

As a Principal Software Engineer, you will define and drive the architecture of the core systems behind this platform, spanning real-time decision services, streaming pipelines, and ML inference integration. You will also help lay the foundation for emerging AI-enabled enforcement flows, including agentic workflows that reason, adapt, and take multi-step actions using large language models and learned policies.

This is a hands-on IC leadership role for someone who thrives at the intersection of deep system design, web-scale performance, and long-term platform evolution, and who is both curious about how AI can augment infrastructure and pragmatic about where and how it should.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Required Qualifications
- Bachelor's Degree in Computer Science or a related technical field AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
- 6+ years of experience in backend or distributed systems engineering, with a proven record of system-level architectural leadership.
- Advanced proficiency in C++, C#, or equivalent systems languages.
- Deep experience designing and scaling streaming or real-time systems (e.g., Kafka, Flink, Beam).
- Solid command of performance profiling, load testing, capacity planning, and operational rigor.
- Comfort designing systems for high QPS, low latency, and regulatory traceability.
- Familiarity with ML inference orchestration, model deployment workflows, or online feature pipelines.

Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screening: Microsoft Cloud Background Check. This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications
- Bachelor's Degree in Computer Science or a related technical field AND 10+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Master's Degree in Computer Science or a related technical field AND 8+ years of technical engineering experience; OR equivalent experience.
- 10+ years of experience in backend or distributed systems engineering, with a proven record of system-level architectural leadership.
- Understanding of LLM integration, RAG, or agentic task flows, even at an architectural/infrastructure layer.
- Experience building systems that support human-in-the-loop moderation, policy evolution, or adaptive enforcement logic.
- Experience building efficient, scalable ML inference platforms.

#MicrosoftAI

Responsibilities
- Design and evolve large-scale, low-latency distributed systems that evaluate ads, content, and signals in milliseconds across global workloads.
- Lead architectural efforts across stream processing pipelines, real-time scoring services, policy engines, and ML integration points.
- Define system-level strategies for scalability, performance optimization, observability, and failover resilience.
- Partner with ML engineers and applied scientists to integrate models into production with cost-efficiency, modularity, and runtime predictability.
- Guide technical direction for next-generation capabilities, including the early architecture of agentic/LLM-powered policy orchestration flows.
- Influence platform-wide standards, review designs across teams, and mentor senior engineers through deep technical leadership.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
0 years
0 Lacs
Jammu, Jammu & Kashmir, India
On-site
Job Title: Machine Learning Engineer Location: Cambridge, UK (Hybrid working – minimum 3 days on-site) Job Type: Permanent Salary: Competitive + Benefits Overview A high-performance computing company based in Cambridge is seeking a Machine Learning Engineer to join their expanding ML team. You will play a key role in developing and deploying machine learning models for applications such as speech recognition and large language models, with a strong focus on performance and efficiency. This role offers a unique opportunity to work at the intersection of machine learning research and systems optimisation, with a particular emphasis on scalable model training and low-latency inference deployment. Essential Skills And Experience Strong experience in training machine learning models for real-world applications. MSc or PhD in a technical discipline (e.g. Computer Science, Engineering, Maths) with solid grounding in deep learning. Proficiency in PyTorch or TensorFlow. Experience working with large datasets (multi-TB scale). Familiarity with model compression techniques such as reduced-precision computation. Practical experience with Linux development environments, Git, and CI tools. Excellent communication and documentation skills. Company And Culture You’ll be joining a small, collaborative team of engineers and researchers focused on creating cutting-edge ML solutions with an emphasis on performance and energy efficiency. The company fosters a friendly, inclusive environment where team members contribute directly to technical and strategic outcomes. Located in central Cambridge, the team works in a hybrid pattern with a minimum of three days per week on-site. Flexible arrangements can be considered for those with specific requirements. The company is an equal opportunities employer, committed to building a diverse and inclusive workforce. If you’re an AI/ML Engineer and looking for an exciting new opportunity, please do apply to learn more! If you’d like to find out more about this or other AI/ML and Computer Vision opportunities, please contact Oscar Harper at IC Resources. Show more Show less
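As a rough illustration of the reduced-precision computation mentioned above, the following sketch compares full-precision and bfloat16 inference for a small PyTorch model; the layer sizes are arbitrary placeholders, and on GPU the same idea would typically use float16.

```python
import torch
import torch.nn as nn

# Arbitrary small model standing in for a speech or language network.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
x = torch.randn(8, 512)

with torch.no_grad():
    full = model(x)  # float32 baseline

# Reduced-precision inference via autocast; trades a little accuracy for speed/memory.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    reduced = model(x)

print("max abs difference:", torch.max(torch.abs(full - reduced.float())).item())
```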
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
About Lighthouz AI Lighthouz AI is automating the back office of freight finance with freight-native AI agents. We help freight brokers, 3PLs, and factoring companies process invoices, rate confirmations, and PoDs in seconds—not hours—by replacing manual audits and brittle RPA with intelligent automation. Our platform handles real-world document chaos—scanned and handwritten paperwork, NOAs, BOLs, emails, and portal logins—executing complex workflows automatically. The result: faster payments, fewer disputes, and 10x operational throughput. We’re a Y Combinator S24 company founded by a team with deep experience across AI, supply chain, and enterprise systems (Google, Georgia Tech, Progressive, Halliburton). At Lighthouz, we’re not just streamlining freight finance—we’re rebuilding it from the ground up. About The Role We’re hiring a Senior AI Engineer with expertise in Computer Vision, document understanding, and voice AI to help build the brains behind our AI agents. You’ll work on the two core components of our AI agents – first, the core perception systems that extract structured insights from messy, real-world freight documents—handwritten, scanned, distorted, or multi-page – and second, our AI agents for email and voice communications between freight entities. You will do a lot of prompt engineering, fine-tuning LLMs, building large-scale document classification and entity extraction models, communication understanding, intent classification, and voice AI – your code will be at the heart of automating financial decision-making in freight. You’ll collaborate closely with the backend and product teams to bring AI models to life in production environments and continuously improve performance in the wild. What You’ll Do 👉🏼 Build and fine-tune AI models for document classification, OCR, entity recognition, and layout parsing 👉🏼 Build AI agents for email and phone communications between different freight accounting parties – payer and payee 👉🏼 Develop scalable pipelines for pre-processing, training, inference, and feedback loops 👉🏼 Evaluate and integrate VLMs 👉🏼 Annotate, clean, and curate diverse freight documents for robust model performance 👉🏼 Build training, evaluation, and test datasets 👉🏼 Identify issues in production data and fix them quickly 👉🏼 Iterate on and improve the existing and new AI stack 👉🏼 Productionize AI models as part of Lighthouz’s intelligent automation stack 👉🏼 Collaborate with backend engineers to integrate model outputs into document, email, and voice workflows 👉🏼 Continuously monitor and improve model performance in real-world conditions What We’re Looking For 👉🏼 3–6 years of experience in ML or AI roles, preferably focused on computer vision or document AI 👉🏼 Strong foundation in deep learning frameworks (e.g., PyTorch, TensorFlow) 👉🏼 Experience in fine-tuning VLMs and LLMs 👉🏼 Experience in voice AI 👉🏼 Experience with document/image OCR, visual transformers, and multimodal models 👉🏼 Proficiency in Python and common ML tooling (e.g., Hugging Face, OpenCV, spaCy) 👉🏼 Hands-on experience training and deploying models in production 👉🏼 Strong problem-solving skills and a builder mindset—you move fast and iterate faster 👉🏼 Comfortable working with ambiguity and evolving datasets 👉🏼 Willingness to work long hours Nice to Have 👉🏼 Familiarity with freight, logistics, or fintech workflows 👉🏼 Experience with AWS, Azure, or GCP-based ML infrastructure 👉🏼 Exposure to RAG pipelines, foundation models, or vector search systems 👉🏼 Knowledge of document layout understanding (e.g., Donut, LayoutLM, PubLayNet) 👉🏼 Background in building secure, production-grade ML services What We Offer 💰 Competitive salary 🌎 Fully remote 🛠️ High ownership, zero bureaucracy—help shape our AI stack from day one 🚀 Work on impactful real-world problems that blend AI and automation at scale
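To make the document-extraction work concrete, here is a minimal sketch of OCR plus regex-based field extraction. The file name, field patterns, and invoice layout are hypothetical; production systems of the kind described above would use layout-aware models (e.g., LayoutLM or Donut) rather than regexes.

```python
import re

import pytesseract  # requires the Tesseract OCR binary to be installed
from PIL import Image

# Hypothetical scanned freight document.
page = Image.open("rate_confirmation.png")
text = pytesseract.image_to_string(page)

# Toy extraction rules; the patterns and field names are invented for illustration.
invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text, re.IGNORECASE)
total = re.search(r"Total\s*[:$]?\s*([\d,]+\.\d{2})", text, re.IGNORECASE)

print("invoice:", invoice_no.group(1) if invoice_no else None)
print("total:", total.group(1) if total else None)
```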
Posted 1 week ago
2.0 years
0 Lacs
India
Remote
About Lighthouz AI Lighthouz AI is automating the back office of freight finance with freight-native AI agents. We help freight brokers, 3PLs, and factoring companies process invoices, rate confirmations, and PoDs in seconds—not hours—by replacing manual audits and brittle RPA with intelligent automation. Our platform handles real-world document chaos—scanned and handwritten paperwork, NOAs, BOLs, emails, and portal logins—executing complex workflows automatically. The result: faster payments, fewer disputes, and 10x operational throughput. We’re a Y Combinator S24 company founded by a team with deep experience across AI, supply chain, and enterprise systems (Google, Georgia Tech, Progressive, Halliburton). At Lighthouz, we’re not just streamlining freight finance—we’re rebuilding it from the ground up. About The Role We’re hiring a Senior AI Engineer with expertise in Computer Vision, document understanding, and voice AI to help build the brains behind our AI agents. You’ll work on the two core components of our AI agents – first, the core perception systems that extract structured insights from messy, real-world freight documents—handwritten, scanned, distorted, or multi-page – and second, our AI agents for email and voice communications between freight entities. You will do a lot of prompt engineering, fine-tuning LLMs, building large-scale document classification and entity extraction models, communication understanding, intent classification, and voice AI – your code will be at the heart of automating financial decision-making in freight. You’ll collaborate closely with the backend and product teams to bring AI models to life in production environments and continuously improve performance in the wild. What You’ll Do 👉🏼 Build and fine-tune AI models for document classification, OCR, entity recognition, and layout parsing 👉🏼 Build AI agents for email and phone communications between different freight accounting parties – payer and payee 👉🏼 Develop scalable pipelines for pre-processing, training, inference, and feedback loops 👉🏼 Evaluate and integrate VLMs 👉🏼 Annotate, clean, and curate diverse freight documents for robust model performance 👉🏼 Build training, evaluation, and test datasets 👉🏼 Identify issues in production data and fix them quickly 👉🏼 Iterate on and improve the existing and new AI stack 👉🏼 Productionize AI models as part of Lighthouz’s intelligent automation stack 👉🏼 Collaborate with backend engineers to integrate model outputs into document, email, and voice workflows 👉🏼 Continuously monitor and improve model performance in real-world conditions What We’re Looking For 👉🏼 2+ years of experience in ML or AI roles, preferably focused on computer vision or document AI 👉🏼 Strong foundation in deep learning frameworks (e.g., PyTorch, TensorFlow) 👉🏼 Experience in fine-tuning VLMs and LLMs 👉🏼 Experience in voice AI 👉🏼 Experience with document/image OCR, visual transformers, and multimodal models 👉🏼 Proficiency in Python and common ML tooling (e.g., Hugging Face, OpenCV, spaCy) 👉🏼 Hands-on experience training and deploying models in production 👉🏼 Strong problem-solving skills and a builder mindset—you move fast and iterate faster 👉🏼 Comfortable working with ambiguity and evolving datasets 👉🏼 Willingness to work long hours Nice to Have 👉🏼 Familiarity with freight, logistics, or fintech workflows 👉🏼 Experience with AWS, Azure, or GCP-based ML infrastructure 👉🏼 Exposure to RAG pipelines, foundation models, or vector search systems 👉🏼 Knowledge of document layout understanding (e.g., Donut, LayoutLM, PubLayNet) 👉🏼 Background in building secure, production-grade ML services What We Offer 💰 Competitive salary 🌎 Fully remote 🛠️ High ownership, zero bureaucracy—help shape our AI stack from day one 🚀 Work on impactful real-world problems that blend AI and automation at scale
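As an illustration of the intent-classification work mentioned above, here is a minimal zero-shot sketch using a public Hugging Face model as a stand-in for a fine-tuned intent classifier; the sample email and intent labels are invented for demonstration.

```python
from transformers import pipeline

# Zero-shot classification as a stand-in for a fine-tuned intent model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

email = "Hi, we still haven't received payment for load 48213. Can you confirm the remittance date?"
intents = ["payment status inquiry", "document request", "dispute", "rate negotiation"]

result = classifier(email, candidate_labels=intents)
print(result["labels"][0], round(result["scores"][0], 3))
```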
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a highly skilled Generative AI Developer with expertise in Large Language Models (LLMs) to join our AI/ML innovation team. The ideal candidate will be responsible for building, fine-tuning, deploying, and optimizing generative AI models to solve complex real-world problems. You will collaborate with data scientists, machine learning engineers, product managers, and software developers to drive forward next-generation AI-powered solutions. Responsibilities : Design and develop AI-powered applications using large language models (LLMs) such as GPT, LLaMA, Mistral, Claude, or similar. Fine-tune pre-trained LLMs for specific tasks (e.g., text summarization, Q&A systems, chatbots, semantic search). Build and integrate LLM-based APIs into products and systems. Optimize inference performance, latency, and throughput of LLMs for deployment at scale. Conduct prompt engineering and design strategies for prompt optimization and output consistency. Develop evaluation frameworks to benchmark model quality, response accuracy, safety, and bias. Manage training data pipelines and ensure data privacy, compliance, and quality standards. Experiment with open-source LLM frameworks and contribute to internal libraries and tools. Collaborate with MLOps teams to automate deployment, CI/CD pipelines, and monitoring of LLM solutions. Stay up to date with state-of-the-art advancements in generative AI, NLP, and foundation models. Skills Required : LLMs & Transformers: Deep understanding of transformer-based architectures (e.g., GPT, BERT, T5, LLaMA, Falcon). Model Training/Fine-Tuning: Hands-on experience with training/fine-tuning large models using libraries such as Hugging Face Transformers, DeepSpeed, LoRA, PEFT. Prompt Engineering: Expertise in designing, testing, and refining prompts for specific tasks and outcomes. Python: Strong proficiency in Python with experience in ML and NLP libraries. Frameworks: Experience with PyTorch, TensorFlow, Hugging Face, LangChain, or similar frameworks. MLOps: Familiarity with tools like MLflow, Kubeflow, Airflow, or SageMaker for model lifecycle management. Data Handling: Experience with data pipelines, preprocessing, and working with structured and unstructured data. Desirable Skills : Deployment: Knowledge of deploying LLMs on cloud platforms like AWS, GCP, Azure, or edge devices. Vector Databases: Experience with FAISS, Pinecone, Weaviate, or ChromaDB for semantic search applications. LLM APIs: Experience integrating with APIs like OpenAI, Cohere, Anthropic, Mistral, etc. Containerization: Docker, Kubernetes, and cloud-native services for scalable model deployment. Security & Ethics: Understanding of LLM security, hallucination handling, and responsible AI practices. Qualifications : Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or related field. 2-4 years of experience in ML/NLP roles, with at least 1-2 years specifically focused on generative AI and LLMs. Prior experience working in a research or product-driven AI team is a plus. Strong communication skills to explain technical concepts and findings. Soft Skills : Analytical thinker with a passion for solving complex problems. Team player who thrives in cross-functional settings. Self-driven, curious, and always eager to learn the latest advancements in AI. Ability to work independently and deliver high-quality solutions under tight deadlines. (ref:hirist.tech)
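For illustration, here is a minimal retrieval-augmented generation (RAG) sketch combining sentence-transformer embeddings, a FAISS index, and an LLM API call, which are all technologies named in the posting. The documents, model names, and prompt are placeholder assumptions, and the OpenAI call assumes an API key is configured in the environment.

```python
import faiss  # pip install faiss-cpu
from openai import OpenAI
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 7 business days.",
    "Premium users get priority support via chat.",
    "Invoices can be downloaded from the billing page.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search over document embeddings
index.add(vectors)

question = "How long do refunds take?"
query = embedder.encode([question], convert_to_numpy=True).astype("float32")
_, ids = index.search(query, 2)
context = "\n".join(docs[i] for i in ids[0])

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[{"role": "user", "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
)
print(reply.choices[0].message.content)
```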
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description And Requirements CareerArc Code CA-SW Hybrid "At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The IZOT product line includes BMC’s Intelligent Z Optimization & Transformation products, which help the world’s largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, the mainframe integration, the speed of application development, the quality of the code and the applications’ security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis. BMC is looking for a talented Python Developer to join our family, working on complex and distributed software, developing and debugging software products, implementing features, and assisting the firm in assuring product quality. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Responsibilities We are seeking a Python Developer with AI/ML expertise to join a highly motivated team responsible for developing and maintaining innovation for mainframe capacity and cost management. As an Application Developer at BMC, you will be responsible for: Developing and integrating AI/ML models with a focus on Generative AI (GenAI), Retrieval-Augmented Generation (RAG), and Vector Databases to enhance intelligent decision-making. Building scalable AI pipelines for real-time and batch inference, optimizing model performance, and deploying AI-driven applications. Implementing RAG-based architectures using LLMs (Large Language Models) for intelligent search, chatbot development, and knowledge management. Utilizing vector databases (e.g., FAISS, ChromaDB, Weaviate, Pinecone) to enable efficient similarity search and AI-driven recommendations. Developing modern web applications using Angular to create interactive and AI-powered user interfaces. To ensure you’re set up for success, you will bring the following skillset & experience: 7+ years of experience in designing and implementing AI/ML-driven applications. Strong proficiency in Python and AI/ML frameworks like TensorFlow, PyTorch, Hugging Face Transformers, and LangChain. Experience with Vector Databases (FAISS, ChromaDB, Weaviate, Pinecone) for semantic search and embeddings. Hands-on expertise in LLMs (GPT, LLaMA, Mistral, Claude, etc.) and fine-tuning/customizing models. Proficiency in Retrieval-Augmented Generation (RAG) and prompt engineering for AI-driven applications. Experience with Angular for developing interactive web applications. Experience with RESTful APIs, FastAPI, Flask, or Django for AI model serving. Working knowledge of SQL and NoSQL databases for AI/ML applications.
Hands-on experience with Git/GitHub, Docker, and Kubernetes for AI/ML model deployment. BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 4,542,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. ( Returnship@BMC ) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply.
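As a small illustration of model serving with FastAPI, one of the frameworks listed in the posting, here is a toy scoring endpoint; the route name, request schema, and scoring rule are placeholders rather than anything BMC-specific, and a real service would call an actual model instead of a keyword check.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    text: str

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder for a real model call (e.g., embeddings plus a classifier or an LLM).
    risk = 1.0 if "urgent" in req.text.lower() else 0.1
    return {"risk": risk}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```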
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Greater Kolkata Area
Remote
Role : AI/ML Engineer Exp : 4-7 Years Location : Nagpur/remote Job Description We are looking for a talented and experienced AI/ML Engineer to join our growing data science team. The ideal candidate will have strong expertise in machine learning, deep learning, and model deployment, with a passion for solving complex real-world problems using AI. Role & Responsibilities Design, develop, and deploy machine learning and deep learning models for various use cases Collaborate with cross-functional teams to gather requirements and identify AI-driven solutions Perform data preprocessing, feature engineering, and data visualization Evaluate model performance using appropriate metrics and improve as needed Optimize and scale models for performance, accuracy, and real-time inference Deploy ML models into production environments using CI/CD pipelines and MLOps best practices Document model architecture, data flow, and performance reports Stay up-to-date with the latest advancements in AI, ML, and related technologies Strong experience with Python and ML libraries such as scikit-learn, TensorFlow, Keras, PyTorch Proficiency in data analysis, feature engineering, and model selection techniques Hands-on experience with deep learning architectures: CNNs, RNNs, Transformers, etc. Experience with NLP, computer vision, or time-series forecasting is a plus Familiarity with ML pipelines, model versioning, and MLOps tools (MLflow, Kubeflow, etc.) Strong understanding of data structures, algorithms, and statistical methods Experience with cloud platforms like AWS, Azure, or GCP for model training and deployment Proficiency with SQL and NoSQL databases for data retrieval and storage (ref:hirist.tech) Show more Show less
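To illustrate the MLOps and model-versioning practices listed above, here is a short MLflow tracking sketch around a scikit-learn model; the dataset and hyperparameters are synthetic placeholders used only to show the logging flow.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```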
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Description Responsibilities : Identify relevant data sources and combine multiple sources to make the data useful. Build the automation of the collection processes. Pre-processing of structured and unstructured data, leveraging NLP techniques for text data. Handle large amounts of information to create the input for analytical models, incorporating Gen AI for advanced data processing and generation. Build predictive models, machine learning, and deep learning algorithms; innovate with Gen AI applications in model development. Build network graphs, apply NLP techniques for text analysis, and design forecasting models while building data pipelines for end-to-end solutions. Propose solutions and strategies to address business challenges, integrating Gen AI and NLP in practical applications. Collaborate with product development teams and communicate with Senior Leadership teams. Participate in problem-solving sessions, leveraging NLP and Gen AI for innovative solutions. Requirements Bachelor's degree in a highly quantitative field (e.g., Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience. Extensive machine learning and algorithmic background with deep expertise in Gen AI and Natural Language Processing (NLP) techniques, along with a strong understanding of supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, and network graph analysis. Advanced knowledge of NLP methods, including text generation, sentiment analysis, named entity recognition, and language modelling. Strong math skills, including proficiency in statistics, linear algebra, and probability, with the ability to apply these concepts in Gen AI and NLP solutions. Proven problem-solving aptitude with the ability to apply NLP and Gen AI tools to real-world business challenges. Excellent communication skills with the ability to translate complex technical information, especially related to Gen AI and NLP, into clear insights for non-technical stakeholders. Fluency in at least one data science/analytics programming language (e.g., Python, R, Julia), with expertise in NLP and Gen AI libraries like TensorFlow, PyTorch, Hugging Face, or OpenAI tools. Start-up experience is a plus, with ideally 5-8 years of advanced analytics experience in startups or marquee companies, particularly in roles leveraging Gen AI and NLP for product or business innovations. Required Skills Deep Learning (DL), Algorithms, Computer Science & Engineering, Operations Research, Math Skills, Communication Skills, SAAS Product, IT Services, Artificial Intelligence (AI), ERP Systems, Product Management, Automation, Analytical Models, Predictive Models, Natural Language Processing (NLP), Gen AI, Forecasting Models, Product Development, Leadership, Problem Solving, Unsupervised Learning, Reinforcement Learning, Algebra, Data Science, Programming Languages: Python, Julia (ref:hirist.tech)
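As a concrete illustration of two of the NLP methods named above (named entity recognition and sentiment analysis), here is a short sketch using Hugging Face pipelines with their default models; the example sentence is invented.

```python
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # default NER model
sentiment = pipeline("sentiment-analysis")            # default sentiment model

text = "Acme Corp missed the delivery window in Mumbai and the client is unhappy."

print(ner(text))        # entities such as organisations and locations
print(sentiment(text))  # polarity label with a confidence score
```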
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Title : Data Scientist- Lead Location : Bangalore, Pune, Mumbai, Kolkata Experience : 5 to 14 Years About Prismforce Prismforce is a Vertical SaaS company revolutionizing the Talent Supply Chain for global Technology, R&D/Engineering, and IT Services companies. Our AI-powered product suite enhances business performance by enabling operational flexibility, accelerating decision-making, and boosting profitability. Our mission is to become the leading industry cloud/SaaS platform for tech services and talent organizations worldwide. Responsibilities Identify relevant data sources a combination of data sources to make it useful. Build the automation of the collection processes. Pre-processing of structured and unstructured data. Handle large amounts of information to create the input to analytical Models. Build predictive models and machine-learning algorithms Innovate Machine-Learning , Deep-Learning algorithms. Build Network graphs , NLP , Forecasting Models Building data pipelines for end-to-end solutions. Propose solutions and strategies to business challenges. Collaborate with product development teams and communicate with the Senior Leadership teams. Participate in Problem solving sessions Requirements Bachelor's degree in a highly quantitative field (e.g., Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience. Extensive machine learning and algorithmic background with deep expertise in Gen AI and Natural Language Processing (NLP) techniques, along with a strong understanding of supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, and network graph analysis. Advanced knowledge of NLP methods, including text generation, sentiment analysis, named entity recognition, and language modelling. Strong math skills, including proficiency in statistics, linear algebra, and probability, with the ability to apply these concepts in Gen AI and NLP solutions. Proven problem-solving aptitude with the ability to apply NLP and Gen AI tools to real-world business challenges. Excellent communication skills with the ability to translate complex technical information, especially related to Gen AI and NLP, into clear insights for non-technical stakeholders. Fluency in at least one data science/analytics programming language (e.g., Python, R, Julia), with expertise in NLP and Gen AI libraries like TensorFlow, PyTorch, Hugging Face, or OpenAI tools. Start-up experience is a plus, with ideally 5-8 years of advanced analytics experience in startups or marquee companies, particularly in roles leveraging Gen AI and NLP for product or business innovations. Required Skills Machine Learning, Deep Learning, Algorithms Computer Science, Engineering, Operations Research Math Skills, Communication Skills, SAAS Product, IT Services Artificial Intelligence, ERP Product Management, Automation, Analytical Models Predictive Models, NLP What Makes Us Unique First-Mover Advantage : We are the only Vertical SaaS product company addressing Talent Supply Chain challenges in the IT services industry. Innovative Product Suite : Our solutions offer forward-thinking features that outshine traditional ERP systems. Strategic Expertise : Guided by an advisory board of ex-CXOs from top global IT firms, providing unmatched industry insights. Experienced Leadership : Our founding team brings deep expertise from leading firms like McKinsey, Deloitte, Amazon, Infosys, TCS, and Uber. 
Diverse and Growing Team : We have grown to 160+ employees across India, with hubs in Mumbai, Pune, Bangalore, and Kolkata. Strong Financial Backing : Series A-funded by Sequoia, with global IT companies using our product as a core solution. Why Join Prismforce Competitive Compensation : We offer an attractive salary and benefits package that rewards your contributions. Innovative Projects : Work on pioneering projects with cutting-edge technologies transforming the Talent Supply Chain. Collaborative Environment : Thrive in a dynamic, inclusive culture that values teamwork and innovation. Growth Opportunities : Continuous learning and development are core to our philosophy, helping you advance your career. Flexible Work : Enjoy flexible work arrangements that balance your work-life needs. By joining Prismforce, you'll become part of a rapidly expanding, innovative company that's reshaping the future of tech services and talent management. Perks & Benefits Work with the best in the industry: Work with a high-pedigree leadership team that will challenge you, build on your strengths and invest in your personal development Insurance Coverage-Group Mediclaim cover for self, spouse, kids and parents & Group Term Life Insurance Policy for self. Flexible Policies Retiral Benefits Hybrid Work Model Self-driven career progression tool Attractive ESOPs (ref:hirist.tech) Show more Show less
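To illustrate the network-graph analysis mentioned in the responsibilities above, here is a toy sketch that builds a small graph with networkx and computes degree centrality; the nodes and edges are invented examples, not Prismforce data.

```python
import networkx as nx

# Invented collaboration graph between roles in a talent-supply-chain setting.
G = nx.Graph()
G.add_edges_from([
    ("analyst_a", "engineer_b"),
    ("engineer_b", "lead_c"),
    ("lead_c", "analyst_a"),
    ("lead_c", "pm_d"),
])

centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True))
```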
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Responsibilities Identify relevant data sources a combination of data sources to make it useful. Build the automation of the collection processes. Pre-processing of structured and unstructured data. Handle large amounts of information to create the input to analytical Models. Build predictive models and machine-learning algorithms Innovate Machine-Learning , Deep-Learning algorithms. Build Network graphs , NLP , Forecasting Models Building data pipelines for end-to-end solutions. Propose solutions and strategies to business challenges. Collaborate with product development teams and communicate with the Senior Leadership teams. Participate in Problem solving sessions Requirements Bachelor's degree in a highly quantitative field (e.g., Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience. Extensive machine learning and algorithmic background with deep expertise in Gen AI and Natural Language Processing (NLP) techniques, along with a strong understanding of supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, and network graph analysis. Advanced knowledge of NLP methods, including text generation, sentiment analysis, named entity recognition, and language modelling. Strong math skills, including proficiency in statistics, linear algebra, and probability, with the ability to apply these concepts in Gen AI and NLP solutions. Proven problem-solving aptitude with the ability to apply NLP and Gen AI tools to real-world business challenges. Excellent communication skills with the ability to translate complex technical information, especially related to Gen AI and NLP, into clear insights for non-technical stakeholders. Fluency in at least one data science/analytics programming language (e.g., Python, R, Julia), with expertise in NLP and Gen AI libraries like TensorFlow, PyTorch, Hugging Face, or OpenAI tools. Start-up experience is a plus, with ideally 5-8 years of advanced analytics experience in startups or marquee companies, particularly in roles leveraging Gen AI and NLP for product or business innovations. Required Skills Machine Learning, Deep Learning, Algorithms Computer Science, Engineering, Operations Research Math Skills, Communication Skills, SAAS Product, IT Services Artificial Intelligence, ERP Product Management, Automation, Analytical Models Predictive Models, NLP (ref:hirist.tech) Show more Show less
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us Job Description - Data Scientist - NoBroker.com NoBroker.com is the world's largest brokerage-free real estate marketplace, dedicated to transforming the real estate experience with cutting-edge products. Founded by IIT Bombay, IIT Kanpur, and IIM Ahmedabad alumni in 2014, we've raised $366M from prominent investors like General Atlantic, Tiger Global, and Saif Partners. Our comprehensive real estate ecosystem includes NoBrokerHood for community management, NoBroker HomeServices for property management, and a range of services like Legal Services, Rental Agreements, Packers & Movers, HomeLoans, Rent Payments, and more. Role As a Data Scientist at NoBroker.com, you will play a crucial role in advancing ConvoZen.AI. You'll work with a dedicated team to develop and enhance our conversational AI capabilities, applying NLP, LLMs, and other machine learning techniques to create impactful solutions. This role offers the opportunity to work with modern tools and technologies. Key Responsibilities Develop Conversational AI Models : Build and refine NLP and LLM models to enhance ConvoZen.AI's ability to derive insights from customer conversations using modern frameworks. Fine-Tuning LLMs : Engage in fine-tuning large language models (LLMs) to improve their performance on specific tasks, leveraging techniques like transfer learning and domain adaptation. Deploy and Monitor Models : Implement and maintain NLP models in production environments, ensuring they perform reliably and efficiently. Utilize deployment tools and best practices to streamline this process. Collaborate with Data Science & ML Team : Work closely with the team to tackle large-scale data problems and ensure solutions are production-ready. End-to-End AI System Development : Engage in the full development cycle of AI systems, from ETL processes to production deployment, with a focus on conversational AI. Consumer and Enterprise Applications : Develop applications for speech and language models that serve both consumer and enterprise needs, leveraging ConvoZen.AI. Model Maintenance and Enhancement : Participate in the validation, monitoring, and maintenance of existing ML models and heuristic algorithms in production. ML Lifecycle Management : Oversee the ML lifecycle from experimentation and analysis to modeling and productionizing relevant workloads. Project Planning and Execution : Plan, develop, and execute team projects focused on both research and development, with an emphasis on conversational AI. Infrastructure Development : Enhance our ML infrastructure, ensuring robust support for training and inference stages. Automation and Smart Solutions : Partner with multiple teams to develop automation engines and smart solutions, enhancing efficiency and effectiveness in sales and : Engineering background with strong mathematical and programming skills in C++ and Python. Minimum of 2 years of relevant experience in data science or machine learning. Proven track record in Analytics, Machine Learning, and Computer Sciences. Experience in building and deploying large-scale ML models, particularly in NLP, ASR & Computer Vision. Deep understanding and passion for NLP, LLMs, and conversational AI technologies. Proficiency in Python, databases, and Linux environments. Expertise in modern ML frameworks and tools like PyTorch, vLLM, TensorFlow, and Scikit-Learn. Strong analytical and quantitative skills, with the ability to communicate results effectively across teams. Ability to thrive in a fast-paced, deadline-driven environment. 
(ref:hirist.tech) Show more Show less
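As an illustration of the LLM fine-tuning described above, here is a minimal parameter-efficient setup using LoRA via the PEFT library; the posting does not specify a method, so this is purely illustrative, GPT-2 is used only as a small stand-in base model, and the LoRA hyperparameters are arbitrary.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # small stand-in; a production setup would use a larger chat model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Illustrative LoRA hyperparameters; "c_attn" is the attention projection in GPT-2.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["c_attn"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```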
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
The Role The ideal candidate for this role will be an innovative self-starter. You will be an AI expert with experience in making architectural tradeoffs to transform AI performance for a variety of use cases. You will collaborate with internal and external development engineers (architecture, hardware, validation, software services). You will contribute to development, support device characterization and benchmarking : Proficiency in Large Models & Deep Neural Networks. Hands-on experience in working with large models & deep neural networks. Expertise in LLMs with working knowledge of large language models (LLMs). Extensive experience in System platform Architecture. Experience in Development Preferable for memory/storage/ any embedded system. In depth knowledge and extensive experience in dealing with Standardizations/Technical Papers/Patents. Extensive experience with C/C++ and Python programming. Develop and fine-tune LLMs (GPT, Llama, Mistral, Falcon, Claude, etc.) for domain-specific applications. Implement RAG pipelines using LlamaIndex, LangGraph, and vector databases (FAISS, Pinecone, Weaviate, ChromaDB) to enhance response accuracy. Build AI-powered chatbots and autonomous agents using LangGraph, CrewAI, LlamaIndex, and OpenAI APIs. Optimize and deploy generative AI models for real-time inference using cloud platforms (AWS, GCP, Azure) and MLOps tools (Docker, Kubernetes, MLflow). Fine-tune models using LoRA, QLoRA, PEFT, and RLHF to improve efficiency and personalization. Develop AI-driven workflows for structured reasoning and decision-making using CrewAI and LangGraph. Integrate multi-modal AI models (text, image, speech) into enterprise solutions. Implement memory and retrieval strategies for LLM-based systems using vector search and caching techniques. Ensure AI models follow ethical AI guidelines, bias mitigation, and security best : Tech in Computer Science, Electrical Engineering. 8 to 10 Years of experience in relevant domain. Strong analytical & abstract thinking ability as well as technical communication skills. Able to work independently and perform in fast paced environment. Ability to troubleshoot and debug complex issues. Working Knowledge on Device Driver is desirable. Prior experience in working with Skills : Languages & Frameworks : Proficiency in Python, PyTorch, TensorFlow, JAX (ref:hirist.tech) Show more Show less
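To illustrate the vector-database retrieval mentioned above, here is a minimal ChromaDB sketch; the collection name, documents, and query are invented placeholders, and a persistent client with real embeddings would replace the in-memory defaults in practice.

```python
import chromadb

client = chromadb.Client()  # in-memory instance; persistent clients also exist
collection = client.create_collection(name="policies")

collection.add(
    ids=["1", "2", "3"],
    documents=[
        "LLM outputs must be logged for audit.",
        "Embeddings are refreshed nightly.",
        "PII must be masked before indexing.",
    ],
)

results = collection.query(query_texts=["how often are embeddings updated?"], n_results=1)
print(results["documents"][0])
```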
Posted 1 week ago
With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.
India's major tech hubs are known for their thriving technology industries and are actively hiring professionals with expertise in inference.
The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.
In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.
As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!