Jobs
Interviews

2024 Inference Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

20 Lacs

India

On-site

Experience: 10+ years

Job Description: The Generative AI Architect will play a pivotal role in designing, developing, and implementing advanced generative AI solutions that drive significant business impact for our clients. This role offers the exciting opportunity to work at the forefront of AI innovation.

Role Overview: The Generative AI Architect will be responsible for the end-to-end architecture, design, and deployment of scalable and robust generative AI systems. This includes conceptualizing solutions, selecting appropriate models and frameworks, overseeing development, and ensuring the successful integration of generative AI capabilities into existing and new platforms. You will work closely with business stakeholders to translate complex requirements into high-performance AI solutions.

Key Responsibilities:
- Architect and Design Generative AI Solutions: Lead the architectural design of generative AI systems, including model selection (LLMs), RAG, and fine-tuning approaches.
- Azure AI Expertise: Design and deploy scalable AI solutions leveraging a comprehensive suite of Azure AI services.
- Python Development: Write clean, efficient, and maintainable Python code for data processing, automation, and API integrations.
- Model Optimization and Performance: Optimize generative AI models for performance, scalability, and cost-efficiency.
- Data Strategy: Design data architectures and pipelines to ingest, process, and prepare data for generative AI model training and inference, utilizing Azure data services.
- Integration and Deployment: Oversee the integration of generative AI models into existing enterprise systems and applications. Implement robust MLOps practices, CI/CD pipelines (e.g., Azure DevOps, GitHub, Jenkins), and containerization (Docker, Kubernetes) for seamless deployment.
- Technical Leadership & Mentorship: Provide technical leadership and guidance to development teams, fostering best practices in AI model development, deployment, and maintenance.
- Research and Innovation: Stay abreast of the latest advancements in generative AI technologies, research methodologies, and industry trends. Drive proof-of-concepts (PoCs) and pilot implementations for new AI capabilities.
- Collaboration and Communication: Collaborate effectively with cross-functional teams, including product managers, data scientists, software engineers, and business analysts, to ensure AI solutions align with business goals and deliver tangible value. Articulate complex technical concepts to non-technical stakeholders.

Required Skills and Qualifications:
- Minimum of 12-16 years of experience in IT, with at least 3+ years specifically focused on Gen AI architecture and development.
- Technical Proficiency: Deep expertise in the Azure Cloud Platform and its AI services. Strong proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex). Solid understanding of large language models (LLMs), transformer architectures, diffusion models, and other generative AI techniques. Hands-on experience with prompt engineering, RAG pipelines, and vector databases (e.g., Pinecone, Weaviate, Chroma). Experience with MLOps, CI/CD, and deployment tools (Azure DevOps, GitHub, Kubernetes). Familiarity with RESTful API design and development.
- Architectural Principles: Strong understanding of cloud architecture, microservices architecture, and design patterns.
- Problem-Solving: Excellent analytical and problem-solving skills with the ability to think critically and creatively.
- Communication: Exceptional communication and interpersonal skills, with the ability to convey complex technical concepts clearly to both technical and non-technical audiences.
- Team Player: Ability to work collaboratively in a fast-paced, agile environment and lead projects with multiple stakeholders.

Desirable:
- Relevant Azure AI certifications
- Experience with other cloud platforms (AWS)
- Experience with fine-tuning LLMs for specific use cases
- Contributions to open-source AI projects or publications in AI/ML

Job Type: Full-time
Pay: From ₹2,000,000.00 per year
Schedule: Fixed shift

Application Question(s):
- How many years of total experience do you currently have?
- How many years of experience do you have in the Azure Cloud Platform and its AI services?
- How many years of experience do you have in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex)?
- How many years of experience do you have as an Architect?
- What is your current CTC?
- What is your expected CTC?
- What is your notice period/LWD?
- What is your current location?
- How many years of experience do you have in Gen AI architecture?
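The RAG pipelines and vector databases the posting names (Pinecone, Weaviate, Chroma) all implement the same retrieve-then-generate pattern. As an illustrative sketch only (not part of the posting, and with hand-made toy 3-d vectors standing in for a real embedding model), the core retrieval step reduces to cosine similarity over embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    # Rank stored passages by similarity to the query embedding.
    scored = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return scored[:top_k]

def build_prompt(question, passages):
    # Ground the LLM call in the retrieved context.
    context = "\n".join(f"- {p['text']}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy "vector database": hypothetical passages with invented embeddings.
index = [
    {"text": "Azure OpenAI hosts GPT models.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Kubernetes schedules containers.", "vec": [0.0, 0.2, 0.9]},
    {"text": "LoRA fine-tunes a small adapter.", "vec": [0.7, 0.6, 0.1]},
]

hits = retrieve([1.0, 0.2, 0.0], index, top_k=1)
print(hits[0]["text"])  # the Azure passage scores highest
```

In a production system the toy index would be replaced by a managed vector store and the prompt sent to an LLM endpoint; the ranking logic stays the same.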

Posted 1 week ago

Apply

2.0 years

4 - 7 Lacs

Hyderābād

On-site

ABOUT FLUTTER ENTERTAINMENT:
Flutter Entertainment is the world’s largest sports betting and iGaming operator, with 13.9 million average monthly players worldwide and annual revenue of $14Bn in 2024. We have a portfolio of iconic brands, including Paddy Power, Betfair, FanDuel, PokerStars, Junglee Games and Sportsbet. Flutter Entertainment is listed on both the New York Stock Exchange (NYSE) and the London Stock Exchange (LSE). In 2024, we were recognized in TIME’s 100 Most Influential Companies under the 'Pioneers' category—a testament to our innovation and impact. Our ambition is to transform global gaming and betting to deliver long-term growth and a positive, sustainable future for our sector. Together, we are Changing the Game!

Working at Flutter is a chance to work with a growing portfolio of brands across a range of opportunities. We will support you every step of the way to help you grow. Just like our brands, we ensure our people have everything they need to succeed.

FLUTTER ENTERTAINMENT INDIA:
Our Hyderabad office, located in one of India’s premier technology parks, is the Global Capability Center for Flutter Entertainment. A center of expertise and innovation, this hub is now home to 1,000+ talented colleagues working across Customer Service Operations, Data and Technology, Finance Operations, HR Operations, Procurement Operations, and other key enabling functions. We are committed to crafting impactful solutions for all our brands and divisions to power Flutter's incredible growth and global impact. With the scale of a leader and the mindset of a challenger, we’re dedicated to creating a brighter future for our customers, colleagues, and communities.

OVERVIEW OF THE ROLE:
We are seeking a technically skilled Regulatory Data Analyst to join our dynamic Data & Analytics (ODA) department in Hyderabad, India.
As a globally recognized and highly regulated brand, we are deeply committed to delivering accurate reporting and critical business insights that push the boundaries of our understanding through innovation. You'll be joining a team of exceptional data professionals with a strong command of analytical tools and statistical techniques. You’ll help shape the future of online gaming by leveraging robust technical capabilities to ensure regulatory compliance, support risk management, and strengthen business operations through advanced data solutions. You will work with large, complex datasets—interrogating and manipulating data using advanced SQL and Python, building scalable dashboards, and developing automation pipelines. Beyond in-depth analysis, you’ll create regulatory reports and visualizations for diverse audiences, and proactively identify areas to enhance efficiency and compliance through technical solutions.

KEY RESPONSIBILITIES:
- Query data from various database environments (e.g., DB2, MS SQL Server, Azure) using advanced SQL techniques
- Perform data processing and statistical analysis using tools such as Python, R, and Excel
- Translate regulatory data requirements into structured analysis using robust scripting and automation
- Design and build interactive dashboards and reporting pipelines using Power BI, Tableau, or MicroStrategy to highlight key metrics and regulatory KPIs
- Develop compelling data visualizations and executive summaries to communicate complex insights clearly to technical and non-technical stakeholders alike
- Collaborate with global business stakeholders to interpret jurisdiction-specific regulations and provide technically sound, data-driven insights
- Recommend enhancements to regulatory processes through data modelling, root cause analysis, and applied statistical techniques (e.g., regression, hypothesis testing)
- Ensure data quality, governance, and lineage in all deliverables, applying technical rigor and precision

TO EXCEL IN THIS ROLE, YOU WILL NEED:
- 2 to 4 years of relevant work experience as a Data Analyst or in a role focused on regulatory or compliance-based analytics
- Bachelor's degree in a quantitative or technical discipline (e.g., Mathematics, Statistics, Economics, or Computer Science)
- Proficiency in SQL, with the ability to write and optimize complex queries from scratch
- Strong programming skills in Python (or R) for automation, data wrangling, and statistical analysis
- Experience using data visualization and BI tools (MicroStrategy, Tableau, Power BI) to create dynamic dashboards and visual narratives
- Knowledge of data warehousing environments such as Microsoft SQL Server Management Studio or Amazon Redshift
- Ability to apply statistical methods such as time series analysis, regression, and causal inference to solve regulatory and business problems

BENEFITS WE OFFER:
- Access to Learnerbly, Udemy, and a Self-Development Fund for upskilling
- Career growth through Internal Mobility Programs
- Comprehensive Health Insurance for you and dependents
- Well-Being Fund and 24/7 Assistance Program for holistic wellness
- Hybrid Model: 2 office days/week with flexible leave policies, including maternity, paternity, and sabbaticals
- Free Meals, Cab Allowance, and a Home Office Setup Allowance
- Employer PF Contribution, gratuity, Personal Accident & Life Insurance
- Sharesave Plan to purchase discounted company shares
- Volunteering Leave and Team Events to build connections
- Recognition through the Kudos Platform and Referral Rewards

WHY CHOOSE US:
Flutter is an equal-opportunity employer and values the unique perspectives and experiences that everyone brings. Our message to colleagues and stakeholders is clear: everyone is welcome, and every voice matters. We have ambitious growth plans and goals for the future. Here's an opportunity for you to play a pivotal role in shaping the future of Flutter Entertainment India.
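The "advanced SQL" this posting asks for typically means aggregation and regulatory KPI queries over large tables. As a hypothetical illustration only (the table, columns, and KPI are invented, not Flutter's schema), such a query can be prototyped in-memory with Python's stdlib sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bets (jurisdiction TEXT, player_id INT, stake REAL);
INSERT INTO bets VALUES
  ('UK', 1, 10.0), ('UK', 1, 30.0), ('UK', 2, 5.0),
  ('IT', 3, 50.0), ('IT', 3, 70.0);
""")

# Regulatory-style KPI: total stake and average stake per player, by jurisdiction.
rows = conn.execute("""
    SELECT jurisdiction,
           SUM(stake) AS total_stake,
           ROUND(SUM(stake) * 1.0 / COUNT(DISTINCT player_id), 2) AS stake_per_player
    FROM bets
    GROUP BY jurisdiction
    ORDER BY jurisdiction
""").fetchall()

print(rows)  # [('IT', 120.0, 120.0), ('UK', 45.0, 22.5)]
```

The same SELECT runs largely unchanged on SQL Server or Redshift; prototyping locally first is a common analyst workflow.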

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

Overview: As a key member of the team, you will be responsible for designing, building, and maintaining the data pipelines and platforms that support analytics, machine learning, and business intelligence. You will lead a team of data engineers and collaborate closely with cross-functional stakeholders to ensure that data is accessible, reliable, secure, and optimized for AI-driven applications.

Responsibilities:
- Architect and implement scalable data solutions to support LLM training, fine-tuning, and inference workflows.
- Lead the development of ETL/ELT pipelines for structured and unstructured data across diverse sources.
- Ensure data quality, governance, and compliance with industry standards and regulations.
- Collaborate with Data Scientists, MLOps, and product teams to align data infrastructure with GenAI product goals.
- Mentor and guide a team of data engineers, promoting best practices in data engineering and DevOps.
- Optimize data workflows for performance, cost-efficiency, and scalability in cloud environments.
- Drive innovation by evaluating and integrating modern data tools and platforms (e.g., Databricks, Azure).

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 7+ years of experience in data engineering, with at least 2+ years in a leadership or senior role.
- Proven experience designing and managing data platforms and pipelines in cloud environments (Azure, AWS, or GCP).
- Experience supporting AI/ML workloads, especially involving Large Language Models (LLMs).
- Strong proficiency in SQL and Python.
- Hands-on experience with data orchestration tools.
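An ETL/ELT pipeline of the kind this role owns is, at its core, extract/transform/load stages with a data-quality gate between them. A minimal pure-Python sketch (the record shape and quality rule are invented for illustration; real pipelines would use an orchestrator and a warehouse sink):

```python
def extract():
    # Stand-in for reading from a source system (API, files, warehouse).
    return [
        {"id": 1, "text": "Hello World", "lang": "en"},
        {"id": 2, "text": "", "lang": "en"},          # empty: should be rejected
        {"id": 3, "text": "bonjour", "lang": "fr"},
    ]

def quality_gate(records):
    """Split records into valid rows and rejects, instead of failing silently."""
    valid, rejects = [], []
    for r in records:
        (valid if r["text"].strip() else rejects).append(r)
    return valid, rejects

def transform(records):
    # Example transform: normalize text for downstream LLM training data.
    return [{**r, "text": r["text"].strip().lower()} for r in records]

def load(records, sink):
    # Stand-in for writing to a table or object store.
    sink.extend(records)
    return len(records)

sink = []
valid, rejects = quality_gate(extract())
loaded = load(transform(valid), sink)
print(loaded, len(rejects))  # 2 1
```

Keeping rejects as first-class output (rather than dropping them) is what makes the governance and data-quality reporting mentioned above possible.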

Posted 1 week ago

Apply

5.0 years

9 - 16 Lacs

India

On-site

Gen-AI Tech Lead - Enterprise AI Applications

About Us
We're a cutting-edge technology company building enterprise-grade AI solutions that transform how businesses operate. Our platform leverages the latest in Generative AI to create intelligent applications for document processing, automated decision-making, and knowledge management across industries.

Role Overview
We're seeking an exceptional Gen-AI Tech Lead to architect, build, and scale our next-generation AI-powered enterprise applications. You'll lead the technical strategy for implementing Large Language Models, fine-tuning custom models, and deploying production-ready AI systems that serve millions of users.

Key Responsibilities

AI/ML Leadership (90% Hands-on)
- Design and implement enterprise-scale Generative AI applications using custom or foundation LLMs (GPT, Claude, Llama, Gemini)
- Lead fine-tuning initiatives for domain-specific models and custom use cases
- Build and optimize model training pipelines for large-scale data processing
- Develop RAG (Retrieval-Augmented Generation) systems with vector databases and semantic search
- Implement prompt engineering strategies and automated prompt optimization
- Create AI evaluation frameworks and model performance monitoring systems

Enterprise Application Development
- Build scalable Python applications integrating multiple AI models and APIs
- Develop microservices architecture for AI model serving and orchestration
- Implement real-time AI inference systems with sub-second response times
- Design fault-tolerant systems with fallback mechanisms and error handling
- Create APIs and SDKs for enterprise AI integration
- Build AI model version control and A/B testing frameworks

MLOps & Infrastructure
- Containerize AI applications using Docker and orchestrate with Kubernetes
- Design and implement CI/CD pipelines for ML model deployment
- Set up model monitoring, drift detection, and automated retraining systems
- Optimize inference performance and cost efficiency in cloud environments
- Implement security and compliance measures for enterprise AI applications

Technical Leadership
- Lead a team of 3-5 AI engineers and data scientists
- Establish best practices for AI development, testing, and deployment
- Mentor team members on cutting-edge AI technologies and techniques
- Collaborate with product and business teams to translate requirements into AI solutions
- Drive technical decision-making for AI architecture and technology stack

Required Skills & Experience

Core AI/ML Expertise
- Python: 5+ years of production Python development with AI/ML libraries
- LLMs: Hands-on experience with GPT-4, Claude, Llama 2/3, Gemini, or similar models
- Fine-tuning: Proven experience fine-tuning models using LoRA, QLoRA, or full parameter tuning
- Model Training: Experience training models from scratch or continued pre-training
- Frameworks: Expert-level knowledge of PyTorch, TensorFlow, Hugging Face Transformers
- Vector Databases: Experience with Pinecone, Weaviate, ChromaDB, or Qdrant

Technical Stack

AI/ML Stack
- Models: OpenAI GPT, Anthropic Claude, Meta Llama, Google Gemini
- Frameworks: PyTorch, Hugging Face Transformers, LangChain, LlamaIndex
- Training: Distributed training with DeepSpeed, Accelerate, or Fairscale
- Serving: vLLM, TensorRT-LLM, or Triton Inference Server
- Vector Search: Pinecone, Weaviate, FAISS, Elasticsearch

Infrastructure & DevOps
- Containerization: Docker, Kubernetes, Helm charts
- Cloud: AWS (ECS, EKS, Lambda, SageMaker), GCP Vertex AI
- Databases: PostgreSQL, MongoDB, Redis, Neo4j
- Monitoring: Prometheus, Grafana, DataDog, MLflow
- CI/CD: GitHub Actions, Jenkins, ArgoCD

Professional Growth
- Work directly with founders and C-level executives
- Opportunity to publish research and speak at AI conferences
- Access to the latest AI models and cutting-edge research
- Mentorship from industry experts and AI researchers
- Budget for attending top AI conferences (NeurIPS, ICML, ICLR)

Ideal Candidate Profile
- Passionate about pushing the boundaries of AI technology
- Strong engineering mindset with a focus on production systems
- Experience shipping AI products used by thousands of users
- Stays current with the latest AI research and implements cutting-edge techniques
- Excellent problem-solving skills and ability to work under ambiguity
- Leadership experience in fast-paced, high-growth environments

Apply now and help us democratize AI for enterprise customers worldwide.

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,600,000.00 per year
Schedule: Monday to Friday
Supplemental Pay: Performance bonus
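The "model version control and A/B testing frameworks" responsibility above usually starts with deterministic traffic splitting: hash each user ID into a bucket so the same user always sees the same model variant and metrics stay comparable. A minimal stdlib sketch (variant names and the 90/10 split are invented for illustration):

```python
import hashlib

def assign_variant(user_id, variants):
    """Deterministically map a user to a model variant by hashed bucket.

    variants maps variant name -> traffic fraction (fractions sum to 1.0).
    """
    # Stable hash -> uniform float in [0, 1].
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    cumulative = 0.0
    for name, share in variants.items():
        cumulative += share
        if bucket < cumulative:
            return name
    return name  # guard against float rounding at the top edge

split = {"llm-v1": 0.9, "llm-v2-candidate": 0.1}

# Same user always routes to the same variant.
assert assign_variant("user-42", split) == assign_variant("user-42", split)

counts = {"llm-v1": 0, "llm-v2-candidate": 0}
for i in range(1000):
    counts[assign_variant(f"user-{i}", split)] += 1
print(counts)  # roughly a 90/10 split
```

Hashing (rather than random choice per request) is the key design point: it gives sticky assignment without storing per-user state, which is what makes offline metric comparison between variants valid.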

Posted 1 week ago

Apply

3.0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Positions in this function are responsible for the management and manipulation of mostly structured data, with a focus on building business intelligence tools, conducting analysis to distinguish patterns and recognize trends, performing normalization operations and assuring data quality. Depending on the specific role and business line, example responsibilities in this function could include creating specifications to bring data into a common structure, creating product specifications and models, developing data solutions to support analyses, performing analysis, interpreting results, developing actionable insights and presenting recommendations for use across the company. Roles in this function could partner with stakeholders to understand data requirements and develop tools and models such as segmentation, dashboards, data visualizations, decision aids and business case analysis to support the organization. Other roles could include producing and managing the delivery of activity and value analytics to external stakeholders and clients. Team members will typically use business intelligence, data visualization, query, analytic and statistical software to build solutions, perform analysis and interpret data.
Positions in this function work on predominantly descriptive and regression-based analytics and tend to leverage subject-matter-expert views in the design of their analytics and algorithms. This function is not intended for employees performing the following work: production of standard or self-service operational reporting, causal-inference-led (healthcare analytics) or data pattern recognition (data science) analysis, and/or image or unstructured data analysis using sophisticated theoretical frameworks.

Primary Responsibilities:
- Analyze data and extract actionable insights, findings, and recommendations
- Develop data validation strategies to ensure accuracy and reliability of data
- Communicate data and findings effectively to internal and external senior executives with clear supporting evidence
- Relate analysis to the organization's overall business objectives
- Implement generative AI techniques to reduce manual effort and automate processes
- Analyze and investigate; provide explanations and interpretations within area of expertise
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- Experience in creating summarized reports of findings and recommendations
- Proficiency in SQL, Snowflake, and AI techniques
- Solid skills in Microsoft Excel, Word, PowerPoint, and Visio
- Ability to multitask, take initiative, and adapt to changing priorities
- Proven self-motivated team player with solid problem-solving and analytical skills

Preferred Qualifications:
- 3+ years of work experience
- 2+ years of experience working with a healthcare consulting firm
- Experience in data analytics and hands-on experience in Python programming, SQL, and Snowflake
- Proven creative, strategic thinker with excellent critical thinking skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
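"Develop data validation strategies" in the responsibilities above typically means codified checks that run before data feeds a report. A hedged pure-Python sketch (the rules and record shape are illustrative inventions, not Optum's): each rule is a named predicate, and failures are collected per rule rather than aborting on the first bad row.

```python
def validate(records, rules):
    """Apply named rule functions to each record; collect failing indices per rule."""
    failures = {name: [] for name in rules}
    for i, rec in enumerate(records):
        for name, rule in rules.items():
            if not rule(rec):
                failures[name].append(i)
    return failures

# Hypothetical validation rules for illustration.
rules = {
    "claim_amount_positive": lambda r: r["amount"] > 0,
    "member_id_present": lambda r: bool(r.get("member_id")),
}

records = [
    {"member_id": "M1", "amount": 120.0},
    {"member_id": "", "amount": 80.0},     # fails member_id_present
    {"member_id": "M3", "amount": -5.0},   # fails claim_amount_positive
]

failures = validate(records, rules)
print(failures)  # {'claim_amount_positive': [2], 'member_id_present': [1]}
```

Reporting failures per rule (with row indices) is what turns validation into an auditable artifact rather than a silent filter.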

Posted 1 week ago

Apply

2.0 years

1 - 5 Lacs

Ahmedabad

On-site

Experience: 2+ years in AI/ML, with hands-on development & leadership

Key Responsibilities:
● Architect, develop, and deploy AI/ML solutions across various business domains.
● Research and implement cutting-edge deep learning, NLP, and computer vision models.
● Optimize AI models for performance, scalability, and real-time inference.
● Develop and manage data pipelines, model training, and inference workflows.
● Integrate AI solutions into microservices and APIs using scalable architectures.
● Lead AI-driven automation and decision-making systems.
● Ensure model monitoring, explainability, and continuous improvement in production.
● Collaborate with data engineering, software development, and DevOps teams.
● Stay updated with LLMs, transformers, federated learning, and AI ethics.
● Mentor AI engineers and drive AI research & development initiatives.

Technical Requirements:
● Programming: Python (NumPy, Pandas, Scikit-learn).
● Deep Learning Frameworks: TensorFlow, PyTorch, JAX.
● NLP & LLMs: Hugging Face Transformers, BERT, GPT models, RAG, fine-tuning LLMs.
● Computer Vision: OpenCV, YOLO, Faster R-CNN, Vision Transformers (ViTs).
● Data Engineering: Spark, Dask, Apache Kafka, SQL/NoSQL databases.
● Cloud & MLOps: AWS/GCP/Azure, Kubernetes, Docker, CI/CD for ML pipelines.
● Optimization & Scaling: Model quantization, pruning, knowledge distillation.
● Big Data & Distributed Computing: Ray, Dask, TensorRT, ONNX.
● Security & Ethics: Responsible AI, bias detection, model explainability (SHAP, LIME).

Preferred Qualifications:
● Experience with real-time AI applications, reinforcement learning, or edge AI.
● Contributions to AI research (publications, open-source contributions).
● Experience integrating AI with ERP, CRM, or enterprise solutions.

Job Types: Full-time, Permanent
Pay: ₹100,000.00 - ₹500,000.00 per year
Schedule: Day shift
Application Question(s): What is your current CTC?
Experience: AI: 2 years (Required); Machine learning: 2 years (Required)
Work Location: In person
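"Model quantization" in the requirements above means mapping float weights to low-bit integers plus a scale factor, trading a little precision for smaller, faster models. A toy symmetric int8 round-trip in plain Python (no framework; the weight values are invented for illustration):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: w ≈ q * scale, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from integers + scale.
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(q)  # small integers in [-127, 127]
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err)  # rounding error, bounded by scale / 2
```

Production toolchains (TensorRT, ONNX Runtime, PyTorch quantization) do this per-channel with calibration data, but the scale-and-round core is the same.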

Posted 1 week ago

Apply

8.0 - 12.0 years

5 - 10 Lacs

Noida

On-site

Senior Assistant Vice President
EXL/SAVP/1418398 | Digital Solutions, Noida
Posted On: 24 Jul 2025 | End Date: 07 Sep 2025 | Required Experience: 8 - 12 Years

Basic Section
Number Of Positions: 1
Band: D2 (Senior Assistant Vice President)
Cost Code: D014959
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 4500000.0000 - 6000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: EXL Digital
Sub Group: Digital Solutions
Organization: Digital Solutions
LOB: CX Transformation Practice
SBU: CX Capability Development
Country: India
City: Noida
Center: Noida - Centre 59
Skills: TECHNICAL CONSULTING, DATA SCIENCE - AI, PRE-SALES CONSULTING, SOLUTIONING
Minimum Qualification: GRADUATION
Certification: No data available

Job Description
Designation and SRF Name: Technical CX Consultant
Role: Permanent/Full time
Panel and Hiring Manager: Sanjay Pathak
Experience: 8-12 years relevant experience
Location: Noida/Gurgaon/Pune/Bangalore
Shift: 12PM to 10PM (10-hour shift; also depends on project/work dependencies)
Working Days: 5 days
Work Mode: Hybrid

Job Description: Highly skilled CX Consultant with deep expertise in CCaaS, integrations, IVR, Natural Language Processing (NLP), Language Models, and scalable cloud-based solution deployment.

Skills:
- Technical Expertise: Deep understanding of Conversational AI, Smart Agent Assist, CCaaS and their technical capabilities. Stay current with industry trends, emerging technologies, and competitor offerings.
- Customer Engagement: Engage with prospective clients to understand their technical requirements and business challenges. Conduct needs assessments and provide tailored technical solutions.
- Solution Demonstrations: Deliver compelling product demonstrations that showcase the features and benefits of our solutions. Customize demonstrations to align with the specific needs and use cases of potential customers.
- Strong NLP and Language Model fundamentals (e.g., transformer architectures, embeddings, tokenization, fine-tuning).
- Expert in Python, with clean, modular, and scalable coding practices.
- Experience developing and deploying solutions on Azure, AWS, or Google Cloud Platform.
- Familiarity with Vertex AI, including Model Registry, Pipelines, and RAG integrations (preferred).
- Experience with PyTorch, including model training, evaluation, and serving.
- Knowledge of GPU-based inferencing (e.g., ONNX, TorchScript, Triton Inference Server).
- Understanding of ML lifecycle management, including MLOps best practices.
- Experience with containerization (Docker) and orchestration tools (e.g., Kubernetes).
- Exposure to REST APIs, gRPC, and real-time data pipelines is a plus.
- Degree in Computer Science, Mathematics, Computational Linguistics, AI, ML or a similar field. PhD is a plus.

Responsibilities:
- Consult on and design end-to-end AI solutions for CX.
- Lead consulting engagements for scalable AI services on cloud infrastructure (Azure/AWS/GCP).
- Collaborate with engineering, product, and data teams to define AI-driven features and solutions.
- Optimize model performance, scalability, and cost across CPU and GPU environments.
- Ensure reliable model serving with a focus on low-latency, high-throughput inferencing.
- Keep abreast of the latest advancements in NLP, LLMs, and AI infrastructure.

Workflow Type: Digital Solution Center

Posted 1 week ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Role Description: We are looking for a highly skilled Senior Developer with strong AI/ML expertise to lead the next phase of intelligent automation in RadEze PACS, our cloud-native radiology workflow platform. You’ll work closely with the founding team to enhance DICOM image handling, integrate diagnostic AI features, and ship high-impact features requested by global radiology clients.

Location: Noida (Preferred)
Experience: 5–10 years
Company: Ezewok Healthtech Pvt Ltd
Compensation: Competitive with equity potential

Requirements:
- B.Tech/M.Tech from IIT/NIT/BITS or top-tier institutes (preferred)
- 5–10 years of full stack development experience (Node.js + React.js)
- Strong hands-on experience with AI/ML: PyTorch, TensorFlow, scikit-learn; image classification, segmentation, NLP; OpenAI CLIP, Whisper, or Hugging Face models
- Experience in medical imaging (DICOM, PACS, Cornerstone, MONAI, SimpleITK) is a major plus
- Solid understanding of cloud infrastructure (AWS EC2, S3, GPU, Docker, Lambda)
- Comfort with real-world product building, debugging, and scaling

Responsibilities:

Full Stack Development
- Own and scale our Node.js + React-based PACS infrastructure
- Implement new features from diagnostic centers and radiologists
- Optimize performance of the DICOM viewer, report workflows, and metadata processing
- Enhance real-time dashboards, billing engines, and multi-user reporting modules

AI/ML & Diagnostic Intelligence
- Develop and deploy AI models for radiology use cases: X-ray/CT/MRI classification, anomaly detection, segmentation, or report summarization
- Integrate dictation and NLP tools for auto-reporting
- Build scalable pipelines for image-based inference on AWS/GPU-backed infrastructure
- Contribute to building our AI assistant for radiologists and coordinators

Product & Client Collaboration
- Translate client feature requests into scalable product modules
- Guide implementation of quality and TAT tracking metrics
- Mentor junior developers; work with product and infra leads
- Shape our AI roadmap based on user feedback and imaging standards

Bonus Skills:
- Worked on teleradiology, healthcare AI, or FDA/CE-regulated medical software
- Experience with multi-user workflows, audit trails, and data privacy (HIPAA/GDPR)
- Experience contributing to open-source radiology tools

Why Join Us?
- Be a foundational member of India’s most flexible, AI-enhanced PACS platform
- Lead the AI vision for radiology, working with real-world data at scale
- Shape a fast-growing product used by radiologists and hospitals across India and abroad
- Remote-first, product-led, and deeply impact-driven culture

To Apply: Send your resume, portfolio, and a brief on your most exciting AI/radiology project to hr@ezewok.com or connect directly with the founders on LinkedIn.
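The "TAT tracking metrics" this role mentions refers to turnaround time: how long a study takes from acquisition to final report, usually summarized by percentiles because of long-tail cases. A stdlib sketch (field names and timings are invented; RadEze's actual schema is unknown):

```python
import math
from datetime import datetime, timedelta

def tat_minutes(study):
    """Turnaround time: acquisition -> finalized report, in minutes."""
    return (study["reported_at"] - study["acquired_at"]).total_seconds() / 60

def percentile(values, p):
    """Nearest-rank percentile (0 < p <= 100) over a non-empty list."""
    ordered = sorted(values)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

t0 = datetime(2025, 1, 1, 8, 0)
studies = [
    {"acquired_at": t0, "reported_at": t0 + timedelta(minutes=m)}
    for m in (25, 40, 55, 180, 30)
]

tats = [tat_minutes(s) for s in studies]
print(sorted(tats))          # [25.0, 30.0, 40.0, 55.0, 180.0]
print(percentile(tats, 90))  # 180.0 -- the long-tail study dominates p90
```

Tracking p50 and p90 separately (rather than the mean) is the usual design choice here, since one delayed study should surface in the tail metric without hiding typical performance.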

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

Remote

About the Company: Aerobotics7 (A7) is a mission-driven deep-tech startup focused on developing an autonomous next-gen sensing and advanced AI platform to detect and identify hidden threats like landmines and UXOs in real time. Our driven team is committed to building mission-critical products through continuous learning, rapid execution, and close cross-collaboration. We are intentionally a small, strong, and highly technical team, built to deliver impactful products that can solve some of the biggest problems in the world.

What you'll do:
- Lead the full ML model lifecycle, from research and experiments to implementation and deployment.
- Build and deploy deep learning models on GCP and edge devices, ensuring real-time inference.
- Combine multiple sensor inputs into powerful multi-modal ML models.
- Implement and refine CNNs, Vision Transformers (ViTs), and other architectures.
- Design sensor-fusion methods for better perception and decision-making.
- Optimize inference for low-latency, efficient production use.
- Work closely with software and hardware teams to bring AI into mission-critical systems.
- Create and scale pipelines for training, validating, and improving models.

What you'll bring:
- Deep expertise in TensorFlow and PyTorch.
- Hands-on experience with CNNs, ViTs, and other DL architectures.
- Experience with multi-modal ML and sensor fusion.
- Cloud deployment skills (GCP preferred).
- Edge AI know-how (NVIDIA Jetson, TensorRT, OpenVINO).
- Proficiency in quantization, pruning, and real-time model optimization.
- Solid computer vision and object detection experience.
- Ability to work with limited datasets, generating synthetic data (using VAEs or similar), with experience in annotation and augmentation.
- Strong coding skills in Python and C++, with high-performance computing expertise.

Nice to have:
- 2-4 years of relevant experience.
- MLOps experience: CI/CD, model versioning, monitoring.
- Knowledge of reinforcement learning.
- Experience working in fast-paced startup environments.
- AI for autonomy, robotics, or UAV systems.
- Knowledge of embedded systems and hardware acceleration for AI.

Benefits:
NOTE: THIS ROLE IS UNDER AEROBOTICS7 INVENTIONS PVT. LTD., AN INDIAN ENTITY. IT IS A REMOTE INDIA-BASED ROLE WITH COMPENSATION ALIGNED TO INDIAN MARKET STANDARDS. WHILE OUR PARENT COMPANY IS US-BASED, THIS POSITION IS FOR CANDIDATES RESIDING AND WORKING IN INDIA.
- Competitive salary and comprehensive benefits package.
- Future opportunity for equity options in the company.
- Opportunity to work on impactful, cutting-edge technology in a collaborative startup environment.
- Professional growth with extensive learning and career development opportunities.
- Direct contribution to tangible, real-world impact.
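The quantization skill this listing asks for can be sketched in a few lines: post-training affine (asymmetric) quantization of a weight tensor to int8, written in plain NumPy. This is an illustrative toy, not the TensorRT/OpenVINO toolchain the role would actually use; all function names here are our own.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine post-training quantization of a float tensor to int8."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    if scale == 0.0:  # constant tensor, avoid division by zero
        scale = 1.0
    zero_point = -128 - round(w_min / scale)  # maps w_min to -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
max_err = float(np.abs(w - w_hat).max())
# reconstruction error is bounded by roughly one quantization step (s)
```

The 4x memory reduction (float32 to int8) is the point; real deployments would additionally calibrate scales per channel and fuse the dequantize into the kernel.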

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager

PURPOSE OF ROLE
- Understand and solve complex business problems with sound analytical prowess, and help the business with impactful insights for decision making
- Ensure any roadblocks in implementation are brought to the notice of the Analytics Manager so that project timelines are not affected
- Document every aspect of the project in a standard way, for future purposes
- Articulate technical complexities to senior leadership in a simple and easy manner

KEY TASKS AND ACCOUNTABILITIES
- Understand the business problem and work with business stakeholders to translate it into a data-driven analytical/statistical problem; participate in the solution-building process
- Create appropriate datasets and develop statistical data models
- Translate complex statistical analysis over large datasets into insights and actions
- Analyze results and present them to stakeholders
- Communicate the insights using business-friendly presentations
- Help and mentor other Data Scientists/Associate Data Scientists
- Build production-ready project pipelines in Databricks
- Build dashboards (preferably in Power BI) for easy consumption of the solutions

QUALIFICATIONS, EXPERIENCE, SKILLS
Level of Educational Attainment Required: Bachelor's/Master's Degree in Statistics, Applied Statistics, Economics, Econometrics, Operations Research or any other quantitative discipline.

Previous Work Experience
- Minimum 4-6 years' experience in a data science role building, implementing, and operationalizing end-to-end solutions
- Strong expertise in building statistical and machine learning models for classification, regression, forecasting, anomaly detection, dimensionality reduction, clustering, etc.
- Exposure to optimization and simulation techniques (good to have)
- Expertise in building NLP-based language models, sentiment analysis, text summarization, and Named Entity Recognition
- Proven skills in translating statistics into insights; sound knowledge of statistical inference and hypothesis testing
- Microsoft Office (mandatory)
- Expert in Python (mandatory)
- Advanced Excel (mandatory)

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
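The hypothesis-testing knowledge called out above can be sketched from the standard formulas: Welch's two-sample t-statistic with a normal-approximation p-value (adequate at these sample sizes). Function names and data are our own illustration; in practice a library such as scipy.stats would do this with an exact t-distribution.

```python
import math
import random

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic; two-sided p via normal approximation."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided tail probability
    return t, p

# Hypothetical A/B sales uplift: treated group shifted by +3 units
random.seed(42)
control = [random.gauss(100.0, 10.0) for _ in range(2000)]
treated = [random.gauss(103.0, 10.0) for _ in range(2000)]
t, p = welch_t(treated, control)
```

A positive t with small p indicates the treated mean is credibly higher than the control mean, which is the kind of "statistics into insights" translation the role describes.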

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Description: We are seeking a full-time Computer Vision Developer for an on-site role based in Gurugram. You will be responsible for designing, developing, and optimizing computer vision pipelines and algorithms for real-time applications. The role involves working on object detection, tracking, OCR, and edge AI deployment. You'll collaborate closely with cross-functional teams to integrate vision models into embedded systems and smart devices, contributing to the development of AI-powered products from prototype to production.

Key Responsibilities:
- Develop and implement computer vision algorithms
- Optimize models for real-time inference on edge devices (e.g., Jetson, Raspberry Pi, ARM boards)
- Preprocess and annotate image/video datasets for training and validation
- Train, fine-tune, and evaluate deep learning models using frameworks like PyTorch or TensorFlow
- Integrate models into production pipelines, working with embedded engineers where required
- Conduct research on state-of-the-art vision techniques and evaluate applicability
- Debug, profile, and optimize performance for low-latency deployments
- Collaborate across software, hardware, and product teams for end-to-end solution delivery

Qualifications:
- Strong hands-on experience in computer vision
- Proficiency with deep learning frameworks (PyTorch, TensorFlow, ONNX)
- Experience with pattern recognition, object detection, segmentation, or OCR
- Familiarity with embedded systems, NVIDIA Jetson, or ARM-based platforms is a plus
- Solid understanding of image processing, linear algebra, and optimization techniques
- Experience with data annotation tools (LabelImg, CVAT, Roboflow)
- Strong problem-solving and debugging skills
- Bachelor's or Master's degree in Computer Science, AI, or related fields
- Bonus: Experience with real-time video processing, GStreamer, or edge model quantization
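The object-detection work above always ends with a post-processing step worth knowing cold: greedy non-maximum suppression (NMS), which drops overlapping lower-confidence boxes. A minimal NumPy sketch (our own toy boxes, `[x1, y1, x2, y2]` format assumed):

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """Intersection-over-union of one box against many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Greedy NMS: keep the best box, drop overlaps, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # the second box heavily overlaps the first and is dropped
```

Production pipelines usually call the framework's fused implementation (e.g., on-GPU NMS), but the logic is exactly this.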

Posted 1 week ago

Apply

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Title: AI/ML Engineer (Python + AWS + REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities:

AI/ML Development
- Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.
- Build synthetic datasets using DCGANs for balancing.
- Fine-tune pre-trained models for customized encryption logic.
- Implement explainable classification logic for model outputs.
- Validate model performance using custom metrics and datasets.

API Development
- Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls.
- Integrate APIs with AWS Lambda and Amazon API Gateway.

AWS Integration
- Deploy and manage AI models on Amazon SageMaker for training and real-time inference.
- Use AWS Lambda for serverless backend compute.
- Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).
- Use AWS Cognito for secure user authentication and KMS for key management.
- Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience:

Must-Have
- 3-5 years of experience in AI/ML (especially vision-based systems).
- Strong experience with PyTorch or TensorFlow for model development.
- Proficient in Python with experience building RESTful APIs.
- Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3.
- Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.
- Understanding of model deployment, serialization, and performance tuning.

Nice-to-Have
- Experience with CLIP model fine-tuning.
- Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
- Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
- Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS (SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS); Git, Docker, Postgres, OpenCV, OpenSSL

If interested, please share your resume with kratika.vijaywargiya@advantal.net

Posted 1 week ago

Apply

0.0 - 1.0 years

0 Lacs

Pimple Soudagar, Pune, Maharashtra

On-site

Job Summary: We are looking for a highly motivated and skilled Machine Learning Engineer with 1–3 years of experience and a strong interest in Generative AI technologies. The ideal candidate will contribute to the full lifecycle of Generative AI models, from training and optimization to deployment on AWS and inference API development. You will collaborate with a team of engineers and researchers to build cutting-edge AI-powered solutions that have real-world impact.

Key Responsibilities:

1. Generative AI Model Development
- Design, develop, and implement Generative AI models for applications such as text, image, and code generation.
- Work with large language models (LLMs) or other generative models using frameworks like TensorFlow, PyTorch, and Hugging Face Transformers.
- Experiment with model architectures, training techniques, and hyperparameter tuning to optimize performance.

2. Model Optimization & Efficiency
- Apply techniques such as quantization, pruning, and distillation to improve inference speed and reduce resource consumption.
- Profile and analyze model performance to identify and eliminate bottlenecks.

3. Cloud Deployment on AWS
- Deploy AI models to AWS using services such as SageMaker, EC2, ECS, or Lambda.
- Develop scalable and reliable inference pipelines.
- Use AWS tools for data storage, model management, and monitoring.

4. Inference API Development
- Design and develop efficient, secure, and scalable RESTful or gRPC APIs to serve deployed AI models.
- Ensure high availability, maintainability, and performance of APIs.

5. Troubleshooting & Debugging
- Diagnose and resolve issues related to model performance, deployment, and API functionality.
- Implement monitoring and logging systems to proactively manage issues.

6. Model Retraining & Continuous Improvement
- Implement strategies for continuous learning by retraining models with new data and user feedback.
- Monitor production model performance and trigger retraining workflows as needed.

7. Collaboration & Communication
- Work closely with ML engineers, data scientists, and software developers to drive project goals.
- Communicate progress, insights, and challenges effectively with stakeholders.

8. Documentation
- Create comprehensive documentation for models, training workflows, deployment processes, and APIs.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Machine Learning, AI, or a related field.
- 1–3 years of hands-on experience in machine learning model development and deployment.
- Sound understanding of Generative AI concepts (e.g., Transformers, GANs, Diffusion Models).
- Experience with at least one deep learning framework: TensorFlow or PyTorch.
- Proficiency in Python.
- Solid knowledge of ML algorithms, data preprocessing, and evaluation metrics.
- Proven experience in deploying models on AWS.
- Strong skills in developing and consuming RESTful or gRPC APIs.
- Familiarity with Git or other version control systems.
- Excellent problem-solving, communication, and teamwork skills.

Preferred Qualifications:
- Experience working with LLMs and the Hugging Face Transformers library.
- Knowledge of MLOps practices and related tools.
- Experience with Docker, Kubernetes, or other containerization technologies.
- Familiarity with data engineering tools and pipelines.
- Experience with monitoring tools such as CloudWatch, Prometheus, or Grafana.
- Contributions to open-source projects or notable personal AI projects.

Job Type: Full-time
Benefits: Health insurance; Provident Fund
Ability to commute/relocate: Pimple Soudagar, Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): What is your notice period?
Experience: Machine learning: 1 year (Required)
Work Location: In person
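The "monitor production performance and trigger retraining" responsibility often reduces to a drift metric on incoming features. A common choice is the Population Stability Index (PSI); the sketch below (our own function names, toy data) compares a live sample against the training-time reference distribution.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between reference and live samples.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 retrain."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # clip live data into the reference range so outliers land in the edge bins
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) for empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, 10_000)   # feature at training time
stable = rng.normal(0.0, 1.0, 10_000)      # live traffic, no drift
shifted = rng.normal(0.8, 1.0, 10_000)     # live traffic after a shift
```

In a pipeline, `psi(reference, live) > 0.25` would be the condition that kicks off the retraining workflow described above.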

Posted 1 week ago

Apply

2.0 years

3 - 10 Lacs

India

Remote

This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Associate level
Min Experience: 2 years
Location: Remote (India)
Job Type: Full-time

About The Role
We're on the lookout for a driven Generative AI Engineer with hands-on experience in natural language processing and large language models. In this role, you'll be at the forefront of shaping conversational AI experiences (chatbots, intelligent reporting agents, campaign QA) within a dynamic, data-driven advertising platform. You'll work with state-of-the-art frameworks like LangGraph, LangChain, OpenAI APIs, and Hugging Face Transformers, helping us design smarter, more human-like interfaces that power decisions, insights, and automation for marketing teams.

What You'll Do
- Build and deploy generative AI applications using LLMs, including GPT-series models, with frameworks like LangChain, LangGraph, and Hugging Face.
- Develop AI agents for use cases like campaign QA, performance analytics, and intelligent prompt-based reporting assistants.
- Create and integrate backend chatbot layers using Python and Django (or Flask/FastAPI), ensuring seamless integration with the platform UI.
- Design and maintain scalable NLP pipelines with a focus on latency, cost-efficiency, and robust performance.
- Collaborate closely with data scientists, product managers, and marketing ops to refine AI use cases and deploy production-ready solutions.
- Monitor AI model behavior, run evaluations, and ensure fairness, consistency, and quality across outputs.
- Maintain technical documentation, testing workflows, and API contracts.

What You Bring
- Experience: Minimum 2 years working in NLP or generative AI, preferably on customer-facing applications.
- Core Skills: Proficient in Python; hands-on with LangChain, LangGraph, OpenAI APIs, and Hugging Face; solid understanding of LLM architecture, prompt design, fine-tuning, and inference.
- Backend Development: Proven experience building chatbot services or interactive tools using Django, Flask, or equivalent frameworks.
- Software Engineering Best Practices: Clean, modular code with unit tests, version control, and documentation.
- Team Fit: Strong written and verbal communication; proactive in cross-functional collaboration.

Nice-to-Have
- Background in programmatic advertising, DSPs, or marketing automation.
- Experience with cloud infrastructure (AWS/GCP/Azure) for hosting and scaling AI workloads.
- Familiarity with Docker, Kubernetes, or MLOps pipelines for model deployment and maintenance.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Responsibilities and Duties:

Strategic Leadership (10%):
- Champion the AI/ML roadmap, driving strategic planning and execution for all initiatives.
- Provide guidance on data science projects (Agentic AI, Generative AI, and Machine Learning), aligning them with business objectives and best practices.
- Foster a data-driven culture, advocating for AI-powered solutions to business challenges and efficiency improvements.
- Collaborate with product management, engineering, and business stakeholders to identify opportunities and deliver impactful solutions.

Technical Leadership (40%):
- Architect and develop Proof-of-Concept (POC) solutions for Agentic AI, Generative AI, and ML, utilizing Python and relevant data science libraries and leveraging MLflow.
- Provide technical guidance on AI projects, ensuring alignment with business objectives and best practices.
- Assist in the development and documentation of standards for ethical and regulatory-compliant AI usage.
- Stay current with AI advancements, contributing to the team's knowledge and expertise.
- Perform hands-on data wrangling and AI model development.

Operational Leadership (50%):
- Drive continuous improvement through Agentic AI, Generative AI, and predictive modeling.
- Participate in Agile development processes (Scrum and Kanban).
- Ensure compliance with regulatory and ethical AI standards.
- Other duties as assigned.

Required Abilities and Skills (strong hands-on experience with):
- Agentic AI development and deployment.
- Statistical modeling, machine learning algorithms, and data mining techniques.
- Databricks and MLflow for model training, deployment, and management on AWS.
- Working with large datasets on AWS and Databricks.

Desired Experience:
- Experience integrating AI with IoT/event data.
- Experience with real-time and batch inference integration with SaaS applications.
- International team management experience.
- Track record of successful product launches in regulated environments.

Education and Experience:
- 5+ years of data science/AI experience.
- Bachelor's degree in Statistics, Data Science, Computer Engineering, Mathematics, or a related field (Master's preferred).
- Proven track record of deploying successful Agentic AI, Generative AI, and ML projects from concept to production.
- Excellent communication skills, able to explain complex technical concepts to both technical and non-technical audiences.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

Compensation: Up to $300,000 USD annually + Equity
Type: Full-time | Remote

We're looking for a Senior Python Backend Engineer to help build infrastructure powering real-time AI workflows and ML pipelines. You'll work on high-availability systems and collaborate with a global team of engineers and researchers.

Responsibilities:
- Build and optimize backend APIs and services for AI products
- Integrate with task queues, model inference APIs, and vector search systems
- Ensure system reliability and scalability across deployments

Requirements:
- 5+ years of Python backend development
- Experience with FastAPI or Django, PostgreSQL, Celery, Redis
- Familiarity with LLM infrastructure, data orchestration, or AI tools is a big plus
- Strong problem-solving, documentation, and testing practices

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

India

On-site

If solving business challenges drives you, this is the place to be. Fornax is a team of cross-functional individuals who solve critical business challenges using core concepts of analytics and critical thinking. We are seeking a skilled Data Scientist who has worked in the Marketing domain. The ideal candidate will possess a strong blend of statistical expertise and business acumen, particularly in Marketing Mix Modeling (MMM), Causality Analysis, and Marketing Incrementality, along with a good understanding of the entire marketing value chain and measurement strategies. The Data Scientist will play a critical role in developing advanced analytical solutions to measure marketing effectiveness, optimize marketing spend, and drive data-driven decision making. This role involves working closely with marketing teams, analysts, and business stakeholders to deliver actionable insights through statistical modeling and experimentation. The ideal candidate has a strong background in statistical analysis, causal inference, and marketing analytics.

Responsibilities:

Modeling & Analysis (70%):
- Develop and maintain Marketing Mix Models (MMM) to measure the effectiveness of marketing channels and campaigns
- Design and implement causal inference methodologies to identify the true incremental impact of marketing activities
- Build attribution models to understand customer journey and touchpoint effectiveness
- Conduct advanced statistical analysis including regression, time series, and Bayesian methods
- Develop predictive models for customer behavior, campaign performance, and ROI optimization
- Create experimental designs for A/B testing and incrementality studies
- Perform promotional analysis to measure lift, cannibalization, and optimal discount strategies across products and channels

Stakeholder Management & Collaboration (30%):
- Partner with business teams to understand business objectives and analytical needs
- Translate complex statistical findings into actionable business recommendations
- Present analytical insights and model results to non-technical stakeholders
- Collaborate with data engineers to ensure data quality and availability for modeling
- Work with business teams to design and implement measurement strategies
- Create documentation and knowledge transfer materials for analytical methodologies

Key Qualifications:
Education: Bachelor's degree in Statistics, Economics, Mathematics, Computer Science, or related quantitative field. Master's degree preferred.
Experience: 2-3 years of experience as a Data Scientist with a focus on marketing analytics.

Technical Skills:
- Strong proficiency in Python or R for statistical analysis
- Expertise in statistical modeling techniques (regression, time series, Bayesian methods)
- Experience with Marketing Mix Modeling (MMM) frameworks and tools
- Knowledge of causal inference methods (DiD, IV, RDD, Synthetic Controls)
- Proficiency in SQL for data manipulation and analysis
- Understanding of machine learning algorithms and their applications
- Deep understanding of marketing channels and measurement strategies
- Familiarity with marketing metrics (CAC, LTV, ROAS, etc.)
- Understanding of media planning and optimization concepts
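Two transformations sit at the heart of most MMM work: adstock (carryover of media spend into later periods) and saturation (diminishing returns). A minimal NumPy sketch of the standard geometric-adstock and Hill-saturation forms, with toy spend data:

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carryover effect: each period retains `decay` of the previous
    period's adstocked value (0 <= decay < 1)."""
    out = np.empty_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def hill_saturation(x: np.ndarray, half_sat: float, shape: float = 1.0) -> np.ndarray:
    """Diminishing returns: response rises toward 1 as adstocked spend grows;
    half_sat is the spend level producing half the maximum response."""
    return x ** shape / (x ** shape + half_sat ** shape)

spend = np.array([100.0, 0.0, 0.0, 50.0])      # media spend per period
adstocked = geometric_adstock(spend, decay=0.5)  # -> [100.0, 50.0, 25.0, 62.5]
response = hill_saturation(adstocked, half_sat=50.0)
```

An MMM regression then fits sales against the transformed channels; `decay` and `half_sat` are per-channel parameters estimated from data, not the fixed values used here for illustration.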

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Join MSBC as an AI/ML Engineer – Deliver Real-World Intelligent Systems

At MSBC, we design and implement practical AI solutions that solve real business problems across industries. As an AI/ML Engineer, you will play a key role in building and deploying machine learning models and data-driven systems that are used in production. This role is ideal for engineers with solid hands-on experience delivering end-to-end AI/ML projects.

Key Tools and Frameworks
• Programming Languages – Python (FastAPI, Flask, Django)
• Machine Learning Libraries – scikit-learn, XGBoost, TensorFlow or PyTorch
• Data Pipelines – Pandas, Spark, Airflow
• Model Deployment – FastAPI, Flask, Docker, MLflow
• Cloud Platforms – AWS, GCP, Azure (any one)
• Version Control – Git

Key Responsibilities
• Design and develop machine learning models to address business requirements.
• Build and manage data pipelines for training and inference workflows.
• Train, evaluate, and optimise models for accuracy and performance.
• Deploy models in production environments using containerised solutions.
• Work with structured and unstructured data from various sources.
• Ensure robust monitoring, retraining, and versioning of models.
• Contribute to architecture and design discussions for AI/ML systems.
• Document processes, results, and deployment procedures clearly.
• Collaborate with software engineers, data engineers, and business teams.

Required Skills and Qualifications
• 4+ years of hands-on experience delivering ML solutions in production environments.
• Strong programming skills in Python and deep understanding of ML fundamentals.
• Experience with supervised and unsupervised learning, regression, classification, and clustering techniques.
• Practical experience in model deployment and lifecycle management.
• Good understanding of data preprocessing, feature engineering, and model evaluation.
• Experience with APIs, containers, and cloud deployment.
• Familiarity with CI/CD practices and version control.
• Ability to work independently and deliver results in fast-paced projects.
• Excellent English communication skills for working with distributed teams.
• Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.

MSBC Group has been a trusted technology partner for over 20 years, delivering the latest systems and software solutions for financial services, manufacturing, logistics, construction, and startup ecosystems. Our expertise includes Accessible AI, Custom Software Solutions, Staff Augmentation, Managed Services, and Business Process Outsourcing. We are at the forefront of developing advanced AI-enabled services and supporting transformative projects. Operating globally, we drive innovation, making us a trusted AI and automation partner.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Nagpur, Maharashtra, India

Remote

Contract Psychometrist (12–15 weeks)
Location: Remote – India
Company: Quirk Bank Media (QB Media)
Contract Fee: ₹3,00,000 (fixed, all-inclusive)
Start Date: Immediate
Employment Type: Contract (12–15 weeks)

About: QB Media is building a personality quiz platform designed to capture India-specific (and in some cases, region-specific) psychological traits. We're looking for someone who can help us identify the right traits to measure, design a clear scoring logic, and work with our team to turn quiz responses into actionable insights.

Key Responsibilities
- Review and streamline our draft list into a concise, non-clinical trait set
- Map quiz items to traits and build an easy-to-implement scoring grid
- Run internal consistency stats (Cronbach's α / KR-20) on pilot data; refine if needed
- Define clear low / medium / high trait thresholds
- Collaborate with our Data Science and Behavioural teams to integrate logic into our inference engine
- Deliver a brief spec (CSV + 2-page summary) and participate in final handover

What We're Looking For
- 2+ years of experience designing or validating assessments, surveys, or inventories
- Proficiency in basic psychometric stats using R, SPSS, or Python
- Ability to translate psych concepts into clear, actionable insights
- Strong communication skills; remote-friendly, NDA-compliant

Compensation & Contract
- ₹3,00,000 fixed fee (paid monthly or by milestones)
- Duration: 12–15 weeks (confirmed during onboarding)
- Weekly check-ins; resources provided by QB Media
- NDA and contractor agreement required before kickoff

How to Apply
Send one PDF to contact@quirkbankmedia.com containing:
- A concise CV (max 2 pages)
- A ~300-word note on a quiz or assessment you've helped create or refine
- Your earliest start date
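The internal-consistency check named above (Cronbach's α) is a one-liner once pilot responses are in a matrix. A sketch from the textbook formula, with invented Likert-scale pilot data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency of a scale; `items` is respondents x items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy pilot data: 6 respondents x 4 Likert items intended to measure one trait
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(responses)
```

Values above ~0.7 are conventionally treated as acceptable for a non-clinical scale; items that drag α down are candidates for revision, which is the "refine if needed" step in the brief.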

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Chennai Area

On-site

Role: Data Scientist
Experience: 4-8 Years
Working Days: 5 Days
Work Mode: Hybrid (one week of work from office per month)
Location: Bangalore/Chennai
Notice Period: Immediate or 15 days maximum

Key Responsibilities
- Design and develop predictive and prescriptive models using real-time and simulation-based data.
- Apply advanced techniques in time-series analysis, Bayesian inference, and probabilistic forecasting.
- Build and train deep learning models using PyTorch or TensorFlow for dynamic and high-dimensional datasets.
- Leverage reinforcement learning (RL) concepts, particularly in multi-agent environments, to model complex decision-making problems.
- Collaborate with engineering and product teams to integrate models into production environments.
- Conduct trajectory prediction, anomaly detection, or behavior modeling in contexts such as vehicle movement or intelligent systems.
- Utilize domain-specific knowledge such as vehicle kinematics and intelligent transportation systems to inform model design.

Skills and Experience Required
- 3+ years of experience in applied data science, preferably in real-time or simulation-based environments.
- Strong proficiency in Python, NumPy, Pandas, and deep learning frameworks like PyTorch or TensorFlow.
- Experience with time-series analysis, Bayesian models, or probabilistic forecasting.
- Understanding of reinforcement learning, especially multi-agent settings.
- Knowledge of vehicle kinematics, trajectory forecasting, or intelligent transportation systems.

(ref:hirist.tech)
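The trajectory-prediction and vehicle-kinematics themes above meet in the classic baseline for tracking: a constant-velocity Kalman filter, which is itself a small exercise in Bayesian inference. A 1D NumPy sketch with invented noise parameters (real systems would use 2D/3D state and tuned covariances):

```python
import numpy as np

# 1D constant-velocity model: state is [position, velocity]
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
H = np.array([[1.0, 0.0]])             # we observe position only
Q = 1e-4 * np.eye(2)                   # process noise covariance
R = np.array([[0.25]])                 # measurement noise variance (sd 0.5)

def kf_step(x, P, z):
    """One Kalman predict + update cycle for measurement z."""
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y                          # corrected state
    P = (np.eye(2) - K @ H) @ P            # corrected covariance
    return x, P

# Track an object moving at a true velocity of 2.0 units/step
rng = np.random.default_rng(0)
x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 50):
    z = np.array([2.0 * t + rng.normal(0, 0.5)])
    x, P = kf_step(x, P, z)
est_position, est_velocity = x
```

Even though only noisy positions are observed, the filter recovers the hidden velocity, which is what makes it a useful baseline before reaching for learned trajectory models.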

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About The Role
We are seeking a highly motivated and creative Platform Engineer with a true research mindset. This is a unique opportunity to move beyond traditional development and step into a role where you will ideate, prototype, and build production-grade applications powered by Generative AI. You will be a core member of a platform team, responsible for developing both internal and customer-facing solutions that are not just functional but intelligent. If you are passionate about the MERN stack, Python, and the limitless possibilities of Large Language Models (LLMs), and you thrive on building things from the ground up, this role is for you.

Core Responsibilities
- Full-Stack Engineering: Write clean, scalable, and robust code across the MERN stack (MongoDB, Express.js, React, Node.js) and Python.
- AI-Powered Product Development: Create and enhance key products such as intelligent chatbots for customer service and internal support; automated quality analysis and call auditing systems using LLMs for transcription and sentiment analysis; and AI-driven internal portals and dashboards to surface insights and streamline workflows.
- Gen AI Integration & Optimization: Work hands-on with foundation LLMs, fine-tuning custom models, and implementing advanced prompting techniques (zero-shot, few-shot) to solve specific business problems.
- Research & Prototyping: Explore and implement cutting-edge AI techniques, including setting up systems for offline LLM inference to ensure privacy and performance.
- Collaboration: Partner closely with product managers, designers, and business stakeholders to transform ideas into tangible, high-impact technical solutions.

Required Skills & Experience
- Experience: 2-5 years of professional experience in a software engineering role.
- Full-Stack Proficiency: Strong command of the MERN stack (MongoDB, Express.js, React, Node.js) for building modern web applications.
- Python Expertise: Solid programming skills in Python, especially for backend services and AI/ML workloads.
- Generative AI & LLM Experience (Must-Have):
  - Demonstrable experience integrating with foundation LLMs (e.g., OpenAI API, Llama, Mistral, etc.).
  - Hands-on experience building complex AI systems and implementing architectures such as Retrieval-Augmented Generation (RAG) to ground models with external knowledge.
  - Practical experience with AI application frameworks like LangChain and LangGraph to create agentic, multi-step workflows.
  - Deep understanding of prompt engineering techniques (zero-shot, few-shot prompting).
  - Experience with, or a strong theoretical understanding of, fine-tuning custom models for specific domains.
  - Familiarity with concepts or practical experience in deploying LLMs for offline inference.
- R&D Mindset: A natural curiosity and passion for learning, experimenting with new technologies, and solving problems in novel ways.

Bonus Points (Nice-to-Haves)
- Cloud Knowledge: Hands-on experience with AWS services (e.g., EC2, S3, Lambda, SageMaker).

(ref:hirist.tech)
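The RAG architecture this listing emphasizes has a simple skeleton: retrieve the most relevant context, then prepend it to the prompt so the model answers from grounded material. A dependency-free sketch using token-overlap scoring as a stand-in for real vector search; the documents, function names, and prompt template are all invented for illustration, and the built prompt would be sent to an LLM rather than used directly.

```python
from collections import Counter

DOCS = {
    "billing": "Invoices are generated on the first of each month and sent by email.",
    "support": "Support hours are 9am to 6pm IST, Monday through Friday.",
    "refunds": "Refund requests are processed within 5 business days of approval.",
}

def retrieve(query: str, k: int = 1) -> list:
    """Rank documents by token overlap with the query (stand-in for
    embedding similarity search in a real RAG system)."""
    q_tokens = Counter(query.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: sum((q_tokens & Counter(kv[1].lower().split())).values()),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Ground the model: retrieved context is prepended to the question."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When are refund requests processed?")
```

Frameworks like LangChain wrap exactly this retrieve-then-prompt loop, swapping in embedding models and vector stores for the toy scorer.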

Posted 1 week ago

Apply

0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

We are looking for a backend developer with hands-on experience integrating open-source LLMs (e.g., Mistral, LLaMA) and vector databases (like FAISS, Qdrant, Weaviate, Milvus) in offline or edge environments. You will help develop core APIs and infrastructure to support embedding pipelines, retrieval-augmented generation (RAG), and LLM inference on local or mobile hardware.

Selected Intern's Day-to-day Responsibilities Include
- Design and implement backend services that interface with local LLMs and vector databases
- Develop APIs to support prompt engineering, retrieval, and inference
- Integrate embedding models (e.g., BGE, MiniLM) and manage chunking/processing pipelines
- Optimize performance for offline or constrained environments (mobile, embedded devices, etc.)
- Package and deploy models via frameworks like Ollama, llama.cpp, GGUF, or on-device runtimes
- Handle local file I/O, document ingestion, metadata tagging, and indexing

About Company: Infoware is a process-driven software solutions provider specializing in bespoke software solutions. We work with several enterprises and startups and provide them with end-to-end solutions. You may visit the company website at https://www.infowareindia.com/
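The retrieval half of the stack described above reduces to cosine similarity over embedding vectors. A NumPy sketch of brute-force top-k search, which is what FAISS and Qdrant accelerate at scale (the 384-dimensional vectors mimic MiniLM-sized embeddings but are random data for illustration):

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2) -> np.ndarray:
    """Indices of the k most cosine-similar document embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                       # cosine similarity of every doc vs query
    return np.argsort(sims)[::-1][:k]  # best first

rng = np.random.default_rng(1)
docs = rng.normal(size=(100, 384))            # pretend document embeddings
query = docs[42] + rng.normal(0, 0.05, 384)   # a query close to document 42
nearest = top_k(query, docs, k=3)
```

On-device deployments often keep exactly this brute-force path for small corpora and only switch to an approximate index (HNSW, IVF) when the collection grows.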

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We are looking for a backend developer with hands-on experience integrating open-source LLMs (e.g., Mistral, LLaMA) and vector databases (such as FAISS, Qdrant, Weaviate, Milvus) in offline or edge environments. You will help develop core APIs and infrastructure to support embedding pipelines, retrieval-augmented generation (RAG), and LLM inference on local or mobile hardware.

Selected Intern's Day-to-day Responsibilities Include:
- Design and implement backend services that interface with local LLMs and vector databases
- Develop APIs to support prompt engineering, retrieval, and inference
- Integrate embedding models (e.g., BGE, MiniLM) and manage chunking/processing pipelines
- Optimize performance for offline or constrained environments (mobile, embedded devices, etc.)
- Package and deploy models via frameworks like Ollama, llama.cpp, GGUF, or on-device runtimes
- Handle local file I/O, document ingestion, metadata tagging, and indexing

About Company: Infoware is a process-driven software solutions provider specializing in bespoke software solutions. We work with several enterprises and startups and provide them with end-to-end solutions. You may visit the company website at https://www.infowareindia.com/
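The vector-database integration above boils down to one core operation: nearest-neighbour search over stored embeddings. A tiny in-memory stand-in that illustrates the API shape of stores like FAISS or Qdrant; the class, IDs, and vectors are all hypothetical, and a real index would add ANN acceleration and persistence:

```python
import math

class TinyVectorIndex:
    """Minimal in-memory vector index: add vectors, query top-k by cosine.

    A stand-in for FAISS/Qdrant to show the API shape, not a production
    index (no approximate-nearest-neighbour acceleration, no persistence).
    """

    def __init__(self):
        self._items: list[tuple[str, list[float]]] = []

    def add(self, doc_id: str, vector: list[float]) -> None:
        self._items.append((doc_id, vector))

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query: list[float], k: int = 2) -> list[str]:
        # Exhaustive scan, highest cosine similarity first.
        scored = sorted(self._items, key=lambda it: self._cosine(query, it[1]), reverse=True)
        return [doc_id for doc_id, _ in scored[:k]]

index = TinyVectorIndex()
index.add("doc_a", [1.0, 0.0, 0.0])
index.add("doc_b", [0.0, 1.0, 0.0])
index.add("doc_c", [0.9, 0.1, 0.0])
hits = index.search([1.0, 0.0, 0.0], k=2)
```

Swapping this class for a FAISS index or a Qdrant client leaves the calling code's retrieval logic unchanged, which is what makes offline/edge deployment tractable.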

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Positions in this function are responsible for the management and manipulation of mostly structured data, with a focus on building business intelligence tools, conducting analysis to distinguish patterns and recognize trends, performing normalization operations and assuring data quality. Depending on the specific role and business line, example responsibilities in this function could include creating specifications to bring data into a common structure, creating product specifications and models, developing data solutions to support analyses, performing analysis, interpreting results, developing actionable insights and presenting recommendations for use across the company. Roles in this function could partner with stakeholders to understand data requirements and develop tools and models such as segmentation, dashboards, data visualizations, decision aids and business case analysis to support the organization. Other roles could include producing and managing the delivery of activity and value analytics to external stakeholders and clients. Team members will typically use business intelligence, data visualization, query, analytic and statistical software to build solutions, perform analysis and interpret data.
Positions in this function work on predominantly descriptive and regression-based analytics and tend to leverage subject matter expert views in the design of their analytics and algorithms. This function is not intended for employees performing the following work: production of standard or self-service operational reporting, causal inference led (healthcare analytics) or data pattern recognition (data science) analysis, and/or image or unstructured data analysis using sophisticated theoretical frameworks.

Primary Responsibilities:
- Analyze data and extract actionable insights, findings, and recommendations
- Develop data validation strategies to ensure the accuracy and reliability of data
- Communicate data and findings effectively to internal and external senior executives with clear supporting evidence
- Relate analysis to the organization's overall business objectives
- Implement generative AI techniques to reduce manual effort and automate processes
- Analyze and investigate; provide explanations and interpretations within area of expertise
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment)
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- Experience in creating summarized reports of findings and recommendations
- Proficiency in SQL, Snowflake, and AI techniques
- Solid skills in Microsoft Excel, Word, PowerPoint, and Visio
- Ability to multitask, take initiative, and adapt to changing priorities
- Proven self-motivated team player with solid problem-solving and analytical skills

Preferred Qualifications:
- 3+ years of work experience
- 2+ years of experience working with a healthcare consulting firm
- Experience in data analytics and hands-on experience in Python programming, SQL, and Snowflake
- Proven creative, strategic thinker with excellent critical thinking skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
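The data-validation responsibility in this role usually reduces to rule checks over incoming records before they feed dashboards or models. A minimal sketch in Python; the field names (`member_id`, `claim_amount`) and the two rules are hypothetical examples of a validation strategy, not taken from the posting:

```python
def validate_records(records: list[dict]) -> dict:
    """Apply simple completeness/accuracy rules and report failures.

    The rules (non-empty member_id, non-negative claim_amount) are
    hypothetical; a real strategy would be driven by data contracts.
    """
    errors = []
    for i, rec in enumerate(records):
        if not rec.get("member_id"):
            errors.append((i, "missing member_id"))
        if rec.get("claim_amount", 0) < 0:
            errors.append((i, "negative claim_amount"))
    return {"checked": len(records), "errors": errors}

report = validate_records([
    {"member_id": "M1", "claim_amount": 120.0},
    {"member_id": "", "claim_amount": -5.0},
])
```

In practice the same checks would run as SQL constraints or Snowflake tasks; the Python form just makes the rule-per-record shape explicit.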

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
As a key member of the team, you will be responsible for designing, building, and maintaining the data pipelines and platforms that support analytics, machine learning, and business intelligence. You will lead a team of data engineers and collaborate closely with cross-functional stakeholders to ensure that data is accessible, reliable, secure, and optimized for AI-driven applications.

Responsibilities:
- Architect and implement scalable data solutions to support LLM training, fine-tuning, and inference workflows
- Lead the development of ETL/ELT pipelines for structured and unstructured data across diverse sources
- Ensure data quality, governance, and compliance with industry standards and regulations
- Collaborate with Data Scientists, MLOps, and product teams to align data infrastructure with GenAI product goals
- Mentor and guide a team of data engineers, promoting best practices in data engineering and DevOps
- Optimize data workflows for performance, cost-efficiency, and scalability in cloud environments
- Drive innovation by evaluating and integrating modern data tools and platforms (e.g., Databricks, Azure)

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field
- 7+ years of experience in data engineering, with at least 2+ years in a leadership or senior role
- Proven experience designing and managing data platforms and pipelines in cloud environments (Azure, AWS, or GCP)
- Experience supporting AI/ML workloads, especially involving Large Language Models (LLMs)
- Strong proficiency in SQL and Python
- Hands-on experience with data orchestration tools
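The ETL/ELT pipelines this role describes follow an extract, transform, load shape regardless of platform. A minimal sketch; the raw-row format, field names, and the dict "sink" are in-memory stand-ins for a real source and warehouse table, chosen here purely for illustration:

```python
def extract(rows: list[str]) -> list[list[str]]:
    # Extract: parse raw CSV-like lines into fields.
    return [r.split(",") for r in rows]

def transform(parsed: list[list[str]]) -> list[dict]:
    # Transform: normalize into typed records, dropping malformed rows.
    out = []
    for fields in parsed:
        if len(fields) != 2:
            continue
        name, value = fields
        out.append({"name": name.strip().lower(), "value": int(value)})
    return out

def load(records: list[dict], sink: dict) -> None:
    # Load: upsert into a keyed store (stand-in for a warehouse table).
    for rec in records:
        sink[rec["name"]] = rec["value"]

sink: dict = {}
raw = ["Alpha, 1", "Beta, 2", "bad_row"]
load(transform(extract(raw)), sink)
```

In an ELT variant the transform step would run inside the warehouse (e.g., as SQL in Databricks or Snowflake) after loading the raw rows; the staging and orchestration concerns are the same.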

Posted 1 week ago

Apply