Jobs
Interviews

9537 TensorFlow Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: We’re looking for a Senior Engineering Manager to lead our Data/AI Platform and MLOps teams at slice. In this role, you’ll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by functions such as legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do: Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products – enabling product experience through data). Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution. Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions. Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency. Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores. Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems. Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization. Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications. Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture. Establish and enforce best practices around data governance, access controls, and data quality. Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines. Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails. Coach engineers and team leads through regular 1:1s, feedback, and performance conversations.

What You Will Need: 10+ years of engineering experience, including 2+ years managing data or infra teams with proven hands-on technical leadership. Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities. Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems. Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform. Hands-on experience building AI/ML platforms, including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development. Proven experience improving performance, cost, and observability in large-scale data systems. Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code. Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis). Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting 
platforms, and secure AI application development patterns Comfort working in fast-paced, product-led environments with ability to balance innovation and regulatory constraints Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations Life at slice Life so good, you’d think we’re kidding: Competitive salaries. Period. An extensive medical insurance that looks out for our employees & their dependents. We’ll love you and take care of you, our promise. Flexible working hours. Just don’t call us at 3AM, we like our sleep schedule. Tailored vacation & leave policies so that you enjoy every important moment in your life. A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here. Learning and upskilling opportunities. Seriously, not kidding. Good food, games, and a cool office to make you feel like home. An environment so good, you’ll forget the term “colleagues can’t be your friends”.
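For illustration only: a minimal Airflow DAG sketch of the kind of batch pipeline work this platform role touches on. It assumes a recent Airflow 2.x installation; the DAG id, schedule, and task logic are placeholders, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw events from an upstream store (placeholder logic).
    print("extracting raw events")


def transform():
    # Clean and aggregate the extracted events (placeholder logic).
    print("transforming events into reporting tables")


with DAG(
    dag_id="daily_events_batch",      # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task    # run transform only after extract succeeds
```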

Posted 1 day ago

Apply

2.0 years

0 - 0 Lacs

Kalighat, Kolkata, West Bengal

On-site

We are seeking a highly analytical and technically skilled Data Analyst with hands-on experience in Machine Learning to join our team. The ideal candidate will be responsible for analyzing large datasets, generating actionable insights, and building ML models to drive business solutions and innovation.

Key Responsibilities: Collect, clean, and analyze structured and unstructured data from multiple sources. Develop dashboards, visualizations, and reports to communicate trends and insights to stakeholders. Identify business challenges and apply machine learning algorithms to solve them. Build, evaluate, and deploy predictive and classification models using tools like Python, R, Scikit-learn, TensorFlow, etc. Collaborate with cross-functional teams including product, marketing, and engineering to implement data-driven strategies. Optimize models for performance, accuracy, and scalability. Automate data processing and reporting workflows using scripting and cloud-based tools. Stay updated with the latest industry trends in data analytics and machine learning.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or related field. 2+ years of experience in data analytics and machine learning. Strong proficiency in SQL, Python (Pandas, NumPy, Scikit-learn), and data visualization tools like Tableau, Power BI, or Matplotlib/Seaborn. Experience with machine learning techniques such as regression, classification, clustering, NLP, and recommendation systems. Solid understanding of statistics, probability, and data mining concepts. Familiarity with cloud platforms like AWS, GCP, or Azure is a plus. Excellent problem-solving and communication skills.

Job Types: Full-time, Permanent Pay: ₹10,000.00 - ₹15,000.00 per month Ability to commute/relocate: Kalighat, Kolkata, West Bengal: Reliably commute or planning to relocate before starting work (Preferred) Language: English (Preferred) Work Location: In person
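For illustration only: a minimal scikit-learn sketch of the build-evaluate classification workflow this posting describes. The CSV file name, the "churned" label column, and the assumption of purely numeric features are placeholders for the example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical tabular dataset with numeric features and a binary "churned" label.
df = pd.read_csv("customers.csv")
X, y = df.drop(columns=["churned"]), df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Scale features, then fit a simple baseline classifier.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

# Precision/recall/F1 per class on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```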

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: We are looking for a Lead Generative AI Engineer with 3–5 years of experience to spearhead the development of cutting-edge AI systems involving Large Language Models (LLMs), Vision-Language Models (VLMs), and Computer Vision (CV). You will lead model development, fine-tuning, and optimization for text, image, and multi-modal use cases. This is a hands-on leadership role that requires a deep understanding of transformer architectures, generative model fine-tuning, prompt engineering, and deployment in production environments.

Roles and Responsibilities: Lead the design, development, and fine-tuning of LLMs for tasks such as text generation, summarization, classification, Q&A, and dialogue systems. Develop and apply Vision-Language Models (VLMs) for tasks like image captioning, VQA, multi-modal retrieval, and grounding. Work on Computer Vision tasks including image generation, detection, segmentation, and manipulation using SOTA deep learning techniques. Leverage frameworks like Transformers, Diffusion Models, and CLIP to build and fine-tune multi-modal models. Fine-tune open-source LLMs and VLMs (e.g., LLaMA, Mistral, Gemma, Qwen, MiniGPT, Kosmos, etc.) using task-specific or domain-specific datasets. Design data pipelines, model training loops, and evaluation metrics for generative and multi-modal AI tasks. Optimize model performance for inference using techniques like quantization, LoRA, and efficient transformer variants. Collaborate cross-functionally with product, backend, and MLOps teams to ship models into production. Stay current with the latest research and incorporate emerging techniques into product pipelines.

Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 3–5 years of hands-on experience in building, training, and deploying deep learning models, especially in the LLM, VLM, and/or CV domains. Strong proficiency with Python, PyTorch (or TensorFlow), and libraries like Hugging Face Transformers, OpenCV, Datasets, LangChain, etc. Deep understanding of transformer architecture, self-attention mechanisms, tokenization, embeddings, and diffusion models. Experience with LoRA, PEFT, RLHF, prompt tuning, and transfer learning techniques. Experience with multi-modal datasets and fine-tuning vision-language models (e.g., BLIP, Flamingo, MiniGPT, Kosmos, etc.). Familiarity with MLOps tools, containerization (Docker), and model deployment workflows (e.g., Triton Inference Server, TorchServe). Strong problem-solving, architectural thinking, and team mentorship skills.
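For illustration only: a minimal sketch of attaching LoRA adapters to an open-source causal LM with Hugging Face PEFT, one of the fine-tuning techniques listed above. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not values from the posting.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"  # illustrative open-source checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections only, so just a
# small fraction of the weights is trained during fine-tuning.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# The wrapped model can now be passed to transformers.Trainer (or a custom
# training loop) together with a task- or domain-specific dataset.
```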

Posted 1 day ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Job Description: Nogiz is hiring a passionate and skilled Python Developer (AI/ML) with 3+ years of experience to join our on-site team. If you're looking to work on impactful machine learning projects, collaborate with a motivated team, and grow in a technology-first environment, we'd love to hear from you.

Responsibilities & Skills: Develop and deploy AI/ML models using Python and modern frameworks. Handle data preprocessing, feature engineering, and algorithm tuning. Work closely with cross-functional teams to integrate models into live systems. Optimize model performance and scalability. Write clean, maintainable code and clear documentation. Strong understanding of Python, OOP concepts, and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, NumPy). Experience in model evaluation and statistical analysis. Good communication skills and team collaboration. Exposure to Agile methodologies is a plus.

Job Types: Full-time, Permanent Pay: ₹50,000.00 - ₹75,000.00 per month Schedule: Day shift, Morning shift Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python development: 3 years (Preferred); AI/ML: 3 years (Preferred) Location: Mohali, Punjab (Preferred) Work Location: In person

Posted 1 day ago

Apply

1.5 years

0 Lacs

India

Remote

Urgent Opening: Web Scraping – Data Crawling, AI/ML Location: Permanent Work From Home Job Type: Full-Time | Permanent Experience: 1.5+ Years (Preferred) About the Role We are looking for a skilled and experienced Python Developer with strong expertise in data crawling, web scraping, AI/ML, and CAPTCHA solving techniques. The ideal candidate is passionate about automation, data pipelines, and problem-solving with a deep understanding of the web ecosystem. This is a permanent remote opportunity, ideal for professionals looking to work in a flexible and innovative environment while delivering high-quality solutions in data acquisition and intelligent automation. Key Responsibilities Design and implement scalable data crawling/scraping solutions using Python. Develop tools to bypass or solve CAPTCHAs (e.g., reCAPTCHA, hCaptcha) using AI/ML or third-party APIs. Write efficient and robust data extraction and parsing logic for large-scale web data. Build and maintain AI/ML models for tasks such as image recognition, pattern detection, and anomaly detection. Optimize crawling infrastructure for speed, reliability, and anti-blocking strategies (rotating proxies, headless browsers, etc.). Integrate with APIs and databases to store, manage, and process scraped data. Monitor and troubleshoot scraping systems and adapt to changes in target websites. Collaborate with the team to define requirements, plan deliverables, and implement best practices. Required Skills & Qualifications 1.5+ years of hands-on experience with Python in web scraping/data crawling. Strong experience with Scrapy and Selenium. Deep understanding of CAPTCHA types and proven experience in solving or bypassing them. Proficient in AI/ML frameworks: TensorFlow, PyTorch, scikit-learn, or OpenCV. Experience with OCR tools (Tesseract, EasyOCR) and image pre-processing techniques. Familiarity with anti-bot techniques, headless browsers, and proxy rotation. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and website structure. Strong problem-solving skills and attention to detail. Perks & Benefits Permanent Work from Home Flexible work hours Competitive salary based on experience Opportunities for skill development and upskilling Performance-based incentives How to Apply Interested candidates can email their updated resume and portfolio (if any) to jyoti@transformez.in with the subject line: Python Developer – Data Crawling & AI.
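For illustration only: a minimal Scrapy spider of the kind this crawling role involves. The URL, CSS selectors, and settings are placeholders; a real target site needs its own selectors and a polite, compliant crawling configuration.

```python
import scrapy


class ListingSpider(scrapy.Spider):
    """Hypothetical spider that walks a paginated catalogue and yields items."""

    name = "listings"
    start_urls = ["https://example.com/catalogue/page-1.html"]  # placeholder URL
    custom_settings = {
        "DOWNLOAD_DELAY": 1.0,        # be polite and reduce the risk of blocking
        "AUTOTHROTTLE_ENABLED": True,
    }

    def parse(self, response):
        # Selectors below are illustrative; adapt them to the real page structure.
        for card in response.css("article.product"):
            yield {
                "title": card.css("h3 a::attr(title)").get(),
                "price": card.css(".price::text").get(),
            }
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Saved as listings_spider.py, a sketch like this could be run with `scrapy runspider listings_spider.py -o items.json`; proxy rotation and headless-browser rendering would be layered on top for harder targets.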

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship

About WebBoost Solutions by UM: WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.

Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions.

Requirements 🎓 Enrolled in or a graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities.

Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance.

How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 02nd August 2025

Equal Opportunity: WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Machine Learning Intern Company: Onetrueweb Software Solution Pvt Ltd. Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with Certificate of Internship

About Onetrueweb Software Solution Pvt Ltd.: Onetrueweb Software Solution Pvt Ltd. provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science.

Role Overview: As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development.

Responsibilities ✅ Design, test, and optimize machine learning models. ✅ Analyze and preprocess datasets. ✅ Develop algorithms and predictive models. ✅ Use tools like TensorFlow, PyTorch, and Scikit-learn. ✅ Document findings and create reports.

Requirements 🎓 Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field). 🧠 Knowledge of machine learning concepts and algorithms. 💻 Proficiency in Python or R (preferred). 🤝 Strong analytical and teamwork skills.

Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid). ✔ Hands-on machine learning experience. ✔ Internship Certificate & Letter of Recommendation. ✔ Real-world project contributions for your portfolio.

How to Apply 📩 Submit your application with "Machine Learning Intern Application" as the subject. 📅 Deadline: 23rd July 2025

Note: Onetrueweb Software Solution Pvt Ltd. is an equal opportunity employer, welcoming diverse applicants.

Posted 1 day ago

Apply

0.0 - 3.0 years

30 - 35 Lacs

Hyderabad, Telangana

On-site

Job Title: Data Scientist / Machine Learning Specialist Location: Hyderabad (Hybrid Model) Experience: 3 to 5 Years Compensation: Up to ₹30 LPA Joining: Immediate or Short Notice Preferred

About the Role: We are looking for a highly skilled and motivated Machine Learning Specialist / Data Scientist with a strong foundation in data science and a deep understanding of clinical supply chain or supply chain operations. This individual will play a critical role in developing predictive models, optimizing logistics, and enabling data-driven decision-making within our clinical trial supply chain ecosystem.

Key Responsibilities:
* Design, develop, and deploy machine learning models for demand forecasting, inventory optimization, and supply chain efficiency
* Analyze clinical trial and logistics data to uncover insights and enable proactive planning
* Collaborate with cross-functional teams including clinical operations, IT, and supply chain to integrate ML solutions into workflows
* Build interactive dashboards and tools for real-time analytics and scenario modeling
* Ensure models are scalable, maintainable, and compliant with regulatory frameworks (e.g., GxP, 21 CFR Part 11)
* Stay up to date with the latest advancements in ML/AI and bring innovative solutions to complex clinical supply challenges

Required Qualifications:
* Master’s or Ph.D. in Computer Science, Data Science, Engineering, or a related field
* 3–5 years of hands-on experience in machine learning, data science, or AI (preferably in healthcare or life sciences)
* Proven experience with clinical or supply chain operations such as demand forecasting, IRT systems, and logistics planning
* Proficiency in Python, R, SQL, and ML frameworks like scikit-learn, TensorFlow, or PyTorch
* Solid knowledge of statistical modeling, time series forecasting, and optimization techniques
* Strong analytical mindset and excellent communication skills
* Ability to thrive in a fast-paced, cross-functional environment

Preferred Qualifications:
* Experience working with clinical trial systems and data (e.g., EDC, CTMS, IRT)
* Understanding of regulatory requirements in clinical research
* Familiarity with cloud platforms such as AWS, Azure, or GCP
* Exposure to MLOps practices for model deployment and monitoring

Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,500,000.00 per year Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Data science: 3 years (Required) Machine learning: 3 years (Preferred) Python: 3 years (Required) PyTorch: 3 years (Required) Work Location: In person
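For illustration only: a minimal demand-forecasting sketch using a Holt-Winters model from statsmodels, in the spirit of the forecasting work described above. The synthetic monthly series stands in for real clinical supply data, and the model settings are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand series standing in for clinical kit consumption:
# a gentle trend plus yearly seasonality plus noise.
idx = pd.date_range("2021-01-01", periods=36, freq="MS")
rng = np.random.default_rng(0)
demand = pd.Series(
    100 + 2 * np.arange(36)
    + 10 * np.sin(np.arange(36) * 2 * np.pi / 12)
    + rng.normal(0, 5, 36),
    index=idx,
)

# Holt-Winters exponential smoothing with additive trend and yearly seasonality.
fit = ExponentialSmoothing(demand, trend="add", seasonal="add", seasonal_periods=12).fit()
forecast = fit.forecast(6)   # expected demand for the next six months
print(forecast.round(1))
```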

Posted 1 day ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role: AI/ML Engineer Location: Remote Experience: 8 to 12 years Notice: Immediate only. Interested candidates can share their resume with sunilkumar@xpetize.com

Job description: Seeking a highly experienced and technically adept AI/ML Engineer to spearhead a strategic initiative focused on analyzing annual changes in IRS-published TRCs and identifying their downstream impact on codebases. The role demands deep expertise in machine learning, knowledge graph construction, and software engineering processes. The ideal candidate will have a proven track record of delivering production-grade AI solutions in complex enterprise environments.

Key Responsibilities: Design and develop an AI/ML-based system to detect and analyze differences in IRS TRC publications year-over-year. Implement knowledge graphs to model relationships between TRC changes and impacted code modules. Collaborate with tax domain experts, software engineers, and DevOps teams to ensure seamless integration of the solution into existing workflows. Define and enforce engineering best practices, including CI/CD, version control, testing, and model governance. Drive the end-to-end lifecycle of the solution, from data ingestion and model training to deployment and monitoring. Ensure scalability, performance, and reliability of the deployed system in a production environment. Mentor junior engineers and contribute to a culture of technical excellence and innovation.

Required Skills & Experience: 8+ years of experience in software engineering, with at least 5 years in AI/ML solution delivery. Strong understanding of tax-related data structures, especially IRS TRCs, is a plus. Expertise in building and deploying machine learning models using Python, TensorFlow/PyTorch, and MLOps frameworks. Hands-on experience with knowledge graph technologies (e.g., Neo4j, RDF, SPARQL, GraphQL). Deep familiarity with software architecture, microservices, and API design. Experience with NLP techniques for document comparison and semantic analysis. Proven ability to lead cross-functional teams and deliver complex projects on time. Strong communication and stakeholder management skills.
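For illustration only: a minimal sketch of recording TRC-to-code relationships in Neo4j with the official Python driver (v5-style API), the kind of knowledge-graph modelling mentioned above. The connection details, node labels, relationship type, and the example TRC code are assumptions, not the project's actual schema.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))


def link_change_to_module(tx, trc_code, year, module_path):
    # Hypothetical schema: (TRC)-[:IMPACTS]->(Module), created idempotently.
    tx.run(
        """
        MERGE (t:TRC {code: $code, year: $year})
        MERGE (m:Module {path: $path})
        MERGE (t)-[:IMPACTS]->(m)
        """,
        code=trc_code, year=year, path=module_path,
    )


with driver.session() as session:
    session.execute_write(link_change_to_module, "TRC-123", 2025, "tax/engine/credits.py")

    # Query the modules impacted by a given year's TRC change.
    impacted = session.run(
        "MATCH (:TRC {code: $code, year: $year})-[:IMPACTS]->(m) RETURN m.path AS path",
        code="TRC-123", year=2025,
    )
    print([record["path"] for record in impacted])

driver.close()
```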

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team as a Senior Data Scientist, reporting directly to the Lead Data Scientist in India. You will play a crucial role in building, optimizing, and maintaining AI-ready data infrastructure for advanced Generative AI applications. Your focus will be on hands-on implementation of cutting-edge data extraction, curation, and metadata enhancement techniques for both text and numerical data. You will be a key contributor to the development of innovative solutions, ensuring rapid iteration and deployment, and supporting the Lead in achieving the team's strategic goals. What Will You Be Doing AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications. GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance. LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Models (LLMs) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Mentorship: Act as a technical mentor and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. Skills And Experience Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field. Programming and Frameworks: 2+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 1+ years Practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. 
Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases. In addition, familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus. Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization. Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation. Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams. What’s In It For You Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Competitive salary Hybrid Working Policy (3 days in Mumbai office/ 2 days WFH once fully inducted) Group healthcare scheme 18 days annual leave 8 days of casual leave Extensive internal and external training Hours This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus’ key business processes every day, as such you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Typically support hours run from 11am to 8pm with each member of the team participating up to 2/3 times a week. Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world’s principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
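For illustration only: a minimal retrieval sketch of the RAG building blocks mentioned above, using sentence-transformers embeddings and a FAISS index. The toy passages, embedding model name, and query are placeholders, not Argus data or tooling choices.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for curated market-intelligence passages.
docs = [
    "Crude prices firmed on lower inventories.",
    "European gas storage reached 85% of capacity.",
    "Freight rates for clean tankers eased this week.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # illustrative embedding model
emb = model.encode(docs, normalize_embeddings=True)    # unit vectors, so inner product = cosine

index = faiss.IndexFlatIP(emb.shape[1])                # exact inner-product index
index.add(emb)

query = model.encode(["What is happening to gas storage?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)                   # top-2 most similar passages
retrieved = [docs[i] for i in ids[0]]

# In a full RAG pipeline, `retrieved` would be injected into the LLM prompt as grounding context.
print(retrieved)
```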

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team to lead a new group in India, focused on creating and maintaining AI-ready data. As the point of contact in Mumbai, you will guide the local team and ensure seamless collaboration with our global counterparts. Your contributions will directly impact the development of innovative solutions used by industry leaders worldwide, supporting text and numerical data extraction, curation, and metadata enhancements to accelerate development and ensure rapid response times. You will play a pivotal role in transforming how our data are seamlessly integrated with AI systems, paving the way for the next generation of customer interactions. What Will You Be Doing Lead and Develop the Team: Oversee a team of data scientists in Mumbai. Mentoring and guiding junior team members, fostering their professional growth and development. Strategic Planning: Develop and implement strategic plans for data science projects, ensuring alignment with the company's goals and objectives. AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced Generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications. GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance. LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Models (LLMs) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Leadership: Act as a technical leader and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. 
Skills And Experience Leadership Experience: Proven track record in leading and mentoring data science teams, with a focus on strategic planning and operational excellence. Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field. Programming and Frameworks: 5+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 2+ years of Practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases. In addition, familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus. Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization. Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation. Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams. What’s In It For You Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Competitive salary Hybrid Working Policy (3 days in Mumbai office/ 2 days WFH once fully inducted) Group healthcare scheme 18 days annual leave 8 days of casual leave Extensive internal and external training Hours This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus’ key business processes every day, as such you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Typically support hours run from 11am to 8pm with each member of the team participating up to 2/3 times a week. Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world’s principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.

Posted 1 day ago

Apply

1.0 - 3.0 years

0 Lacs

India

On-site

We’re looking for a hands-on, product-minded full-stack developer with a strong interest in AI and automation. This role is ideal for someone who loves to build, experiment, and bring ideas to life — fast. You'll work closely with the founding team to prototype AI-powered tools and products from scratch. This is a highly AI-focused role where you will build tools powered by LLMs, workflow automation, and real-time data intelligence — not just build web apps, but create AI-first products.

Location - Kochi, Bangalore | Years of experience - 1-3 Years

Hire22.ai connects top talent with executive roles anonymously and confidentially, transforming hiring through an AI-first, instant CoNCT model. Companies get interview-ready candidates in just 22 hours. No telecalling, no spam, no manual filtering.

Responsibilities: Build and experiment with AI-first features powered by LLMs, embeddings, vector databases, and prompt-based workflows. Fine-tune or adapt AI/ML models for specific use cases such as job matching, summarization, scoring, and classification. Integrate and orchestrate AI capabilities using tools like Vertex AI, LangChain, Cursor, n8n, Flowise, etc. Work with vector databases and implement retrieval-augmented generation (RAG) patterns to build intelligent, context-aware AI applications. Design, build, and maintain full-stack web applications using Next.js and Python as supporting layers around core AI functionality. Rapidly prototype ideas, test hypotheses, and iterate fast based on feedback. Collaborate with product, design, and founders to transform internal ideas into deployable, AI-powered tools. Build internal AI agents, assistants, or copilots. Build tools for automated decision-making, resume/job matching, or workflow automation.

Skills: Full-Stack Proficiency: Strong command of JavaScript/TypeScript with experience in modern frameworks like React or Next.js. Back-end experience with Python (FastAPI) or Go. Database Fluent: Comfortable working with both SQL (MySQL) and NoSQL databases (MongoDB, Redis), with good data modeling instincts. AI/ML First Mindset: Hands-on with integrating and optimizing AI models using frameworks like OpenAI, Hugging Face, LangChain, or TensorFlow. You understand LLM architecture, prompt engineering, embeddings, and AI orchestration tools. You've ideally built or experimented with AI-driven applications beyond just using APIs. Builder Mentality: Passionate about product thinking and going from zero to one. You take ownership, work independently, and execute quickly without waiting for perfect clarity. Problem Solver: You break down complex problems, learn fast, and deliver clean, efficient solutions. You value both speed and quality. Communicator & Collaborator: You express your ideas clearly, ask good questions, and keep teams in sync by sharing progress and blockers openly.
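For illustration only: a sketch of a prompt-based job-candidate match scorer using the OpenAI Python SDK (v1-style client), one simple version of the prompt-based workflows this role describes. The model name, prompt wording, and JSON output format are assumptions for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def match_score(job_description: str, candidate_summary: str) -> str:
    """Ask an LLM for a structured job-candidate match score."""
    prompt = (
        "Rate how well the candidate fits the job on a 0-100 scale and give one sentence of reasoning.\n"
        f"Job: {job_description}\n"
        f"Candidate: {candidate_summary}\n"
        'Answer as JSON: {"score": <int>, "reason": "<text>"}'
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # keep scoring as deterministic as possible
    )
    return resp.choices[0].message.content


print(match_score(
    "Senior backend engineer, Go and Postgres",
    "8 years of Python/Go experience, led a payments team",
))
```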

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Senior Applied Scientist Bangalore, Karnataka, India Date posted: Aug 01, 2025 Job number: 1854651 Work site: Up to 50% work from home Travel: 0-25% Role type: Individual Contributor Profession: Research, Applied, & Data Sciences Discipline: Applied Sciences Employment type: Full-Time

Overview: Do you want to be part of a team which delivers innovative products and machine learning solutions across Microsoft to hundreds of millions of users every month? The Microsoft Turing team is an innovative engineering and applied research team working on state-of-the-art deep learning models, large language models and pioneering conversational search experiences. The team spearheads the platform and innovation for conversational search and the core copilot experiences across Microsoft’s ecosystem including BizChat, Office and Windows. As a Senior Applied Scientist in the Turing team, you will be involved in tight timeline-based, hands-on data science activity and work, including training models, creating evaluation sets, building infrastructure for training and evaluation, and more. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications Required Qualifications: Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience. 3+ years of industrial experience coding in C++, C#, C, Java or Python. Prior experience with data analysis or understanding, looking at data from large-scale systems to identify patterns or create evaluation datasets. Familiarity with common machine learning and deep learning frameworks and concepts, including the use of LLMs and prompting. Experience in PyTorch or TensorFlow is a bonus. Ability to communicate technical details clearly across organizational boundaries.

Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications: Solid ability and effectiveness working end-to-end in a challenging technical problem domain (plan, design, execution, continuous release, and service operation). Some prior experience in applying deep learning techniques and driving end-to-end AI product development (Search, Recommendation, NLP, Document Understanding, etc.). Prior experience with Azure or any other cloud pipelines or execution graphs. 
Self-driven, results oriented, high integrity, ability to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes. Customer/End-result/Metrics driven in design and development. Keen ability and motivation to learn, enter new domains, and manage through ambiguity. Solid publication track record at top conferences like ACL, EMNLP, SIGKDD, AAAI, WSDM, COLING, WWW, NIPS, ICASSP, etc. #M365Core

Responsibilities: As an Applied Scientist on our team, you'll be responsible for and will engage in: Driving projects from design through implementation, experimentation and finally shipping to our users. This requires deep dives into data to identify gaps, coming up with heuristics and possible solutions, using LLMs to create the right model or evaluation prompts, and setting up the engineering pipeline or infrastructure to run them. Coming up with evaluation techniques, datasets, criteria and metrics for model evaluations; these are often SOTA models or metrics/datasets. Hands-on ownership of fine-tuning and use of language models, including dataset creation, filtering, review, and continuous iteration. This requires working in a diverse, geographically distributed team environment where collaboration and innovation are valued. You will have an opportunity for direct impact on design, functionality, security, performance, scalability, manageability, and supportability of Microsoft products that use our deep learning technology.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
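For illustration only: a small, framework-free sketch of scoring model outputs against a hand-built evaluation set with exact match and token-level F1, the kind of lightweight evaluation work this role involves. The example prediction/reference pairs are made up.

```python
from collections import Counter


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common lightweight metric for generated answers."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


# Hypothetical evaluation set: (model output, gold answer) pairs.
eval_set = [
    ("the meeting is at 3 pm on friday", "the meeting is on friday at 3 pm"),
    ("paris", "the capital of france is paris"),
]

exact = sum(p.strip().lower() == r.strip().lower() for p, r in eval_set) / len(eval_set)
f1 = sum(token_f1(p, r) for p, r in eval_set) / len(eval_set)
print(f"exact match: {exact:.2f}  mean token F1: {f1:.2f}")
```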

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

The primary responsibility of the Data Science & Analysis role in India is to design, train, and fine-tune advanced foundational models (text, audio, vision) using healthcare and other relevant datasets, with a key focus on accuracy and context relevance. Collaboration with cross-functional teams (Business, engineering, IT) is essential to seamlessly integrate AI/ML technologies into solution offerings. Deployment, monitoring, and management of AI models in a production environment are crucial to ensure high availability, scalability, and performance. Continuous research and evaluation of the latest advancements in AI/ML and industry trends are required to drive innovation. Comprehensive documentation for AI models, covering development, training, fine-tuning, and deployment procedures, needs to be developed and maintained. Providing technical guidance and mentorship to junior AI engineers and team members is also a part of the role. Collaboration with stakeholders to understand business needs and translate them into technical requirements for model fine-tuning and development is critical. Selecting and curating appropriate datasets for fine-tuning foundational models to address specific use cases is an essential aspect of the job. Ensuring that AI solutions can seamlessly integrate with existing systems and applications is also part of the responsibilities. For this role, a Bachelor's or Master's degree in computer science, Artificial Intelligence, Machine Learning, or a related field is required. The ideal candidate should have 4 to 6 years of hands-on experience in AI/ML, with a proven track record of training and deploying LLMs and other machine learning models. Strong proficiency in Python and familiarity with popular AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, etc., is necessary. Practical experience in deploying and managing AI models in production environments, including expertise in serving and inference frameworks like Triton, TensorRT, VLLM, TGI, etc., is expected. Experience in Voice AI applications, a solid understanding of healthcare data standards (FHIR, HL7, EDI), and regulatory compliance (HIPAA, SOC2) is preferred. Excellent problem-solving and analytical abilities are required to tackle complex challenges and evaluate multiple factors. Exceptional communication and collaboration skills are necessary for effective teamwork in a dynamic environment. The ideal candidate should have worked on a minimum of 2 AI/LLM projects from start to finish, demonstrating proven business value. Experience with cloud computing platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes) is a plus. Familiarity with MLOps practices for continuous integration, continuous deployment (CI/CD), and automated monitoring of AI models would also be advantageous. Guidehouse offers a comprehensive total rewards package, including competitive compensation and a flexible benefits package that reflects the commitment to creating a diverse and supportive workplace. Guidehouse is an Equal Opportunity Employer that considers qualified applicants with criminal histories in accordance with applicable laws and regulations, including the Fair Chance Ordinance of Los Angeles and San Francisco. If accommodation is required to apply for a position or for information about employment opportunities, applicants can contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. 
All information provided will be kept confidential and used only as needed to provide necessary reasonable accommodation. All recruitment communication from Guidehouse will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Any correspondence received from other domains should be considered unauthorized and will not be honored by Guidehouse. Guidehouse does not charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in recruitment events. Banking information should never be provided to a third party claiming to need it for the hiring process. If any demand for money related to a job opportunity with Guidehouse arises, it should be reported to Guidehouse's Ethics Hotline. For verification of received correspondence, applicants can contact recruiting@guidehouse.com. Guidehouse is not liable for any losses incurred from an applicant's dealings with unauthorized third parties.,

Posted 1 day ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As a Python Developer specializing in Generative AI, you will play a key role in designing, developing, and deploying intelligent AI-powered systems during the night shift in Bangalore. Your primary responsibility will involve building and maintaining Python-based APIs and backends integrated with cutting-edge Generative AI models. You will collaborate with global teams to implement prompt engineering, fine-tuning, and model deployment pipelines using tools such as GPT, Claude, LLaMA, DALL·E, and Stable Diffusion. Your expertise in PyTorch, TensorFlow, Hugging Face, LangChain, or the OpenAI API will be crucial in optimizing model performance for latency, accuracy, and scalability. Additionally, you will deploy models using FastAPI, Flask, Docker, or cloud platforms while ensuring thorough testing, monitoring, and documentation of AI integrations. To excel in this role, you should possess at least 4 years of Python development experience along with 1 year of hands-on experience with Generative AI tools and models. Familiarity with vector databases such as FAISS, Pinecone, and Weaviate is also desirable. Exposure to GPU-based training or inference, MLOps tools like MLflow, Airflow, or Kubeflow, and a strong understanding of AI ethics, model safety, and bias mitigation are considered advantageous. This full-time, permanent position offers health insurance and Provident Fund benefits and requires working in person during the night shift. If you are passionate about leveraging AI to address real-world challenges and thrive in a fast-paced environment, we encourage you to apply and contribute to innovative GenAI and ML projects.
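For illustration only: a minimal FastAPI endpoint wrapping a Hugging Face text-generation pipeline, similar in shape to the deployment work described above. The model choice, route name, and request schema are placeholders.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="distilgpt2")  # small placeholder model


class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64


@app.post("/generate")
def generate(req: Prompt):
    # Generate a continuation of the prompt and return it as JSON.
    out = generator(req.text, max_new_tokens=req.max_new_tokens, do_sample=True)
    return {"completion": out[0]["generated_text"]}

# Run locally with, e.g.:  uvicorn app:app --port 8000
```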

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be stepping into the role of Lead AI Data Scientist with our client's team based in Ahmedabad. Your primary responsibility will involve spearheading the development of advanced machine learning and deep learning models to tackle intricate business challenges. By managing project timelines, scope, and deliverables, you will ensure timely completion while upholding quality standards. Your role will also entail designing and executing machine learning models and AI applications, custom-tailored for both internal and customer-centric solutions. Your expertise will be put to the test as you engage in XML regression analysis and the construction of robust C++ pipelines for data processing. Constant optimization of models for enhanced performance, accuracy, and scalability will be key, all the while considering real-world constraints such as latency and interpretability. In addition, you will play a crucial role in developing tools aimed at boosting personal and organizational productivity, in collaboration with cross-functional teams to identify automation prospects. You will oversee the data pipeline and infrastructure, ensuring data quality, consistency, and accessibility for seamless model development. Your analytical skills will be put to good use as you delve into complex datasets to extract actionable insights. Lastly, your involvement in strategic brainstorming sessions will align AI initiatives with the company's overarching vision. To thrive in this role, you should possess a Bachelor's or Master's degree in data science, Computer Science, or a related field. Proficiency in XML regression analysis and C++ programming is a must, along with familiarity with machine learning frameworks like TensorFlow and PyTorch. Strong problem-solving abilities, a deep-rooted passion for AI, and effective communication skills are all essential traits for success in this position. As a part of our team, you will have the opportunity to work on cutting-edge AI technologies in a collaborative and innovative work environment. In addition to a competitive salary, we offer a comprehensive benefits package and ample opportunities for career growth within our fast-evolving company.,

Posted 1 day ago

Apply

2.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Scientist with our fast-growing team, you should possess a total of 7-8 years of experience, including 3-5 years focused on machine learning and deep learning. Your expertise should include working with Convolutional Neural Networks (CNNs), image analytics, TensorFlow, OpenCV, among others. Your primary responsibilities will revolve around designing and developing highly scalable machine learning solutions that have a significant impact on various aspects of our business. You will play a crucial role in creating neural network solutions, particularly Convolutional Neural Networks, and ML solutions based on our architecture supported by big data, cloud technology, micro-service architecture, and high-performing compute infrastructure. Your daily tasks will involve contributing to all stages of algorithm development, from ideation to design, prototyping, and production implementation. To excel in this role, you should have a solid foundation in software engineering and data science, along with a deep understanding of machine learning algorithms, statistical analysis tools, and distributed systems. Experience in developing machine learning applications, familiarity with various machine learning APIs, tools, and open source libraries, as well as proficiency in coding, data structures, predictive modeling, and big data concepts are essential. Additionally, expertise in designing full-stack ML solutions in a distributed compute environment is crucial. Proficiency in Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU is required. Strong communication skills to effectively collaborate with various levels of the organization are also necessary. If you are a Junior Data Scientist looking to join our team, you should have 2-4 years of experience and hands-on experience in Deep Learning, Computer Vision, Image Processing, and related skills. We are seeking self-motivated individuals who are eager to tackle challenges in the realm of AI predictive image analytics and machine learning.
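For illustration only: a minimal Keras convolutional network of the kind this image-analytics role references. The input shape, layer sizes, and class count are illustrative, not details from the posting.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal CNN for, e.g., 64x64 RGB images and 10 classes (shapes are illustrative).
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()

# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
# would train it on an actual labelled image dataset.
```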

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Scientist I at Dotdash Meredith, you will collaborate with the business team to understand problems, objectives, and desired outcomes. Your primary responsibility will be to work with cross-functional teams to assess data science use cases & solutions, lead and execute end-to-end data science projects, and collaborate with stakeholders to ensure alignment of data solutions with business goals. You will be expected to build custom data models with an initial focus on content classification, utilize advanced machine learning techniques to improve model accuracy and performance, and build the necessary visualizations for business teams to interpret data models. You will work closely with the engineering team to integrate models into production systems, monitor model performance in production, and make improvements as necessary. To excel in this role, you must possess a Master's degree (or equivalent experience) in Data Science, Mathematics, Statistics, or a related field with 3+ years of experience in ML/Data Science/Predictive Analytics. Strong programming skills in Python and experience with standard data science tools and libraries are essential. Experience with, or an understanding of, deploying machine learning models in production on at least one cloud platform is required, and hands-on experience with LLM APIs and the ability to craft effective prompts are preferred. It would be beneficial to have experience in the media domain, familiarity with vector databases like Milvus, and e-commerce or taxonomy classification experience. In this role, you will have the opportunity to learn about building ML models using industry-standard frameworks, solving data science problems for the media industry, and the use of Gen AI in media. This position is based in Eco World, Bengaluru, with shift timings from 1 p.m. to 10 p.m. IST. If you are a bright, engaged, creative, and fun individual with a passion for data science, we invite you to join our inspiring team at Dotdash Meredith India Services Pvt. Ltd.
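For illustration only: a zero-shot content-classification sketch with the Hugging Face pipeline API, one simple way to bootstrap the content classification mentioned above before any custom model exists. The article text and candidate labels are made-up examples.

```python
from transformers import pipeline

# Zero-shot classification: no task-specific training data is needed;
# candidate labels are supplied at call time.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

article = "Slow-roasted tomatoes make this weeknight pasta taste like it simmered all day."
labels = ["food & recipes", "personal finance", "home improvement", "travel"]  # illustrative taxonomy

result = classifier(article, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top predicted category and its score
```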

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please Read Carefully Before Applying: Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below. Do NOT apply if you are not in Delhi or the NCR, or are unwilling to relocate. This is NOT a work-from-home opportunity: we work 5 days from the office, so please do NOT apply if you are looking for a hybrid or remote role.

About Gigaforce: Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience. Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California. At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js. We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards, and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.

We're seeking NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.

Key Responsibilities:
- Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval (a minimal summarization sketch follows this listing).
- Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering.
- Work on scalable, cloud-based pipelines for training, serving, and monitoring models.
- Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
- Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers.
- Utilize and contribute to open-source tools and frameworks in the ML ecosystem.
- Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc.
- Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows.
- Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, Hugging Face).
- Champion best practices in model validation, reproducibility, and responsible AI.

Required Skills & Qualifications:
- 2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer.
- Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification).
- Proven expertise in modern NLP & GenAI models, including transformer architectures (e.g., BERT, GPT, T5); generative tasks such as summarization, QA, and chatbots; and fine-tuning & prompt engineering for LLMs.
- Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML).
- Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, and scikit-learn.
- Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred.
- Experience with document processing pipelines and structured/unstructured insurance documents is a big plus.
- Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, Kubeflow, and Airflow.
- Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).

Preferred Qualifications:
- Experience deploying GenAI models in production environments.
- Contributions to open-source projects in the ML/NLP/LLM space.
- Background in insurance, legal, or financial domains involving text-heavy workflows.
- Strong understanding of data privacy, ethical AI, and responsible model usage.

(ref:hirist.tech)
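The sketch referenced in the responsibilities above shows one of the listed document-understanding tasks, summarization, using a Hugging Face pipeline. The model choice (t5-small) and the sample claim text are illustrative assumptions, not part of the Gigaforce stack.

```python
# Document summarization sketch with a Hugging Face pipeline.
# Model choice and claim text are placeholders for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")

claim_text = (
    "The insured vehicle was struck from behind at a traffic signal. "
    "The other driver admitted fault at the scene and a police report "
    "was filed. Repair estimates total 4,200 USD and the insured "
    "incurred 300 USD in rental costs during the repair period."
)

summary = summarizer(claim_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```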

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. Specializing in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs), our mission is to help customers use their data to make intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

We are currently looking for a talented Engineer to join our AI team. In this role, you will lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, focusing on large language models and other machine learning applications. This position presents an excellent opportunity to apply your software engineering skills in a real-world environment and gain hands-on experience with cutting-edge AI technology.

Key Responsibilities:
- Design and Develop AI-Powered Solutions: Architect and implement scalable AI/ML systems, with a focus on Large Language Models (LLMs) and deep learning applications.
- End-to-End Model Development: Lead the entire lifecycle of AI models, from data collection and preprocessing to training, fine-tuning, evaluation, and deployment.
- Fine-Tuning & Customization: Use techniques like LoRA (Low-Rank Adaptation) and Q-LoRA to efficiently fine-tune large models for specific business applications (see the sketch after this listing).
- Reasoning Model Implementation: Work with advanced reasoning models such as DeepSeek-R1 to explore their applications in enterprise AI workflows.
- Data Engineering & Dataset Creation: Design and curate high-quality datasets optimized for fine-tuning AI models, ensuring robust training and validation processes.
- Performance Optimization & Efficiency: Optimize model inference, computational efficiency, and resource utilization for large-scale AI applications.
- MLOps & CI/CD Pipelines: Implement MLOps best practices to ensure automated training, deployment, monitoring, and continuous improvement of AI models.
- Cloud & Edge AI Deployment: Deploy and manage AI solutions in cloud environments (AWS, Azure, GCP) and explore edge AI deployment where applicable.
- API Development & Microservices: Develop RESTful APIs and microservices to seamlessly integrate AI models into enterprise applications.
- Security, Compliance & Ethical AI: Ensure AI solutions comply with industry standards, data privacy laws (e.g., GDPR, HIPAA), and ethical AI guidelines.
- Collaboration & Stakeholder Engagement: Collaborate closely with product managers, data engineers, and business teams to translate business needs into AI-driven solutions.
- Mentorship & Technical Leadership: Guide and mentor junior engineers to foster best practices in AI/ML development, model fine-tuning, and software engineering.
- Research & Innovation: Stay informed on emerging AI trends, experiment with cutting-edge architectures and techniques, and drive innovation within the team.
Basic Qualifications:
- Master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- 5-8 years of experience
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus

Preferred Qualifications:
- Excellent problem-solving skills and eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly

At Gruve, we promote a culture of innovation, collaboration, and continuous learning. We are dedicated to creating a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you are passionate about technology and eager to make an impact, we would love to hear from you. Gruve is an equal opportunity employer, welcoming applicants from all backgrounds. We appreciate all applicants, and only those selected for an interview will be contacted.
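The sketch referenced in the Fine-Tuning & Customization responsibility above shows a minimal LoRA setup with Hugging Face PEFT. The base model, target modules, and hyperparameters are illustrative assumptions rather than Gruve's actual configuration.

```python
# Minimal LoRA fine-tuning setup with Hugging Face PEFT.
# Base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "facebook/opt-350m"  # small model chosen only to keep the sketch light
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # low-rank dimension of the adapter matrices
    lora_alpha=16,      # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

From here, the wrapped model would be trained with a standard Trainer loop on the domain dataset; Q-LoRA follows the same pattern but loads the base model in 4-bit precision first.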

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

About Us: At NexusLink Services, we are passionate about building intelligent, scalable solutions that drive real-world impact. As we expand our AI/ML capabilities, we're looking for a Generative AI-focused Data Scientist to join our innovative and rapidly growing team.

Role Overview: We are seeking an experienced Data Scientist with deep expertise in Generative AI to design and implement cutting-edge models that solve real business problems. You will work with LLMs, GANs, RAG frameworks, and transformer-based architectures to build production-ready solutions across industries.

Key Responsibilities:
- Design, develop, and fine-tune Generative AI models (LLMs, GANs, Diffusion models, etc.).
- Work on RAG (Retrieval-Augmented Generation) and transformer-based architectures.
- Customize and fine-tune LLMs for domain-specific use cases.
- Build and manage robust ML pipelines for training, evaluation, and deployment.
- Collaborate with engineering teams to integrate models into real-world applications.
- Stay updated with the latest GenAI research, tools, and trends.
- Analyze model outputs, monitor performance, and ensure responsible AI use.

Required Skills:
- Strong Python skills with experience in ML/DL libraries: TensorFlow, PyTorch, Hugging Face Transformers.
- Deep understanding of LLMs, GANs, RAG, Autoencoders, etc.
- Experience with LoRA, PEFT, or similar fine-tuning techniques.
- Familiarity with vector databases (FAISS, Pinecone) and embeddings.
- Expertise in data preprocessing and synthetic data generation.
- Solid knowledge of NLP, prompt engineering, and language model safety.
- Hands-on experience with model deployment, APIs, and cloud platforms (AWS/GCP/Azure).

Nice To Have:
- Experience with chatbots, conversational AI, or AI assistants.
- Familiarity with LangChain, LLMOps, or serverless model deployment.
- Background in MLOps, Docker/Kubernetes, and CI/CD pipelines.
- Exposure to OpenAI, Anthropic, Google Gemini, or Meta LLaMA.

(ref:hirist.tech)
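To illustrate the RAG and vector-database skills listed above, here is a minimal retrieval sketch using sentence-transformers embeddings with a FAISS index; the documents, model name, and query are placeholders.

```python
# Retrieval half of a RAG pipeline: embed documents, index them in FAISS,
# and fetch the most relevant passages for a query. Data is placeholder.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices must be submitted within 30 days of delivery.",
    "Refunds are processed to the original payment method.",
    "Support is available 24/7 through the in-app chat.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product = cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query = encoder.encode(["How long do I have to send an invoice?"],
                       normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
print([docs[i] for i in ids[0]])  # retrieved context to place in the LLM prompt
```

The retrieved passages would then be inserted into the generation prompt so the LLM answers from grounded context rather than from memory alone.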

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a highly experienced FinTech Academic Expert to join our team full-time. As a domain expert with deep fintech knowledge, excellent communication skills, and a dedication to training, content development, and thought leadership, you will play a crucial role in connecting fintech product expertise with educational innovation. Your responsibilities include designing and producing top-notch educational materials, case studies, and technical guides within the fintech domain. You will also stay updated on global fintech trends and incorporate these insights into learning content and internal training sessions. You will conduct sessions for employees, partners, or students to enhance their comprehension of fintech tools and technologies, and provide domain-specific knowledge to support fintech product design, testing, and rollout with a focus on user understanding.

In terms of skills and tools, you must be proficient in Advanced Excel, Prompt Engineering, Agentic AI frameworks, and MCP (Model Context Protocol), and have coding experience in Java, Python, or R. Experience with tools such as Bloomberg Terminal for financial markets, Ethereum/blockchain development, TensorFlow, or other AI/ML toolkits, as well as familiarity with Agile and Kanban methodologies, would be advantageous. You should also have strong verbal and written communication skills, the ability to simplify complex concepts into easily understandable learning modules, and a passion for education, innovation, and industry transformation.

To be considered for this role, you should have a minimum of 5 years of industry experience in fintech, financial services, or banking. Prior experience in teaching, mentoring, or training is highly desirable, and product development or product management experience in a fintech environment is a significant advantage.

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Machine Learning Engineer, you will play a key role in developing and enhancing a Telecom Artificial Intelligence Product. This role requires a strong background in machine learning and deep learning, along with extensive experience in implementing advanced algorithms and models to solve complex problems. You will work on cutting-edge technologies to develop solutions for anomaly detection, forecasting, event correlation, and fraud detection.

Your responsibilities will include developing production-ready implementations of proposed solutions using various machine learning and deep learning algorithms, and testing these solutions on live customer data to ensure efficacy and robustness. Additionally, you will research and test novel machine learning approaches for large-scale distributed computing applications. You will manage the full machine learning operations lifecycle using tools such as Kubeflow, MLflow, AutoML, and KServe for model deployment. You will develop and deploy machine learning models using PyTorch and TensorFlow to ensure high performance and scalability, and run and manage PySpark and Kafka on distributed systems with large-scale, non-linear network elements.

To excel in this position, you should be proficient in Python programming and experienced with machine learning libraries such as scikit-learn and NumPy. Experience in time series analysis, data mining, text mining, and creating data architectures will be beneficial. You should also be able to use batch processing and incremental approaches to manage and analyze large datasets. You will experiment with multiple algorithms, optimizing hyperparameters to identify the best-performing models, and execute machine learning algorithms in cloud environments, leveraging cloud resources effectively. Continuous feedback gathering, model retraining, and updating will be essential to maintain and improve model performance.

Moreover, you should have expertise in network characteristics, transformer architectures, GAN techniques, and end-to-end machine learning projects. Experience with leading supervised and unsupervised machine learning methods, familiarity with Python packages like pandas and NumPy, and DL frameworks like Keras, TensorFlow, and PyTorch are required. Knowledge of Big Data tools and environments, as well as MySQL/NoSQL databases, will be advantageous. You will collaborate with cross-functional teams of data scientists, software engineers, and stakeholders to integrate implemented systems into the SaaS platform. Your innovative thinking and creative ideas will contribute to improving the overall platform, and you will create domain-specific use cases to solve business problems effectively.

Ideally, you should have a Bachelor's degree in Science/IT/Computing or equivalent with at least 4 years of experience in a QA Engineering role. Strong quantitative and applied mathematical skills are essential, along with certification courses in Data Science/ML. In-depth knowledge of statistical and machine learning techniques and experience with telecom product development are preferred. Experience in MLOps for deploying developed models is a plus, and familiarity with scalable SaaS platforms is advantageous for this role.
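As a small illustration of the anomaly-detection work described above, the sketch below flags spikes in a synthetic network KPI series with scikit-learn's IsolationForest; the data, contamination rate, and injected anomalies are assumptions for demonstration only.

```python
# Unsupervised anomaly detection on a synthetic network KPI series.
# Data generation and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
traffic = rng.normal(loc=100.0, scale=5.0, size=1000)        # normal KPI samples
traffic[::100] += rng.normal(loc=40.0, scale=5.0, size=10)   # injected spikes

X = traffic.reshape(-1, 1)
detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(X)   # -1 marks anomalies, 1 marks inliers

anomaly_idx = np.where(labels == -1)[0]
print(f"flagged {len(anomaly_idx)} anomalous samples at indices {anomaly_idx[:10]}")
```

In a production setting the same pattern would run on streaming KPIs (e.g., via PySpark or Kafka consumers), with the model tracked and retrained through the MLOps tooling mentioned above.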

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As the A.I. Lead at the Airbus Innovation Centre India & South Asia (AIC), you will play a pivotal role in leading a team of data scientists and data analysts to develop cutting-edge AI products for the aviation industry. With a minimum of 10 years of hands-on experience in AI, you will guide the team in creating innovative solutions, with a particular focus on Large Language Models (LLMs), Computer Vision, and open-source machine learning models. Your expertise in systems engineering, especially Model-Based Systems Engineering (MBSE), will be essential to integrating AI technologies into aviation systems while adhering to safety and regulatory standards.

Your responsibilities will include leading and managing the team to design and deploy AI-driven solutions for aviation, collaborating with cross-functional teams to define project goals, and providing technical leadership in selecting and optimizing machine learning models. You will drive the development of AI algorithms to address complex aviation challenges such as predictive maintenance and anomaly detection. Additionally, your role will involve mentoring team members, staying updated with the latest advancements in AI, and communicating project progress effectively to stakeholders.

To excel in this role, you should have a proven track record of successfully leading teams in AI product development, a deep understanding of systems engineering principles, and extensive experience in designing and optimizing AI models using Python and relevant libraries such as TensorFlow and PyTorch. Effective communication skills, problem-solving abilities, and a data-driven approach to decision-making are crucial for collaborating with cross-functional teams and conveying technical concepts to non-technical stakeholders. A master's or doctoral degree in a relevant field is preferred.

This permanent position at Airbus India Private Limited offers a professional level of experience and the opportunity to work in a collaborative and innovative environment. By applying for this role, you consent to Airbus using and storing information about you for monitoring purposes. Airbus is committed to equal opportunities and does not engage in any monetary exchanges during the recruitment process. Flexibility and innovation are encouraged at Airbus, and flexible working arrangements are supported to facilitate a conducive work environment.

Posted 1 day ago

Apply

3.0 - 6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Roles And Responsibilities
- Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs).
- Design and implement Agentic AI systems: autonomous agents capable of planning and executing multi-step tasks.
- Collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products.
- Write clean, scalable code and build robust APIs and services to support AI model deployment.
- Own feature delivery end-to-end, from research and experimentation to deployment and monitoring.
- Stay current with emerging AI frameworks, tools, and best practices and apply them in product development.
- Contribute to a high-performing team culture and mentor junior team members as needed.

Skill Set
- 3-6 years of overall software development experience, with 3+ years specifically in AI/ML engineering.
- Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face).
- Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio).
- Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel).
- Experience building and deploying ML models to production environments.
- Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts.
- Comfortable working in a startup-like environment: self-motivated, adaptable, and willing to take ownership.
- Solid understanding of API development, version control, and modern DevOps/MLOps practices.

(ref:hirist.tech)
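For context on the Agentic AI systems mentioned above, here is a framework-free sketch of the basic plan-act-observe loop such agents implement. The llm() stub and the single calculator tool are placeholders; frameworks like LangChain or Semantic Kernel provide production-grade versions of this pattern.

```python
# Minimal agent loop: the "LLM" decides on a tool call, the runtime executes it,
# and the observation is fed back until the task is finished. The llm() function
# is a stub standing in for a real model call.
import json

def llm(prompt: str) -> str:
    # Placeholder for a real model call. Here it calls the calculator once,
    # then finishes using the observed result.
    if "result" not in prompt:
        return json.dumps({"action": "calculator", "input": "17 * 24"})
    return json.dumps({"action": "finish", "input": "17 * 24 = 408"})

TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_agent(task: str, max_steps: int = 5) -> str:
    scratchpad = f"task: {task}\n"
    for _ in range(max_steps):
        decision = json.loads(llm(scratchpad))
        if decision["action"] == "finish":
            return decision["input"]
        observation = TOOLS[decision["action"]](decision["input"])
        scratchpad += f"result of {decision['action']}: {observation}\n"
    return "stopped: step limit reached"

print(run_agent("What is 17 * 24?"))
```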

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies