3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Your First Job? Tell Us We’re Doing It Wrong. Location: Onsite // Kerala. Experience: MERN Stack // Min. 3 Years. About Us: Helixo is a bootstrapped, profitable company on a mission to create next-gen Shopify apps for next-gen ecommerce brands. We’re building the Apple of Shopify - an ecosystem of 15 AI-first apps so seamless and loved that they become the operating system for ecommerce. Brands like Decathlon, Nykaa, and Bellavita have already generated over $120 million in extra revenue through our products. But that’s not enough. Our eyes are on the $1 billion impact mark in the next 3 years (₹1,00,00,00,000 - yes, those zeros are real!). And we’re rebuilding our flagship app to get there. Mission: MUGA - Make UFE Great Again. We’re on a mission to reclaim the UFE.ai app’s #1 spot on the Shopify App Store. For that, we need an A+ Product Lead - someone who thinks like a founder, owns like a CPO, and executes like a startup operator. We only live once - so why waste it doing ordinary work? Join the $1B impact rocketship. Who We're Looking For: We’re not hiring a Product Lead to follow plans - but someone to challenge them. Rebuild them. Someone who walks in and says: “This part? It doesn’t make sense. Let’s fix it.” “That doesn't scale. Here’s a better one.” “Claude is good, but let’s use Kimi K2 on Groq inference.” I've personally taken the CPO role to rebuild the foundation. But I can’t do it alone. I need someone to stand beside me - not under me. This is not a coordinator job. This is a leadership seat - in a company where decisions are fast, outcomes matter, and a 100-year brand is the endgame. What We Need in Our Product Lead: You’ll take full ownership of our product direction and delivery - from roadmap to execution. This role is equal parts strategy, execution, and communication. You’ll work closely with developers, support, and the CEO and CPO to make sure our roadmap is aligned, focused, and shipping fast. You’ll drive the product vision forward while building a high-performance product culture.
Responsibilities // What You’ll Own: Full product lifecycle - discovery, planning, delivery, QA, and launch; Create clear PRDs and RFCs with context, clarity, and structured logic; Prioritize what matters (and cut what doesn’t) - aligning the roadmap with business needs; Break down features into sprintable chunks for devs; Collaborate with support and marketing to turn feedback into features; Oversee production deployments and internal comms; Maintain feature quality and completeness through rigorous pre-launch review; Set up systems for repeatability - docs, checklists, SOPs; Conduct 1:1s and standups with the product team to drive clarity and accountability; Raise the bar on thinking, ownership, and communication in the team. Must-Have Skills: 3+ years in a Product Management, Founder, or Tech/Product hybrid role; Proven experience shipping features in a B2B SaaS or Shopify app environment; Strong grasp of UX, usability, and feature prioritization; Excellent writing and communication - especially for async, structured comms; Hands-on experience working closely with devs, QA, and support; Familiarity with tools like Linear, Notion, Loom, and ClickUp; High clarity of thought - you can simplify chaos into clear action; Deep ownership mindset - you follow through, not just follow up. Bonus Skills: Past experience as a founder or early employee in a product startup; Experience in the Shopify ecosystem or ecommerce domain; Experience reviewing code / managing technical delivery with developers; Understanding of Polaris and the Shopify app experience. What You Get: High freedom, zero micromanagement; Full product ownership and visibility into business impact; Access to a passionate, founder-led team building for the long term; A chance to play a core CXO-track role in a product-led rocketship. Do your best work now - earn Financial Freedom + Time Freedom later. ❌ What We’re NOT Looking For (Read This Twice): Someone who needs repeated follow-ups or lacks a proactive mindset; Strategy-talkers who avoid real execution; Vague communicators who say “maybe”, “later”, or “I’ll try”; Folks who avoid feedback or accountability; Deadline-missers without updates; Inconsistent executors with excuse loops. A Personal Note: My personal mission? You should be able to build your own kickass startup after Helixo. This isn’t a safe role. It’s a front-row seat to chaos, ownership, and clarity. If that excites you more than it scares you - let’s talk. How to apply? Send your resume and a 2-3 minute Loom video telling us: why you’re interested in this role and what caught your eye; one product you’ve shipped that you’re proud of; and how you would build a team you can go to war with - one that fits in your hand and never drops the ball. Email: join@helixo.co. Subject: I'm your Product Lead for $1B impact // [Your Name]
Posted 1 week ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Your First Job? Tell Us We’re Doing It Wrong. Location: Onsite // Kerala. Experience: MERN Stack // Min. 3 Years. About Us: Helixo is a bootstrapped, profitable company on a mission to create next-gen Shopify apps for next-gen ecommerce brands. We’re building the Apple of Shopify - an ecosystem of 15 AI-first apps so seamless and loved that they become the operating system for ecommerce. Brands like Decathlon, Nykaa, and Bellavita have already generated over $120 million in extra revenue through our products. But that’s not enough. Our eyes are on the $1 billion impact mark in the next 3 years (₹1,00,00,00,000 - yes, those zeros are real!). And we’re rebuilding our flagship app to get there. Mission: MUGA - Make UFE Great Again. We’re on a mission to reclaim the UFE.ai app’s #1 spot on the Shopify App Store. For that, we need an A+ Product Lead - someone who thinks like a founder, owns like a CPO, and executes like a startup operator. We only live once - so why waste it doing ordinary work? Join the $1B impact rocketship. Who We're Looking For: We’re not hiring a Product Lead to follow plans - but someone to challenge them. Rebuild them. Someone who walks in and says: “This part? It doesn’t make sense. Let’s fix it.” “That doesn't scale. Here’s a better one.” “Claude is good, but let’s use Kimi K2 on Groq inference.” I've personally taken the CPO role to rebuild the foundation. But I can’t do it alone. I need someone to stand beside me - not under me. This is not a coordinator job. This is a leadership seat - in a company where decisions are fast, outcomes matter, and a 100-year brand is the endgame. What We Need in Our Product Lead: You’ll take full ownership of our product direction and delivery - from roadmap to execution. This role is equal parts strategy, execution, and communication. You’ll work closely with developers, support, and the CEO and CPO to make sure our roadmap is aligned, focused, and shipping fast. You’ll drive the product vision forward while building a high-performance product culture.
Responsibilities // What You’ll Own: Full product lifecycle - discovery, planning, delivery, QA, and launch; Create clear PRDs and RFCs with context, clarity, and structured logic; Prioritize what matters (and cut what doesn’t) - aligning the roadmap with business needs; Break down features into sprintable chunks for devs; Collaborate with support and marketing to turn feedback into features; Oversee production deployments and internal comms; Maintain feature quality and completeness through rigorous pre-launch review; Set up systems for repeatability - docs, checklists, SOPs; Conduct 1:1s and standups with the product team to drive clarity and accountability; Raise the bar on thinking, ownership, and communication in the team. Must-Have Skills: Min. 3 years of experience with the MERN stack and 1 year of experience in a Team Lead/Product Lead role; Proven experience shipping features in a B2B SaaS or Shopify app environment; Strong grasp of UX, usability, and feature prioritization; Excellent writing and communication - especially for async, structured comms; Hands-on experience working closely with devs, QA, and support; Familiarity with tools like Linear, Notion, Loom, and ClickUp; High clarity of thought - you can simplify chaos into clear action; Deep ownership mindset - you follow through, not just follow up. Bonus Skills: 3+ years in a Product Management, Founder, or Tech/Product hybrid role; Past experience as a founder or early employee in a product startup; Experience in the Shopify ecosystem or ecommerce domain; Experience reviewing code / managing technical delivery with developers; Understanding of Polaris and the Shopify app experience. What You Get: High freedom, zero micromanagement; Full product ownership and visibility into business impact; Access to a passionate, founder-led team building for the long term; A chance to play a core CXO-track role in a product-led rocketship. Do your best work now - earn Financial Freedom + Time Freedom later. ❌ What We’re NOT Looking For (Read This Twice): Someone who needs repeated follow-ups or lacks a proactive mindset; Strategy-talkers who avoid real execution; Vague communicators who say “maybe”, “later”, or “I’ll try”; Folks who avoid feedback or accountability; Deadline-missers without updates; Inconsistent executors with excuse loops. A Personal Note: My personal mission? You should be able to build your own kickass startup after Helixo. This isn’t a safe role. It’s a front-row seat to chaos, ownership, and clarity. If that excites you more than it scares you - let’s talk. How to apply? Send your resume and a 2-3 minute Loom video telling us: why you’re interested in this role and what caught your eye; one product you’ve shipped that you’re proud of; and how you would build a team you can go to war with - one that fits in your hand and never drops the ball. Email: join@helixo.co. Subject: I'm your Product Lead for $1B impact // [Your Name]
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Machine Learning Engineer. Location: Chennai, India. Experience: 4-7 Years. Company: Giggso India Pvt. Ltd. About Giggso: Giggso is an award-winning, Michigan-based AI startup, recognized in the Top 50 Michigan Startups of 2023 & 2024. Founded in 2017, we deliver a unified platform for AI agent orchestration, governance, and observability, simplifying complex enterprise workflows. Our solutions extend to model risk management, security, and blockchain enablement, ensuring trustworthy AI across diverse industries. By automating operations and providing real-time monitoring, Giggso drives cost savings and boosts organizational intelligence. We champion responsible AI to help businesses optimize decision-making and enhance customer experiences at scale. Job Summary: We are looking for a highly skilled Machine Learning Engineer to join our team and play a key role in product development, leading multiple projects, and mentoring junior members of the Machine Learning and Data Engineering team. The ideal candidate should have a strong background in end-to-end machine learning pipelines, model deployment, monitoring, and optimization. Key Responsibilities: Develop, train, deploy, and monitor Machine Learning models for various business use cases. Utilize published/existing models and adapt them to improve efficiencies and meet specific business requirements. Optimize machine learning model training and inference times. Perform continuous monitoring of ML models, detect model variations, and manage retraining/redeployment processes. Handle end-to-end tasks, including data collection, preparation/annotation, validation, and model evaluation. Define and develop ML infrastructure to enhance the efficiency of ML development workflows. Collaborate with Engineers, Data Scientists, Product Managers, and Business Partners to design and implement innovative data-driven solutions. Communicate complex data science concepts effectively to technical and non-technical stakeholders. Required Skills & Qualifications: Experience in productizing and deploying Machine Learning solutions. Thorough understanding of the full ML pipeline, from data collection to inference. Strong expertise in model monitoring, detecting variations, and managing retraining/redeployment processes. Proficiency in Python and experience with ML frameworks/libraries such as TensorFlow, Keras, PyTorch, spaCy, fastText, and Scikit-learn. Knowledge or experience in MLOps and ModelOps practices. Hands-on experience in Natural Language Processing (NLP), including text classification, entity extraction, and content summarization. Strong problem-solving skills with the ability to analyze, optimize, and scale models. Excellent verbal and written communication skills to collaborate effectively across teams and influence strategic decisions. What We Look For: We seek innovative minds who thrive in an agile and fast-paced environment. We value technologists with a passion for ownership and innovation, who can think beyond traditional programming roles. We are looking for individuals with strong technical acumen and effective communication skills, who can drive impactful solutions and contribute to our platform's success. Why Join Us? Work on cutting-edge AI/ML technologies and real-world applications. Mentorship and leadership opportunities to grow your career. Collaborate with global teams and work on innovative projects.
If you are passionate about Machine Learning and want to contribute to a dynamic team driving innovation at scale, we would love to hear from you!
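The responsibilities above include continuous monitoring of ML models and detecting model variations. As a hedged illustration only (not Giggso's actual pipeline), one minimal way to flag input drift is a two-sample Kolmogorov–Smirnov test on a feature's training versus live distribution; the synthetic arrays and the 0.01 alert threshold below are hypothetical stand-ins.

# Minimal data-drift check: compare a feature's live distribution against its
# training distribution with a two-sample KS test (illustrative threshold).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in for training data
live_feature = rng.normal(loc=0.3, scale=1.1, size=1_000)   # stand-in for recent production data

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # hypothetical alert threshold
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.4f}) -> consider retraining")
else:
    print(f"No significant drift detected (KS={statistic:.3f}, p={p_value:.4f})")

In practice a check like this would run per feature on a schedule and feed the retraining/redeployment decision the posting describes.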
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As a Data Engineer, become a part of a cross-functional development team that is working on GenAI solutions for digital transformation across Enterprise Products. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation. Responsibilities: Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scale; Contribute design, code, and configurations, and manage data ingestion, real-time streaming, batch processing, and ETL across multiple data stores; Own performance tuning of complicated SQL queries and data flows. Requirements: Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design; Hands-on implementation experience working with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, SQL and NoSQL data warehouses; Experience with the Azure cloud data platform; Experience working with vector databases (Milvus, Postgres, etc.); Knowledge of embedding models and retrieval-augmented generation (RAG) architectures; Understanding of LLM pipelines, including data preprocessing for GenAI models; Experience deploying data pipelines for AI/ML workloads(*), ensuring scalability and efficiency; Familiarity with model monitoring(*), feature stores (Feast, Vertex AI Feature Store), and data versioning; Experience with CI/CD for ML pipelines(*) (Kubeflow, MLflow, Airflow, SageMaker Pipelines); Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming); Knowledge of data warehousing design, implementation, and optimization; Knowledge of data quality testing, automation, and results visualization; Knowledge of BI report and dashboard design and implementation (Power BI); Experience supporting data scientists and complex statistical use cases is highly desirable. What's in it for you?
Strong community: Work alongside top professionals in a friendly, open-door environment; Growth focus: Take on large-scale projects with a global impact and expand your expertise; Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications; Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies; Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations. About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, and LinkedIn. Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
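The requirements in this posting mention Kafka, Spark, and real-time streaming. A minimal, hedged sketch of a PySpark Structured Streaming job that ingests JSON events from Kafka; the broker address, topic name, schema, and sink paths are placeholders, not Ciklum's configuration.

# Minimal Spark Structured Streaming job: read JSON events from Kafka,
# parse two fields, and append them to a Parquet sink with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-example").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "events")                     # placeholder topic
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/events")                          # placeholder sink
         .option("checkpointLocation", "/data/_checkpoints/events")
         .outputMode("append")
         .start())
query.awaitTermination()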
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As a Data Engineer, become a part of a cross-functional development team that is working on GenAI solutions for digital transformation across Enterprise Products. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation. Responsibilities: Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scale; Contribute design, code, and configurations, and manage data ingestion, real-time streaming, batch processing, and ETL across multiple data stores; Own performance tuning of complicated SQL queries and data flows. Requirements: Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design; Hands-on implementation experience working with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, SQL and NoSQL data warehouses; Experience with the Azure cloud data platform; Experience working with vector databases (Milvus, Postgres, etc.); Knowledge of embedding models and retrieval-augmented generation (RAG) architectures; Understanding of LLM pipelines, including data preprocessing for GenAI models; Experience deploying data pipelines for AI/ML workloads(*), ensuring scalability and efficiency; Familiarity with model monitoring(*), feature stores (Feast, Vertex AI Feature Store), and data versioning; Experience with CI/CD for ML pipelines(*) (Kubeflow, MLflow, Airflow, SageMaker Pipelines); Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming); Knowledge of data warehousing design, implementation, and optimization; Knowledge of data quality testing, automation, and results visualization; Knowledge of BI report and dashboard design and implementation (Power BI); Experience supporting data scientists and complex statistical use cases is highly desirable. What's in it for you?
Strong community: Work alongside top professionals in a friendly, open-door environment; Growth focus: Take on large-scale projects with a global impact and expand your expertise; Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications; Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies; Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations. About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, and LinkedIn. Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
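This posting also asks for familiarity with embedding models, vector databases, and RAG. A minimal, hedged in-memory sketch of embedding-based retrieval (a stand-in for a vector store such as Milvus or pgvector); the model name and documents are illustrative only.

# Minimal embedding + retrieval sketch: embed a handful of documents and
# return the closest ones to a query by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small public embedding model
docs = [
    "Invoices must be filed within 30 days.",
    "Kafka topics are partitioned for parallel consumption.",
    "Spark Structured Streaming supports exactly-once sinks.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["How does Spark handle streaming writes?"],
                         normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec        # cosine similarity (vectors are normalized)
best = np.argsort(scores)[::-1][:2]  # top-2 documents
for i in best:
    print(f"{scores[i]:.3f}  {docs[i]}")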
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description 💰 Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying. 📍 Work Setup: This is a hybrid role , requiring 3 days per week onsite at the office in Hyderabad, Bangalore or Pune, India . Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. Job Description : We are looking for an AI Engineer with experience in Speech-to-text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key Responsibilities: Conversational AI & Call Transcription Development Develop and fine-tune automatic speech recognition (ASR) models Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations. NLP & Generative AI Applications Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge. Sentiment Analysis & Decision Support Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data. AI Deployment & Scalability Deploy AI models using tools like AWS, GCP, Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data. Qualifications: Technical Skills Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, Flask. Soft Skills Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. 
Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders. Preferred Qualifications Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning (RLHF) for improving conversation models.
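The role above centers on speech-to-text followed by generative text analytics. A minimal, hedged sketch of that flow using open-source Whisper plus a Hugging Face summarization pipeline; the audio file name and model choices are placeholders, not the client's production stack.

# Minimal transcription + summarization sketch: transcribe a call recording
# with open-source Whisper, then summarize the transcript.
import whisper
from transformers import pipeline

asr_model = whisper.load_model("base")               # small open-source ASR model
result = asr_model.transcribe("call_recording.wav")  # placeholder audio file
transcript = result["text"]

# Long transcripts would need chunking; this assumes a short call.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(transcript, max_length=120, min_length=30, do_sample=False)
print(summary[0]["summary_text"])

A production version would add speaker diarization, sentiment/intent models, and a RAG step over external knowledge, as described in the responsibilities.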
Posted 1 week ago
16.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility. Job Title: Generative AI Technical Architect. Role Overview: The Generative AI Architect will design, develop, and implement advanced generative AI solutions that drive business impact. This role offers the opportunity to work at the forefront of AI innovation. The Architect will lead the end-to-end architecture, design, and deployment of scalable generative AI systems. Responsibilities include conceptualizing solutions, selecting models/frameworks, overseeing development, and integrating AI capabilities into platforms. Collaboration with stakeholders to translate complex requirements into high-performance AI solutions is key. Key Responsibilities: Design GenAI Solutions: Lead architecture of generative AI systems, including LLM selection, RAG, and fine-tuning. Azure AI Expertise: Build scalable AI solutions using Azure AI services. Python Development: Write efficient, maintainable Python code for data processing, automation, and APIs. Model Optimization: Enhance model performance, scalability, and cost-efficiency. Data Strategy: Design data pipelines for training/inference using Azure data services. Integration & Deployment: Integrate models into enterprise systems. Implement MLOps, CI/CD (Azure DevOps, GitHub, Jenkins), and containerization (Docker, Kubernetes). Technical Leadership: Guide teams on AI development and deployment best practices. Innovation: Stay updated on GenAI trends. Drive PoCs and pilot implementations. Collaboration: Work with cross-functional teams to align AI solutions with business goals. Communicate technical concepts to non-technical stakeholders. Required Skills & Qualifications: 12–16 years in IT, with 3+ years in GenAI architecture. Technical Proficiency: Azure AI services; Python, TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex; LLMs, transformers, diffusion models; prompt engineering, RAG, vector DBs (Pinecone, Weaviate, Chroma); MLOps, CI/CD, Kubernetes; RESTful API development. Architecture: Cloud, microservices, design patterns. Problem-Solving: Strong analytical and creative thinking. Communication: Clear articulation of complex concepts. Teamwork: Agile collaboration and project leadership. Desirable: Azure AI certifications; Experience with AWS LLM fine-tuning; Open-source contributions or AI/ML publications.
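The responsibilities above include designing RAG-based GenAI solutions. A minimal, hedged RAG sketch using the OpenAI Python client and an in-memory similarity search; the snippets, model names, and question are illustrative (an Azure OpenAI deployment would swap in the corresponding client and deployment names).

# Minimal RAG sketch: embed reference snippets, retrieve the closest one for a
# question, and pass it as context to a chat model.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

snippets = [
    "Orders are retried up to three times before being dead-lettered.",
    "The billing service exposes a REST endpoint at /v1/invoices.",
]
emb = client.embeddings.create(model="text-embedding-3-small", input=snippets)
doc_vecs = np.array([d.embedding for d in emb.data])

question = "How many times is an order retried?"
q_vec = np.array(client.embeddings.create(model="text-embedding-3-small",
                                          input=[question]).data[0].embedding)
best = snippets[int(np.argmax(doc_vecs @ q_vec))]  # embeddings are ~unit norm, so dot ≈ cosine

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {best}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)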
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: ML Engineer – Predictive Maintenance. Job Description: ML Engineer at DSP – Predictive Maintenance. Hay Level: Hay 60. Job Location: Veghel. Vanderlande provides baggage handling systems for 600 airports globally, moving over 4 billion pieces of baggage annually. For the parcel market, our systems handle 52 million parcels daily. All these systems generate massive amounts of data. Do you see the challenge in building models and solutions that enable data-driven services, including predictive insights using machine learning? Would you like to contribute to Vanderlande's fast-growing Technology Department and its journey to become more data-driven? If so, join our Digital Service Platform team! Your Position: You will work as a Data Engineer with Machine Learning expertise in the Predictive Maintenance team. This hybrid and multi-cultural team includes Data Scientists, Machine Learning Engineers, Data Engineers, a DevOps Engineer, a QA Engineer, an Architect, a UX Designer, a Scrum Master, and a Product Owner. The Digital Service Platform focuses on optimizing customer asset usage and maintenance, impacting performance, cost, and sustainability KPIs by extending component lifetimes. In your role, you will participate in solution design discussions led by our Product Architect, where your input as a Data Engineer with ML expertise is highly valued, and collaborate with IT and business SMEs to ensure delivery of high-quality end-to-end data and machine learning pipelines. Your Responsibilities: Data Engineering: Develop, test, and document data (collection and processing) pipelines for Predictive Maintenance solutions, including data from (IoT) sensors and control components to our data platform. Build scalable pipelines to transform, aggregate, and make data available for machine learning models. Align implementation efforts with other back-end developers across multiple development teams. Machine Learning Integration: Collaborate with Data Scientists to integrate machine learning models into production pipelines, ensuring smooth deployment and scalability. Develop and optimize end-to-end machine learning pipelines (MLOps) from data preparation to model deployment and monitoring. Work on model inference pipelines, ensuring efficient real-time predictions from deployed models. Implement automated retraining workflows and ensure version control for datasets and models. Continuous Improvement: Contribute to the design and build of a CI/CD pipeline, including integration test automation for data and ML pipelines. Continuously improve and standardize data and ML services for customer sites to reduce project delivery time. Actively monitor model performance and ensure timely updates or retraining as needed. Your Profile: Minimum 4 years' experience building complex data pipelines and integrating machine learning solutions. Bachelor's or Master's degree in Computer Science, IT, Data Science, or equivalent. Hands-on experience with data modeling and machine learning workflows. Strong programming skills in Java, Scala, and Python (preferred for ML tasks). Experience with stream processing frameworks (e.g., Spark) and streaming storage (e.g., Kafka). Proven experience with MLOps practices, including data preprocessing, model deployment, and monitoring. Familiarity with ML frameworks and tools (e.g., TensorFlow, PyTorch, MLflow). Proficient in cloud platforms (preferably Azure and Databricks). Experience with data quality management, monitoring, and ensuring robust pipelines.
Knowledge of Predictive Maintenance model development is a strong plus. What You’ll Gain Opportunity to work at the forefront of data-driven innovation in a global organization. Collaborate with a talented and diverse team to design and implement cutting-edge solutions. Expand your expertise in data engineering and machine learning in a real-world industrial setting. If you are passionate about leveraging data and machine learning to drive innovation, we’d love to hear from you!
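The responsibilities above mention MLOps pipelines with model deployment, monitoring, and versioning (MLflow is listed among the tools). A minimal, hedged sketch of logging and registering a model with MLflow; the experiment name, model name, and random-forest baseline are illustrative, and registration assumes a tracking server with a model-registry backend.

# Minimal MLOps sketch: train a model, log metrics and the artifact to MLflow,
# and register it so a deployment pipeline can pick it up.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("predictive-maintenance-demo")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_metric("accuracy", acc)
    # Registration requires a registry-enabled tracking backend.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="pdm-failure-classifier")  # placeholder name
    print(f"logged run with accuracy={acc:.3f}")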
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description 💰 Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying. 📍 Work Setup: This is a hybrid role , requiring 3 days per week onsite at the office in Hyderabad, Bangalore or Pune, India . Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. Job Description : We are looking for an AI Engineer with experience in Speech-to-text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key Responsibilities: Conversational AI & Call Transcription Development Develop and fine-tune automatic speech recognition (ASR) models Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations. NLP & Generative AI Applications Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge. Sentiment Analysis & Decision Support Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data. AI Deployment & Scalability Deploy AI models using tools like AWS, GCP, Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data. Qualifications: Technical Skills Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, Flask. Soft Skills Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. 
Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders. Preferred Qualifications Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning (RLHF) for improving conversation models.
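Among the responsibilities above is sentiment and intent classification on call content. A minimal, hedged sketch using a public Hugging Face sentiment pipeline on transcript utterances; the model choice and example utterances are illustrative, not the client's production setup.

# Minimal sentiment sketch for call-transcript utterances.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")
utterances = [
    "I've been waiting two weeks and nobody called me back.",
    "Thanks, that solved my problem quickly.",
]
for text, pred in zip(utterances, sentiment(utterances)):
    print(f"{pred['label']:>8}  {pred['score']:.2f}  {text}")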
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We’re looking for a hands-on backend expert who can take our FastAPI-based platform to the next level: production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. Note: This role is being hired for one of our client companies. The company name will be disclosed during the interview process. WHAT YOU'LL BUILD 1. Core Backend Enhancements: Build APIs; harden security (OAuth2/JWT, rate-limiting, SecretManager) and observability (structured logging, tracing); add CI/CD, test automation, health checks, and SLO dashboards 2. Awesome UI Interfaces: React.js/Next.js, Redux/Context, Tailwind / MUI / Custom-CSS / Shadcn / Axios 3. LLM & Agentic Services: Design micro/mini-services that host and route to OpenAI, Anthropic, local HF models, embeddings & RAG pipelines; implement autonomous/recursive agents that orchestrate multi-step chains (Tools, Memory, Planning) 4. Model-Inference Infrastructure: Spin up GPU / CPU inference servers behind an API gateway; optimize throughput with batching, streaming, quantization & caching (Redis / pgvector) 5. NLP & Data Services: Own the NLP stack: Transformers for classification, extraction, and embedding generation. Build data pipelines that join aggregated business metrics with model telemetry for analytics TECH STACK YOU'LL WORK WITH 1. Fullstack/Backend Infrastructure • Python, FastAPI, Starlette, Pydantic • Async SQLAlchemy, Postgres, Alembic, pgvector • Docker, Kubernetes, or ECS/Fargate - AWS (or) GCP • Redis / RabbitMQ / Celery (jobs & caching) • Prometheus, Grafana, OpenTelemetry • If you are a full-stack person, then: react.js / next.js / shadcn / tailwind.css / MUI 2. AI / NLP • HuggingFace Transformers, LangChain / Llama-Index, Torch / TensorRT • OpenAI, Anthropic, Azure OpenAI, Cohere APIs • Vector search (Pinecone, Qdrant, PGVector) 3. Tooling • Pytest, GitHub Actions • Terraform / CDK preferred MUST HAVE EXPERIENCE • 3+ yrs building production Python REST APIs (FastAPI, Flask, or Django-REST) • SQL schema design & query optimization in Postgres (CTEs, JSONB) • Deep knowledge of async patterns & concurrency (asyncio, AnyIO, Celery) • Crafted awesome UI applications that integrate with the backend API • Hands-on with RAG, LLM/embedding workflows, prompt engineering & at least one “agent-ops” framework (LangGraph, CrewAI, AutoGen) • Cloud container orchestration (any of K8s, ECS, GKE, AKS, etc.) • CI/CD pipelines and infra-as-code NICE-TO-HAVE EXPERIENCE • Streaming protocols (Server-Sent Events, WebSockets, gRPC) • NGINX Ingress / AWS API Gateway • RBAC / multi-tenant SaaS security hardening • Data privacy, PII redaction, secure key vault integrations • Bitemporal or event-sourced data models WHY DOES THIS ROLE MATTER? We’re growing fast. Products are live, but evolving. Challenges are real, and the opportunity to own systems end-to-end is massive. You’ll lead how we scale AI services, work directly with the founder, and shape what the next wave of our platform looks like. If you’re looking for meaningful ownership and a chance to work on hard, forward-looking problems, this role is for you.
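The posting above describes production-grade FastAPI inference services with async patterns and Pydantic. A minimal, hedged sketch of such an endpoint; the routes, schemas, and the stand-in run_model function are hypothetical, not the client's actual API.

# Minimal async FastAPI inference endpoint: a Pydantic request/response model,
# a placeholder model call, and a health check.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service-sketch")

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

async def run_model(text: str) -> PredictResponse:
    await asyncio.sleep(0)  # stand-in for an async call to a real model server
    label = "positive" if "good" in text.lower() else "neutral"
    return PredictResponse(label=label, score=0.5)

@app.get("/healthz")
async def healthz() -> dict:
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
async def predict(req: PredictRequest) -> PredictResponse:
    return await run_model(req.text)

# Run locally with:  uvicorn app:app --reload   (assuming this file is app.py)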
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary: We are looking for a hands-on Data Scientist with deep expertise in NLP and Generative AI to help build and refine the intelligence behind our agentic AI systems. You will be responsible for fine-tuning, prompt engineering, and evaluating LLMs that power our digital workers across real-world workflows. Years of Experience: 3–6 years. Budget: 18 LPA to 24 LPA. Location: Chennai. Notice Period: Immediate to 30 days. Key Responsibilities · Fine-tune and evaluate LLMs (e.g., Mistral, LLaMA, Qwen) using frameworks like Unsloth, HuggingFace, and DeepSpeed · Develop high-quality prompts and RAG pipelines for few-shot and zero-shot performance · Analyze and curate domain-specific text datasets for training and evaluation · Conduct performance and safety evaluation of fine-tuned models · Collaborate with engineering teams to integrate models into agentic workflows · Stay up to date with the latest in open-source LLMs and GenAI tools, and rapidly prototype experiments · Apply efficient training and inference techniques (LoRA, QLoRA, quantization, etc.) Requirements · 3+ years of experience in Natural Language Processing (NLP) and machine learning applied to text · Strong coding skills in Python · Hands-on experience fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, Qwen) using frameworks like Unsloth, HuggingFace Transformers, PEFT, LoRA, QLoRA, bitsandbytes · Proficient in PyTorch (preferred) or TensorFlow, with experience in writing custom training/evaluation loops · Experience in dataset preparation, tokenization (e.g., Tokenizer, tokenizers), and formatting for instruction tuning (ChatML, Alpaca, ShareGPT formats) · Familiarity with retrieval-augmented generation (RAG) using FAISS, Chroma, Weaviate, or Qdrant · Strong knowledge of prompt engineering, few-shot/zero-shot learning, chain-of-thought prompting, and function-calling patterns · Exposure to agentic AI frameworks like CrewAI, Phidata, LangChain, LlamaIndex, or AutoGen · Experience with GPU-accelerated training/inference and libraries like DeepSpeed, Accelerate, Flash Attention, Transformers v2, etc. · Solid understanding of LLM evaluation metrics (BLEU, ROUGE, perplexity, pass@k) and safety-related metrics (toxicity, bias) · Ability to work with open-source checkpoints and formats (e.g., safetensors, GGUF, HF Hub, GPTQ, ExLlama) · Comfortable with containerized environments (Docker) and scripting for training pipelines, data curation, or evaluation workflows Nice to Haves · Experience in Linux (Ubuntu) · Terminal/Bash Scripting
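The requirements above call for hands-on LoRA/QLoRA fine-tuning with Hugging Face PEFT. A minimal, hedged sketch of wrapping a small causal LM with a LoRA adapter; the base checkpoint, rank, and target modules are illustrative defaults, not the company's configuration.

# Minimal LoRA setup with Hugging Face PEFT: only a small adapter is trainable.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small public checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (architecture-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # reports the small trainable fraction
# From here, training would proceed with transformers.Trainer or TRL's SFTTrainer
# on an instruction-formatted dataset (ChatML, Alpaca, etc.).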
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We’re in an unbelievably exciting area of tech and are fundamentally reshaping the data storage industry. Here, you lead with innovative thinking, grow along with us, and join the smartest team in the industry. This type of work—work that changes the world—is what the tech industry was founded on. So, if you're ready to seize the endless opportunities and leave your mark, come join us. SHOULD YOU ACCEPT THIS CHALLENGE... The MTS AI/ML Engineer will contribute to the development and implementation of generative AI applications. This position requires foundational skills in AI/ML technologies and software development, with an eagerness to grow in the field of generative AI. Scope of work - The primary focus of the role is to contribute to the enterprise's transformation using generative AI technologies. Engage in advanced research and experimentation with cutting-edge technologies in the field of generative AI and develop custom applications using gen AI to solve complex business problems. Work on proprietary data formats, analyze data, and fine-tune models for specific use cases, such as code review and customer-facing support models. Spearhead the exploration and adoption of cutting-edge GenAI technology, continuously keeping pace with rapid advancements in the field. Strategically identify and apply the latest GenAI innovations to solve Pure's specific business problems, working on a "fail fast" basis to iterate and deliver solutions quickly. Develop and deploy custom, production-grade applications using generative AI, including hands-on experience with LLMs such as Claude or OpenAI models, to address complex business challenges. Drive strategic GenAI application development aimed at optimizing business operations, delivering significant product enhancements, improving overall productivity, and achieving substantial cost savings. Utilize frameworks like LangGraph or LlamaIndex for building sophisticated LLM-powered applications. Implement robust LLM evaluation methodologies and best practices for prompt engineering and management. Address and mitigate LLM security challenges. Collaborate closely with product teams and other stakeholders to conceptualize, develop, and deliver full-fledged, integrated GenAI solutions. What You’ll Need To Bring To This Role… Strong familiarity with language models (LLMs) and practical experience building applications with LLMs. Solid understanding of LLM evaluation techniques and prompt management strategies. Hands-on experience with LangChain, LangGraph, or LlamaIndex. Proven experience with Claude, OpenAI, or similar foundational models. Understanding of LLM security challenges and best practices is a plus. Experience with Model Context Protocol and building secure applications with it is a plus. Understanding of the systems side of LLMs: inference and serving of open-source models. Familiarity with Python, and basic knowledge of software development tools (Git, JIRA). Understanding of machine learning concepts and experience with frameworks like TensorFlow or PyTorch. Introductory knowledge of cloud computing and containerization technologies. Ability to work with data storage solutions (SQL, NoSQL, Hadoop). Knowledge of software engineering practices, including version control and CI/CD. What You Can Expect From Us Pure Innovation: We celebrate those who think critically, like a challenge and aspire to be trailblazers. Pure Growth: We give you the space and support to grow along with us and to contribute to something meaningful.
We have been Named Fortune's Best Large Workplaces in the Bay Area™, Fortune's Best Workplaces for Millennials™ and certified as a Great Place to Work®! Pure Team: We build each other up and set aside ego for the greater good. And because we understand the value of bringing your full and best self to work, we offer a variety of perks to manage a healthy balance, including flexible time off, wellness resources and company-sponsored team events. Check out purebenefits.com for more information. Accommodations And Accessibility Candidates with disabilities may request accommodations for all aspects of our hiring process. For more on this, contact us at TA-Ops@purestorage.com if you’re invited to an interview. Our Commitment To a Strong And Inclusive Team We’re forging a future where everyone finds their rightful place and where every voice matters. Where uniqueness isn’t just accepted but embraced. That’s why we are committed to fostering the growth and development of every person, cultivating a sense of community through our Employee Resource Groups and advocating for inclusive leadership. Pure is proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or any other characteristic legally protected by the laws of the jurisdiction in which you are being considered for hire.
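The requirements in this posting mention understanding the systems side of LLMs: inference and serving of open-source models. A minimal, hedged sketch using vLLM's offline API; the model name and sampling settings are illustrative, not Pure's setup, and running it requires a GPU with enough memory for the chosen checkpoint.

# Minimal open-source model inference sketch with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example open-weights checkpoint
params = SamplingParams(temperature=0.2, max_tokens=128)

outputs = llm.generate(["Summarize why prompt management matters in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)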
Posted 1 week ago
5.0 years
4 - 10 Lacs
Hyderābād
On-site
Welcome to Warner Bros. Discovery… the stuff dreams are made of. Who We Are… When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands, are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives, to technology trailblazers, across the globe, WBD offers career defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive. Sr. Data Scientist – Job description Meet our team: The Data & Analytics organization is at the forefront of developing and maintaining frameworks, tools, and data products vital to WBD, including flagship streaming product Max and non-streaming products such as Films Group, Sports, News and overall WBD eco-system. Our mission is to foster unified analytics and drive data-driven use cases by leveraging a robust multi-tenant platform and semantic layer. We are committed to delivering innovative solutions that empower teams across the company to catalyze subscriber growth, amplify engagement, and execute timely, informed decisions, ensuring our continued success in an ever-evolving digital landscape. Roles & Responsibilities: The role will focus on building out machine learning solutions for WBD’s Data and Analytics organization. Primary focus will be on unlocking machine learning opportunities and building foundational machine learning training and inference pipelines at scale. You have a deep understanding of different types of data, metrics and KPIs. You will lead by example and define the best practices, will set high standards for the entire team and for the rest of the organization. You have a successful track record for ambitious projects across cross-functional teams. You are passionate and results oriented. You strive for technical excellence and are very hands-on. Your co-workers love working with you. You have built respect in your career through concrete accomplishments. Build cutting-edge capabilities utilizing machine learning and data science (e.g., large language models, computer vision models, advanced ad & content targeting, etc.) Lead data science and model development techniques for the team. Leverage industry best practices and tools to continually improve teams' ability to build machine learning models. What to Bring: BA/BS in statistics, mathematics, economics, industrial engineering, or other quantitative discipline is required. Masters/PhD is a plus 5+ years of experience building data science/statistical models (Multivariate regression, Time Series Model, XGBoost, Causal inference etc.) Strong understanding of modern ML approaches (GBDT, CNN, LSTM, GRU, HRNN, transformers, siamese neural networks, variational auto-encoders, ...). Experience with Deep Learning, NLP, LLMs, Reinforcement Learning, Causal Inference. Good knowledge of ML tools and frameworks (TensorFlow, Keras, pyTorch, scikit-learn, Spark,...). Proficiency in programming languages such as Python or R. Familiarity with real-world ML systems (configuration, data collection, data verification, feature extraction, resource and process management, analytics, training, serving, validation, experimentation, monitoring). 
Good understanding of operating machine learning solutions at scale, covering the end-to-end ML workflow. Strong interpersonal skills with the ability to motivate, collaborate, and influence. Ability to deliver on multiple projects and meet tight deadlines. Ability to effectively influence and communicate cross-functionally at all levels. How We Get Things Done… This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview. Championing Inclusion at WBD: Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
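The qualifications above list experience building statistical and gradient-boosting models such as XGBoost. A minimal, hedged sketch of training and evaluating an XGBoost regressor; the features and target are synthetic stand-ins, not WBD data.

# Minimal gradient-boosting sketch: fit XGBoost on synthetic tabular data and
# report held-out error.
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(5_000, 10))                         # synthetic tabular features
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(size=5_000)   # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr, eval_set=[(X_te, y_te)], verbose=False)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))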
Posted 1 week ago
0 years
0 Lacs
Noida
On-site
Education Preference: Only B.Tech CS/IT graduates from the 2024/2025 batch. Job Description: Key Responsibilities: Collect and curate large-scale botanical image datasets, including scraping from online sources and organizing local image repositories with proper structure and naming conventions. Manually annotate plant images with precision (leaf, flower, fruit, stem, etc.) using tools like CVAT, LabelImg, or Labelme, while ensuring consistency and reviewing for quality. Assist in the pre-processing of image data (e.g., resizing, filtering, normalization, and augmentation) and monitor automated pipelines for correctness and completeness. Help validate annotations and perform sanity checks through basic machine learning inference (e.g., verifying predictions from classification or segmentation models). Collaborate closely with ML teams to flag edge cases, improve annotation guidelines, and maintain high data hygiene throughout the pipeline. Requirements: Prior experience handling large-scale image datasets (ideally in the terabyte range) and a strong understanding of digital image formats and metadata handling. Knowledge of plant/botanical structures and ability to visually distinguish between different parts (e.g., leaf vs. flower), preferably with academic or project exposure. Proficiency in basic Python scripting — familiarity with os, pandas, opencv, and Pillow, and the ability to automate repetitive data handling tasks. Understanding of image annotation workflows and tools (CVAT, LabelImg, Labelme), plus familiarity with pre-processing and augmentation techniques commonly used in computer vision. Basic exposure to machine learning concepts such as image classification, segmentation, and inference, along with the discipline to carry out repetitive annotation tasks with high accuracy. Job Types: Full-time, Internship. Contract length: 3 months. Pay: ₹7,000.00 per month. Benefits: Paid sick time, paid time off. Schedule: Day shift, fixed shift, Monday to Friday. Location: Noida, Uttar Pradesh (Required). Work Location: In person. Application Deadline: 26/07/2025
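The requirements above ask for basic Python scripting with os, pandas, and Pillow to automate repetitive image-handling tasks such as resizing. A minimal, hedged sketch of that kind of script; the folder names, 512x512 target size, and metadata CSV are hypothetical placeholders.

# Minimal image pre-processing sketch: walk a folder, resize images to a fixed
# size, and record basic metadata with pandas.
import os
import pandas as pd
from PIL import Image

SRC_DIR, DST_DIR = "raw_images", "processed_images"  # placeholder folders
os.makedirs(DST_DIR, exist_ok=True)

rows = []
for name in sorted(os.listdir(SRC_DIR)):
    if not name.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    path = os.path.join(SRC_DIR, name)
    with Image.open(path) as img:
        rows.append({"file": name, "width": img.width, "height": img.height})
        img.convert("RGB").resize((512, 512)).save(os.path.join(DST_DIR, name))

pd.DataFrame(rows).to_csv("image_metadata.csv", index=False)
print(f"processed {len(rows)} images")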
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Way of working - Remote: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter. Job Profile: Data Scientist. Location: Bangalore, Karnataka. Years of Experience: 0-3. About The Team & Role: Data Science at Swiggy. Data science and applied ML are ingrained deeply in decision-making and product development at Swiggy. Our data scientists work closely with cross-functional teams to ship end-to-end data products, from formulating the business problem in mathematical/ML terms to iterating on ML/DL methods to taking them to production. We own or co-own several initiatives with a direct line of sight to impact on customer experience as well as business metrics. We also encourage open sharing of ideas and publishing in internal and external avenues. What will you get to do here? You will leverage your strong ML/DL/statistics background to build new, next-generation ML-based solutions to improve the quality of ads recommendations, and leverage various optimization techniques to improve campaign performance. You will mine and extract relevant information from Swiggy's massive historical data to help ideate and identify solutions to business and CX problems. You will work closely with engineers/PMs/analysts on detailed requirements, technical designs, and implementation of end-to-end inference solutions at Swiggy scale. You will stay abreast of the latest ML research in ads bidding algorithms, recommendation systems, and related areas, and help adapt it to Swiggy's problem statements. You will publish and talk about your work in internal and external forums to both technical and layman audiences. Opportunity to work on challenging and impactful projects in the logistics domain. Collaborate with cross-functional teams, including software developers and product managers, to integrate data-driven solutions into our systems. Develop and deploy machine learning and deep learning models. Take ownership of projects from inception to delivery, ensuring high-quality and impactful results. What qualities are we looking for? Bachelor's or Master's degree in a quantitative field with 0-3 years of industry/research lab experience. Required: Excellent problem solving skills, ability to deconstruct and formulate solutions from first principles. Required: Depth and hands-on experience in applying ML/DL and statistical techniques to business problems. Preferred: Experience working with ‘big data’ and shipping ML/DL models to production. Required: Strong proficiency in Python, SQL, Spark, TensorFlow. Required: Strong spoken and written communication skills. Big plus: Experience in the space of ecommerce and logistics; experience in Generative AI; proven track record of developing and shipping ML and DL data products; experience in agentic AI, LLMs, and NLP; proficiency in Python. Visit our tech blogs to learn more about some of the challenges we deal with: https://bytes.swiggy.com/the-swiggy-delivery-challenge-part-one-6a2abb4f82f6 https://bytes.swiggy.com/how-ai-at-swiggy-is-transforming-convenience-eae0a32055ae https://bytes.swiggy.com/decoding-food-intelligence-at-swiggy-5011e21dbc86 We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, disability status, or any other characteristic protected by the law.
Posted 1 week ago
5.0 years
0 Lacs
Vapi, Gujarat, India
On-site
Job Description: MLOps & AI Infrastructure Engineer
Company: Credartha
Location: Vapi, Gujarat
Position Type: Full-Time
About Calaxis by Credartha: The Future of AI is Built on Trust
The $15 trillion promise of artificial intelligence is currently being held hostage by a single, pervasive bottleneck: the quality of domain-specific data. Today, building trustworthy, specialized AI is an artisanal, slow, and prohibitively expensive process reserved for tech giants with billion-dollar budgets. Calaxis is on a mission to change this.
We are a deep-tech venture building a foundational platform to automate the end-to-end creation of flawless, high-quality datasets for any AI application. Our core innovation is a proprietary, self-improving system that uses a cascade of specialized AI models to systematically validate data for accuracy, compliance, and insight. By solving the data quality problem at its core, we are moving the AI industry from a capital-intensive to a method-intensive paradigm, democratizing the development of high-stakes AI for every vertical.
What You Will Do:
Architect the AI Flywheel: Design and build the end-to-end MLOps infrastructure for our entire platform. This includes creating automated pipelines for training, validation, deployment, and the crucial feedback loop that makes our system self-improving.
Build a Multi-Tenant PaaS: Engineer a scalable, secure, and efficient multi-tenant architecture on AWS to support our customer-facing services. This includes managing on-demand compute for customer-driven fine-tuning (SFT & RL) and model deployment jobs.
Automate Everything (CI/CD/CT): Implement and manage a sophisticated CI/CD/CT (Continuous Integration/Continuous Deployment/Continuous Training) system for our suite of AI models and backend services, ensuring rapid and reliable updates.
Optimize LLM Serving: Deploy and manage high-throughput, low-latency model serving infrastructure for our internal AI validators and for customer-deployed models.
Master GPU Resources: Develop and manage systems for efficient scheduling, allocation, and monitoring of GPU resources across multiple training and inference workloads.
Ensure Production-Grade Reliability: Implement comprehensive monitoring, logging, and alerting for the entire platform using tools like AWS CloudWatch to ensure high availability and performance (a small metric-reporting sketch follows this posting).
Champion Infrastructure as Code (IaC): Use tools like Terraform or AWS CloudFormation to define and manage our infrastructure, ensuring it is version-controlled, repeatable, and scalable.
Who You Are: The Expert We Need
Required Qualifications:
5+ years of professional experience in a DevOps, SRE, or MLOps role, with a proven track record of building and managing production infrastructure for scalable applications.
Deep expertise in cloud services, particularly AWS (e.g., EC2, S3, EKS/ECS, Lambda, RDS, API Gateway).
Strong, hands-on experience with containerization (Docker) and container orchestration (Kubernetes).
Proven experience designing and implementing CI/CD pipelines for complex applications (e.g., Jenkins, GitLab CI, AWS CodePipeline).
Proficiency in scripting and automation, with strong skills in Python.
A deep understanding of networking, security, and infrastructure best practices.
Preferred Qualifications (Bonus Points):
Direct experience building MLOps pipelines for training and deploying Large Language Models (LLMs).
Familiarity with LLM-specific serving frameworks (e.g., vLLM, Text Generation Inference, Triton).
Experience with ML platforms and tools like Kubeflow, MLflow, or Airflow.
Experience building infrastructure for multi-tenant SaaS or PaaS products.
Knowledge of advanced fine-tuning techniques like Reinforcement Learning from Human Feedback (RLHF) or Direct Preference Optimization (DPO) and their infrastructure requirements.
AWS certifications (e.g., DevOps Engineer, Solutions Architect).
Why Join Credartha?
Build from the Ground Up: This is a rare greenfield opportunity to be the founding infrastructure architect for a deep-tech company. Your design choices will have a lasting impact on the entire platform.
Solve Mission-Critical Challenges: You will be working on complex, interesting problems at the intersection of distributed systems, cloud infrastructure, and cutting-edge AI.
Massive Impact and Ownership: You won't be maintaining legacy systems. You will have unparalleled ownership and the opportunity to build the operational foundation of a platform poised to disrupt a $15 trillion market.
A Culture of Excellence: Join a passionate founding team that values technical rigor, innovation, and collaboration.
Competitive Compensation: We offer a highly competitive salary, significant equity, and comprehensive benefits to ensure you are rewarded for your foundational contributions.
If you are a world-class infrastructure engineer who is excited by the challenge of building the engine for the future of AI, we want to hear from you.
How to Apply:
Please submit your resume and a brief cover letter or message highlighting your experience building scalable, production-grade infrastructure and why you are excited about the mission at Calaxis.
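As a small illustration of the CloudWatch-based monitoring the posting mentions, custom metrics can be pushed from Python with boto3. Everything specific below (region, namespace, metric and dimension names) is invented for the example and assumes AWS credentials are already configured; it is not a description of Credartha's actual setup.

```python
import time

import boto3  # assumes AWS credentials are available in the environment

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # region is an assumption


def report_inference_latency(model_name: str, latency_ms: float) -> None:
    """Push one latency datapoint to a hypothetical custom namespace."""
    cloudwatch.put_metric_data(
        Namespace="MLPlatform/Inference",  # hypothetical namespace
        MetricData=[{
            "MetricName": "LatencyMs",
            "Dimensions": [{"Name": "Model", "Value": model_name}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )


if __name__ == "__main__":
    start = time.perf_counter()
    # ... run a model call here ...
    report_inference_latency("demo-validator", (time.perf_counter() - start) * 1000)
```

Alarms and dashboards would then be defined on the metric, ideally through Terraform or CloudFormation so the monitoring itself stays under infrastructure as code.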
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
AI Engineer
Position: 1
Job Title: AI Engineer (Multimodal RAG, Vector Database, and LLM Implementation)
Experience Level: Mid to Senior-Level (5-7 Years)
Job Overview:
We are seeking a highly skilled AI Engineer with expertise in Multimodal Retrieval-Augmented Generation (RAG), vector databases, and Large Language Model (LLM) implementation. The ideal candidate will have a strong background in integrating structured and unstructured data into AI models and deploying these models in real-world applications. This role involves working on cutting-edge AI solutions, including the development and optimization of multimodal systems that leverage both text and visual data.
Key Responsibilities:
· Multimodal RAG Implementation:
o Design, develop, and deploy Multimodal Retrieval-Augmented Generation (RAG) systems that integrate both structured (e.g., databases, tables) and unstructured data (e.g., text, images, videos).
o Work with large-scale datasets, combining different data types to enhance the performance and accuracy of AI models.
o Implement and fine-tune LLMs (e.g., GPT, BERT) to work effectively with multimodal inputs and outputs.
· Vector Database Integration:
o Develop and optimize AI models using vector databases to efficiently manage and retrieve high-dimensional data.
o Implement vector search techniques to improve information retrieval from structured and unstructured data sources (a minimal sketch follows this posting).
o Ensure the scalability and performance of vector-based retrieval systems in production environments.
· LLM Implementation and Optimization:
o Implement and fine-tune large language models to handle complex queries involving multimodal data.
o Optimize LLMs for specific tasks, such as text generation, question answering, and content summarization, using both structured and unstructured data.
o Integrate LLMs with vector databases and RAG systems to enhance AI capabilities.
· Data Integration and Processing:
o Work with data engineers and data scientists to preprocess and integrate structured and unstructured data for AI model training and inference.
o Develop data pipelines that handle the ingestion, transformation, and storage of diverse data types.
o Ensure data quality and consistency across different data sources and formats.
· Model Evaluation and Testing:
o Evaluate the performance of multimodal AI models using various metrics, ensuring they meet accuracy, speed, and robustness requirements.
o Conduct A/B testing and model validation to continuously improve AI system performance.
o Implement automated testing and monitoring tools to ensure model reliability in production.
· Collaboration and Communication:
o Collaborate with cross-functional teams, including data engineers, data scientists, and software developers, to deliver AI-driven solutions.
o Communicate complex technical concepts to non-technical stakeholders and provide insights on the impact of AI models on business outcomes.
o Stay up to date with the latest advancements in AI, LLMs, vector databases, and multimodal systems, and share knowledge with the team.
Qualifications:
· Technical Skills:
o Strong expertise in Multimodal Retrieval-Augmented Generation (RAG) systems.
o Proficiency in vector databases (e.g., Pinecone, Milvus, Weaviate, Chroma) and vector search techniques, including their use in recommender systems.
o Experience with LLMs (e.g., GPT, BERT) and their implementation in real-world applications. Experience with Mistral AI is a plus.
o Solid understanding of machine learning and deep learning frameworks (e.g., TensorFlow, PyTorch, MLflow, etc.).
o Experience working with structured data (e.g., SQL databases) and unstructured data (e.g., text, images, videos).
o Proficiency in programming languages such as Python, with experience in relevant libraries and tools.
· Experience:
o 2+ years of experience in AI/ML engineering, with a focus on multimodal systems and LLMs.
o Proven track record of deploying AI models in production environments.
o Experience with cloud platforms, preferably Azure, and MLOps practices is preferred.
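To make the vector-search building block in this posting concrete, here is a minimal, dependency-light sketch that ranks documents by cosine similarity over a tiny in-memory index. The embed function is a deliberately fake placeholder (hash-seeded random vectors), so the ranking here is not semantically meaningful; a real pipeline would call an embedding model and delegate storage and approximate nearest-neighbour search to a vector database such as Pinecone, Milvus, Weaviate, or Chroma.

```python
import hashlib

import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a deterministic pseudo-random unit vector seeded by the text.
    A real system would call an embedding model here instead."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


# Toy "index": document id -> text and its embedding (made-up documents)
documents = {
    "doc1": "Invoice table schema and column descriptions",
    "doc2": "Product photo of a red running shoe",
    "doc3": "Customer support chat transcript about refunds",
}
index = {doc_id: embed(text) for doc_id, text in documents.items()}


def search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Return the top_k documents ranked by cosine similarity to the query embedding."""
    q = embed(query)
    scores = {doc_id: float(q @ vec) for doc_id, vec in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    for doc_id, score in search("how do refunds work?"):
        print(doc_id, round(score, 3), documents[doc_id])
```

In a RAG setup the retrieved passages would be concatenated into the LLM prompt; the vector database replaces the in-memory dictionary and handles persistence, filtering, and scale.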
Posted 1 week ago
4.0 years
3 - 10 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of our clients.
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 4 years
Location: Bengaluru
JobType: full-time
About the Role:
We are looking for a bold, inventive, and deeply technical Applied AI Engineer to join our next-generation AI innovation team. You will help architect intelligent, adaptive systems that combine reasoning, autonomy, and generative capabilities—building the next wave of AI-native applications. This is a unique opportunity to work across disciplines like cognitive computing, generative modeling, and agent orchestration to deliver real-world impact.
What You’ll Be Doing
🧠 Intelligent System Design
Develop autonomous, goal-driven agents that reason, plan, and execute multi-step tasks using tools, APIs, or memory systems (a toy agent-loop sketch follows this posting).
Architect modular frameworks for agent orchestration, prompt chaining, and task decomposition in complex environments.
Implement AI workflows that emulate human-like decision-making and task execution.
🧬 Generative AI & Model Engineering
Build and deploy Generative AI models for creative and business use cases including text generation, summarization, personalization, and automation.
Fine-tune and align large language models (LLMs) and diffusion models for downstream applications.
Design structured experimentation pipelines to evaluate generative model quality, coherence, and relevance.
📦 Full-Stack ML Productization
Own the full ML pipeline: from dataset construction, feature transformation, and model training to deployment, monitoring, and real-time inference.
Optimize inference efficiency and model serving at scale using vector databases, caching layers, and cloud-native infrastructure.
Collaborate with backend, product, and design teams to embed AI seamlessly into user-facing applications.
🔍 Research & Innovation
Track breakthroughs in AI agents, open-source orchestration frameworks (e.g., LangGraph, CrewAI), and multi-agent communication protocols.
Apply emerging paradigms such as Retrieval-Augmented Generation (RAG), toolformer-style models, and multi-modal intelligence.
Contribute to internal knowledge bases and mentor team members on novel techniques.
Who You Are
✅ Must-Have Skills
4–10 years of experience in machine learning, applied AI, or intelligent systems engineering.
Strong programming ability in Python, and deep familiarity with ML libraries like PyTorch, scikit-learn, or TensorFlow.
Proven track record of building, training, and deploying LLMs, generative models, or agentic systems in production.
Experience with orchestration tools and frameworks like LangChain, Auto-GPT, CrewAI, or custom agent stacks.
Deep understanding of prompt engineering, fine-tuning, vector search, and RAG pipelines.
Solid grounding in software engineering principles—Git, containerization (Docker), CI/CD, and cloud platforms (AWS/GCP/Azure).
⭐ Bonus Skills
Experience integrating vector databases like Pinecone, Weaviate, or FAISS for semantic search or contextual memory.
Exposure to knowledge graphs, planning algorithms, or reinforcement learning agents.
Experience with multi-modal models or cross-domain embeddings (e.g., vision-language or speech-text models).
Contributions to open-source AI/ML projects or academic publications in ML conferences.
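For a concrete, if deliberately toy, picture of the goal-driven agent loop described under "Intelligent System Design", here is a framework-free Python sketch. The two tools, the keyword-based planner, and the example tasks are all invented for illustration; a production agent would put an LLM call (via LangChain, LangGraph, CrewAI, or a custom stack) where pick_tool sits.

```python
from typing import Callable


# --- Hypothetical tools the agent can call --------------------------------
def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression (illustration only)."""
    return str(eval(expression, {"__builtins__": {}}, {}))


def notes_lookup(query: str) -> str:
    """Pretend knowledge-base lookup over a tiny hard-coded store."""
    kb = {"launch date": "The launch is planned for Q3.", "owner": "The owner is the platform team."}
    return kb.get(query.lower(), "no note found")


TOOLS: dict[str, Callable[[str], str]] = {"calculator": calculator, "notes_lookup": notes_lookup}


# --- A stand-in for the LLM planner ----------------------------------------
def pick_tool(task: str) -> tuple[str, str]:
    """Very naive routing; a real agent would ask an LLM to choose the tool and arguments."""
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "notes_lookup", task


def run_agent(task: str, max_steps: int = 3) -> str:
    """Minimal observe-decide-act loop with a scratchpad memory."""
    scratchpad: list[str] = []
    for _ in range(max_steps):
        tool_name, tool_input = pick_tool(task)
        observation = TOOLS[tool_name](tool_input)
        scratchpad.append(f"{tool_name}({tool_input!r}) -> {observation}")
        if observation != "no note found":
            return f"Answer: {observation}\nTrace: {scratchpad}"
    return f"Gave up after {max_steps} steps.\nTrace: {scratchpad}"


if __name__ == "__main__":
    print(run_agent("2 + 2 * 10"))
    print(run_agent("launch date"))
```

The same observe-decide-act shape carries over when the planner is an LLM emitting structured function calls instead of a keyword check.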
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Gen AI Lead Engineer JD (5-8 years)
Job Description
We are seeking a hands-on and forward-thinking Gen AI Lead Engineer with 5-8 years of experience to lead the development of cutting-edge GenAI solutions. You'll work on taking modern large language models (LLMs) from prototype to production, orchestrating multi-agent workflows using advanced frameworks like LlamaIndex Workflows, LangGraph, and structured function calling. The ideal candidate has a strong foundation in classical ML (e.g., XGBoost), deep learning with transformer-based architectures (such as LLaMA and Qwen-family models), and hands-on experience building Retrieval-Augmented Generation (RAG) pipelines. This role is perfect for someone who thrives at the intersection of research, engineering, and real-world application—owning everything from system design to scalable deployment.
Key Responsibilities
Design, build, and deploy ML/DL models for:
Tabular data (e.g., XGBoost)
NLP and GenAI (e.g., RAG, tool/function calling)
Fine-tune and serve LLMs using Hugging Face, PyTorch, and efficient-tuning techniques like LoRA, QLoRA, and PEFT (see the sketch at the end of this posting).
Architect and implement agent-based LLM solutions with tools like LlamaIndex, LangGraph, or LangChain Agents.
Design multi-step tool-calling workflows and structured function-calling strategies for complex tasks.
Orchestrate agent memory, state management, and contextual conversations.
Develop and maintain FastAPI microservices for low-latency GenAI inference, streaming, and secure system integration.
Deploy ML pipelines and training jobs in the cloud (preferably Azure, also AWS).
Handle end-to-end MLOps: CI/CD, containerization (Docker), GPU/ACI deployment, observability, and cost governance.
Collaborate cross-functionally with product, data, and frontend teams to translate abstract ideas into tangible outcomes.
Required Skills & Qualifications
Strong Python programming skills; experience with FastAPI and/or Java is a plus.
Proven experience with:
XGBoost or similar ML models for tabular data.
YOLO, OCR, and PyTorch for vision and text extraction tasks.
Deep knowledge in NLP/GenAI: LLM fine-tuning, prompt engineering, and RAG design.
Proficiency with Hugging Face Transformers, PEFT, and vector databases.
Implementation of agent frameworks like LlamaIndex, LangGraph, or LangChain Agents.
MLOps & Deployment: Experience with Docker, CI/CD pipelines, experiment tracking, model versioning, and rollback mechanisms.
Cloud Proficiency: Azure ML, Azure Functions, AKS (preferred) or AWS SageMaker, Lambda.
Bonus: experience with Triton/vLLM, streaming websockets, or GPU cost-optimization.
Benefits of Working with Us:
Best of Both Worlds: Enjoy the enthusiasm and learning curve of a startup combined with the deliveries and performance of an enterprise service provider.
Flexible Working Hours: We offer a delivery-oriented approach with flexible working hours to help you maintain a healthy work-life balance.
Limitless Growth Opportunities: The sky is not the limit when it comes to learning, growth, and sharing ideas. We encourage continuous learning and personal development.
Flat Organizational Structure: We don't follow the typical corporate hierarchy ladder, fostering an open and collaborative work environment where everyone's voice is heard.
As part of our dedication to an inclusive and diverse workforce, TechChefz Digital is committed to Equal Employment Opportunity without regard to race, color, national origin, ethnicity, gender, protected veteran status, disability, sexual orientation, gender identity, or religion. If you need assistance, you may contact us at joinus@techchefz.com
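For readers unfamiliar with the LoRA/QLoRA/PEFT techniques named above, the core move is wrapping a frozen base model with small trainable adapter matrices. A minimal sketch with Hugging Face transformers and peft follows; gpt2 and the c_attn target module are stand-ins chosen only because they are small and public, and the actual training loop is omitted.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# "gpt2" is used here only because it is small and public; swap in the real base model.
base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's attention projection; LLaMA-style models typically use q_proj/v_proj
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# A Trainer / supervised fine-tuning loop over instruction data would follow; it is omitted here.
```

QLoRA applies the same adapter idea on top of a 4-bit quantized base model, which is what makes fine-tuning larger checkpoints feasible on modest GPUs.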
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Please find the job details below:
Role: AI/ML
Experience: 4-8 years
Location: Gurgaon
Mode: Hybrid
JOB DESCRIPTION:
How will you contribute?
Develop and implement AI models and pipelines, leveraging LLMs to address business needs.
Design and deploy machine learning solutions on Azure, ensuring performance, scalability, and reliability.
Use Python for data preprocessing, model development, and experimentation.
Build and integrate REST APIs using Flask to enable seamless model access for other applications (see the sketch after this posting).
Containerize models and applications with Docker to streamline deployment and improve portability.
Collaborate with cross-functional teams to understand project goals and create custom AI solutions.
Stay updated on the latest advancements in LLMs and cloud technologies.
Optimize models for accuracy, performance, and inference speed based on real-time data.
Qualifications:
4-8 years of experience in data science, working with LLM models – across CPG, Retail, Supply Chain, etc. – e.g., customer churn/retention, route optimization, inventory planning, market mix modelling.
Hands-on experience deploying AI/ML models on Azure and using cloud-based tools.
Proficiency in Python, with experience in data manipulation, model building, and evaluation.
Experience with Dockerization for deploying and managing AI applications.
Knowledge of Flask to build lightweight APIs for model integration.
Strong problem-solving skills, with the ability to translate business challenges into AI solutions.
Good communication skills to work with stakeholders and present technical results effectively.
Working knowledge of the finance domain is a must.
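As a sketch of the Flask-based model API mentioned in the responsibilities, the snippet below exposes a single /predict endpoint. The endpoint path, feature names, and toy scoring rule are invented for illustration; a real service would load a trained model artifact (for example with joblib) instead of the hand-written function.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


# A stand-in "model": a real service would load a trained artifact here,
# e.g. joblib.load("churn_model.pkl"), and call its predict_proba method.
def predict_churn_probability(features: dict) -> float:
    tenure = float(features.get("tenure_months", 0))
    complaints = float(features.get("complaints", 0))
    score = 0.5 - 0.01 * tenure + 0.1 * complaints
    return max(0.0, min(1.0, score))


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    return jsonify({"churn_probability": predict_churn_probability(payload)})


if __name__ == "__main__":
    # For local testing only; behind Docker you would typically run a WSGI server such as gunicorn.
    app.run(host="0.0.0.0", port=8000)
```

A quick local test would be a POST to /predict with a JSON body like {"tenure_months": 24, "complaints": 1}; containerizing it is then a small Dockerfile that installs Flask plus the WSGI server and runs this module.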
Posted 1 week ago
5.0 years
0 Lacs
Greater Hyderabad Area
On-site
mokSa.ai specializes in AI-powered surveillance audit solutions. Founded in 2021, the company focuses on helping businesses reduce losses from shoplifting and employee fraud by leveraging computer vision and machine learning technologies.
Job Description:
We are seeking a talented Computer Vision Engineer with strong expertise in microservice deployment architecture to join our team. In this role, you will be responsible for developing and deploying computer vision models to analyze retail surveillance footage for use cases such as theft detection, employee efficiency monitoring, and store traffic analysis. You will work on designing and implementing scalable, cloud-based microservices to deliver real-time and post-event analytics that improve retail operations.
Responsibilities:
- Develop computer vision models: Build, train, and optimize deep learning models to analyze surveillance footage for detecting theft, monitoring employee productivity, tracking store busy hours, and other relevant use cases (a small frame-processing sketch follows this posting).
- Microservice architecture: Design and deploy scalable microservice-based solutions that allow seamless integration of computer vision models into cloud or on-premise environments.
- Data processing pipelines: Develop data pipelines to process real-time and batch video data streams, ensuring efficient extraction, transformation, and loading (ETL) of video data.
- Integrate with existing systems: Collaborate with backend and frontend engineers to integrate computer vision services with existing retail systems such as POS, inventory management, and employee scheduling.
- Performance optimization: Fine-tune models for high accuracy and real-time inference on edge devices or cloud infrastructure, optimizing for latency, power consumption, and resource constraints.
- Monitor and improve: Continuously monitor model performance in production environments, identify potential issues, and implement improvements to accuracy and efficiency.
- Security and privacy: Ensure compliance with industry standards for security and data privacy, particularly regarding the handling of video footage and sensitive data.
Skills & Requirements:
- 5+ years of proven experience in computer vision, including object detection, action recognition, and multi-object tracking, preferably in retail or surveillance applications.
- Hands-on experience with microservices deployment on cloud platforms (e.g., AWS, GCP, Azure) using Docker, Kubernetes, or similar technologies.
- Experience with real-time video analytics, including working with large-scale video data and camera feeds.
Skills:
- Proficiency in programming languages like Python, C++, or Java.
- Expertise in deep learning frameworks (e.g., TensorFlow, PyTorch, Keras) for developing computer vision models.
- Strong understanding of microservice architecture, REST APIs, and serverless computing.
- Knowledge of database systems (SQL, NoSQL), message queues (Kafka, RabbitMQ), and container orchestration (Kubernetes).
- Familiarity with edge computing and hardware acceleration (e.g., GPUs, TPUs) for running inference on embedded devices.
Qualifications:
- Experience with deploying models to edge devices (NVIDIA Jetson, Coral, etc.)
- Understanding of retail operations and common challenges in surveillance.
- Knowledge of data privacy regulations such as GDPR.
- Strong analytical and problem-solving skills.
- Ability to work independently and in cross-functional teams.
- Excellent communication skills to convey technical concepts to non-technical stakeholders.
Benefits:
- Competitive salary and stock options.
- Health insurance.
If you're passionate about creating cutting-edge computer vision solutions and deploying them at scale to transform retail operations, we'd love to hear from you!
Interested candidates can apply here: sravankumar.m@moksa.ai
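To ground the real-time video analytics described above, here is a minimal OpenCV loop that reads frames and flags crude motion via frame differencing. The video source and the threshold are assumptions for the demo; a production microservice would run an object-detection or tracking model on each frame and publish structured events instead of printing.

```python
import cv2  # opencv-python

VIDEO_SOURCE = "store_camera_sample.mp4"  # hypothetical file; an RTSP camera URL also works

cap = cv2.VideoCapture(VIDEO_SOURCE)
prev_gray = None
frame_idx = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if prev_gray is not None:
        # Crude motion score via frame differencing; a real pipeline would run
        # an object detector / multi-object tracker here and emit events to a queue.
        diff = cv2.absdiff(prev_gray, gray)
        motion_score = float(diff.mean())
        if motion_score > 10.0:  # threshold is arbitrary for the demo
            print(f"frame {frame_idx}: motion_score={motion_score:.1f} (flagged)")
    prev_gray = gray
    frame_idx += 1

cap.release()
```

Wrapping this loop in a small service that pushes flagged events to a message queue (e.g., Kafka or RabbitMQ, both named in the posting) is the usual step from script to microservice.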
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Job Description
Responsibilities:
Identify relevant data sources and combine them to make the data useful.
Build the automation of the collection processes.
Pre-process structured and unstructured data, leveraging NLP techniques for text data.
Handle large amounts of information to create the input for analytical models, incorporating Gen AI for advanced data processing and generation.
Build predictive models, machine learning, and deep learning algorithms; innovate with Gen AI applications in model development.
Build network graphs, apply NLP techniques for text analysis, and design forecasting models while building data pipelines for end-to-end solutions.
Propose solutions and strategies to address business challenges, integrating Gen AI and NLP in practical applications.
Collaborate with product development teams and communicate with Senior Leadership teams.
Participate in problem-solving sessions, leveraging NLP and Gen AI for innovative solutions.
Requirements
Bachelor's degree in a highly quantitative field (e.g., Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience.
Extensive machine learning and algorithmic background with deep expertise in Gen AI and Natural Language Processing (NLP) techniques, along with a strong understanding of supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, and network graph analysis.
Advanced knowledge of NLP methods, including text generation, sentiment analysis, named entity recognition, and language modelling (a short illustration follows this posting).
Strong math skills, including proficiency in statistics, linear algebra, and probability, with the ability to apply these concepts in Gen AI and NLP solutions.
Proven problem-solving aptitude with the ability to apply NLP and Gen AI tools to real-world business challenges.
Excellent communication skills with the ability to translate complex technical information, especially related to Gen AI and NLP, into clear insights for non-technical stakeholders.
Fluency in at least one data science/analytics programming language (e.g., Python, R, Julia), with expertise in NLP and Gen AI libraries like TensorFlow, PyTorch, Hugging Face, or OpenAI tools.
Start-up experience is a plus, with ideally 5-8 years of advanced analytics experience in startups or marquee companies, particularly in roles leveraging Gen AI and NLP for product or business innovations.
(ref:hirist.tech)
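As a small taste of the NLP methods listed in the requirements (sentiment analysis, named entity recognition), the Hugging Face pipeline API covers both in a few lines. The snippet uses the library's default checkpoints, which are downloaded on first run, and an invented example sentence; treat it as a sketch rather than a production setup, where you would pin explicit model names.

```python
from transformers import pipeline

# Default checkpoints are downloaded on first use; pin explicit model names in real projects.
sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "Acme Corp's new credit product launched in Mumbai and early reviews are very positive."

print(sentiment(text))  # e.g. a POSITIVE label with a confidence score
print(ner(text))        # grouped entities such as an ORG ("Acme Corp") and a LOC ("Mumbai")
```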
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives.
Those in Guidewire testing at PwC will specialise in testing and quality assurance activities related to Guidewire applications. Guidewire is a software suite that provides insurance companies with tools for policy administration, claims management, and billing. You will be responsible for confirming that the Guidewire applications meet the desired quality standards and perform as expected.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
Respond effectively to the diverse perspectives, needs, and feelings of others.
Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
Use critical thinking to break down complex concepts.
Understand the broader objectives of your project or role and how your work fits into the overall strategy.
Develop a deeper understanding of the business context and how it is changing.
Use reflection to develop self-awareness, enhance strengths and address development areas.
Interpret data to inform insights and recommendations.
Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
Skill - GW Testing - Senior Associate
Total Experience – 5 - 9 years
Edu Qualification: BTech/BE/MTech/MS/MCA
Job Description -
Reviewing requirements / specifications / technical design documents
Designing detailed, comprehensive and well-structured Test Plans and Test Cases
Setting up Test Environment & Test Data
Executing tests as needed throughout the project
Analyzing and reporting test results
Identifying and tracking defects through their lifecycle
Understanding of Integration - Technical Design Document and Use Case
Testing experience of any one of the Guidewire products: PolicyCenter
Experience on policy transactions, workflow, Audits, forms inference
Performing thorough testing (Smoke / System / Integration / Regression / Stabilization)
Possessing expertise in Test Management Tools like ALM / Jira
Understanding of data loading frameworks for data migration and datahub projects
Understanding of data warehousing concepts
Strong analytical skills to build data mapping from Guidewire Claims/Policy Data model to legacy systems
Strong SQL knowledge
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Title: Senior AI Developer
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite for the initial 1 to 3-month project training and execution period at either our Kovilpatti or Chennai location, which will be confirmed during the onboarding process. After the initial period, remote work opportunities will be offered.
Job Description:
The Senior AI Developer will be responsible for designing, building, training, and deploying advanced artificial intelligence and machine learning models to solve complex business challenges across industries. This role demands a strategic thinker and hands-on practitioner who can work at the intersection of data science, software engineering, and innovation. The candidate will contribute to scalable production-grade AI pipelines and mentor junior AI engineers within the Center of Excellence (CoE).
Key responsibilities
· Design, train, and fine-tune deep learning models (NLP, CV, LLMs, GANs) for high-value applications
· Architect AI model pipelines and implement scalable inference engines in cloud-native environments
· Collaborate with data scientists, engineers, and solution architects to productionize ML prototypes
· Evaluate and integrate pre-trained models like GPT-4o, Gemini, Claude, and fine-tune based on domain needs
· Optimize algorithms for real-time performance, efficiency, and fairness
· Write modular, maintainable code and perform rigorous unit testing and validation
· Contribute to AI codebase management, CI/CD, and automated retraining infrastructure
· Research emerging AI trends and propose innovative applications aligned with business objectives
Technical Skills
· Expert in Python, PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
· LLM deployment & tuning: OpenAI (GPT), Google Gemini, Claude, Falcon, Mistral
· Experience with RESTful APIs, Flask/FastAPI for AI service exposure
· Proficient in Azure Machine Learning, Databricks, MLflow, Docker, Kubernetes (an MLflow sketch follows this posting)
· Hands-on experience with vector databases, prompt engineering, and retrieval-augmented generation (RAG)
· Knowledge of Responsible AI frameworks (bias detection, fairness, explainability)
Qualification
· Master’s in Artificial Intelligence, Machine Learning, Data Science, or Computer Engineering
· Certifications in AI/ML (e.g., Microsoft Azure AI Engineer, Google Professional ML Engineer) preferred
· Demonstrated success in building scalable AI applications in production environments
· Publications or contributions to open-source AI/ML projects are a plus
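Since the skills list calls out MLflow alongside the training stack, here is a minimal experiment-tracking sketch. The experiment name, hyperparameters, and metric values are invented for illustration and assume a local ./mlruns tracking store; the training code that would produce them is omitted.

```python
import mlflow

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters and metrics produced by the (omitted) training code.
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("epochs", 5)
    mlflow.log_metric("val_accuracy", 0.87)
    mlflow.log_metric("val_loss", 0.31)
    # Artifacts such as the serialized model or evaluation plots could be added
    # with mlflow.log_artifact("model.pkl") once the file exists.
```

Pointing MLFLOW_TRACKING_URI at a shared server (or Azure ML's MLflow endpoint) turns the same four calls into team-wide experiment tracking.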
Posted 1 week ago