
32 Embeddings Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

You should have at least 3 years of relevant experience with the following skills: - Proficiency in Python, machine learning, deep learning, and NLP. - Experience in developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs. - Proficiency in LangChain and LLMs. - Ability to design prompts and optimize few-shot techniques to enhance LLM performance on specific tasks. - Ability to evaluate LLM zero-shot and few-shot capabilities, fine-tune hyperparameters, ensure task generalization, and explore model interpretability for robust web app integration. - Ability to collaborate with ML and integration engineers to leverage an LLM's pre-trained potential, delivering contextually appropriate responses in a user-friendly web app. - Solid understanding of data structures, algorithms, and principles of software engineering. - Experience with vector databases, RDBMS, MongoDB, and NoSQL databases. - Proficiency in working with embeddings. - Strong distributed systems and system architecture skills. - Experience in building and running a large platform at scale. - Hands-on experience with Python, Hugging Face, TensorFlow, Keras, PyTorch, Spark, or similar statistical tools. - Experience as a data modeling ML/NLP scientist, including performance tuning, fine-tuning, RLHF, and performance optimization. - Proficiency with integrating data from multiple sources and API design. - Good knowledge of Kubernetes and RESTful design. - Prior experience in developing public cloud services or open-source ML software is an advantage. You should also have a validated background with ML toolkits such as PyTorch, TensorFlow, Keras, LangChain, LlamaIndex, SparkML, or Databricks. Your experience and strong knowledge of AI/ML, and particularly LLMs, will be beneficial in this role.
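
As a rough illustration of the few-shot prompting and temperature tuning this listing references, a minimal sketch using the OpenAI Python SDK (the model name, system prompt, and examples are placeholders, not values from the posting):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot examples steer the model toward the desired label format.
few_shot = [
    {"role": "system", "content": "Classify support tickets as BILLING, BUG, or OTHER."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "BILLING"},
    {"role": "user", "content": "The export button crashes the app."},
    {"role": "assistant", "content": "BUG"},
]

def classify(ticket: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=few_shot + [{"role": "user", "content": ticket}],
        temperature=0,        # low temperature for deterministic labels
    )
    return response.choices[0].message.content.strip()

print(classify("My invoice shows the wrong amount."))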

Posted 13 hours ago

Apply

6.0 - 10.0 years

0 Lacs

noida, uttar pradesh

On-site

About Birlasoft: At Birlasoft, we are a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a team of 12,500+ professionals committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Social Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. We are looking for an experienced technical lead to implement an enterprise-grade conversational AI interface leveraging technologies like NodeJS, Python, LangChain, Azure OpenAI, and Azure Cognitive Services. Responsibilities: - Implement the conversational AI application using NodeJS, Azure OpenAI, and Python. - Integrate AI technologies such as OpenAI models, LangChain, and Azure Cognitive Services (Cognitive Search, indexes, indexers, APIs, etc.) to enable sophisticated natural language capabilities. - Implement private endpoints across the Azure services leveraged for the application. - Implement schemas, APIs, frameworks, and platforms to operationalize AI models and connect them to conversational interfaces. - Implement app logic for conversation workflows, context handling, personalized recommendations, sentiment analysis, etc. - Build and deploy the production application on Azure while meeting security, reliability, and compliance standards. - Create tools and systems for annotating training data, monitoring model performance, and continuously improving the application. - Mentor developers and provide training on conversational AI development best practices. - Build and productionize vector databases for the application on Azure cloud. Required Skills & Experience: - 6-8 years of overall technology experience in core application development. - 2+ years of experience leading the development of AI apps and conversational interfaces. - Hands-on, implementation-centric knowledge of generative AI tools on Azure cloud. - Deep, hands-on development proficiency in Python and NodeJS. - Hands-on expertise with SharePoint indexes and data/file structures (Azure SQL). - Hands-on knowledge of Azure Form Recognizer tools. - Experience with LangChain, Azure OpenAI, and Azure Cognitive Search. - Retrieval-Augmented Generation (RAG) and RLHF (Reinforcement Learning from Human Feedback) using Python. - Vector databases on Azure cloud using PostgreSQL. - Pinecone, FAISS, Weaviate, or ChromaDB. - Prompt engineering using LangChain or LlamaIndex. - Knowledge of NLP techniques like transformer networks, embeddings, intent recognition, etc. - Hands-on skills in embedding and fine-tuning Azure OpenAI models using MLOps/LLMOps pipelines. - Good to have: strong communication, DevOps, and collaboration skills.
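
For orientation, a hedged sketch of the grounding pattern this listing describes: calling an Azure OpenAI chat deployment with context retrieved from Azure Cognitive Search stuffed into the prompt. The endpoint, key, deployment name, and API version below are assumptions, not values from the posting.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",                                       # placeholder
    api_version="2024-02-01",                                   # assumed API version
)

def answer(question: str, retrieved_chunks: list[str]) -> str:
    # Chunks would typically come from Azure Cognitive Search indexes/indexers.
    context = "\n\n".join(retrieved_chunks)
    response = client.chat.completions.create(
        model="<chat-deployment-name>",  # the Azure deployment name, not the model family
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content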

Posted 1 day ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role: Senior AI/ML Engineer Function: Machine Learning / AI Engineering Industry: Fintech, SaaS, Artificial Intelligence About Company The client is an early-stage, venture-backed fintech using AI to simplify cash-flow analytics for high-volume B2C businesses. Its platform pairs natural-language queries with machine learning to deliver real-time insights and friction-free reconciliation. You join a small, fast-moving team that prizes ownership, experimentation, and transparent collaboration. The mission is clear: give operators instant, accurate financial visibility so they can scale with confidence. Position Overview You lead the end-to-end AI charter that powers financial automation, intelligent reconciliation, and anomaly detection. You shape the GenAI roadmap, productionize cutting-edge ML systems, and build a world-class team, all while delivering reliable, compliant, and low-latency solutions for transactional finance. Role & Responsibilities Define the AI and GenAI vision for reconciliation, document understanding, financial classification, and payment behavior intelligence. Translate product problems into scalable ML systems and GenAI workflows. Research, experiment, and productionize LLM-based pipelines tailored to enterprise financial operations. Deploy and maintain ML and LLM pipelines in production, including observability, retries, retraining, and versioning. Implement feedback loops that continuously learn from user actions and corrections. Optimize performance and latency of GenAI systems in high-throughput transactional environments. Ensure data privacy, regulatory compliance, and explainability of AI outputs in financial contexts. Lead, hire, mentor, and grow a team of ML and GenAI engineers. Collaborate with product managers, backend engineers, and data engineers to ship AI-powered features. Evangelize responsible AI practices and an experimentation culture across the organization. Must have Criteria 5+ years in AI/ML with at least 1-2 years hands-on with LLMs and GenAI. Deep knowledge of ML algorithms, NLP, transformers, vector search, embeddings, classification, and unsupervised learning. Proven track record building and deploying GenAI applications using OpenAI APIs, HuggingFace, LangChain, or LlamaIndex. Strong coding skills in Python and experience with PyTorch or TensorFlow, scikit-learn, pandas, and MLOps tools such as MLflow or Airflow. Hands-on experience fine-tuning LLMs (LoRA, QLoRA, PEFT) and crafting prompts for deterministic outputs. Expertise with OCR/NLP tools for semi-structured document extraction and parsing. Ability to work with unstructured, noisy financial data at scale. End-to-end ownership of ML systems from research through deployment and monitoring. Nice to Have Familiarity with open-source LLMs like Mistral, Claude, LLaMA, or Zephyr. Experience with chunking strategies, prompt templates, and hybrid search in RAG systems. Background in enterprise SaaS or fintech domains such as banking, reconciliation, ERP, or accounting. Knowledge of graph-based ML or probabilistic models for complex transaction flows. Past experience building AI systems in an early-stage startup or as a founding team member. Fintech domain expertise. What We Offer Foundational role where you define how AI evolves at the company. High impact on real-world financial decisions requiring accuracy and auditability. Ownership of deep ML and cutting-edge GenAI problems. Product-first, collaborative culture that values high agency and technical depth.
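
As a sketch of the LoRA-style fine-tuning named in the must-have criteria, using Hugging Face transformers with the peft library; the base model name, rank, and target modules below are illustrative assumptions, not details from the posting:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"   # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapter matrices are trained instead of the full weights.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-specific choice
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base parameters

# The wrapped model can then be trained with the usual Trainer / SFT loop.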

Posted 1 day ago

Apply

3.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Data Scientist specializing in GenAI within the Banking domain, you will utilize your extensive experience of over 10 years in Data Science, with a focus of 3+ years specifically in GenAI. Your expertise will be instrumental in developing, training, and fine-tuning GenAI models such as LLMs and GPT for various banking use cases. You will collaborate closely with business and product teams to design and implement predictive models, NLP solutions, and recommendation systems tailored to the financial industry. A key aspect of your role will involve working with large volumes of both structured and unstructured financial data to derive valuable insights. Moreover, you will be responsible for ensuring the ethical and compliant use of AI by incorporating practices related to fairness, explainability, and compliance into the model outputs. Deployment of these models using MLOps practices like CI/CD pipelines and model monitoring will also be within your purview. Your skill set must include a strong proficiency in Python programming, along with a deep understanding of libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch. Hands-on experience with GenAI tools like OpenAI, Hugging Face, LangChain, and Azure OpenAI will be crucial for success in this role. Furthermore, your expertise in NLP, prompt engineering, embeddings, and vector databases will play a pivotal role in building models for critical banking functions like credit risk assessment, fraud detection, and customer segmentation. While not mandatory, it would be advantageous to possess knowledge of LLM fine-tuning and retrieval-augmented generation (RAG) in addition to familiarity with data privacy and compliance regulations such as GDPR and RBI guidelines as they pertain to AI systems. Your understanding of banking data, processes, and regulatory requirements will be key in delivering effective AI-driven solutions within the financial services industry.,
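
To ground the credit-risk and fraud-detection modelling this listing mentions, a minimal scikit-learn sketch on synthetic transaction features; the features, data, and model choice are invented for illustration only:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Synthetic stand-ins for engineered transaction features.
X = rng.normal(size=(5000, 6))
y = (X[:, 0] + 2 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))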

Posted 6 days ago

Apply

3.0 - 7.0 years

0 - 0 Lacs

haryana

On-site

As an ML Engineer at an early-stage, US-based venture-backed technology company in Gurgaon, you will be responsible for designing and deploying the core recommendation and personalization systems that power the matchmaking experience. Your role will involve engineering the full lifecycle - from design, R&D, to deployment - while laying the foundation for scalable, real-time ranking infrastructure. You will be developing match-making, recommendation, ranking, and personalization systems. Specifically, you will work on creating a novel real-time adaptive matchmaking engine that learns from user interactions and other signals. Your tasks will also include designing ranking and recommendation algorithms that make each user feed feel curated for them. Additionally, you will build user embedding systems, similarity models, and graph-based match scoring frameworks, and deploy models to production using fast iteration loops, model registries, and observability tooling. The ideal candidate for this role is an ML engineer with 3-6 years of experience working on ML Engineering or Data Science. You should have prior experience working in personalization, recommendations, search, or ranking at scale. Exposure to a variety of popular recommendation and personalization techniques, including collaborative filtering, deep retrieval models, learning-to-rank, embeddings with ANN search, and LLM approaches for sparse data personalization is desirable. Experience with end-to-end ML pipelines, vector search, graph-based algorithms, and LLM based approaches would be a significant advantage. Joining this company's founding team will allow you to play a core role in shaping the future of how humans connect in the AI era. The compensation offered includes a range of 30-50 LPA along with ESOPs, providing an opportunity for wealth creation.,
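
The embeddings-plus-ANN-search pattern in this listing can be illustrated with FAISS over synthetic user vectors; the dimensionality and data are placeholders, and a real system would swap the flat index for an approximate one at scale:

import numpy as np
import faiss

dim = 64
rng = np.random.default_rng(0)
user_vectors = rng.normal(size=(10_000, dim)).astype("float32")
faiss.normalize_L2(user_vectors)            # cosine similarity via inner product

index = faiss.IndexFlatIP(dim)              # exact search; IVF/HNSW variants scale better
index.add(user_vectors)

query = user_vectors[:1]                    # "find users similar to user 0"
scores, neighbours = index.search(query, 10)
print(neighbours[0], scores[0])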

Posted 1 week ago

Apply

3.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Data Scientist specializing in GenAI within the Banking domain, you will leverage your extensive experience of over 10 years in Data Science, with a specific focus of 3+ years in GenAI. Your key responsibility will revolve around developing, training, and fine-tuning GenAI models such as LLMs, GPT, etc., tailored for banking use cases. Additionally, you will be tasked with designing and implementing predictive models, NLP solutions, and recommendation systems while working with large volumes of structured and unstructured financial data. Collaboration with business and product teams to define AI-driven solutions will be crucial in your role. You will play a pivotal part in ensuring responsible AI practices, fairness, explainability, and compliance in model outputs. Deployment of models using MLOps practices including CI/CD pipelines and model monitoring will also be within your purview. Your skill set must include strong proficiency in Python programming along with libraries like Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch. Hands-on experience with GenAI tools such as OpenAI, Hugging Face, LangChain, Azure OpenAI, etc., will be essential. Expertise in NLP, prompt engineering, embeddings, and vector databases is a prerequisite. Furthermore, you should have experience in building models for credit risk assessment, fraud detection, customer segmentation, among others. A solid understanding of banking data, processes, and regulatory requirements is essential for success in this role. Moreover, having knowledge in LLM fine-tuning and retrieval-augmented generation (RAG) is considered a good-to-have skill. Exposure to data privacy and compliance standards such as GDPR, RBI guidelines, in AI systems will be an added advantage.,
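
For the MLOps deployment-and-monitoring piece mentioned above, a minimal MLflow tracking sketch; the run name, parameters, and metric names are illustrative assumptions:

import mlflow

with mlflow.start_run(run_name="credit-risk-baseline"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 3)
    mlflow.log_metric("val_auc", 0.91)
    mlflow.log_metric("val_ks", 0.48)
    # Artifacts (plots, model binaries) can be attached for later comparison, e.g.:
    # mlflow.log_artifact("roc_curve.png")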

Posted 1 week ago

Apply

12.0 - 18.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Freshworks Organizations everywhere struggle under the crushing costs and complexities of solutions that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There's another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description Function: Engineering AI Reports To: VP - Engineering AI Team Size: 20 to 40 (Data Scientists, ML Engineers, Software Engineers) We are looking for a Senior Director of Engineering AI to lead the charter for machine learning and GenAI initiatives across the Freshworks platform. This leader will be responsible for defining the AI vision, leading high-impact cross-functional programs, embedding intelligence into our product suite, and scaling a world-class data science function globally. You'll operate at the intersection of business, product, and technology, steering the strategic use of AI to solve customer problems, improve operational efficiency, and drive revenue growth. Key Responsibilities Strategy & Vision Define and own the AI roadmap aligned with company objectives. Drive the long-term strategy for AI/ML initiatives and data monetization opportunities. Evangelize a culture of experimentation, evidence-based decision-making, and responsible AI. Team Leadership Hire, mentor, and develop a world-class team of data scientists, machine learning engineers, and software engineers. Foster a collaborative, inclusive, and high-impact environment with a strong learning and delivery mindset. Cross-functional Leadership Partner closely with Engineering and Product teams to embed ML models into products and services. Collaborate with stakeholders across business functions to identify and prioritize use cases for AI. Technical Execution Oversee development and deployment of scalable ML models, statistical models, NLP solutions, and recommendation engines. Ensure rigorous experimentation and model validation using state-of-the-art techniques. Champion data governance, quality, and security practices. Metrics & Impact Define KPIs and success metrics for data science initiatives. Deliver measurable impact on revenue growth, operational efficiency, and customer experience. Qualifications Professional Experience 12-18 years of experience in AI, with at least 5 years in senior leadership roles managing large data science teams. Proven experience delivering ML-based products and solutions in a SaaS or digital platform environment. Demonstrated ability to influence product roadmaps and drive AI strategy in large-scale environments.
Technical Expertise Deep understanding of applied machine learning, NLP, deep learning, causal inference, optimization, and GenAI (LLMs, embeddings, retrieval pipelines). Strong hands-on foundation in Python, SQL, Spark, and ML frameworks such as PyTorch, TensorFlow, and scikit-learn. Familiarity with modern cloud data stacks (e.g., Snowflake, Databricks, AWS SageMaker, Vertex AI, LangChain/RAG pipelines). Drive operational efficiency and engineering productivity across AI and data platform teams through streamlined processes, tooling, and automation. Establish and enforce standardized practices for data engineering, model development, and deployment across teams to ensure consistency, quality, and reuse. Champion platform-first thinking: building reusable components, shared services, and self-service capabilities to accelerate experimentation and delivery. Experience with production-grade ML deployment, experimentation, and performance tracking. Leadership & Influence Strong executive presence with the ability to influence senior stakeholders across Product, Engineering, Sales, and Marketing. Effective communication of complex technical concepts to diverse audiences including the C-suite, product managers, and non-technical partners. Lead cross-functional initiatives to optimize end-to-end ML workflows, from data ingestion to model monitoring, reducing cycle times and increasing model velocity. Partner with engineering, product, and infrastructure teams to align roadmaps and eliminate friction in building, testing, and deploying AI solutions at scale. Passionate about building a data-driven culture and driving talent development across the organization. Nice to Have Experience in customer experience, CRM, service management, or sales tech domains. Hands-on exposure to LLM fine-tuning, prompt engineering, and GenAI application development. Hands-on experience developing and scaling ML pipelines and models using Databricks and related tools (e.g., MLflow). Participation in AI ethics or responsible AI governance efforts. Open-source contributions or published research in relevant domains. Why Join Freshworks Shape the AI-first future of one of the fastest-growing SaaS companies globally. Build and ship data-driven solutions at scale, impacting 60,000+ businesses. Work with a global leadership team that values innovation, autonomy, and customer-centricity. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About Zwende: An ISB Alumni Venture, approved by DIPP under Startup India. Zwende is the world's first creator-to-consumer platform, which offers unique creative products, learning, and entertainment from independent artists/makers, boutique designers, and rural artisans. Zwende is built to offer the unlimited creativity of the creators and match it to the need of today's consumers for individuality and self-expression. Zwende is the only Indian start-up featured by Amazon's Global CTO, Werner Vogels, on his show Now Go Build (NGB). NGB tells stories of global entrepreneurs using cloud-based technologies to solve hard, real-world problems. Watch the show here: https://www.youtube.com/watch?v=2n7bm0mteG0 At Zwende, we're not just building products; we're reimagining how they're built. We're crafting a world-class, AI-first product and technology team, and we're on the hunt for an AI Product Analyst who's ready to redefine what this role means. This isn't your typical analyst gig. You'll be part of a new generation of product thinkers: those who blend data, instinct, and cutting-edge AI to shape decisions, automate insights, and accelerate product velocity. You won't just support the product; you'll co-create it, using AI as your co-pilot. If you're excited about operating at 10x speed, impact, and creativity, and building the muscle to lead AI-native product roles across the tech ecosystem, this is your launchpad. If this has got you excited, let's dive deeper. Here are all the things that the AI Product Analyst would do: Combine data across Google Analytics, Meta Ads, and Shopify to come up with unique product/business hypotheses that will move the business forward. Spin up automated A/B testing frameworks using tools like GrowthBook or PostHog, powered by AI agents to monitor results, adjust hypotheses, and even auto-suggest next test variations. Learn vibe coding on Replit, Cursor, etc., to be able to prototype new product ideas and validate them in a matter of days, before they get into regular product development. Build agents using n8n or agent-builder toolkits which can automate workflows for the entire organization. Train and deploy lightweight recommender systems (e.g., using embeddings + Shopify data) that surface relevant products or bundles dynamically. Lead AI onboarding sessions for other team members, making AI-first thinking part of the org DNA, from designers and marketers to CX and ops. To become successful in this role, you will need: Core Analytical & Product Skills Strong grounding in data analysis: fluency in SQL and Excel/Sheets, and experience with tools like GA4, Shopify analytics, and Meta Ads Manager. Ability to connect the dots across platforms and datasets to uncover user behavior insights, marketing ROI, and conversion gaps. Comfort with rapid experimentation: running A/B tests, interpreting results, and suggesting actionable changes. Experience working closely with product teams to inform feature prioritization and validate product hypotheses with data. AI-Native Thinking Hands-on experience (or deep curiosity) with tools like Replit, Cursor, n8n, or LangChain to automate, prototype, or build internal tools. Familiarity with how LLMs work, and how they can be used for analysis, summarization, ideation, and UX (even if you haven't fine-tuned one yourself). Ability to design, prompt, or configure AI agents to automate workflows (internal dashboards, reporting agents, CX support agents, etc.).
Bonus: Exposure to embeddings, vector search, or recommendation systems in any side project or course. Mindset & Traits You're a builder-analyst: someone who doesn't just observe problems but wants to solve them by creating scrappy tools or internal agents. You learn by doing: you're excited to ship rough prototypes, learn from real-world signals, and iterate quickly. You think from first principles, not just the playbook. You're not afraid to question default tools or methods if you see a better way. You're deeply curious about AI and how it can be applied creatively to real business and user problems. You're comfortable with ambiguity: this role won't come with rigid requirements, but with room to invent and define it. Additional Information: Flexible working hours. Indicative timing: 9 AM to 7 PM, Monday to Saturday. Location: Work from home, which might convert to work from office. Work closely with the leadership team at Zwende. Exponential career growth based on performance. A flat org and informal structure where performance and superiority of thought drive all decisions.
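
As a sketch of the A/B-test analysis this role describes, a chi-square test on conversion counts with SciPy; the session and conversion numbers are made up for illustration:

from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: converted, not converted
table = [
    [120, 2380],   # A: 120 conversions out of 2500 sessions
    [155, 2345],   # B: 155 conversions out of 2500 sessions
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference in conversion rate is statistically significant.")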

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI & ML, you will be responsible for designing, developing, and maintaining intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or Scikit-learn, and utilize Langchain for GenAI pipelines. Experience with LangGraph will be advantageous in this role. Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will be working with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Additionally, managing CI/CD pipelines for backend and model deployments will be a key part of your responsibilities. Ensuring the performance, scalability, and security of applications in cloud environments will also fall under your purview. To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or Scikit-learn is required, as well as experience with Langchain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial. An understanding of microservices architecture, CI/CD workflows, and agile development practices is also desirable. Nice to have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. Additionally, familiarity with MLOps tools and practices for model monitoring, versioning, and retraining will be advantageous. This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, and the schedule involves day shifts from Monday to Friday in the morning. If you are interested in this opportunity, please contact the employer at +91 9966550640.,
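
A minimal FastAPI sketch of serving a trained model behind a REST endpoint, as the listing describes; the model path and feature schema are assumptions, not details from the posting:

from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")   # placeholder path to a trained scikit-learn model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload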

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees spread across more than 30 countries, we are characterized by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, which is the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, by leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently looking for a Principal Consultant - Data Scientist, specializing in Azure Generative AI & Advanced Analytics. In this role, we are seeking a highly skilled and experienced Data Scientist with hands-on expertise in Azure Generative AI, Document Intelligence, Agentic AI, and Advanced Data Pipelines. Your responsibilities will include developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. You will play a crucial role in driving AI-driven insights and automation within our business. Responsibilities: - Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making. - Design, develop, and implement Generative AI solutions leveraging AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services. - Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources. - Build and optimize data pipelines for processing and analyzing large-scale datasets efficiently. - Implement Agentic AI techniques to develop intelligent, autonomous systems that can make decisions and take actions. - Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases. - Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed. - Stay up-to-date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones. - Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth. - Ensure model explainability, fairness, and compliance with responsible AI principles. - Stay up to date with the latest advancements in AI, ML, and data science, and apply best practices to enhance business operations. Qualifications: Minimum Qualifications / Skills: - Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field. - Relevant experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models. - Strong proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow. - Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Data Bricks, RAG Pipeline). - Expertise in LLMs, transformer architectures, and embeddings. - Experience in building and optimizing end-to-end data pipelines. - Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques. 
- Knowledge of Reinforcement Learning (RLHF), fine-tuning LLMs, and prompt engineering. - Strong analytical skills with the ability to translate business requirements into AI/ML solutions. - Excellent problem-solving, critical thinking, and communication skills. - Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is a plus. Preferred Qualifications / Skills: - Experience with multi-modal AI models and computer vision applications. - Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs. - Certifications in Microsoft Azure AI, Data Science, or ML Engineering. If you are passionate about leveraging your skills and expertise to drive AI-driven insights and automation in a dynamic environment, we invite you to apply for the role of Principal Consultant - Data Scientist at Genpact. Join us in shaping the future and creating lasting value for our clients.,
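
To make the embeddings, FAISS, and RAG retrieval items above concrete, a hedged sketch using the OpenAI embeddings API and a FAISS index; the documents, model names, and query are placeholders:

import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()
docs = ["Invoice processing SOP ...", "Vendor onboarding policy ..."]  # placeholder chunks

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

doc_vecs = embed(docs)
faiss.normalize_L2(doc_vecs)
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "How are new vendors onboarded?"
q_vec = embed([query])
faiss.normalize_L2(q_vec)
_, ids = index.search(q_vec, 2)
context = "\n".join(docs[i] for i in ids[0])

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)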

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

delhi

On-site

As a Sr. Software Engineer specializing in AI, you will be a crucial part of Occams Advisory's AI Initiative. Your primary responsibility will be to lead the development of AI-driven solutions, generative AI, and AI business applications that have a significant impact on various domains. If you are passionate about AI, LLMs, and APIs, and are well-versed in tokens, embeddings, and temperature settings, this role is tailor-made for you. Your expertise will be instrumental in steering the end-to-end execution of AI models, starting from conceptualization and training to optimization and deployment. The role necessitates a profound understanding of AI evolution, particularly in the realms of LLMs, AI Business applications, and generative models, ensuring that the AI solutions are not only innovative but also scalable and effective. You will play a pivotal role in developing LLM-Powered Business Solutions by creating robust applications leveraging frameworks like OpenAI and Langchain. Your proficiency in implementing Retrieval-Augmented Generation (RAG) Pipelines, advanced prompt engineering, and API and Microservices Development will be critical in enhancing user interactions and integrating AI functionalities seamlessly. Collaborating with cross-functional teams and ensuring performance optimization and quality assurance will also be part of your responsibilities. To excel in this role, you should hold a Bachelor's or Master's degree in computer science, Data Science, Engineering, or a related field, with at least 2 years of experience in integrating OpenAI, Gemini, Lambda, Llama, Langchain into business scenarios. Proficiency in node.js, AI frameworks like TensorFlow and PyTorch, as well as expertise in LLMs, transformers, embeddings, and generative AI are essential. Additionally, experience in AI model optimization, API development, cloud platforms (OpenAI, AWS, Azure, Google Cloud), and handling large datasets for AI training is required. Occams Advisory offers a comprehensive benefits package that includes health insurance for you and your dependents, provident fund, a fixed CTC budget for learning opportunities, a market-leading leave policy, paid holidays, employee recognition & rewards, and a culture that values meritocracy. If you are a proactive, analytical, and detail-oriented professional who thrives on challenges and is eager to contribute to a culture of excellence and impact, we encourage you to apply for this exciting opportunity as a Sr. Software Engineer (AI) with Occams Advisory.,
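
Since the listing calls out familiarity with tokens and context limits, a small tiktoken sketch for counting tokens before sending a prompt; the encoding name is an assumption tied to recent OpenAI model families:

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding for recent OpenAI models

def count_tokens(text: str) -> int:
    return len(encoding.encode(text))

prompt = "Summarise the attached advisory report in three bullet points."
print(count_tokens(prompt))   # helps keep prompts within the model's context window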

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Software Engineer in the Developer Experience and Productivity Engineering team at Coupa, you will play a crucial role in designing, implementing, and enhancing our sophisticated AI orchestration platform. Your primary responsibilities will revolve around architecting AI and MCP tools architecture with a focus on scalability and maintainability, developing integration mechanisms for seamless connectivity between AI platforms and MCP systems, and building secure connectors to internal systems and data sources. You will have the opportunity to collaborate with product managers to prioritize and implement features that deliver significant business value. Additionally, you will mentor junior engineers, contribute to engineering best practices, and work on building a scalable, domain-based hierarchical structure for our AI platforms. Your role will involve creating specialized tools tailored to Coupa's unique operational practices, implementing secure knowledge integration with AWS RAG and Knowledge Bases, and designing systems that expand capabilities while maintaining manageability. In this role, you will get to work at the forefront of AI integration and orchestration, tackling complex technical challenges with direct business impact. You will collaborate with a talented team passionate about AI innovation and help transform how businesses leverage AI for operational efficiency. Furthermore, you will contribute to an architecture that scales intelligently as capabilities grow, work with the latest LLM technologies, and shape their application in enterprise environments. To excel in this position, you should possess at least 5 years of professional software engineering experience, be proficient in Python and RESTful API development, and have experience in building and deploying cloud-native applications, preferably on AWS. A solid understanding of AI/ML concepts, software design patterns, system architecture, and performance optimization is essential. Additionally, you should have experience with integrating multiple complex systems and APIs, strong problem-solving skills, attention to detail, and excellent communication abilities to explain complex technical concepts clearly. Preferred qualifications include experience with AI orchestration platforms or building tools for LLMs, knowledge of vector databases, embeddings, and RAG systems, familiarity with monitoring tools like New Relic, observability patterns, and SRE practices, and experience with DevOps tools like Jira, Confluence, GitHub, or similar tools and their APIs. Understanding security best practices for AI systems and data access, previous work with domain-driven design and microservices architecture, and contributions to open-source projects or developer tools are also advantageous. Coupa is committed to providing equal employment opportunities to all qualified candidates and employees, fostering a welcoming and inclusive work environment. Decisions related to hiring, compensation, training, or performance evaluation are made fairly, in compliance with relevant laws and regulations. Please note that inquiries or resumes from recruiters will not be accepted. By applying to this position, you acknowledge that Coupa collects your application, including personal data, for managing ongoing recruitment and placement activities, as well as for employment purposes if your application is successful. 
You can find more information about how your application is processed, the purposes of processing, and data retention in Coupa's Privacy Policy.,
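
For the AWS RAG and Knowledge Bases integration this listing mentions, a hedged boto3 sketch; the knowledge base ID, region, and query are placeholders, and the bedrock-agent-runtime API should be checked against current AWS documentation before relying on this:

import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

response = runtime.retrieve(
    knowledgeBaseId="<knowledge-base-id>",          # placeholder
    retrievalQuery={"text": "How do we rotate service credentials?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)
for result in response["retrievalResults"]:
    print(result["content"]["text"][:200])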

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will be responsible for developing React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will work on building scalable pipelines for data extraction, cleaning, and transformation, as well as creating and managing ETL workflows for training data and RAG pipelines. Your role will also involve driving full-stack LLM feature development from prototype to production. To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus. Candidates with experience in Ollama, LangChain, or LlamaIndex will be given bonus points. Exposure to real-time LLM applications like chatbots, copilots, or internal assistants, as well as prior work with enterprise or SaaS AI integrations, is highly valued. This role offers a remote-friendly environment with flexible working hours and a high-ownership opportunity. Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.
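
As a minimal illustration of running a local model with HuggingFace Transformers, the kind of workflow this role centres on; the checkpoint is a small placeholder, not one named in the posting:

from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small placeholder model
out = generator(
    "Write a one-line status update for the deployment pipeline:",
    max_new_tokens=40,
    do_sample=True,
    temperature=0.7,
)
print(out[0]["generated_text"])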

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

punjab

On-site

We are seeking a highly motivated GenAI Engineer with a strong background in working with Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) workflows, and production-ready AI applications. As a GenAI Engineer, you will be involved in designing, building, and expanding digital products and creative applications that capitalize on the latest LLM technologies available. In this role, you will take on a lead position in product development, offering AI services to clients, facilitating client onboarding, and delivering cutting-edge AI solutions. You will collaborate with a variety of modern AI tools, cloud services, and frameworks to achieve these objectives. Your key responsibilities will include designing and implementing generative AI solutions using LLMs, NLP, and computer vision, developing and scaling digital products with LLMs at their core, leading product development and operations teams to implement GenAI-based solutions, managing client onboarding and adoption strategies, delivering enhancements based on client-specific requirements, building and maintaining RAG pipelines and LLM-based workflows for enterprise applications, overseeing LLMOps processes throughout the AI lifecycle, working with cloud-based GenAI platforms such as Azure OpenAI, Google, and AWS, implementing API integrations, orchestration, and workflow automation, as well as evaluating, fine-tuning, and monitoring the performance of LLM outputs using observability tools. The ideal candidate will possess a Bachelor's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field, or equivalent hands-on experience, along with a minimum of 2 years of practical experience in software development or applied machine learning. Proficiency in Azure AI services, including Azure OpenAI, Azure Cognitive Services, and Azure Machine Learning, is required. Additionally, the candidate should have proven experience with LLM APIs, hands-on experience in building and deploying RAG pipelines, proficiency in Python and its ecosystems and libraries, familiarity with core GenAI frameworks, experience with vector databases, practical knowledge of embeddings, model registries, LLM APIs, prompt engineering, tool/function calling, structured outputs, LLM observability tools, as well as strong Git, API, and cloud platform experience. This is a full-time position based in Mohali, Punjab, with on-site work mode and day shift timings from 10:00 AM to 7:00 PM, Monday to Friday. The interview mode will be face-to-face on-site, and interested candidates can contact +91-9872993778 for further information.,
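
A hedged sketch of the tool/function-calling and structured-output pattern the listing lists among its requirements, using the OpenAI chat completions API; the function name and schema are invented for illustration, and the sketch assumes the model chooses to call the tool:

from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",          # hypothetical tool
        "description": "Look up the payment status of an invoice.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_id": {"type": "string"}},
            "required": ["invoice_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Has invoice INV-1042 been paid?"}],
    tools=tools,
)
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)  # arguments arrive as a JSON string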

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

noida, uttar pradesh

On-site

Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 individuals across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business. Responsibilities: - Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making. - Design, develop, and implement Generative AI solutions leveraging various platforms including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services. - Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources. - Build and optimize data pipelines to efficiently process and analyze large-scale datasets. - Implement Agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions. - Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases. - Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary. - Stay updated with the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones. - Mentor and guide junior team members to aid in their professional growth and skill development. - Ensure model explainability, fairness, and compliance with responsible AI principles. - Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations. Minimum Qualifications / Skills: - Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field. - Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models. - Proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow. - Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Data Bricks, RAG Pipeline). - Expertise in LLMs, transformer architectures, and embeddings. - Experience in building and optimizing end-to-end data pipelines. - Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques. - Knowledge of Reinforcement Learning (RLHF), fine-tuning LLMs, and prompt engineering. - Strong analytical skills with the ability to translate business requirements into AI/ML solutions. - Excellent problem-solving, critical thinking, and communication skills. 
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous. Preferred Qualifications / Skills: - Experience with multi-modal AI models and computer vision applications. - Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs. - Certifications in Microsoft Azure AI, Data Science, or ML Engineering. Job Title: Principal Consultant Location: India-Noida Schedule: Full-time Education Level: Bachelor's / Graduation / Equivalent Job Posting: Apr 11, 2025, 9:36:00 AM Unposting Date: May 11, 2025, 1:29:00 PM Master Skills List: Digital Job Category: Full Time,
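
For the Azure Document Intelligence extraction step named above, a hedged sketch with the azure-ai-formrecognizer SDK; the endpoint, key, and file path are placeholders, and the newer azure-ai-documentintelligence package exposes a similar client:

from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                     # placeholder
)

with open("invoice.pdf", "rb") as fd:                                # placeholder file
    poller = client.begin_analyze_document("prebuilt-read", document=fd)
result = poller.result()

for page in result.pages:
    for line in page.lines:
        print(line.content)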

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

You will be joining Ziance Technologies as an experienced Data Engineer (Gen AI) where your primary role will involve leveraging your expertise in Python and the Azure Tech Stack. Your responsibilities will include designing and implementing advanced data solutions, with a special focus on Generative AI concepts. With 5 - 8 years of experience under your belt, you must possess a strong proficiency in Python programming language. Additionally, you should have hands-on experience with REST APIs, Fast APIs, Graph APIs, and SQL Alchemy. Your expertise in Azure Services such as DataLake, Azure SQL, Function App, and Azure Cognitive Search will be crucial for this role. A good understanding of concepts like Chunking, Embeddings, vectorization, indexing, Prompting, Hallucinations, and RAG will be beneficial. Experience in DevOps, including creating pull PRs and maintaining code repositories, is a must-have skill. Your strong communication skills and ability to collaborate effectively with team members will be essential for success in this position. If you are looking to work in a dynamic environment where you can apply your skills in Azure, Python, and data stack, this role at Ziance Technologies could be the perfect fit for you.,
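
To illustrate the chunking step referenced above before embedding and indexing, a plain-Python sketch with overlapping fixed-size character windows; the sizes are arbitrary defaults, not values from the posting:

def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap          # overlap preserves context across chunk boundaries
    return chunks

document = "..."                        # text pulled from the Data Lake / blob storage
for i, chunk in enumerate(chunk_text(document)):
    print(i, len(chunk))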

Posted 2 weeks ago

Apply

12.0 - 22.0 years

25 - 40 Lacs

Gurugram, Bengaluru

Hybrid

We are looking for an experienced Solution Architect in Generative AI to lead the design and delivery of enterprise-grade GenAI solutions across Azure and Google Cloud platforms. This is a full-stack leadership role requiring deep technical expertise and the ability to guide teams through end-to-end project execution. Key Responsibilities: Lead architecture for GenAI projects: LLMs, embeddings, RAG, prompt engineering, agent frameworks. Define scalable designs across the full stack: React, Node.js, .NET Core, C#, Cosmos DB, SQL/NoSQL, vector DBs. Implement Azure/GCP cloud-native solutions using AKS, GKE, Functions, Pub/Sub. Drive CI/CD automation via GitHub Actions, Azure DevOps, Cloud Build. Conduct architecture/code reviews and enforce security, DevSecOps, and performance standards. Translate business requirements into technical solutions and communicate with senior stakeholders. Mentor engineering teams and promote innovation, collaboration, and agile delivery. Required Skills: Generative AI, LLMs, Prompt Engineering, LangChain, RAG, C#, .NET Core, React, Node.js, Azure, Google Cloud, AKS, GKE, Terraform, CI/CD, Cosmos DB, BigQuery, Microservices, DevSecOps, API Gateway Qualifications: Bachelor's in Computer Science or Engineering (Master’s in AI/ML preferred) Strong leadership, communication, and stakeholder management skills Apply Now: shrishtispearhead1@gmail.com Contact: +91 8299010653

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru

Work from Office

Role & responsibilities Assist in designing, training, and evaluating machine learning and deep learning models. Work on GenAI use cases such as text summarization, question answering, and prompt engineering. Build applications using LLMs (like OpenAI GPT, LLaMA, Mistral, Claude, or similar). Preprocess and manage large datasets for training and inference. Implement NLP pipelines using libraries like Hugging Face Transformers. Help integrate AI models into production-ready APIs or applications. Stay updated with advancements in GenAI, ML, and LLM frameworks. Preferred candidate profile Knowledge of vector databases (FAISS, Pinecone, etc.) LangChain or LlamaIndex (RAG pipelines) Experience with Kaggle competitions Awareness of ethical AI principles and model limitations Selection Process: Technical Assessment Python + ML/GenAI basics Technical Interview – Coding + Project Discussion Final Selection – Based on combined performance
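
For the text-summarization use case named in the responsibilities, a minimal Hugging Face pipeline sketch; the checkpoint is a commonly used public model, assumed here rather than taken from the posting:

from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")  # assumed checkpoint
article = (
    "Generative AI adoption is growing across industries, with teams using "
    "large language models for summarisation, question answering, and search."
)
summary = summarizer(article, max_length=30, min_length=10, do_sample=False)
print(summary[0]["summary_text"])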

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will be tasked with building and managing RESTful APIs using Django or FastAPI for AI/ML model integration. Additionally, you will develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Your expertise in implementing GenAI pipelines using Langchain will be crucial, and experience with LangGraph is considered a strong advantage. You will leverage various AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Collaborating with data scientists, DevOps, and architects to integrate models and workflows into production will be a key aspect of your role. Furthermore, you will be responsible for building and managing CI/CD pipelines for backend and model deployments. Ensuring the performance, scalability, and security of applications in cloud environments will be paramount. Monitoring production systems, troubleshooting issues, and optimizing model and API performance will also fall under your purview. To excel in this role, you must possess at least 5 years of hands-on experience in Python backend development. Your strong experience in building RESTful APIs using Django or FastAPI is essential. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with Langchain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial. Additionally, having Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows will be advantageous. Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, are considered nice-to-have qualifications. In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications. Your contributions will be instrumental in scaling backend systems and implementing AI/ML applications effectively.,

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

DecisionX is pioneering a new category with the world's first Decision AI, an AI Super-Agent that assists high-growth teams in making smarter, faster decisions by transforming fragmented data into clear next steps. Whether it involves strategic decisions in the boardroom or operational decisions across various departments like Sales, Marketing, Product, and Engineering, down to the minutiae that drives daily operations, Decision AI serves as your invisible co-pilot, thinking alongside you, acting ahead of you, and evolving beyond you. We are seeking a dedicated and hands-on AI Engineer to join our Founding team. In this role, you will collaborate closely with leading AI experts to develop the intelligence layer of our exclusive "Agentic Number System." Key Responsibilities - Building, fine-tuning, and deploying AI/ML models for tasks such as segmentation, scoring, recommendation, and orchestration. - Developing and optimizing agent workflows using LLMs (OpenAI, Claude, Mistral, etc.) for contextual reasoning and task execution. - Creating vector-based memory systems utilizing tools like FAISS, Chroma, or Weaviate. - Working with APIs and connectors to incorporate third-party data sources (e.g., Salesforce, HubSpot, GSuite, Snowflake). - Designing pipelines that transform structured and unstructured signals into actionable insights. - Collaborating with GTM and product teams to define practical AI agent use cases. - Staying informed about the latest developments in LLMs, retrieval-augmented generation (RAG), and agent orchestration frameworks (e.g., CrewAI, AutoGen, LangGraph). Must Have Skills - 5-8 years of experience in AI/ML engineering or applied data science. - Proficient programming skills in Python, with expertise in LangChain, Pandas, NumPy, and Scikit-learn. - Experience with LLMs (OpenAI, Anthropic, etc.), prompt engineering, and RAG pipelines. - Familiarity with vector stores, embeddings, and semantic search. - Expertise in data wrangling, feature engineering, and model deployment. - Knowledge of MLOps tools such as MLflow, Weights & Biases, or equivalent. What you will get - Opportunity to shape the AI architecture of a high-ambition startup. - Close collaboration with a visionary founder and experienced product team. - Ownership, autonomy, and the thrill of building something from 0 to 1. - Early team equity and a fast growth trajectory.,
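
As a sketch of the vector-based memory layer mentioned in the responsibilities, using Chroma; the collection name and stored documents are invented for illustration:

import chromadb

client = chromadb.Client()                          # in-memory instance for experimentation
memory = client.create_collection("agent_memory")   # hypothetical collection name

memory.add(
    ids=["m1", "m2"],
    documents=[
        "Q3 pipeline review scheduled for Friday.",
        "Enterprise deals above $50k need legal sign-off.",
    ],
)
hits = memory.query(query_texts=["What approvals do large deals need?"], n_results=1)
print(hits["documents"][0][0])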

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

karnataka

On-site

As a GEN AI and Machine Learning Engineer, you will be responsible for leveraging your Bachelor's or Masters degree in Computer Science, Data Science, Engineering, or a related field to develop and deploy machine learning and AI models. With a solid foundation in mathematics, statistics, and probability, you will utilize your strong programming skills in Python and proficiency in SQL/NoSQL databases to build analytical approaches based on business requirements. Your hands-on experience with Agentic AI frameworks will be crucial in building AI-driven features and solutions by collaborating with software engineers, business stakeholders, and domain experts. You will have the opportunity to work with Generative AI models and tools like OpenAI, Google Gemini, and Runway ML, in addition to AI/ML and deep learning frameworks such as TensorFlow, PyTorch, Scikit-learn, OpenCV, and Keras. In this role, you will preprocess and analyze large-scale datasets to extract meaningful insights, identify trends, and develop RESTful APIs using Flask or Django. Proficiency in cloud environments like AWS, Azure, or GCP will be essential as you deploy AI/ML applications and continuously monitor model performance for accuracy, scalability, and generalization. Additionally, your familiarity with ML operations tools such as MLFlow, Kubeflow, and CI/CD pipelines will be advantageous in evaluating and optimizing model performance. Experience with Docker, Kubernetes, and container orchestration, as well as frontend technologies like HTML, CSS, JavaScript/jQuery, Node.js, Angular, or React, will be beneficial for this role. Overall, as a GEN AI and Machine Learning Engineer, you will play a key role in designing, developing, and deploying AI/ML, NLP/NLU, and deep learning models and applications. Your ability to document development processes, results, and best practices will contribute to transparency and continuous learning within the organization.,
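
A minimal Flask sketch of the model-serving REST endpoint this listing describes; the model path and payload shape are assumptions, not details from the posting:

from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")          # placeholder path to a trained model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = payload["features"]           # e.g. [5.1, 3.5, 1.4, 0.2]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)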

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows.

Your key responsibilities will include building and implementing a highly scalable big data platform that processes terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining big data pipelines using technologies such as Spark, Hadoop, Kafka, and Snowflake. Additionally, you will design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support.

You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality. Participating in Agile/Scrum processes, discussing software features with product managers, and providing customer support over email or JIRA will also be part of your role.

We are looking for candidates with 3+ years of coding experience in Java and backend development, solid computer science fundamentals, expertise in software engineering best practices, hands-on experience with Big Data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly is essential. Strong interpersonal and communication skills, including technical communication, are highly valued.

To qualify for this role, you should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute or university. PubMatic employees globally have returned to our offices on a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks like healthy snacks, drinks, and catered lunches.

About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.
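The real-time responsibilities above (Kafka feeding Spark) usually follow the Structured Streaming pattern sketched below. The broker address and topic name are placeholders, and since the listing is Java-centric, treat this PySpark version as illustrative only.

# Illustrative PySpark Structured Streaming job: read events from Kafka,
# count them per one-minute window, and write results to the console sink.
# Requires the spark-sql-kafka connector on the classpath; broker and topic
# names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("ad-events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "ad-events")
    .load()
    .selectExpr("CAST(value AS STRING) AS value", "timestamp")
)

# Count events per one-minute window; a real job would first parse the JSON payload.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()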

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

chennai, tamil nadu

On-site

As an AI Engineer at GenAI & Applied ML, you will work with cutting-edge AI technologies, particularly Generative AI, Large Language Models (LLMs), and applied machine learning. Your role will involve designing, building, and deploying AI solutions on platforms like OpenAI, Hugging Face, and Cohere, specifically for applications in document intelligence, automation, and Q&A systems. You will collaborate with product and engineering teams to integrate AI features into scalable platforms, using tools like LangChain, Flask, and AWS/GCP to deploy models effectively.

Your key responsibilities will include integrating RAG (Retrieval-Augmented Generation), prompt engineering, and zero/few-shot learning for domain-specific applications, and developing NLP models using advanced techniques such as NER, topic modeling, embeddings (Word2Vec, FastText), and transformers (BERT, GPT). You will also automate data extraction, classification, and analysis using Python, Selenium, and web scraping tools, while using frameworks like TensorFlow, PyTorch, and Scikit-learn to train deep learning models across NLP, CV, and recommendation domains.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field, along with at least 1 year of hands-on experience building AI/ML models. Proficiency in Python and ML frameworks such as PyTorch, TensorFlow, and Scikit-learn is essential, as is familiarity with LLM orchestration frameworks and cloud platforms like AWS, GCP, or Azure, plus a strong understanding of NLP, Computer Vision, or Recommendation Systems. Experience with SQL, data pipelines, and big data tools like Hadoop and Spark would be advantageous, along with a good understanding of explainable AI (XAI) and model interpretability.

Nice to have: projects or experience in healthcare AI, document processing, or legal/finance AI applications; familiarity with Flask/Django for API development; and contributions to research or published papers at ML/NLP conferences. Join our team and be part of our mission to solve real-world problems through AI innovation and technology.
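The RAG responsibility in this listing reduces to retrieving the most relevant passage and handing it to an LLM as context. A minimal sketch of that flow is shown below, assuming the OpenAI Python client with "text-embedding-3-small" and "gpt-4o-mini" as illustrative model choices; the passages and question are made-up examples.

# Minimal RAG sketch: embed a small corpus, retrieve the closest passage by
# cosine similarity, and pass it to the chat model as context.
# Requires OPENAI_API_KEY in the environment; model names are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

passages = [
    "Invoices are processed within 5 business days of receipt.",
    "Refund requests must include the original order number.",
]

def embed(texts):
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

passage_vectors = embed(passages)

question = "How long does invoice processing take?"
question_vector = embed([question])[0]

# Cosine similarity against every passage; take the best match as context.
scores = passage_vectors @ question_vector / (
    np.linalg.norm(passage_vectors, axis=1) * np.linalg.norm(question_vector)
)
context = passages[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)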

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

At Raven, a YC-backed startup specializing in building AI assistants for manufacturing plants, you will play a crucial role in applying decades of manufacturing expertise to real operational challenges through AI solutions. Based in Bangalore, our small yet dedicated team is committed to making industrial operations safer, smarter, and more efficient.

In this core technical position, your primary focus will be on developing AI-native applications and agents tailored for manufacturing workflows. Specifically, you will be responsible for:
- Building Python/Go backend services that integrate seamlessly with AI systems.
- Enhancing multi-modal pipeline infrastructure to handle P&IDs, SOPs, sensor data, and technical documents.
- Constructing agent memory systems using knowledge graphs and event timelines.
- Designing AI orchestration layers for decision-making workflows over structured and unstructured plant data.
- Rapidly prototyping new AI workflows and deploying them in real-world plant settings.

We are seeking individuals with 2-4 years of experience building production systems, strong Python/Go skills, and familiarity with LLMs, embeddings, and vector stores. The ideal candidate has a proactive approach to problem-solving, a keen interest in owning solutions end to end, and the ability to thrive in dynamic environments by iterating quickly and delivering tangible value. A background in startups and the ability to wear multiple hats are a bonus.

By joining our team, you will help build the core systems that empower plant teams to make informed decisions swiftly and safely. As one of the initial hires, you will have the chance to shape the product, culture, and direction of the company, with a stake in it through equity. You will be part of a collaborative environment that favors fast iteration and in-person collaboration, focused on meaningful problems with a direct impact on safety and efficiency in the real world.
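The agent-memory responsibility above (knowledge graphs plus event timelines) is, at its simplest, an append-only log of plant events that can be queried by time window and tag. Below is a minimal, library-free sketch of that idea; the event fields and example records are illustrative, not taken from the listing.

# Minimal event-timeline memory: record plant events, query them back by
# time window and tag. Field names and example events are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class PlantEvent:
    timestamp: datetime
    source: str          # e.g. a sensor tag or SOP reference
    description: str
    tags: List[str] = field(default_factory=list)

class EventTimeline:
    def __init__(self) -> None:
        self._events: List[PlantEvent] = []

    def record(self, event: PlantEvent) -> None:
        # Keep events ordered so time-window queries read naturally.
        self._events.append(event)
        self._events.sort(key=lambda e: e.timestamp)

    def query(self, since: datetime, tag: Optional[str] = None) -> List[PlantEvent]:
        return [
            e for e in self._events
            if e.timestamp >= since and (tag is None or tag in e.tags)
        ]

timeline = EventTimeline()
timeline.record(PlantEvent(datetime.now() - timedelta(hours=2), "TI-101",
                           "Reactor temperature exceeded alarm threshold", ["alarm"]))
timeline.record(PlantEvent(datetime.now() - timedelta(minutes=30), "SOP-17",
                           "Operator acknowledged alarm and started cooldown", ["alarm", "action"]))

for event in timeline.query(since=datetime.now() - timedelta(hours=3), tag="alarm"):
    print(event.timestamp.isoformat(), event.source, event.description)

A production version would back this with a graph or time-series store and feed retrieved events into the agent's context, but the record-then-query interface stays the same.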

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

Raven, a YC-backed startup (S22), is dedicated to developing AI assistants for manufacturing plants, combining decades of manufacturing expertise with AI technology to address operational challenges. With strong backing from top VCs, the Bangalore-based team focuses on making industrial operations safer, smarter, and more efficient in demanding work environments.

As part of the team, your core technical responsibility will be constructing AI-native applications and agents tailored for manufacturing workflows. Your tasks will include:
- Building Python/Go backend services that integrate seamlessly with AI systems.
- Enhancing multi-modal pipeline infrastructure to handle P&IDs, SOPs, sensor data, and technical documents.
- Developing agent memory systems using knowledge graphs and event timelines.
- Designing AI orchestration layers for decision-making workflows over structured and unstructured plant data.
- Rapidly prototyping new AI workflows and deploying them in real-world plant settings.

The ideal candidate has:
- 2+ years of experience building production systems.
- Proficiency in Python/Go, with familiarity with LLMs, embeddings, and vector storage.
- A keen interest in problem-solving and a proactive approach to finding solutions.
- A deep commitment to owning problems end to end, from exploring potential solutions to shipping working systems in production.
- Comfort with ambiguity, the ability to adapt quickly, and a focus on delivering tangible value.
Past experience in a startup environment and a willingness to take on varied responsibilities are a bonus.

Joining Raven means being part of a team building the fundamental systems that empower plant teams to make quicker and safer decisions. The role offers:
- Impact: as one of the initial hires, you will play a pivotal role in shaping the product, culture, and trajectory of the company.
- Ownership: you will be granted 0.1-1% equity, because we want you to feel real ownership in the company.
- Focus: the opportunity to tackle significant, complex problems that directly influence real-world safety and efficiency.
At this stage of growth, we value the collaboration and fast iteration that come from working together in person; the emphasis on teamwork and proximity is meant to accelerate progress toward shared goals.

Posted 4 weeks ago

Apply
Page 1 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies