0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: We are seeking a Java Developer with expertise in Prompt Engineering to join our AI-driven development team. The ideal candidate will combine robust Java backend development capabilities with hands-on experience in integrating and fine-tuning LLMs (e.g., OpenAI, Cohere, Mistral, or Anthropic), designing effective prompts, and embedding AI functionality into enterprise applications. This role is ideal for candidates passionate about merging traditional enterprise development with cutting-edge AI technologies. Key Responsibilities Design, develop, and maintain scalable backend systems using Java (Spring Boot) and integrate AI/LLM services. Collaborate with AI/ML engineers and product teams to design prompt templates, test prompt effectiveness, and iterate for accuracy, performance, and safety. Build and manage RESTful APIs that interface with LLM services and microservices in production-grade environments. Fine-tune prompt formats for various AI tasks (e.g., summarization, extraction, Q&A, chatbots) and optimize for performance and cost. Apply RAG (Retrieval-Augmented Generation) patterns to retrieve relevant context from data stores for LLM input. Ensure secure, efficient, and scalable communication between LLM APIs (OpenAI, Google Gemini, Azure OpenAI, etc.) and internal systems. Develop reusable tools and frameworks to support prompt evaluation, logging, and improvement cycles. Write high-quality unit tests, conduct code reviews, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab. Work in Agile/Scrum teams and contribute to sprint planning, estimation, and retrospectives. Must-Have Technical Skills Java & Backend Development: Core Java 8/11/17, Spring Boot, Spring MVC, Spring Data JPA, RESTful APIs, JSON, Swagger/OpenAPI, Hibernate or other ORM tools, microservices architecture. Prompt Engineering / LLM Integration: Experience working with OpenAI (GPT-4, GPT-3.5), Claude, Llama, Gemini, or Mistral models. Designing effective prompts for various tasks (classification, summarization, Q&A, etc.). Familiarity with prompt chaining and zero-shot/few-shot learning. Understanding of token limits, temperature, top_p, and stop sequences. Prompt evaluation methods and frameworks (e.g., LangChain, LlamaIndex, Guidance, PromptLayer). AI Integration Tools: LangChain or LlamaIndex for building LLM applications. API integration with AI platforms (OpenAI, Azure AI, Hugging Face, etc.). Vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB). DevOps / Deployment: Docker, Kubernetes (preferred), CI/CD tools (Jenkins, GitHub Actions), AWS/GCP/Azure cloud environments. Monitoring: Prometheus, Grafana, ELK Stack. Good-to-Have Skills Python for prototyping AI workflows. Chatbot development using LLMs. Experience with RAG pipelines and semantic search. Hands-on with GitOps, IaC (Terraform), or serverless functions. Experience integrating LLMs into enterprise SaaS products. Knowledge of Responsible AI and bias mitigation strategies. Soft Skills Strong problem-solving and analytical thinking. Excellent written and verbal communication skills. Willingness to learn and adapt in a fast-paced, AI-evolving environment. Ability to mentor junior developers and contribute to tech strategy. Education Bachelor's or Master's degree in Computer Science, Engineering, or related field. Preferred Certifications (Not Mandatory): OpenAI Developer or Azure AI Certification, Oracle Certified Java Professional, AWS/GCP Cloud Certifications (ref:hirist.tech)
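To ground the sampling parameters this listing names (token limits, temperature, top_p, stop sequences), here is a minimal, illustrative Python sketch of a prompt-template call against the OpenAI Chat Completions API. The role itself is Java/Spring Boot based; the model name, prompt, and parameter values below are assumptions for illustration only, and SDK details may vary by version.

```python
# Minimal sketch, assuming the openai>=1.x Python SDK and an example model/prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt_template = "Summarize the following support ticket in one sentence:\n\n{ticket}"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed example model, not prescribed by the listing
    messages=[{
        "role": "user",
        "content": prompt_template.format(ticket="Customer cannot reset their password."),
    }],
    temperature=0.2,   # low randomness for a factual summarization task
    top_p=0.9,         # nucleus-sampling cutoff
    stop=["\n\n"],     # stop sequence to bound the output
    max_tokens=60,     # respect token limits and cost
)
print(response.choices[0].message.content)
```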
Posted 1 month ago
8.0 years
0 Lacs
Delhi Cantonment, Delhi, India
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive. Your day at NTT DATA We are seeking a talented Solution Architect/BDM for On-Prem/Private AI. The role requires deep open-source LLM expertise to translate client needs into technical solutions. Responsibilities include assessing needs, recommending LLM technology, sizing opportunities and infrastructure, and collaborating on end-to-end solutions with costing. It calls for strategic thinking and strong technical and business skills to drive innovation and client value. What You'll Be Doing Key Roles and Responsibilities: Solution Architecture & Technical Leadership Demonstrate deep expertise in LLMs such as Phi-4, Mistral, Gemma, Llama, and other foundation models Assess client business requirements and translate them into detailed technical specifications Recommend appropriate LLM solutions based on specific business outcomes and use cases Experience in sizing and architecting infrastructure for AI/ML workloads, particularly GPU-based systems. Design scalable and secure On-Prem/Private AI architectures Create technical POCs and prototypes to demonstrate solution capabilities Hands-on experience with vector databases (open-source or proprietary), such as Weaviate, Milvus, or Vald. Expertise in fine-tuning, query caching, and optimizing vector embeddings for efficient similarity searches Business Development Size and qualify opportunities in the On-Prem/Private AI space Develop compelling proposals and solution presentations for clients Build and nurture client relationships at technical and executive levels Collaborate with sales teams to create competitive go-to-market strategies Identify new business opportunities through technical consultation Project & Delivery Leadership Work with delivery teams to develop end-to-end solution approaches and accurate costing Lead technical discovery sessions with clients Guide implementation teams during solution delivery Ensure technical solutions meet client requirements and business outcomes Develop reusable solution components and frameworks to accelerate delivery AI Agent Development Design, develop, and deploy AI-powered applications leveraging agentic AI frameworks such as LangChain, AutoGen, and CrewAI. Utilize the modular components of these frameworks (LLMs, Prompt Templates, Agents, Memory, Retrieval, Tools) to build sophisticated language model systems and multi-agent workflows. Implement Retrieval Augmented Generation (RAG) pipelines and other advanced techniques using these frameworks to enhance LLM responses with external data. Contribute to the development of reusable components and best practices for agentic AI implementations.
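As a rough illustration of the RAG pattern referenced in this listing (retrieve relevant context from a vector store and pass it to an LLM), here is a minimal sketch assuming sentence-transformers for embeddings and FAISS as the index. The documents, embedding model, and similarity setup are illustrative assumptions, not the actual solution stack.

```python
# Minimal RAG retrieval sketch: embed documents, retrieve the closest one for a
# query, and build a grounded prompt. Assumes the sentence-transformers and faiss
# packages; the example texts and model are placeholders.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "GPU sizing for a 7B-parameter model typically starts at around 24 GB of VRAM.",
    "Private AI deployments keep model weights and data on-premises.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors approximates cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "How much GPU memory does a 7B model need?"
q_vec = embedder.encode([query], normalize_embeddings=True)
_, ids = index.search(np.asarray(q_vec, dtype="float32"), 1)
context = docs[ids[0][0]]

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` would then be sent to whichever on-prem LLM the solution uses.
print(prompt)
```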
Knowledge, Skills, and Attributes: Basic Qualifications: 8+ years of experience in solution architecture or technical consulting roles 3+ years of specialized experience working with LLMs and Private AI solutions Demonstrated expertise with models such as Phi-4, Mistral, Gemma, and other foundation models Strong understanding of GPU infrastructure sizing and optimization for AI workloads Proven experience converting business requirements into technical specifications Experience working with delivery teams to create end-to-end solutions with accurate costing Strong understanding of agentic AI systems and orchestration frameworks Bachelor’s degree in computer science, AI, or related field Ability to travel up to 25% Preferred Qualifications: Master's degree or PhD in Computer Science or related technical field. Experience with Private AI deployment and fine-tuning LLMs for specific use cases Knowledge of RAG (Retrieval Augmented Generation) and enterprise knowledge systems Hands-on experience with prompt engineering and LLM optimization techniques Understanding of AI governance, security, and compliance requirements Experience with major AI providers: OpenAI/Azure OpenAI, AWS, Google, Anthropic, etc. Prior experience in business development or pre-sales for AI solutions Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders Strong problem-solving abilities and analytical mindset Location: Delhi or Bangalore Workplace type: Hybrid Working About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Req ID: 327884 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AI Data Scientist - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN). ARTIFICIAL INTELLIGENCE AI Data Scientist | Focused on Generative AI & LLMs Design and develop AI/ML models with a focus on LLMs (e.g., GPT, LLaMA, Mistral, Falcon, Claude). Apply prompt engineering, fine-tuning, and transfer learning techniques to customize models for enterprise use cases. Work with vector databases and retrieval-augmented generation (RAG) pipelines for contextual response generation. Collaborate with data engineers, AI engineers, and MLOps teams to deploy models in production environments. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to a holistic product suite of seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty, and we equip you with the tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income. Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization. Spocto - Debt recovery & risk mitigation platform. Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises. So far, we have onboarded 17,000+ enterprises and 6,200+ investors & lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story. About The Role We're looking for a highly skilled, results-driven AI Developer who thrives in fast-paced, high-impact environments. If you are passionate about pushing the boundaries of Computer Vision, OCR, NLP, and Large Language Models (LLMs) and have a strong foundation in building and deploying AI solutions, this role is for you. As a Lead Data Scientist, you will take ownership of designing and implementing state-of-the-art AI products. This role demands deep technical expertise, the ability to work autonomously, and a mindset that embraces complex challenges head-on.
Here, you won't just fine-tune pre-trained models—you'll be architecting, optimizing, and scaling AI solutions that power real-world applications. Key Responsibilities Architect, develop, and deploy high-performance AI solutions for real-world applications. Implement and optimize state-of-the-art LLM and OCR models and frameworks. Fine-tune and integrate LLMs (GPT, LLaMA, Mistral, etc.) to enhance text understanding and automation. Build and optimize end-to-end AI pipelines, ensuring efficient data processing and model deployment. Work closely with engineers to operationalize AI models in production (Docker, FastAPI, TensorRT, ONNX). Enhance GPU performance and model inference efficiency, applying techniques such as quantization and pruning. Stay ahead of industry advancements, continuously experimenting with new AI architectures and training techniques. Work in a highly dynamic, startup-like environment, balancing rapid experimentation with production-grade robustness. What We're Looking For Requirements Required Skills & Qualifications: Proven technical expertise – Strong programming skills in Python, PyTorch, and TensorFlow, with deep experience in NLP and LLMs. Hands-on experience in developing, training, and deploying LLMs and agentic workflows. Strong background in vector databases, RAG pipelines, and fine-tuning LLMs for document intelligence. Deep understanding of Transformer-based architectures for vision and text processing. Experience working with Hugging Face, OpenCV, TensorRT, and NVIDIA GPUs for model acceleration. Autonomous problem solver – You take initiative, work independently, and drive projects from research to production. Strong experience in scaling AI solutions, including model optimization and deployment on cloud platforms (AWS/GCP/Azure). Thrives in fast-paced environments – You embrace challenges, pivot quickly, and execute effectively. Familiarity with MLOps tools (Docker, FastAPI, Kubernetes) for seamless model deployment. Experience in multi-modal models (Vision + Text). Good to Have Financial background and an understanding of corporate finance. Contributions to open-source AI projects.
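As a minimal sketch of the production-serving stack this listing mentions (FastAPI, typically containerized with Docker), the snippet below wraps a Hugging Face summarization pipeline in an HTTP endpoint. The model checkpoint, route name, and truncation limit are assumptions for illustration, not the team's actual implementation.

```python
# Minimal model-serving sketch, assuming fastapi, pydantic, and transformers are installed.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Assumed example checkpoint; any summarization-capable model could be swapped in.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

class SummarizeRequest(BaseModel):
    text: str

@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # Truncate long inputs so they stay within the model's context window.
    result = summarizer(req.text[:3000], max_length=80, min_length=10)
    return {"summary": result[0]["summary_text"]}

# Run locally with: uvicorn app:app --host 0.0.0.0 --port 8000
```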
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This role is for one of our clients. Industry: Technology, Information and Media. Seniority level: Mid-Senior level. About The Role We're seeking an experienced and forward-thinking Lead AI Engineer to head the development of cutting-edge AI applications leveraging large language models (LLMs) and generative AI techniques. In this high-impact role, you will guide the architecture, deployment, and optimization of AI systems that power real-world, intelligent applications. You'll be at the intersection of research and engineering, leading a team of top-tier AI engineers while shaping the future of AI capabilities within the organization. What You'll Do Architect AI Solutions Design and lead the development of end-to-end AI systems that leverage LLMs, retrieval-augmented generation (RAG), and autonomous agent frameworks. Model Integration and Optimization Work hands-on with both open-source (Llama, Mistral, Falcon, etc.) and proprietary models (GPT-4, Claude, Gemini), fine-tuning them for specialized applications using proprietary data. Agentic Systems & RAG Build intelligent agents capable of reasoning, planning, and performing multi-step tasks using agentic architectures and RAG pipelines with vector databases like FAISS or Weaviate. Production-Grade Engineering Oversee scalable deployment, versioning, and monitoring of AI models in production using modern MLOps practices and cloud-native platforms (AWS, GCP, Azure). Cross-functional Collaboration Partner with data scientists, product managers, and software engineers to transform complex problems into elegant AI-driven products. AI Team Leadership Mentor a team of engineers, drive technical direction, and manage project priorities, sprint planning, and roadmap execution. Innovation & Research Stay ahead of the curve by tracking advancements in the AI/LLM space. Promote a culture of experimentation, rapid prototyping, and responsible AI. Ethics & Evaluation Implement best practices for model interpretability, bias mitigation, and responsible deployment of AI systems. Experience Must-Have Qualifications 5+ years of hands-on experience in AI/ML engineering with a primary focus on NLP and LLMs. Proven experience in designing and deploying production-grade AI systems. Technical Skills Strong command of Python and frameworks like PyTorch, TensorFlow, LangChain, and LlamaIndex. Solid knowledge of transformer architectures, embeddings, fine-tuning techniques, and tokenization. Experience integrating with vector databases (FAISS, Pinecone, Weaviate). Proficiency in designing APIs and working with distributed systems. Familiarity with MLOps, containerization (Docker), and orchestration tools (Kubernetes). Leadership Experience managing engineering teams and projects, with a track record of shipping successful AI solutions. Cloud & Deployment Deep experience with deploying AI systems on AWS, GCP, or Azure, and optimizing workloads for cost and performance. Nice-to-Have Experience working with multi-modal AI models (text, image, video, audio). Knowledge of RLHF (Reinforcement Learning from Human Feedback) methodologies. Contributions to open-source AI projects or peer-reviewed research. Prior experience at a startup or AI research lab.
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Education and Work Experience Requirements: · 5 to 8 years of experience as a Data Scientist · 2 to 3 years of experience in Generative AI solution development · Strong understanding of AI agent collaboration, negotiation, and autonomous decision-making. · Experience in developing and deploying AI agents that operate independently or collaboratively in complex environments. · Deep knowledge of agentic AI principles, including self-improving, self-organizing, and goal-driven agents. · Proficiency in multi-agent frameworks such as AutoGen, LangGraph, LangChain, and CrewAI for orchestrating AI workflows. · Hands-on experience integrating LLMs (GPT, LLaMA, Mistral, etc.) with agentic frameworks to enhance automation and reasoning. · Expertise in hierarchical agent frameworks, distributed agent coordination, and decentralized AI governance. · Strong grasp of memory architectures, tool use, and action planning within AI agents. · Familiarity with metrics for evaluating agentic AI systems, including: · Autonomy Score: Measures the degree of independence in decision-making. · Collaboration Efficiency: Evaluates the ability of agents to work together and share information. · Task Completion Rate: Tracks the percentage of tasks successfully executed by agents. · Response Time: Measures the latency in agent decision-making and execution. · Adaptability Index: Assesses how well agents adjust to dynamic changes in the environment. · Resource Utilization Efficiency: Evaluates computational and memory usage for optimization. · Explainability & Interpretability Score: Ensures transparency in agent reasoning and outputs. · Error Rate & Recovery Time: Tracks failures and the system’s ability to self-correct. · Knowledge Retention & Utilization: Measures how effectively agents recall and apply information. · Hands-on experience with LLMs such as GPT, BERT, LLaMA, Mistral, Claude, Gemini, etc. · Proven expertise in both open-source (LLaMA, Gemma, Mixtral) and closed-source (OpenAI GPT, Azure OpenAI, Claude, Gemini) LLMs. · Advanced skills in prompt engineering, tuning, retrieval-augmented generation (RAG), retrieval-augmented fine-tuning (RAFT), and LLM fine-tuning (PEFT, LoRA, QLoRA). · Strong understanding of small language models (SLMs) like Phi-3 and BERT, along with Transformer architectures. · Experience working with text-to-image models such as Stable Diffusion, DALL·E, and Midjourney. · Proficiency in vector databases such as Pinecone and Qdrant for knowledge retrieval in agentic AI systems. · Deep understanding of Human-Machine Interaction (HMI) frameworks within cloud and on-prem environments. · Strong grasp of deep learning architectures, including CNNs, RNNs, Transformers, GANs, and VAEs. · Expertise in Python, R, TensorFlow, Keras, and PyTorch. · Hands-on experience with NLP tools and libraries: OpenNLP, CoreNLP, WordNet, NLTK, SpaCy, Gensim, Knowledge Graphs, and LLM-based applications. · Proficiency in advanced statistical methods and transformer-based text processing. · Experience in reinforcement learning and planning techniques for autonomous agent behavior. Mandatory Skills: · Design, develop, test, and deploy Machine Learning models using state-of-the-art algorithms with a strong focus on language models. · Strong understanding of LLMs and associated technologies like RAG, agents, vector databases, and guardrails. · Hands-on experience in GenAI frameworks like LlamaIndex, LangChain, AutoGen, etc.
· Experience in cloud services like Azure, GCP, and AWS · Multi-agent frameworks: AutoGen, LangGraph, LangChain, CrewAI · Large Language Models (LLMs): GPT, etc. · Educational qualification: BE, BTech, or PhD
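For the LLM fine-tuning skills this listing names (PEFT, LoRA, QLoRA), here is a minimal, hedged sketch of attaching LoRA adapters to a causal language model with Hugging Face PEFT. The base checkpoint and hyperparameters are illustrative assumptions, not values prescribed by the role.

```python
# Minimal LoRA setup sketch, assuming the transformers and peft packages.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "facebook/opt-350m"  # assumed small example checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank dimension of the adapters
    lora_alpha=16,                         # scaling factor applied to adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (model-specific)
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# The wrapped model can now be passed to a standard Trainer loop for fine-tuning.
```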
Posted 1 month ago
3.0 years
0 Lacs
Dehradun, Uttarakhand, India
On-site
Overview - We are hiring experienced AI Developers to lead the design, development, and scaling of intelligent systems using large language models (LLMs). This includes building prompt-based agents, developing and fine-tuning custom AI models, and architecting advanced pipelines across various business functions. You will work across the full AI development lifecycle—from prompt engineering to model training and deployment—while staying at the forefront of innovation in generative AI and autonomous agent frameworks. Key Responsibilities: - Design and deploy intelligent agents using LLMs such as OpenAI GPT-4, Claude, Mistral, Gemini, Cohere, etc. Build prompt-driven and autonomous agents using frameworks like LangChain, AutoGen, CrewAI, Semantic Kernel, LlamaIndex, or custom stacks. Architect and implement multi-agent systems capable of advanced reasoning, coordination, and tool interaction. Incorporate goal-setting, sub-task decomposition, and autonomous feedback loops for agent self-improvement. Develop custom AI models via fine-tuning, supervised learning, or LoRA/QLoRA approaches using Hugging Face Transformers, PyTorch, or TensorFlow. Build and manage Retrieval-Augmented Generation (RAG) pipelines with vector databases like Pinecone, FAISS, Weaviate, or Chroma. Train and evaluate models on custom datasets using modern NLP workflows and distributed training tools. Optimize models for latency, accuracy, and cost efficiency in both prototype and production environments. Create and maintain testing and evaluation pipelines for prompt quality, hallucination detection, and model behavior safety. Integrate external tools, APIs, plugins, and knowledge bases to enhance agent capabilities. Collaborate with product and engineering teams to translate use cases into scalable AI solutions. Required Technical Skills: - 3+ years of hands-on experience with large language models, generative AI, conversational systems, or agent-based system development. Proficient in Python, with experience in AI/ML libraries such as Transformers, LangChain, PyTorch, TensorFlow, PEFT, and Scikit-learn. Strong understanding of prompt engineering, instruction tuning, and system prompt architecture. Experience with custom model training, fine-tuning, and deploying models via Hugging Face, OpenAI APIs, or open-source LLMs. Experience designing and implementing RAG pipelines and managing embedding stores. Familiarity with agent orchestration frameworks, tool integration, memory handling, and context management. Working knowledge of containerization (Docker), MLOps, and cloud environments (AWS, GCP, Azure). Preferred Experience: - Exposure to distributed training (DeepSpeed, Accelerate), quantization, or model optimization techniques. Familiarity with LLM evaluation tools (TruLens, LM Eval Harness, custom eval agents). Experience with RLHF, multi-modal models, or voice/chat integrations. Background in data engineering for building high-quality training/evaluation datasets. Experience with self-healing agents, auto-reflection loops, and adaptive control systems. Shift Timing: Night Shift (Fixed timing will be disclosed at the time of joining)
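The evaluation-pipeline responsibility above (prompt quality, hallucination detection, model behavior safety) can be illustrated with a small keyword-based harness. The scoring rule and the call_llm stub below are simplifications and assumptions for illustration, not a prescribed evaluation framework.

```python
# Minimal prompt-evaluation harness sketch: run test prompts through a model callable
# and score outputs against expected keywords. Plug a real LLM client into call_llm.
from typing import Callable

def call_llm(prompt: str) -> str:
    # Placeholder for whichever LLM API the team actually uses (OpenAI, Hugging Face, etc.).
    raise NotImplementedError

def evaluate(cases: list[dict], llm: Callable[[str], str]) -> float:
    """Return the fraction of cases whose output contains all expected keywords."""
    passed = 0
    for case in cases:
        output = llm(case["prompt"]).lower()
        if all(kw.lower() in output for kw in case["expected_keywords"]):
            passed += 1
    return passed / len(cases)

cases = [
    {"prompt": "Name the capital of France.", "expected_keywords": ["Paris"]},
    {"prompt": "At what temperature does water boil at sea level?", "expected_keywords": ["100"]},
]
# score = evaluate(cases, call_llm)  # enable once a real client is wired in
```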
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
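As a framework-free illustration of the agent tool-use pattern this listing describes (the model selects a registered tool and the orchestrator executes it), here is a minimal Python sketch. The tools and the JSON routing format are assumptions; a production system would typically rely on LangChain/LlamaIndex or a native function-calling API instead.

```python
# Minimal tool-dispatch sketch: the LLM's (simulated) decision is a JSON tool call,
# which the orchestrator routes to a registered Python function.
import json

def search_docs(query: str) -> str:
    return f"(stub) top document for: {query}"

def calculator(expression: str) -> str:
    # Illustration only: restricted eval of arithmetic; avoid eval in production code.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search_docs": search_docs, "calculator": calculator}

def run_tool_call(model_decision: str) -> str:
    """model_decision is the LLM's JSON tool call, e.g. produced via function calling."""
    call = json.loads(model_decision)
    tool = TOOLS[call["tool"]]
    return tool(call["argument"])

# Example: pretend the LLM chose the calculator tool for "what is 3 * 7?"
print(run_tool_call('{"tool": "calculator", "argument": "3 * 7"}'))
```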
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Greater Kolkata Area
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Title: AI Agent Factory Developers Location: Pan India Experience: 3-8 years Must-Haves: Python, AI Agent / Multi Agent / RAG / Hugging Face OR LangChain OR OpenAI API OR Transformers Key Responsibilities LLM Integration Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation. AI Agent Engineering Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals. GenAI Use Case Implementation Deliver GenAI-powered solutions such as chatbots, summarizers, document QA systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex. System Integration Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution. Performance Tuning Optimize model performance, cost-efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems. Collaboration & Documentation Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases. Required Qualifications 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs. Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers). Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models. Understanding of prompt engineering, agent architectures, and conversational AI flows. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or related field. Preferred Qualifications Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes). Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA). Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems. Knowledge of model safety, evaluation, and compliance standards in GenAI. This job is provided by Shine.com
Posted 1 month ago
0 years
0 Lacs
India
On-site
Job Summary: We are looking for an expert Data Scientist (LLM/GenAI) to join our team. The ideal candidate will have a solid background in data science (statistics, math, ML, DL) and good knowledge of Large Language Models (LLMs), along with hands-on experience in building and deploying development-process automation across requirements, coding, testing, and maintenance. Key Responsibilities: Choose appropriate foundation models and APIs; develop, fine-tune, and deploy Large Language Models (LLMs) for various intelligent applications related to requirements, coding, testing, and maintenance activities. Design and develop ML/DL-based models to enhance natural language understanding capabilities. Build and optimize requirements-engineering solutions, such as requirement generation, test case generation, requirement gap analysis, and bug analysis and mapping. Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications. Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks. Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance. Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications. Develop APIs and pipelines to integrate LLMs into enterprise applications. Perform performance and accuracy analysis for LLM applications. Requirements Required Skills & Qualifications: Strong academic background with knowledge of statistics and mathematics. 2-3 years of experience in LLM/GenAI and strong Python skills. Hands-on experience in developing and deploying AI solutions related to requirement engineering or other development-cycle automation projects for at least a year. Experience with LLM RAG/agent development frameworks like LangChain, LlamaIndex, AutoGen, and LangGraph. Knowledge of vector databases (e.g., Pinecone, Weaviate, ChromaDB) and embedding models. Understanding of prompt engineering and fine-tuning LLMs. Ability to work independently and collaboratively in a fast-paced environment. Good to Have: · Proficiency in ML/DL frameworks such as TensorFlow, PyTorch, Hugging Face Transformers · Experience in working with APIs, Docker, and FastAPI for model deployment · Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale · Experience or knowledge of multimodal AI Why Join Us? Opportunity to work on cutting-edge LLM and Generative AI projects and learn from senior technologists. Collaborative and innovative work environment. Competitive salary and benefits. Rapid career growth opportunities in AI and ML research and development.
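As a minimal sketch of the vector-database work described above (e.g., ChromaDB for retrieving requirements in a RAG pipeline), the snippet below stores two sample requirements and queries them. The collection name and sample requirements are illustrative assumptions, and ChromaDB's default embedding function is used for simplicity.

```python
# Minimal requirement-retrieval sketch, assuming the chromadb package is installed.
import chromadb

client = chromadb.Client()  # in-memory instance for illustration
collection = client.create_collection("requirements")

collection.add(
    documents=[
        "The system shall lock an account after five failed login attempts.",
        "Password reset links shall expire after 24 hours.",
    ],
    ids=["REQ-001", "REQ-002"],
)

# Retrieve the requirement most relevant to a new test-case generation task;
# the returned text would then be injected into the LLM prompt as context.
results = collection.query(query_texts=["login failure handling"], n_results=1)
print(results["ids"][0], results["documents"][0])
```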
Posted 1 month ago
5.0 years
0 Lacs
Vadodara, Gujarat, India
Remote
Are you an AI virtuoso ready to push the boundaries of what's possible with Large Language Models? Join Trilogy, where we're not just talking about AI transformation – we're actively engineering it. We're seeking a visionary AI Engineer who sees LLMs not just as tools, but as catalysts for revolutionary change in business operations. Unlike companies still caught in the AI hype cycle, we're laser-focused on delivering tangible results. Picture yourself at the helm of groundbreaking initiatives that transform abstract AI concepts into powerful productivity solutions. Here, your expertise in LLMs won't just be appreciated – it'll be the driving force behind real-world innovations that reshape how businesses operate. This isn't your typical tech role where ideas get lost in endless meetings. At Trilogy, you'll be empowered to turn your AI mastery into measurable impact, creating solutions that fundamentally change how organizations work. If you're ready to be at the forefront of practical AI innovation, we want to hear from you. What You Will Be Doing Spearhead the development of cutting-edge AI automation systems that transform complex business processes into streamlined, efficient operations Pioneer the integration of advanced AI technologies, including GPT-4 Vision and Amazon CodeWhisperer, pushing the boundaries of what's possible in AI-driven development Architect and optimize AI solutions within AWS infrastructure, ensuring robust, scalable implementations that deliver consistent value What You Won’t Be Doing Getting bogged down in conventional coding - our AI ecosystem handles the routine work, leaving you free to focus on strategic innovation Monotonous task repetition - each challenge presents a unique opportunity for creative problem-solving Artificial Intelligence Engineer Key Responsibilities Lead the creation and implementation of autonomous AI systems that operate independently, delivering scalable solutions that transform business operations without requiring human intervention Basic Requirements Demonstrated commitment to an AI-first approach - you should instinctively think in terms of AI solutions before traditional coding methods Minimum 5-year track record in professional technology roles Strong command of Python programming and AWS cloud services Hands-on experience with modern AI coding assistants (Github Copilot, Cursor.sh, v0.dev) Proven success in implementing Generative AI solutions with measurable business impact Demonstrated expertise in utilizing various LLM APIs (GPT, Claude, Mistral) for practical business applications About Trilogy Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: Relentlessly seeking top talent, Innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. 
The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5105-IN-Vadodara-ArtificialInte.013
Posted 1 month ago
5.0 years
0 Lacs
Vadodara, Gujarat, India
Remote
Are you an AI virtuoso ready to push the boundaries of what's possible with Large Language Models? Welcome to Trilogy, where we're not just riding the AI wave – we're creating the tsunami of change in productivity technology. Forget the AI hype machine. At Trilogy, we're about tangible results and real innovation. We're building the future of work, one intelligent automation at a time. Picture yourself crafting AI solutions that don't just improve workflows – they completely reimagine them. Your mission, should you choose to accept it: harness the raw power of LLMs to transform business operations from the ground up. We're not interested in incremental improvements; we're after revolutionary changes that make jaws drop and productivity soar. If you're energized by the prospect of seeing your AI innovations directly impact real-world business operations, and you live and breathe the "AI-first" philosophy, we need to talk. What You Will Be Doing Craft cutting-edge AI automations that don't just meet expectations – they shatter them, delivering scalable solutions that transform how businesses operate Pioneer the implementation of bleeding-edge AI technologies, including GPT-4 Vision and Amazon CodeWhisperer, pushing the boundaries of what's possible in automation Orchestrate and optimize AI solutions across AWS infrastructure, ensuring our innovations run like a well-oiled machine What You Won’t Be Doing Writing endless lines of traditional code – our AI companions handle the heavy lifting, leaving you free to focus on strategic innovation Getting bogged down in monotonous tasks – each day brings fresh challenges and unique problems to solve Machine Learning Engineer Key Responsibilities Your core mission: design and deploy sophisticated AI systems that operate autonomously, scaling impact without human intervention – creating true "set it and forget it" automation solutions that revolutionize business processes Basic Requirements A deeply ingrained AI-first approach to problem-solving (if you're still thinking "code first, AI second," this isn't your playground) Battle-tested experience: 5+ years in the professional arena Python mastery and AWS expertise Hands-on experience with modern GenAI code assistants (Github Copilot, Cursor.sh, v0.dev) Proven track record of implementing Generative AI solutions with measurable business impact Expert-level experience in wielding various LLMs (GPT, Claude, Mistral) via their APIs to tackle real-world business challenges About Trilogy Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: Relentlessly seeking top talent, Innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. 
The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5105-IN-Vadodara-MachineLearnin.017
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Are you prepared to utilize your expertise in LLMs to drive innovation and productivity? At Trilogy, we're offering a unique opportunity for tech enthusiasts eager to advance their AI skills in a vibrant setting. This is your chance to focus on mastering LLMs and making a tangible impact. In a world filled with buzzwords, Trilogy is where genuine innovation happens. We prioritize real-world applications that transform industries. Picture yourself crafting AI-driven solutions that streamline workflows, automate tasks, and enhance decision-making to boost productivity. Your mission? To harness LLMs and transform business operations, enhancing efficiency and effectiveness. You'll be creating AI-integrated solutions that simplify complex processes and optimize workflows for maximum efficiency. Here, you won't be bogged down by bureaucracy or endless pitches. Instead, you'll directly see the impact of your work as it shapes the future of productivity tools. Are you ready to apply your skills and become a change-maker? We might be looking for you if you're ready for this thrilling challenge! What You Will Be Doing Crafting and developing cutting-edge AI automations to streamline operations, boost productivity, and deliver resilient, scalable solutions across various platforms Testing and integrating advanced AI tools such as GPT-4 Vision and Amazon CodeWhisperer into our development processes to evaluate and enhance their effectiveness Assessing and fine-tuning AI solution implementations across diverse infrastructures, including AWS, to ensure seamless integration and performance What You Won’t Be Doing Engaging in traditional coding - our AI handles the bulk of the work, allowing you to innovate and strategize Performing repetitive tasks - each problem presents a new challenge Python Developer Key Responsibilities Designing and deploying advanced, fully-automated AI systems that require no human intervention for scalable impact Basic Requirements An AI-First Mindset (if you prefer to code first and then use AI tools to verify or enhance, this role might not be for you) Minimum 5 years of professional experience Expertise in Python and AWS Experience with GenAI code assistants (e.g., Github Copilot, Cursor.sh, v0.dev) Proven experience in implementing Generative AI solutions with significant quantitative impact Experience in leveraging various LLMs through APIs (e.g., GPT, Claude, Mistral) to address business challenges About Trilogy Hundreds of software businesses run on the Trilogy Business Platform. For three decades, Trilogy has been known for 3 things: Relentlessly seeking top talent, Innovating new technology, and incubating new businesses. Our technological innovation is spearheaded by a passion for simple customer-facing designs. Our incubation of new businesses ranges from entirely new moon-shot ideas to rearchitecting existing projects for today's modern cloud-based stack. Trilogy is a place where you can be surrounded with great people, be proud of doing great work, and grow your career by leaps and bounds. There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you! Working with us This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. 
The compensation level for this role is $50 USD/hour, which equates to $100,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic. Crossover Job Code: LJ-5105-IN-Ahmedaba-PythonDevelope.011
Posted 1 month ago
25.0 years
0 Lacs
Tondiarpet, Tamil Nadu, India
On-site
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Description Summary: What you need to know about the role- Data scientists are highly motivated team players with strong analytical skills who specialize in creating, driving and executing initiatives to mitigate fraud on PayPal’s platform and improve the experience for PayPal’s hundreds of millions of customers, while guaranteeing compliance with regulations. Meet our team Data scientists in the Fraud Risk team are problem solvers suited to approach varied challenges in complex big data environments. Our core goals are to enable seamless and delightful experiences to our customers, while preventing threat actors from accessing customers’ financial instruments and personal information. As part of our day-to-day job, we are collaborating with a wide variety of partners: product owners, data scientists, security experts, legal consults, and engineers, to bring our data science insights to life, impacting the experience and security of millions of customers around the globe. Job Description: Your way to impact Data scientists deeply understand PayPal’s business objectives, as their impact on PayPal’s top and bottom lines is immense. 
As a data scientist, you will develop key AIML capabilities, tools, and insights with the aim of adapting PayPal’s advanced proprietary fraud prevention and experience mechanisms and enabling growth. Your day to day Day-to-day duties include data analysis, monitoring and forecasting, creating the logic for and implementing risk rules and strategies, providing requirements to data scientists and technology teams on attribute, model and platform requirements, and communicating with global stakeholders to ensure we deliver the best possible customer experience while meeting loss rate targets. What Do You Need To Bring- Strong proficiency in Python for data analysis, machine learning, and automation. Solid understanding of supervised and unsupervised AI/machine learning methods (e.g., XGBoost, LightGBM, Random Forest, clustering, isolation forests, autoencoders, neural networks, transformer-based architectures). Experience in payment fraud, AML, KYC, or broader risk modeling within fintech or financial institutions. Experience developing and deploying ML models in production using frameworks such as scikit-learn, TensorFlow, PyTorch, or similar. Hands-on experience with LLMs (e.g., OpenAI, LLaMA, Claude, Mistral), including use of prompt engineering, retrieval-augmented generation (RAG), and agentic AI to support internal automation and risk workflows. Ability to work cross-functionally with engineering, product, compliance, and operations teams. Proven track record of translating complex ML insights into business actions or policy decisions. BS/BA degree with 3+ years of related professional experience or master’s degree with 1+ years of related experience. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com. 
Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For any general requests for consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply. REQ ID R0127047
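For readers unfamiliar with the unsupervised methods named in the requirements above (for example, isolation forests), the sketch below shows the general idea on toy transaction data. It is an illustration only, not PayPal's production approach; the feature names, values, and contamination rate are invented, and pandas and scikit-learn are assumed to be available.

```python
# Illustration only (not PayPal's production system): unsupervised anomaly
# scoring of transactions with an isolation forest, one of the methods named
# in the requirements. Feature names, values, and the contamination rate are
# invented for this example.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.DataFrame({
    "amount": [25.0, 19.99, 4999.0, 32.5, 7.0],
    "hour_of_day": [14, 9, 3, 16, 11],
    "txns_last_24h": [2, 1, 17, 3, 1],
})

features = transactions[["amount", "hour_of_day", "txns_last_24h"]]
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(features)

scores = model.decision_function(features)   # lower score = more anomalous
flags = model.predict(features) == -1        # -1 marks suspected outliers

transactions["anomaly_score"] = scores
transactions["flagged"] = flags
print(transactions.sort_values("anomaly_score"))
```

In practice such unsupervised scores are usually combined with supervised models (e.g., gradient-boosted trees) and rule strategies rather than used alone.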
Posted 1 month ago
25.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Description Summary: What you need to know about the role- Data scientists are highly motivated team players with strong analytical skills who specialize in creating, driving and executing initiatives to mitigate fraud on PayPal’s platform and improve the experience for PayPal’s hundreds of millions of customers, while guaranteeing compliance with regulations. Meet our team Data scientists in the Fraud Risk team are problem solvers suited to approach varied challenges in complex big data environments. Our core goals are to enable seamless and delightful experiences to our customers, while preventing threat actors from accessing customers’ financial instruments and personal information. As part of our day-to-day job, we are collaborating with a wide variety of partners: product owners, data scientists, security experts, legal consults, and engineers, to bring our data science insights to life, impacting the experience and security of millions of customers around the globe. Job Description: Your way to impact Data scientists deeply understand PayPal’s business objectives, as their impact on PayPal’s top and bottom lines is immense. 
As a data scientist, you will develop key AIML capabilities, tools, and insights with the aim of adapting PayPal’s advanced proprietary fraud prevention and experience mechanisms and enabling growth. Your day to day Day-to-day duties include data analysis, monitoring and forecasting, creating the logic for and implementing risk rules and strategies, providing requirements to data scientists and technology teams on attribute, model and platform requirements, and communicating with global stakeholders to ensure we deliver the best possible customer experience while meeting loss rate targets. What Do You Need To Bring- Strong proficiency in Python for data analysis, machine learning, and automation. Solid understanding of supervised and unsupervised AI/machine learning methods (e.g., XGBoost, LightGBM, Random Forest, clustering, isolation forests, autoencoders, neural networks, transformer-based architectures). Experience in payment fraud, AML, KYC, or broader risk modeling within fintech or financial institutions. Experience developing and deploying ML models in production using frameworks such as scikit-learn, TensorFlow, PyTorch, or similar. Hands-on experience with LLMs (e.g., OpenAI, LLaMA, Claude, Mistral), including use of prompt engineering, retrieval-augmented generation (RAG), and agentic AI to support internal automation and risk workflows. Ability to work cross-functionally with engineering, product, compliance, and operations teams. Proven track record of translating complex ML insights into business actions or policy decisions. BS/BA degree with 3+ years of related professional experience or master’s degree with 1+ years of related experience. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com. 
Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For any general requests for consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don’t hesitate to apply. REQ ID R0127047
Posted 1 month ago
3.0 years
0 Lacs
Greater Bengaluru Area
On-site
Redefine the future of customer experiences. One conversation at a time. We’re changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place. Powered by AI, built by amazing humans. Our culture is forward-thinking, customer-obsessed and built on an unwavering belief that connection fuels business and life; connections to our customers with our signature Amazing Service®, our products and services, and most importantly, each other. Since 2008, 100,000+ companies and 1M+ users rely on Nextiva for customer and team communication. If you’re ready to collaborate and create with amazing people, let your personality shine and be on the frontlines of helping businesses deliver amazing experiences, you’re in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing We’re looking for a highly skilled and hands-on RAG (Retrieval-Augmented Generation) & Prompt Engineer to join our applied AI team. You’ll work with cutting-edge open-source and proprietary LLMs (like LLaMA, Mistral, Claude, GPT-4o, etc.) to build, prompt, and orchestrate intelligent agents that are capable, reliable, and production-ready. This role is perfect for someone who has experience developing prompt chains, implementing tool-calling workflows, and debugging AI agents at scale. Key Responsibilities Design, develop, and iterate on prompt strategies tailored to downloadable models and major APIs (LLaMA, Mistral, Claude, GPT-4o, etc.). Architect and implement RAG pipelines with a deep understanding of embedding models, retrievers, and context optimization techniques. Create prompt chains and tool-calling workflows for dynamic agent behavior using Responses API and similar frameworks. Design, test, and deploy foolproof agent architectures using OpenAI tool calling and agent protocol layers. Write robust Guardrails and control flows for agents to prevent unintended behaviors and ensure task compliance. Debug and maintain agent codebases, ensuring reliability and scalability of deployed services. Apply basic knowledge of OpenAI Operator and related orchestration tools to manage agent lifecycle. Collaborate with researchers and infra teams to optimize prompt efficiency and latency. Must-Have Qualifications 3 - 5 years of experience in AI engineering, prompt engineering, or applied ML roles. Proven experience working with both downloadable open-source models and hosted APIs. Strong knowledge of LLM prompt design patterns, prompt chaining, and failure handling. Ability to build agent systems that are secure, auditable, and self-healing. Good coding and debugging skills in Python (or relevant stack) with focus on AI orchestration. Familiarity with agent deployment pipelines, containerized environments, and CI/CD flows. Tech Stack We Use Python, FastAPI, LangChain / LlamaIndex. OpenAI, Anthropic, HuggingFace. Vector DBs (Weaviate, Pinecone, Qdrant). Responses API, OpenAI Operator, A2A SDK. Docker, GitHub Actions, GCP/AWS. Bonus (Nice-to-Have Skills) Experience building agents from scratch, especially with agent transfer logic and persistent memory. Understanding of Model Context Protocols and how to integrate them into multi-agent LLM stacks. Familiarity with A2A SDK for agent-to-agent communication and delegation. Hands-on experience with LoRA / QLoRA techniques for fine-tuning GPT-style models on downstream or domain-specific tasks. Experience with vector DBs, context compression, or multi-turn reasoning at scale. 
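As a rough illustration of the RAG pipelines this role describes, the sketch below embeds a few documents, retrieves the ones most similar to a question by cosine similarity, and passes them to a chat model as context. It is a minimal sketch under stated assumptions, not Nextiva's pipeline: the openai Python SDK (v1+) and numpy are assumed to be installed, OPENAI_API_KEY set, and the model names, documents, and question are placeholders.

```python
# Minimal RAG sketch (illustrative only): embed documents, retrieve by cosine
# similarity, and answer a question grounded in the retrieved context.
# Assumes the `openai` SDK v1+ and numpy; model names and texts are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # illustrative choice

documents = [
    "Call recordings are retained for 90 days on the Business plan.",
    "Voicemail transcription is enabled per user from the admin portal.",
    "Conference bridges support up to 200 participants.",
]

def embed(texts):
    resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vecs = embed(documents)

def retrieve(question, k=2):
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

def answer(question):
    context = "\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long are call recordings kept?"))
```

A production pipeline would swap the in-memory cosine search for a vector database (Weaviate, Pinecone, Qdrant) and add chunking, re-ranking, and guardrails, but the retrieve-then-generate shape stays the same.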
Total Rewards Our Total Rewards offerings are designed to allow our employees to take care of themselves and their families so they can be their best, in and out of the office. Our compensation packages are tailored to each role and candidate's qualifications. We consider a wide range of factors, including skills, experience, training, and certifications, when determining compensation. We aim to offer competitive salaries or wages that reflect the value you bring to our team. Depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses. Medical 🩺 - Medical insurance coverage is available for employees, their spouse, and up to two dependent children with a limit of 500,000 INR, as well as their parents or in-laws for up to 300,000 INR. This comprehensive coverage ensures that essential healthcare needs are met for the entire family unit, providing peace of mind and security in times of medical necessity. Group Term & Group Personal Accident Insurance 💼 - Provides insurance coverage against the risk of death/injury during the policy period sustained due to an accident caused by violent, visible & external means. Coverage Type - Employee Only Sum Insured - 3 times annual CTC with a minimum cap of INR 10,00,000 Free Cover Limit - 1.5 Crore Work-Life Balance ⚖️ - 15 days of privilege leave per calendar year, 6 days of paid sick leave per calendar year, 6 days of casual leave per calendar year, 26 weeks of paid maternity leave, 1 week of paternity leave, a day off on your birthday, and paid holidays Financial Security 💰 - Provident Fund & Gratuity Wellness 🤸 - Employee Assistance Program and comprehensive wellness initiatives Growth 🌱 - Access to ongoing learning and development opportunities and career advancement At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Join us and build a rewarding career! Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what's going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
Posted 1 month ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are looking for a Senior Software Engineer – AI with 3+ years of hands-on experience in Artificial Intelligence/ML and a passion for innovation. This role is ideal for someone who thrives in a startup environment—fast-paced, product-driven, and full of opportunities to make a real impact. You will contribute to building intelligent, scalable, and production-grade AI systems, with a strong focus on Generative AI and Agentic AI technologies. Roles and Responsibilities Build and deploy AI-driven applications and services, focusing on Generative AI and Large Language Models (LLMs). Design and implement Agentic AI systems—autonomous agents capable of planning and executing multi-step tasks. Collaborate with cross-functional teams including product, design, and engineering to integrate AI capabilities into products. Write clean, scalable code and build robust APIs and services to support AI model deployment. Own feature delivery end-to-end—from research and experimentation to deployment and monitoring. Stay current with emerging AI frameworks, tools, and best practices and apply them in product development. Contribute to a high-performing team culture and mentor junior team members as needed. Skill Set: 3–6 years of overall software development experience, with 3+ years specifically in AI/ML engineering. Strong proficiency in Python, with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face). Proven experience working with LLMs (e.g., GPT, Claude, Mistral) and Generative AI models (text, image, or audio). Practical knowledge of Agentic AI frameworks (e.g., LangChain, AutoGPT, Semantic Kernel). Experience building and deploying ML models to production environments. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and prompt engineering concepts. Comfortable working in a startup-like environment—self-motivated, adaptable, and willing to take ownership. Solid understanding of API development, version control, and modern DevOps/MLOps practices.
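To ground the agentic-AI terminology used above, here is a deliberately simplified agent loop in plain Python: a planner chooses which registered tool to call next until it reports completion. The planner is a stub so the example stays self-contained; in a real system it would be an LLM call made through a framework such as LangChain, AutoGPT, or a raw chat-completions API. Everything here is illustrative, not a specific framework's interface.

```python
# Simplified agent-loop skeleton (not any particular framework's API):
# a planner decides which registered tool to call next until it signals "done".
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a plain Python function as an agent tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("lookup_order")
def lookup_order(arg: str) -> str:
    return f"Order {arg}: shipped yesterday."  # fake data for the sketch

@tool("send_email")
def send_email(arg: str) -> str:
    return f"Email sent: {arg}"

def stub_planner(goal: str, history: list) -> tuple:
    """Stand-in for an LLM: returns (tool_name, argument) or ('done', summary)."""
    if not history:
        return ("lookup_order", "A-1042")
    if len(history) == 1:
        return ("send_email", "Your order A-1042 shipped yesterday.")
    return ("done", "Customer notified about shipment status.")

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        action, arg = stub_planner(goal, history)
        if action == "done":
            return arg
        observation = TOOLS[action](arg)  # execute the chosen tool
        history.append((action, arg, observation))
    return "Stopped: step limit reached."

print(run_agent("Tell the customer when order A-1042 will arrive."))
```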
Posted 1 month ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Senior Data Scientist — Gen AI/ML Expert Location: Hybrid — Gurugram Company: Mechademy – Industrial Reliability & Predictive Analytics About Mechademy At Mechademy, we are redefining the future of reliability in rotating machinery with our flagship product, Turbomechanica. Built at the intersection of physics-based models, AI, and machine learning, Turbomechanica delivers prescriptive analytics that detect potential equipment issues before they escalate, maximizing uptime, extending asset life, and reducing operational risks for our industrial clients. The Role We are seeking a talented and driven Senior Data Scientist (AI/ML) with 3+ years of experience to join our AI team. You will play a critical role in building scalable ML pipelines, integrating cutting-edge language models, and developing autonomous agent-based systems that transform how predictive maintenance is done for industrial equipment. This is a highly technical and hands-on role, with strong emphasis on real-world AI deployments — working directly with sensor data, time-series analytics, anomaly detection, distributed ML, and LLM-powered agentic workflows. What Makes This Role Unique Work on real-world industrial AI problems, combining physics-based models with modern ML/LLM systems. Collaborate with domain experts, engineers, and product leaders to directly impact critical industrial operations. Freedom to experiment with new tools, models, and techniques — with full ownership of your work. Help shape our technical roadmap as we scale our AI-first predictive analytics platform. Flexible hybrid work culture with high-impact visibility. Key Responsibilities Design & Develop ML Pipelines: Build scalable, production-grade ML pipelines for predictive maintenance, anomaly detection, and time-series analysis. Distributed Model Training: Leverage distributed computing frameworks (e.g., Ray, Dask, Spark, Horovod) for large-scale model training. LLM Integration & Optimization: Fine-tune, optimize, and deploy large language models (Llama, GPT, Mistral, Falcon, etc.) for applications like summarization, RAG (Retrieval-Augmented Generation), and knowledge extraction. Agent-Based AI Pipelines: Build intelligent multi-agent systems capable of reasoning, planning, and executing complex tasks via tool usage, memory, and coordination. End-to-End MLOps: Own the full ML lifecycle — from research and experimentation through deployment, monitoring, and production optimization. Algorithm Development: Research, evaluate, and implement state-of-the-art ML/DL/statistical algorithms for real-world sensor data. Collaborative Development: Work closely with cross-functional teams including software engineers, domain experts, product managers, and leadership. Core Requirements 3+ years of professional experience in AI/ML, data science, or applied ML engineering. Strong hands-on experience with modern LLMs (Llama, GPT series, Mistral, Falcon, etc.), fine-tuning, prompt engineering, and RAG techniques. Familiarity with frameworks like LangChain, LlamaIndex, or equivalent for LLM application development. Practical experience in agentic AI pipelines: tool use, sequential reasoning, and multi-agent orchestration. Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and at least one deep learning framework (TensorFlow, PyTorch, or JAX). Exposure to distributed ML frameworks (Ray, Dask, Horovod, Spark ML, etc.). Experience with containerization and orchestration (Docker, Kubernetes).
Strong problem-solving ability, ownership mindset, and ability to work in fast-paced startup environments. Excellent written and verbal communication skills. Bonus / Good to Have Experience with time-series data, sensor data processing, and anomaly detection. Familiarity with CI/CD pipelines and MLOps best practices. Knowledge of cloud deployment, real-time system optimization, and industrial data security standards. Prior open-source contributions or active GitHub projects. What We Offer Opportunity to work on cutting-edge technology transforming industrial AI. Direct ownership, autonomy, and visibility into product impact. Flexible hybrid work culture. Professional development budget and continuous learning opportunities. Collaborative, fast-moving, and growth-oriented team culture. Health benefits and performance-linked rewards. Potential for equity participation for high-impact contributors. Note: Title and compensation will be aligned with the candidate’s experience and potential impact.
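As a small, hedged example of the distributed-training responsibility listed above, the sketch below fans a hyperparameter sweep out over Ray workers. The dataset, model, and configurations are toy placeholders standing in for sensor data, and ray plus scikit-learn are assumed to be installed; this is not Mechademy's pipeline.

```python
# Hedged sketch: running a hyperparameter sweep in parallel with Ray, one of
# the distributed frameworks named in the responsibilities. Data and configs
# are toy placeholders; assumes `ray` and `scikit-learn` are installed.
import ray
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

ray.init(ignore_reinit_error=True)

# Synthetic stand-in for sensor-derived features and a target signal
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)

@ray.remote
def train_and_score(n_estimators: int, max_depth: int) -> dict:
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    score = cross_val_score(model, X, y, cv=3).mean()  # mean cross-validated R^2
    return {"n_estimators": n_estimators, "max_depth": max_depth, "cv_r2": score}

configs = [(50, 4), (100, 8), (200, 12)]
futures = [train_and_score.remote(n, d) for n, d in configs]
results = ray.get(futures)  # blocks until all remote tasks finish

print(max(results, key=lambda r: r["cv_r2"]))
```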
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bangalore, Karnataka Factspan Overview: Factspan is a pure-play analytics organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations, and implement new processes that help them succeed. With offices in Seattle, Washington, and Bangalore, India, we use a global delivery model to serve our customers. Our customers include industry leaders from the Retail, Financial Services, Hospitality, and Technology sectors. Job Description As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client—enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You’ll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI workloads. Responsibilities: Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices. Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns. Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon). Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods. Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking. Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure). Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance. Mentor engineering teams in code best practices, versioning, and LLM lifecycle maintenance. Key Skills: Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model-serving platforms. Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization Experience managing compute, memory, and storage requirements of LLMs across GCP, Azure, or AWS environments Qualifications & Experience: 5+ years in ML/AI engineering with at least 2 years working on LLMs or NLP-heavy systems.
Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind. Clear communicator who collaborates well with business, data science, and DevOps teams. Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing. Curious and future-facing—always exploring new techniques and pushing the envelope on GenAI innovation. Passionate about data ethics, responsible AI, and building inclusive systems that scale. Why Should You Apply? Grow with Us: Be part of a hyper-growth startup with ample opportunities to learn and innovate. People: Join hands with a talented, warm, collaborative team and highly accomplished leadership. Buoyant Culture: Regular activities like Fun Fridays, sports tournaments, and trekking, and you can suggest a few more after joining us.
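Since the role emphasizes unified experiment tracking and model registries across dozens of legacy models, here is a minimal sketch of logging comparable runs with MLflow. The experiment name, model, and metric are illustrative assumptions, not Factspan's actual setup; mlflow and scikit-learn are assumed to be installed.

```python
# Minimal sketch of unified experiment tracking with MLflow: each retraining
# run logs its parameters, metric, and fitted model so results stay comparable
# across a standardized codebase. Names and data are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("legacy-model-refactor")  # hypothetical experiment name

for C in (0.1, 1.0, 10.0):
    with mlflow.start_run(run_name=f"logreg_C={C}"):
        model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("C", C)
        mlflow.log_metric("test_accuracy", acc)
        mlflow.sklearn.log_model(model, "model")  # store the fitted artifact with the run
```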
Posted 1 month ago
8.0 years
0 Lacs
India
Remote
Job Title: Technical Lead – AI & LLMs Location: India (Remote or Hybrid as required) Employment Type: 12-Month Contract (Extendable) Compensation: Competitive, based on experience Team Size: Leading a team of 20 engineers About the Role We are looking for a Technical Lead with hands-on experience in Large Language Models (LLMs), generative AI systems, and full-stack AI product development. In this contract role, you will be responsible for leading and coordinating a multidisciplinary team of 20 engineers to design, build, and deploy AI-driven solutions from concept to production. Key Responsibilities Architect and lead the development of AI-powered applications using LLMs and generative AI frameworks. Supervise model development, fine-tuning, prompt engineering, and RAG pipelines. Lead end-to-end implementation across data pipelines, backend APIs, and deployment infrastructure. Mentor a cross-functional team of machine learning engineers, backend developers, and DevOps specialists. Collaborate with stakeholders to align product requirements with technical deliverables. Set coding standards, enforce best practices, and ensure timely project delivery. Implement CI/CD and MLOps processes using tools like MLflow, SageMaker, or Vertex AI. Ensure scalability, security, and performance of deployed AI services. Required Skills 8+ years in software engineering, with at least 3 years in AI/ML development. Proven experience working with LLMs, transformer models, and modern NLP frameworks. Proficient in Python, cloud services (AWS/GCP), RESTful APIs, and containerization (Docker/Kubernetes). Experience with LangChain, LlamaIndex, vector databases (FAISS, Pinecone, Weaviate), and orchestration tools. Demonstrated success in managing medium-to-large engineering teams and delivering AI products at scale. Strong communication, documentation, and leadership skills. Preferred Qualifications Experience working with open-source or commercial LLM providers (e.g., OpenAI, Mistral, Claude). Familiarity with frontend integration for AI interfaces (e.g., chatbot UIs, copilots). Awareness of AI safety, bias mitigation, and ethical deployment practices. Prior experience in startup or rapid innovation environments. Contract Details Duration: 12 months (with potential to extend) Location: India (remote-first with optional in-person meetings) Work Schedule: Full-time, flexible hours Reporting To: Partner of the company
Posted 1 month ago