1.0 - 3.0 years
3 - 5 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA
We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing
Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
- Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
- Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's recommended)
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms)
- Proficiency in programming languages like SQL, Python, and PySpark
- Strong data architecture, data modeling, and data governance skills
- Experience with big data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi)
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions)
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras)

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Experience integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG
- Familiarity with API gateway and service mesh architectures
- Experience with low-latency/streaming, batch, and micro-batch processing
- Familiarity with Linux-based operating systems and REST APIs
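For candidates weighing the vector-database requirement in this posting, the following is a minimal sketch of building and querying a similarity-search index with FAISS. The embeddings are random stand-ins for real model outputs, and the dimension, corpus size, and index type are assumptions; a production pipeline would plug in a real embedding model and choose an index type (flat, IVF, HNSW) to match its scale.

```python
# Minimal FAISS similarity-search sketch (illustrative only).
# Assumes: `pip install faiss-cpu numpy`. Embeddings are random stand-ins for
# real model outputs; dimension and corpus size are hypothetical.
import faiss
import numpy as np

DIM = 384        # typical sentence-embedding size (assumption)
N_DOCS = 1000

rng = np.random.default_rng(0)
doc_vectors = rng.random((N_DOCS, DIM), dtype=np.float32)  # stand-in embeddings

# Exact (brute-force) inner-product index; normalise vectors so scores act as cosine similarity.
faiss.normalize_L2(doc_vectors)
index = faiss.IndexFlatIP(DIM)
index.add(doc_vectors)

query = rng.random((1, DIM), dtype=np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)   # top-5 nearest documents
print(ids[0], scores[0])
```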
Posted 1 week ago
6.0 - 11.0 years
40 - 60 Lacs
Kolkata
Work from Office
We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform.

Key Responsibilities
- AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration.
- Model Integration & Prompt Engineering: Leverage APIs from OpenAI, Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses.
- Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate).
- Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling.
- Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n.
- Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access.
- Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features.
- Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to improve AI quality continuously.

Qualifications
Must-Haves:
- 5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects.
- Strong background in NLP, dialog systems, or voice AI, preferably with production experience.
- Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.).
- Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow).
- Experience deploying RAG pipelines and vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic.
- Familiarity with voice processing (ASR, TTS, IVR design).
- Solid understanding of API-based integration and microservices.
- Deep care for data privacy, multi-tenancy security, and ethical AI practices.

Nice-to-Haves:
- Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs.
- Knowledge of sales pipelines and marketing automation tools.
- Exposure to calendar integrations (Google Calendar API, Microsoft Graph).
- Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic.
- Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure).

What We Offer
- Founding team role with strong ownership and autonomy
- Opportunity to shape the future of AI-powered sales
- Flexible work environment
- Competitive salary
- Access to cutting-edge AI tools and training resources

Post your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com. Include a short note on your most interesting AI project or voicebot/conversational AI experience.
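As a small illustration of the RAG-style prompt assembly this role centres on, here is a hedged sketch that stitches retrieved context into a chat-completion call with the OpenAI Python SDK (v1-style client). The retrieve() helper and the model name are assumptions; in a real system, retrieval would query a vector database such as Pinecone or Weaviate.

```python
# Minimal RAG prompt-assembly sketch (illustrative only).
# Assumes: `pip install openai` (v1 SDK) and OPENAI_API_KEY set in the environment.
# retrieve() is a hypothetical stand-in for a vector-database query; the model name
# is an assumption.
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, k: int = 3) -> list[str]:
    """Stand-in retriever; a real one would embed the query and search a vector DB."""
    return ["Tenant FAQ: demo calls can be booked Mon-Fri, 9am-6pm."][:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

print(answer("When can I book a demo call?"))
```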
Posted 3 weeks ago
4.0 - 5.0 years
8 - 12 Lacs
Vadodara
Hybrid
Job Type: Full Time

Job Description:
We are seeking an experienced AI Engineer with 4-5 years of hands-on experience in designing and implementing AI solutions. The ideal candidate should have a strong foundation in developing AI/ML-based solutions, including expertise in Computer Vision (OpenCV). Additionally, proficiency in developing, fine-tuning, and deploying Large Language Models (LLMs) is essential. As an AI Engineer, the candidate will work on cutting-edge AI applications, using LLMs like GPT, LLaMA, or custom fine-tuned models to build intelligent, scalable, and impactful solutions, and will collaborate closely with Product, Data Science, and Engineering teams to define, develop, and optimize AI/ML models for real-world business applications.

Key Responsibilities:
- Research, design, and develop AI/ML solutions for real-world business applications; RAG experience is a must.
- Collaborate with Product & Data Science teams to define core AI/ML platform features.
- Analyze business requirements and identify pre-trained models that align with use cases.
- Work with multi-agent AI frameworks like LangChain, LangGraph, and LlamaIndex.
- Train and fine-tune LLMs (GPT, LLaMA, Gemini, etc.) for domain-specific tasks.
- Implement Retrieval-Augmented Generation (RAG) workflows and optimize LLM inference.
- Develop NLP-based GenAI applications, including chatbots, document automation, and AI agents.
- Preprocess, clean, and analyze large datasets to train and improve AI models.
- Optimize LLM inference speed, memory efficiency, and resource utilization.
- Deploy AI models in cloud environments (AWS, Azure, GCP) or on-premises infrastructure.
- Develop APIs, pipelines, and frameworks for integrating AI solutions into products.
- Conduct performance evaluations and fine-tune models for accuracy, latency, and scalability.
- Stay updated with advancements in AI, ML, and GenAI technologies.

Required Skills & Experience:
- AI & Machine Learning: Strong experience in developing and deploying AI/ML models.
- Generative AI & LLMs: Expertise in LLM pretraining, fine-tuning, and optimization.
- NLP & Computer Vision: Hands-on experience in NLP, Transformers, OpenCV, YOLO, R-CNN.
- AI Agents & Multi-Agent Frameworks: Experience with LangChain, LangGraph, LlamaIndex.
- Deep Learning & Frameworks: Proficiency in TensorFlow, PyTorch, Keras.
- Cloud & Infrastructure: Strong knowledge of AWS, Azure, or GCP for AI deployment.
- Model Optimization: Experience in LLM inference optimization for speed and memory efficiency.
- Programming & Development: Proficiency in Python and experience in API development.
- Statistical & ML Techniques: Knowledge of Regression, Classification, Clustering, SVMs, Decision Trees, Neural Networks.
- Debugging & Performance Tuning: Strong skills in unit testing, debugging, and model evaluation.
- Hands-on experience with vector databases (FAISS, ChromaDB, Weaviate, Pinecone).

Good to Have:
- Experience with multi-modal AI (text, image, video, speech processing).
- Familiarity with containerization (Docker, Kubernetes) and model serving (FastAPI, Flask, Triton).
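Since the posting lists API development and model serving (FastAPI, Flask, Triton) among the expected skills, below is a minimal, hedged sketch of wrapping a small Hugging Face text-generation model in a FastAPI endpoint. The model (distilgpt2), route name, and request schema are assumptions chosen only to keep the example self-contained.

```python
# Minimal FastAPI model-serving sketch (illustrative only).
# Assumes: `pip install fastapi uvicorn transformers torch`. The demo model and
# generation settings are assumptions; swap in the model you actually serve.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="distilgpt2")  # small demo model

class Query(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(q: Query):
    # Greedy decoding keeps the demo deterministic; production settings would differ.
    out = generator(q.prompt, max_new_tokens=q.max_new_tokens, do_sample=False)
    return {"completion": out[0]["generated_text"]}

# Run with: uvicorn app:app --reload   (assuming this file is named app.py)
```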
Posted 3 weeks ago
8.0 - 13.0 years
14 - 24 Lacs
Pune, Ahmedabad
Hybrid
Senior Technical Architect - Machine Learning Solutions

We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI), and scalable ML system design. This role will focus on leading the end-to-end architecture of advanced ML-driven platforms, delivering impactful, production-grade AI solutions across the enterprise.

Key Responsibilities
- Lead the architecture and design of enterprise-grade ML platforms, including data pipelines, model training pipelines, model inference services, and monitoring frameworks.
- Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production.
- Design and implement retrieval-augmented generation (RAG) systems, vector databases, semantic search, and LLM orchestration frameworks (e.g., LangChain, Autogen).
- Define and enforce best practices in model development, versioning, CI/CD pipelines, model drift detection, retraining, and rollback mechanisms.
- Build robust pipelines for data ingestion, preprocessing, feature engineering, and model training at scale, using batch and real-time streaming architectures.
- Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured data use cases.
- Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services.
- Implement observability for ML models, including custom metrics, performance monitoring, and explainability (XAI) tooling.
- Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design.
- Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines.
- Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML), and scalable inference strategies (batch, real-time, streaming).
- Drive AI governance and responsible AI practices, including auditability, compliance, bias mitigation, and data protection.
- Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference.
- Coach and lead technical teams, fostering growth, knowledge sharing, and technical excellence in AI/ML domains.
- Manage the technical roadmap and documentation for AI-powered products, ensuring timely delivery, performance optimization, and stakeholder alignment.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 8+ years of experience in software architecture, with 5+ years focused specifically on machine learning systems and 2+ years leading teams.
- Proven expertise in designing and deploying ML systems at scale, across cloud and hybrid environments.
- Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn).
- Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere).
- Demonstrated proficiency in MLOps tools and platforms: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Airflow, etc.
- In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP, including certification(s) in one or more platforms.
- Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment.
- Ability to design LLM-based systems, including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering.
- Solid understanding of security, compliance, and AI risk management in ML deployments.

Preferred Skills
- Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking.
- Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF.
- Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption.
- Familiarity with zero-shot and few-shot learning, and retrieval-enhanced inference pipelines.
- Contributions to open-source ML tools or libraries.
- Experience deploying AI copilots, agents, or assistants using orchestration frameworks.
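The responsibilities above call out model drift detection as part of the MLOps lifecycle; the sketch below shows one common, lightweight approach, a Population Stability Index (PSI) check comparing a training-time feature distribution against live traffic. The data, bin count, and the 0.2 alert threshold are assumptions, not a prescribed standard.

```python
# Minimal model-drift check via Population Stability Index (illustrative only).
# The simulated data, bin count, and 0.2 alert threshold are assumptions; production
# systems typically wrap a check like this in a scheduled monitoring job.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    observed = np.clip(observed, edges[0], edges[-1])     # keep live values inside baseline bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Avoid log(0) / division by zero on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training time
live = rng.normal(0.3, 1.1, 2_000)        # shifted live traffic (simulated)

score = psi(baseline, live)
print(f"PSI = {score:.3f}", "-> drift alert" if score > 0.2 else "-> stable")
```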
Posted 3 weeks ago
5.0 - 10.0 years
40 - 60 Lacs
Kolkata
Work from Office
We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform.

Role & responsibilities
- AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration.
- Model Integration & Prompt Engineering: Leverage APIs from OpenAI, Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses.
- Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate).
- Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling.
- Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n.
- Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access.
- Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features.
- Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to improve AI quality continuously.

Preferred candidate profile
Must-Haves:
- 5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects.
- Strong background in NLP, dialog systems, or voice AI, preferably with production experience.
- Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.).
- Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow).
- Experience deploying RAG pipelines and vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic.
- Familiarity with voice processing (ASR, TTS, IVR design).
- Solid understanding of API-based integration and microservices.
- Deep care for data privacy, multi-tenancy security, and ethical AI practices.

Nice-to-Haves:
- Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs.
- Knowledge of sales pipelines and marketing automation tools.
- Exposure to calendar integrations (Google Calendar API, Microsoft Graph).
- Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic.
- Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure).

What We Offer
- Founding team role with strong ownership and autonomy
- Opportunity to shape the future of AI-powered sales
- Flexible work environment
- Competitive salary
- Access to cutting-edge AI tools and training resources

Post your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com. Include a short note on your most interesting AI project or voicebot/conversational AI experience.
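To illustrate the "dynamic, stateful conversations that can take actions" mentioned in the responsibilities, here is a minimal pure-Python sketch of a qualification-and-booking flow modeled as a small state machine. The states, intents, and book_meeting() action are hypothetical; in practice an LLM or NLU layer would supply the intent and entities, and the action would call a real calendar or CRM API, with state persisted per tenant and per lead.

```python
# Minimal stateful conversation-flow sketch (illustrative only).
# States, intents, and book_meeting() are hypothetical stand-ins for a real
# qualification-and-booking flow driven by LLM/NLU output.
from dataclasses import dataclass, field

def book_meeting(lead: str, slot: str) -> str:
    return f"Booked {slot} for {lead}"        # stand-in for a CRM/calendar API call

@dataclass
class Conversation:
    lead: str
    state: str = "greeting"
    facts: dict = field(default_factory=dict)

    def step(self, intent: str, entities: dict) -> str:
        if self.state == "greeting":
            self.state = "qualifying"
            return "Hi! What problem are you hoping to solve?"
        if self.state == "qualifying" and intent == "share_need":
            self.facts.update(entities)
            self.state = "booking"
            return "Got it. Would Tuesday 3pm or Wednesday 11am work for a call?"
        if self.state == "booking" and intent == "choose_slot":
            self.state = "done"
            return book_meeting(self.lead, entities["slot"])
        return "Could you rephrase that?"

convo = Conversation(lead="Acme Corp")
print(convo.step("greet", {}))
print(convo.step("share_need", {"need": "lead follow-up automation"}))
print(convo.step("choose_slot", {"slot": "Tuesday 3pm"}))
```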
Posted 3 weeks ago
5 - 10 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Chennai
Work from Office
We are looking for an AI Engineer (Senior Software Engineer). Interested candidates, please email your resume to mayura.joshi@lionbridge.com or WhatsApp it to 9987538863.

Responsibilities:
- Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks.
- Implement and fine-tune models to improve response relevance and contextual accuracy.
- Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding.
- Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities.
- Integrate AI models with enterprise applications and APIs.
- Optimize model inference for performance and scalability.
- Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives.
- Ensure ethical AI implementation, addressing bias, explainability, and data security.
- Stay updated with the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements:
- 8+ years of experience in software development, working to established development standards.
- Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain.
- Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies.
- Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate.
- Solid understanding of NLP, deep learning, and transformer architectures.
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.).
- Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
- Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments.
- Strong problem-solving and debugging skills.
- Excellent communication and teamwork abilities.
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
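As a pointer to the LLM fine-tuning experience this role asks for, the sketch below attaches a LoRA adapter to a small causal language model using Hugging Face Transformers and PEFT. The base model and hyperparameters are assumptions kept deliberately tiny; a real fine-tune would add a tokenized domain dataset and a Trainer (or similar) training loop, and PEFT/Transformers APIs vary somewhat by version.

```python
# Minimal LoRA fine-tuning setup sketch with Hugging Face + PEFT (illustrative only).
# Assumes: `pip install transformers peft torch`. Model name and hyperparameters are
# assumptions; training data and a Trainer loop are omitted for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "distilgpt2"   # small stand-in for a larger LLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                    # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # PEFT picks default target modules for common architectures;
)                           # set target_modules explicitly for less common models.

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # only the adapter weights are trainable
```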
Posted 1 month ago
6 - 11 years
20 - 30 Lacs
Mumbai Suburbs, Mumbai, Mumbai (All Areas)
Work from Office
Position Overview:
We seek a skilled and innovative AI Engineer with background experience in Python, LangChain, AI, ML, and data science principles to design, develop, and deploy agentic AI agents / vertical LLM agents. The ideal candidate will possess extensive experience with LangChain, data science workflows, prompt engineering, retrieval-augmented generation (RAG), and LLM fine-tuning. You will integrate structured and unstructured data into scalable knowledge bases and evaluate systems for continuous improvement. The role involves developing solutions primarily for UK-based clients and solving industry-specific challenges with cutting-edge AI technologies.

Job Type: Full-Time
Location: Powai, Mumbai
Salary Range: Competitive, based on experience
Working Hours: 10:30 am to 7:30 pm Indian Standard Time
Days of Work: Monday to Friday

Key Responsibilities:
1. Knowledge Base Development and Integration
- Define Knowledge Base Scope: Collaborate with domain experts to identify industry-specific requirements and tasks. Assess and select appropriate structured and unstructured data sources.
- Data Curation and Organization: Collect and preprocess data from authoritative sources (e.g., research papers, databases, manuals). Structure unstructured data using techniques like knowledge graphs. Implement data cleaning workflows to ensure high-quality input.
- Knowledge Integration: Embed knowledge bases into LLM workflows using tools like Pinecone, Weaviate, or Milvus.
2. LLM Fine-Tuning
- Fine-tune LLMs using frameworks such as Hugging Face Transformers or OpenAI APIs.
- Use domain-specific datasets to adapt base models to specialized industries.
- Apply transfer learning techniques to enhance model performance for niche applications.
- Monitor and improve fine-tuned models using validation metrics and feedback loops.
3. Prompt Engineering
- Design, test, and optimize prompts for industry-specific tasks.
- Implement contextual prompting strategies to enhance accuracy and relevance.
- Iterate on prompt designs based on system evaluations and user feedback.
4. Retrieval-Augmented Generation (RAG)
- Implement RAG workflows to integrate external knowledge bases with LLMs.
- Develop and optimize embedding-based retrieval systems using vector databases.
- Combine retrieved knowledge with user queries to generate accurate and context-aware responses.
5. System Integration
- Build APIs and middleware to interface between the LLM, knowledge base, and user-facing applications.
- Develop scalable and efficient query-routing mechanisms for hybrid retrieval and generation tasks.
- Ensure seamless deployment of LLM-powered applications.
6. Validation and Testing
- Evaluate model responses against domain-specific benchmarks and ground truths.
- Collaborate with domain experts to refine system outputs.
- Conduct user testing and gather feedback to improve system performance iteratively.
7. Maintenance and Updates
- Implement strategies to keep the knowledge base current with periodic updates.
- Develop monitoring tools to track system performance and identify areas for improvement.
- Address ethical, regulatory, and privacy considerations (e.g., GDPR, HIPAA compliance).

Qualifications:
Technical Skills
- Programming: Strong knowledge of Python and frameworks like Flask, FastAPI, or LangChain for API development.
- Data Preprocessing: Familiarity with preprocessing pipelines for structured and unstructured data.
- LLM Proficiency: Experience with LLM platforms such as OpenAI GPT, Hugging Face Transformers, or similar.
- Knowledge Base Management: Hands-on experience with vector databases (e.g., Pinecone, Milvus, Weaviate) and relational databases (e.g., PostgreSQL, MySQL).
- Fine-Tuning Expertise: Proficiency in adapting LLMs for specialized domains using domain-specific datasets.
- RAG Implementation: Practical experience with retrieval-augmented generation workflows.
- Prompt Engineering: Ability to craft and optimize prompts for complex, context-driven tasks.

Soft Skills
- Strong problem-solving skills and attention to detail.
- Ability to collaborate effectively with cross-functional teams, including domain experts.
- Excellent communication and documentation skills.

Experience
- 6+ years of experience in AI/ML roles, with a recent focus on LLM agent development and deployment.
- 2+ years of experience creating AI solutions with LangChain.
- Demonstrated experience in designing domain-specific AI systems.
- Hands-on experience integrating structured/unstructured data into AI models.
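Because this role centres on agentic AI agents, here is a minimal, hedged sketch of a tool-calling loop in plain Python. The tool registry, the plan() planner, and the sample task are hypothetical stand-ins; in a real agent an LLM (for example via LangChain) would produce the tool calls and their arguments, and the tools would hit real retrieval and business APIs.

```python
# Minimal agent tool-calling loop sketch (illustrative only).
# Tools, planner, and task are hypothetical; a real agent would let an LLM choose
# the tool and arguments at each step and feed observations back into the model.
import json

def search_knowledge_base(query: str) -> str:
    return f"Top passage about '{query}' (stub)"          # stand-in for vector-DB retrieval

def calculate(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))    # demo only; never eval untrusted input

TOOLS = {"search_knowledge_base": search_knowledge_base, "calculate": calculate}

def plan(task: str) -> list[dict]:
    """Stand-in for an LLM planner: returns tool calls as simple dicts."""
    return [
        {"tool": "search_knowledge_base", "args": {"query": task}},
        {"tool": "calculate", "args": {"expression": "40 * 52"}},
    ]

def run_agent(task: str) -> list[str]:
    observations = []
    for call in plan(task):
        result = TOOLS[call["tool"]](**call["args"])
        observations.append(f"{call['tool']} -> {result}")
    return observations

print(json.dumps(run_agent("UK mortgage affordability rules"), indent=2))
```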
Posted 2 months ago
4 - 9 years
0 - 1 Lacs
Chennai
Work from Office
Dear Professionals,
We are seeking a skilled AI Developer to design, develop, and implement intelligent solutions that enhance business processes. The ideal candidate will leverage machine learning models, natural language processing (NLP), and deep learning techniques to build impactful AI-driven applications.

Key Responsibilities
- Develop, train, and deploy machine learning models for various business solutions.
- Collaborate with cross-functional teams to define AI project goals and technical requirements.
- Design algorithms for data processing, feature engineering, and model evaluation.
- Implement and optimize NLP, computer vision, and predictive analytics models.
- Craft and refine production-grade prompting strategies for large language models (LLMs), ensuring reliability and efficiency.
- Build and maintain LLM pipelines using LangChain, integrating state-of-the-art models like GPT, Claude, and Gemini.
- Develop comprehensive frameworks for LLM performance metrics, quality assessments, and cost optimization.
- Design and implement GenAI applications, including LLM agents and Retrieval-Augmented Generation (RAG).
- Optimize similarity-based retrieval systems using modern vector databases like Weaviate and Pinecone.

Skills & Qualifications
- Strong proficiency in Python, with a focus on GenAI best practices and frameworks.
- Expertise in machine learning algorithms, data modeling, and model evaluation.
- Experience with NLP techniques, computer vision, or generative AI.
- Deep knowledge of LLMs, prompt engineering, and GenAI technologies.
- Proficiency in data analysis tools like Pandas and NumPy.
- Hands-on experience with vector databases such as Weaviate or Pinecone.
- Familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment.
- Strong problem-solving skills and critical-thinking abilities.
- Experience with AI model fairness, bias detection, and adversarial testing.
- Excellent communication skills to translate business needs into technical solutions.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, AI, or a related field.
- Experience with MLOps practices for model deployment and maintenance.
- Strong understanding of data pipelines, APIs, and cloud infrastructure.
- Advanced degree in Computer Science, Machine Learning, or a related field (preferred).

Interested professionals, kindly share your updated resume with hr@wee4techsolutions.com with the subject line "GENAIML Developer - Contract/Freelancing".
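One of the responsibilities above is building frameworks for LLM performance metrics and quality assessment; the sketch below computes two simple retrieval-quality metrics (hit rate@k and mean reciprocal rank) for a RAG system. The query set and relevance labels are hypothetical; a real evaluation would pull retrieved IDs from the vector store and labels from a curated test set.

```python
# Minimal retrieval-quality evaluation sketch for a RAG system (illustrative only).
# Queries, retrieved IDs, and relevance labels are hypothetical stand-ins.
def hit_rate_and_mrr(results: dict[str, list[str]], relevant: dict[str, str], k: int = 5):
    hits, rr_sum = 0, 0.0
    for query, retrieved in results.items():
        top_k = retrieved[:k]
        if relevant[query] in top_k:
            hits += 1
            rr_sum += 1.0 / (top_k.index(relevant[query]) + 1)
    n = len(results)
    return hits / n, rr_sum / n   # hit rate@k, mean reciprocal rank

retrieved = {
    "reset my password": ["doc_12", "doc_7", "doc_3"],
    "refund policy":     ["doc_9", "doc_2", "doc_5"],
}
labels = {"reset my password": "doc_7", "refund policy": "doc_4"}

hr, mrr = hit_rate_and_mrr(retrieved, labels, k=3)
print(f"hit rate@3 = {hr:.2f}, MRR = {mrr:.2f}")
```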
Posted 2 months ago
3 - 6 years
20 - 35 Lacs
Bengaluru
Remote
Python LLM Engineer (WFH)
Experience: 3 - 5 Years
Salary: INR 20,00,000 - 35,00,000 / year
Preferred Notice Period: Within 15 days
Shift: 10:30 AM to 7:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: API development, Communication, LangChain, LLMs, Pinecone/Weaviate/FAISS/ChromaDB, RAG, AWS, Python
Good-to-have skills: CI/CD, Multimodal AI, Prompt Engineering, Reinforcement Learning, Voice AI

Platformance (one of Uplers' clients) is looking for a Python LLM Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
We are seeking a highly skilled Python LLM Engineer to join our AI team. The ideal candidate should have deep expertise in large language models (LLMs), experience in building Retrieval-Augmented Generation (RAG) systems, and a strong background in AI-driven applications. This role requires hands-on experience with LangChain, multimodal AI, vector databases, agentic AI, and cloud-based AI infrastructure, particularly AWS and AWS Bedrock. Python will be the primary development language for this role.

Key Responsibilities:
- Design, develop, and optimize applications leveraging LLMs using LangChain and other frameworks.
- Build and fine-tune Retrieval-Augmented Generation (RAG) based AI systems for efficient information retrieval.
- Implement and integrate major LLM APIs such as OpenAI, Anthropic, Google Gemini, and Mistral.
- Develop and optimize AI-driven voice applications and conversational agents using Python.
- Research and apply the latest advancements in AI, multimodal models, and vector databases.
- Architect and deploy scalable AI applications using AWS services, including AWS Bedrock.
- Design and implement vector search solutions using Pinecone, Weaviate, FAISS, or similar technologies.
- Develop agentic AI products that leverage autonomous decision-making and multi-agent coordination.
- Write efficient and scalable backend services in Python for AI-powered applications.
- Develop and optimize AI model fine-tuning and inference pipelines in Python.
- Implement end-to-end MLOps pipelines for model training, deployment, and monitoring using Python-based tools.
- Optimize LLM inference for performance and cost efficiency using Python frameworks.
- Ensure the security, scalability, and reliability of AI systems deployed in cloud environments.

Required Skills and Experience:
- Strong experience with Large Language Models (LLMs) and their APIs (OpenAI, Anthropic, Cohere, Google Gemini, Mistral, etc.).
- Proficiency in LangChain and experience in developing modular AI pipelines.
- Deep knowledge of Retrieval-Augmented Generation (RAG) and its implementation.
- Experience with voice AI technologies, ASR (Automatic Speech Recognition), and TTS (Text-to-Speech), using Python-based frameworks.
- Familiarity with multimodal AI models (text, image, audio, and video processing) and Python libraries such as OpenCV, PIL, and SpeechRecognition.
- Hands-on experience with vector databases (Pinecone, Weaviate, FAISS, ChromaDB, etc.).
- Strong background in developing agentic AI products and autonomous AI workflows.
- Expertise in Python for AI/ML development, including libraries like TensorFlow, PyTorch, Hugging Face, FastAPI, and LangChain.
- Experience with AWS cloud services, including AWS Bedrock, Lambda, S3, and API Gateway, with Python-based implementations.
- Strong understanding of AI infrastructure, model deployment, and cloud scalability.

Preferred Qualifications:
- Experience in reinforcement learning and self-improving AI agents.
- Exposure to prompt engineering, chain-of-thought prompting, and function calling.
- Prior experience in building production-grade AI applications in enterprise environments.
- Familiarity with CI/CD pipelines for AI model deployment and monitoring, using Python-based tools such as DVC, MLflow, and Airflow.

Why Join Us?
- Work with cutting-edge AI technologies and build next-gen AI products.
- Be part of a highly technical and innovative AI-driven team.
- Competitive salary, stock options, and benefits.
- Opportunity to shape the future of AI-driven applications and agentic AI systems.

Engagement Type: Direct hire on the TBD payroll on behalf of Platformance
Job Type: Permanent
Location: Remote
Working Time: 10:30 AM to 7:30 PM IST

Interview Process
- The HR team will conduct an initial culture-fit assessment before the technical rounds.
- Initial Technical Discussion: Live discussion to assess core competencies.
- Technical Assignment: Candidates will be given 4 days to complete a hands-on coding test.
- Final Interview (optional): Review of the coding test and further technical discussion if required.

How to Apply (3 steps):
Step 1: Click on Apply and register or log in on our portal.
Step 2: Upload your updated resume and complete the screening form.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client: Platformance is a growth technology platform that helps brands connect with customers using a pay-per-outcome model. It is built to help advertisers achieve measurable business outcomes, not just marketing results. Our mission is to simplify the complexities of digital advertising while ensuring every campaign delivers tangible results.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal apart from this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
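Given the emphasis on AWS Bedrock in this posting, below is a hedged sketch of invoking a hosted model with boto3. The region, model ID, and request-body schema are assumptions (the body format is model-specific; an Anthropic-style messages payload is shown), so verify against the Bedrock documentation for the model you are actually granted access to.

```python
# Minimal AWS Bedrock invocation sketch with boto3 (illustrative only).
# Assumes: boto3 installed, AWS credentials configured, and model access granted.
# The model ID and body schema are assumptions; body formats are model-specific.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Summarize RAG in two sentences."}]}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID; verify availability
    body=json.dumps(body),
)
payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])
```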
Posted 3 months ago