4.0 - 6.0 years
0 Lacs
pune, maharashtra, india
Remote
Primary Title: Senior LLM Engineer (4+ years) - Hybrid, India

About The Opportunity
A technology consulting firm operating at the intersection of Enterprise AI, Generative AI, and Cloud Engineering seeks an experienced LLM-focused engineer. You will build and productionize LLM-powered products and integrations for enterprise customers across knowledge management, search, automation, and conversational AI use cases. This is a hybrid role based in India for candidates with strong hands-on LLM engineering experience.

Role & Responsibilities
- Own the design and implementation of end-to-end LLM solutions: data ingestion → retrieval (RAG) → fine-tuning → inference and monitoring for production workloads.
- Develop robust Python microservices to serve LLM inference, retrieval, and agentic workflows using LangChain/LangGraph or equivalent toolkits.
- Implement and optimise vector search pipelines (FAISS/Pinecone/Milvus), embedding generation, chunking strategies, and relevance tuning for sub-second retrieval.
- Perform parameter-efficient fine-tuning (LoRA/adapters) and evaluation workflows; manage model versioning and automated validation for quality and safety.
- Containerise and deploy models and services with Docker and Kubernetes; integrate with cloud infrastructure (AWS/Azure/GCP) and CI/CD for repeatable delivery.
- Establish observability, alerting, and performance SLAs for LLM services; collaborate with cross-functional teams to define success metrics and iterate rapidly.

Skills & Qualifications

Must-Have
- 4+ years of engineering experience, with 2+ years working directly on LLM/Generative AI projects.
- Strong Python skills and hands-on experience with PyTorch and Hugging Face Transformers.
- Practical experience building RAG pipelines, vector search (FAISS/Pinecone/Milvus), and embedding workflows.
- Experience with fine-tuning strategies (LoRA/adapters) and evaluation frameworks for model quality and safety.
- Familiarity with Docker, Kubernetes, cloud deployment (AWS/Azure/GCP), and Git-based CI/CD workflows.
- Solid understanding of prompt engineering, retrieval strategies, and production monitoring of ML services.

Preferred
- Experience with LangChain/LangGraph, agent frameworks, or building tool-calling pipelines.
- Exposure to MLOps platforms, model registries, autoscaling low-latency inference, and cost-optimisation techniques.
- Background in productionising LLMs for enterprise use cases (knowledge bases, search, virtual assistants).

Benefits & Culture Highlights
- Hybrid work model with flexible in-office collaboration and remote days; competitive market compensation.
- Opportunity to work on high-impact enterprise AI initiatives and shape production-grade GenAI patterns across customers.
- Learning-first culture: access to technical mentorship, experimentation environments, and a conference/learning stipend.

To apply, include a brief portfolio of LLM projects, links to relevant repositories or demos, and a summary of production responsibilities. This role is ideal for engineers passionate about turning cutting-edge LLM research into reliable, scalable enterprise solutions.

Skills: LLM, OpenAI, Gemini
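To make the retrieval portion of such a role concrete, here is a minimal, illustrative sketch of the dense-retrieval step in a RAG pipeline of the kind described above. It assumes the sentence-transformers and faiss-cpu packages; the model name, corpus, and top-k value are placeholder choices, not details from the posting.

```python
# Minimal dense-retrieval sketch for a RAG pipeline (illustrative only).
# Assumes: pip install sentence-transformers faiss-cpu
import faiss
from sentence_transformers import SentenceTransformer

# Placeholder corpus; in practice these would be chunked enterprise documents.
docs = [
    "Invoices are archived in the finance knowledge base for seven years.",
    "Support tickets are triaged by priority and routed to the on-call team.",
    "The VPN requires multi-factor authentication for all remote employees.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model
emb = model.encode(docs, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(emb)                          # normalize so inner product = cosine similarity

index = faiss.IndexFlatIP(emb.shape[1])          # exact inner-product index
index.add(emb)

query = "How long do we keep invoices?"
q = model.encode([query], convert_to_numpy=True).astype("float32")
faiss.normalize_L2(q)

scores, ids = index.search(q, 2)                 # top-2 most similar chunks
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

In production, the same pattern is typically backed by an approximate index (IVF/HNSW) or a managed store such as Pinecone or Milvus to keep retrieval sub-second at scale.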
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
navi mumbai, maharashtra, india
Remote
Primary Title: Senior LLM Engineer (4+ years) - Hybrid, India

About The Opportunity
A technology consulting firm operating at the intersection of Enterprise AI, Generative AI, and Cloud Engineering seeks an experienced LLM-focused engineer. You will build and productionize LLM-powered products and integrations for enterprise customers across knowledge management, search, automation, and conversational AI use cases. This is a hybrid role based in India for candidates with strong hands-on LLM engineering experience.

Role & Responsibilities
- Own the design and implementation of end-to-end LLM solutions: data ingestion → retrieval (RAG) → fine-tuning → inference and monitoring for production workloads.
- Develop robust Python microservices to serve LLM inference, retrieval, and agentic workflows using LangChain/LangGraph or equivalent toolkits.
- Implement and optimise vector search pipelines (FAISS/Pinecone/Milvus), embedding generation, chunking strategies, and relevance tuning for sub-second retrieval.
- Perform parameter-efficient fine-tuning (LoRA/adapters) and evaluation workflows; manage model versioning and automated validation for quality and safety.
- Containerise and deploy models and services with Docker and Kubernetes; integrate with cloud infrastructure (AWS/Azure/GCP) and CI/CD for repeatable delivery.
- Establish observability, alerting, and performance SLAs for LLM services; collaborate with cross-functional teams to define success metrics and iterate rapidly.

Skills & Qualifications

Must-Have
- 4+ years of engineering experience, with 2+ years working directly on LLM/Generative AI projects.
- Strong Python skills and hands-on experience with PyTorch and Hugging Face Transformers.
- Practical experience building RAG pipelines, vector search (FAISS/Pinecone/Milvus), and embedding workflows.
- Experience with fine-tuning strategies (LoRA/adapters) and evaluation frameworks for model quality and safety.
- Familiarity with Docker, Kubernetes, cloud deployment (AWS/Azure/GCP), and Git-based CI/CD workflows.
- Solid understanding of prompt engineering, retrieval strategies, and production monitoring of ML services.

Preferred
- Experience with LangChain/LangGraph, agent frameworks, or building tool-calling pipelines.
- Exposure to MLOps platforms, model registries, autoscaling low-latency inference, and cost-optimisation techniques.
- Background in productionising LLMs for enterprise use cases (knowledge bases, search, virtual assistants).

Benefits & Culture Highlights
- Hybrid work model with flexible in-office collaboration and remote days; competitive market compensation.
- Opportunity to work on high-impact enterprise AI initiatives and shape production-grade GenAI patterns across customers.
- Learning-first culture: access to technical mentorship, experimentation environments, and a conference/learning stipend.

To apply, include a brief portfolio of LLM projects, links to relevant repositories or demos, and a summary of production responsibilities. This role is ideal for engineers passionate about turning cutting-edge LLM research into reliable, scalable enterprise solutions.

Skills: LLM, OpenAI, Gemini
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Power Automate & GenAI Specialist at our Pune location, you will be responsible for executing and maintaining Power Automate (Cloud + Desktop) flows, supporting GenAI service integration into business processes, and building and integrating backend services using FastAPI. Your role will be aligned to Mid-Level, Senior-Level, or Lead-Level based on your skills and experience.

Key Responsibilities:
For Mid-Level (3-5 years):
- Execute and maintain Power Automate (Cloud + Desktop) flows
- Support GenAI service integration into business processes
- Build and integrate backend services using FastAPI
For Senior-Level (6-9 years):
- Design and implement end-to-end Power Automate workflows
- Develop and deploy custom GenAI-based services
- Manage Azure service deployments and backend integrations
For Lead-Level (9+ years):
- Own solution delivery and provide architecture input
- Align with stakeholders and business leaders on automation strategy
- Mentor teams, guide vendor resources, and ensure delivery quality

Qualifications Required:
- Hands-on experience in Power Automate (Cloud + Desktop)
- Knowledge of the GenAI suite, including OpenAI, Gemini, and LangChain
- Experience with FastAPI for backend service integration
- Familiarity with vector databases such as FAISS, Pinecone, and Chroma
- Proficiency in Azure services such as Functions, Blob Storage, Key Vault, and Cognitive APIs
- Skill in Python for GenAI logic and FastAPI services
- Application of OOP concepts in backend design and service development

In addition to the technical qualifications, preferred qualifications for this role include experience in healthcare or insurance automation, exposure to RPA tools (UiPath preferred), knowledge of prompt engineering and LLM fine-tuning, and familiarity with CI/CD, Docker, and Kubernetes (preferred for Lead-Level).

Please clearly align your profile with the Mid/Senior/Lead levels, avoid submitting profiles with only theoretical GenAI exposure, and highlight hands-on Power Automate experience in your resume summary. This is a full-time position and the work is performed in person.

If you are seeking an opportunity to leverage your expertise in Power Automate, GenAI integration, and Azure services to drive automation and innovation, we encourage you to apply for this role.
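As an illustration of the "backend services using FastAPI" responsibility described above, here is a minimal sketch of a FastAPI service that fronts a GenAI call. The summarize_with_llm helper is a hypothetical stand-in, since the posting does not specify which GenAI provider or SDK is used.

```python
# Minimal FastAPI sketch for exposing a GenAI capability as a backend service.
# Assumes: pip install fastapi uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI summarization service (illustrative)")

class SummarizeRequest(BaseModel):
    text: str
    max_words: int = 100

class SummarizeResponse(BaseModel):
    summary: str

def summarize_with_llm(text: str, max_words: int) -> str:
    # Hypothetical placeholder: a real service would call the chosen GenAI
    # provider (for example an Azure OpenAI deployment) with a summarization prompt.
    return " ".join(text.split()[:max_words])

@app.post("/summarize", response_model=SummarizeResponse)
def summarize(req: SummarizeRequest) -> SummarizeResponse:
    return SummarizeResponse(summary=summarize_with_llm(req.text, req.max_words))

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```

A Power Automate cloud flow can then call this endpoint through an HTTP action, which is one common way to bring GenAI output into an automated business process.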
Posted 2 days ago
8.0 - 10.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Role Overview: We are seeking an experienced Senior Data Scientist to lead data-driven initiatives, perform advanced exploratory data analysis, and build and deploy scalable machine learning models. The ideal candidate will have strong expertise in Python, generative AI (GenAI), and machine learning deployment frameworks, with a good understanding of the life sciences domain.

Key Responsibilities:
- Bring 8 to 10 years of experience in leading the design and development of advanced ML and Generative AI models.
- Drive end-to-end data science projects from data preparation to deployment.
- Apply LLMs, transformers, and other GenAI techniques to business problems.
- Guide and mentor junior data scientists in technical delivery.
- Partner with stakeholders to translate business needs into AI solutions.
- Ensure scalability, performance, and reliability of deployed models.
- Stay current with GenAI advancements and evaluate adoption for the organization.

Required Skills:
- Strong proficiency in Python and ML frameworks (PyTorch, TensorFlow, scikit-learn).
- Deep expertise in NLP, LLM fine-tuning, transformers, and GenAI frameworks.
- Experience with MLOps, model deployment, and cloud platforms (AWS, Azure, GCP).
- Strong knowledge of data engineering concepts, feature engineering, and large-scale data handling.
- Experience with vector databases (FAISS, Pinecone, Weaviate) and orchestration tools (LangChain, LlamaIndex).
- Excellent problem-solving, communication, and leadership skills.
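As a small illustration of applying a pre-trained transformer of the kind this role works with, the sketch below uses the Hugging Face pipeline API for summarization. The model name and input text are assumed examples, not anything specified by the posting.

```python
# Illustrative use of a pre-trained transformer via the Hugging Face pipeline API.
# Assumes: pip install transformers torch
from transformers import pipeline

# Model choice is an assumption for the example; any seq2seq summarization
# checkpoint would be used the same way.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "The trial enrolled 240 patients across 12 sites. The treatment arm showed "
    "a statistically significant improvement in the primary endpoint, while "
    "adverse events were comparable between arms."
)

result = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```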
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
chandigarh
On-site
As a Generative AI Expert at Cogniter Technologies, you will be a valuable member of our advanced AI/ML team. Your role will involve designing and delivering scalable AI applications powered by Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), AI agents, and vector databases.

Key Responsibilities:
- Develop intelligent applications using models such as GPT-4, LLaMA, Mistral, and Falcon.
- Architect and implement RAG pipelines for knowledge-driven AI systems.
- Build AI agents with frameworks like LangChain.
- Integrate vector databases (FAISS, Pinecone, Weaviate) for semantic search.
- Deploy and manage real-time inference engines (e.g., Groq).
- Optimize LLM performance, token usage, context handling, and prompts.
- Collaborate with cross-functional teams to embed AI into production applications.
- Stay updated on Generative AI and multimodal AI innovations.

If you are passionate about building intelligent systems that solve real-world problems, Cogniter Technologies is looking forward to hearing from you.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
delhi
On-site
As a highly skilled GenAI Lead Engineer, your role will involve designing and implementing advanced frameworks for alternate data analysis in the investment management domain. You will leverage LLM APIs (such as GPT, LLaMA, etc.), build scalable orchestration pipelines, and architect cloud/private deployments to drive next-generation AI-driven investment insights. Additionally, you will lead a cross-functional team of Machine Learning Engineers and UI Developers to deliver robust, production-ready solutions.

**Key Responsibilities:**
- **GenAI Framework Development:** Develop custom frameworks using GPT APIs or LLaMA for alternate data analysis and insights generation. Optimize LLM usage for investment-specific workflows, including data enrichment, summarization, and predictive analysis.
- **Automation & Orchestration:** Design and implement document ingestion workflows using tools such as n8n (or similar orchestration frameworks). Build modular pipelines for structured and unstructured data.
- **Infrastructure & Deployment:** Architect deployment strategies on cloud (AWS, GCP, Azure) or private compute environments (CoreWeave, on-premises GPU clusters). Ensure high availability, scalability, and security in deployed AI systems.

**Qualifications Required:**
- Strong proficiency in Python with experience in frameworks such as TensorFlow or PyTorch.
- 2+ years of experience in Generative AI and Large Language Models (LLMs).
- Experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS) and document ingestion pipelines.
- Familiarity with data orchestration tools (e.g., n8n, Airflow, LangChain Agents).
- Understanding of cloud deployments and GPU infrastructure (CoreWeave or equivalent).
- Proven leadership skills with experience managing cross-functional engineering teams.
- Strong problem-solving skills and ability to work in fast-paced, data-driven environments.
- Experience with financial or investment data platforms.
- Knowledge of RAG (Retrieval-Augmented Generation) systems.
- Familiarity with frontend integration for AI-powered applications.
- Exposure to MLOps practices for continuous training and deployment.
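As a small illustration of the document-ingestion work described above, here is a plain-Python sketch of a chunk-with-overlap step commonly used before embedding documents for retrieval. The chunk and overlap sizes are arbitrary example values, not requirements from the posting.

```python
# Illustrative fixed-size chunking with overlap, a common preprocessing step
# before embedding documents for retrieval. Pure Python, no dependencies.
from typing import List

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> List[str]:
    """Split text into word-based chunks of ~chunk_size words with overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + chunk_size >= len(words):
            break
    return chunks

# Example usage with a placeholder filing excerpt.
doc = "Quarterly revenue grew 12 percent year over year driven by subscriptions. " * 50
for i, c in enumerate(chunk_text(doc, chunk_size=60, overlap=10)):
    print(i, len(c.split()), "words")
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side, at the cost of some redundant storage in the vector index.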
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior AI Engineer at Uplevyl, you will play a crucial role in leading the design and deployment of AI-powered, agentic workflows that drive the future of personalized insights. Your main focus will be on vector search, retrieval-augmented generation (RAG), and intelligent automation, collaborating closely with full-stack engineers and product teams to bring scalable GenAI features into production.

Key Responsibilities:
- Design and implement RAG pipelines for semantic search, personalization, and contextual enrichment.
- Build agentic AI workflows using Pinecone, LangChain/LangGraph, and custom orchestration.
- Integrate LLM-driven features into production systems, balancing innovation with scalability.
- Architect and optimize vector databases (Pinecone, FAISS, Milvus) for low-latency retrieval.
- Work with structured/unstructured datasets for embedding, indexing, and enrichment.
- Collaborate with data engineers on ETL/ELT pipelines to prepare data for AI applications.
- Partner with backend and frontend engineers to integrate AI features into user-facing products.
- Participate in Agile ceremonies (sprint planning, reviews, standups).
- Maintain clear documentation and support knowledge sharing across the AI team.

Required Qualifications:
- 5+ years in AI/ML engineering or software engineering with an applied AI focus.
- Hands-on experience with RAG pipelines, vector databases (Pinecone, FAISS, Milvus), and LLM integration.
- Strong background in Python for AI workflows (embeddings, orchestration, optimization).
- Familiarity with agentic architectures (LangChain, LangGraph, or similar).
- Experience deploying and scaling AI features on AWS cloud environments.
- Strong collaboration and communication skills for cross-functional teamwork.

Tech Stack:
- AI Tools: Pinecone, LangChain, LangGraph, OpenAI APIs (ChatGPT, GPT-4/5), Hugging Face models
- Languages: Python (primary for AI workflows), basic Node.js knowledge for integration
- Cloud & DevOps: AWS (Lambda, S3, RDS, DynamoDB, IAM), Docker, CI/CD pipelines
- Data Engineering: SQL, Python (Pandas, NumPy), ETL/ELT workflows, databases (Postgres, DynamoDB, Redis)
- Bonus Exposure: React, Next.js

Preferred Skills:
- Experience with embedding models, Hugging Face Transformers, or fine-tuning LLMs.
- Knowledge of compliance frameworks (GDPR, HIPAA, SOC 2).
- Exposure to personalization engines, recommender systems, or conversational AI.
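To illustrate the RAG-style generation step this posting centers on, here is a minimal sketch that places retrieved passages into a prompt and calls a chat model. It assumes the openai Python client (v1.x) with an OPENAI_API_KEY in the environment; the retrieved passages and model name are placeholder examples rather than anything specified by the role.

```python
# Illustrative retrieval-augmented generation step: combine retrieved context
# with the user question and ask a chat model to answer only from that context.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def answer_with_context(question: str, passages: list[str], model: str = "gpt-4o-mini") -> str:
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    messages = [
        {"role": "system",
         "content": "Answer using only the provided context. If the answer is not in the context, say you don't know."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    response = client.chat.completions.create(model=model, messages=messages, temperature=0)
    return response.choices[0].message.content

# Placeholder passages; in the real pipeline these come from the vector store.
retrieved = [
    "Premium subscribers can export reports as PDF or CSV.",
    "Free-tier accounts are limited to three saved dashboards.",
]
print(answer_with_context("What export formats are available to premium users?", retrieved))
```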
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Scientist specializing in Generative AI, you will be responsible for the following:
- Utilizing your 3+ years of experience in Data Science to work on generative AI, agentic AI, LLMs, and NLP.
- Demonstrating hands-on expertise with RAG pipelines, knowledge graphs, and prompt engineering.
- Applying your skills in Azure AI tools such as Azure OpenAI, Cognitive Services, and Azure ML.
- Leveraging your experience with LangChain, agentic AI frameworks, and orchestration tools.
- Utilizing Hugging Face for model building and fine-tuning.
- Demonstrating strong Python development skills tailored for AI applications.
- Working with vector databases, embeddings, and semantic search implementations.
- Applying your solid MLOps background, including CI/CD pipelines, DevOps integration, and AI deployment optimization.
- Working with FastAPI, Databricks, AWS, MLflow, FAISS, Redis, and OpenAI APIs.
- Being a great problem-solver who enjoys debugging and improving AI systems.

Qualifications required for this role include:
- A Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 4 to 5 years of relevant experience in Data Science with a focus on generative AI and NLP.
- A proven track record of working with the AI tools and frameworks mentioned in the job description.

If you are passionate about Data Science and Generative AI and enjoy working on cutting-edge technologies, this role is perfect for you. Apply now by sending your resume to deepak.visko@gmail.com or calling 9238142824.

Please note that this is a full-time, permanent position with benefits including cell phone reimbursement, health insurance, paid sick time, paid time off, and Provident Fund. The work location is in person at Noida.
Posted 5 days ago
0.0 years
0 Lacs
chennai, tamil nadu, india
Remote
About Us
At InzpireU, we are on a mission to amplify human potential through AI-powered mentorship, learning, and accountability. We believe AI should serve as an enabler of growth, not a replacement for people. As an AI Intern, you will be part of a passionate team building next-gen solutions that combine AI, data, and human guidance to transform how individuals and organizations learn, grow, and succeed.

Role Overview
We are seeking a motivated AI Intern who is eager to gain hands-on experience in applied AI, product development, and data-driven solutions. You will support the design, development, and testing of AI features across mentorship, learning personalization, and outcome tracking. This is an excellent opportunity to learn, experiment, and build real-world AI applications in a fast-paced startup environment.

Key Responsibilities
- Assist in building and fine-tuning AI/ML models (LLMs, recommendation engines, NLP, etc.).
- Support integration of AI features into our platform (e.g., Co-Guru, APEX assessments, document writers).
- Research emerging AI tools and frameworks (e.g., LangChain, OpenAI, Gemini, vector databases) and provide insights for adoption.
- Help design conversational AI prompts, workflows, and evaluation strategies.
- Work with the engineering team to clean, label, and prepare datasets.
- Document experiments, findings, and best practices to build internal AI knowledge.
- Collaborate with product managers and mentors to align AI solutions with user needs.

Requirements
- Currently pursuing a degree in Computer Science, AI/ML, Data Science, or a related field.
- Basic understanding of Python, ML libraries (TensorFlow, PyTorch, scikit-learn), or LLM frameworks.
- Familiarity with APIs, databases, and cloud services (AWS/GCP/Azure is a plus).
- Strong interest in Generative AI, NLP, and applied machine learning.
- Analytical mindset with an eagerness to learn and experiment.
- Good communication skills and the ability to work in a collaborative, remote-first team.

Preferred (Nice to Have)
- Hands-on experience with LangChain, RAG pipelines, and vector databases (Pinecone, Weaviate, FAISS).
- Exposure to AI ethics, bias mitigation, and responsible AI frameworks.
- Prior projects (academic or personal) in chatbots, recommendation systems, or predictive analytics.

What You'll Gain
- Real-world exposure to AI product development in a startup environment.
- Mentorship from experienced engineers and AI innovators.
- Opportunity to contribute to features that directly impact users.
- Potential pathway to a full-time AI Engineer role upon successful completion.
Posted 5 days ago
2.0 - 4.0 years
0 Lacs
india
Remote
Job Title: AI/ML Engineer (with Python & LLM Ops focus)
Location: Pan India - Remote
Experience: 2-4 years

Core Responsibilities:
- Develop and optimize data pipelines using Python, Pandas, and scikit-learn.
- Support prompt engineering initiatives for LLM-based applications.
- Conduct model evaluation and testing, ensuring accuracy, reliability, and performance.
- Work with vector databases (e.g., Pinecone, FAISS) for embedding storage and retrieval.

Secondary Responsibilities:
- Assist in fine-tuning small and domain-specific models for production use.
- Write automation scripts for data preparation, cleaning, and feature engineering.
- Collaborate with product and engineering teams to enable API integrations for ML services.

Must-Have Skills:
- Strong programming skills in Python with hands-on experience in Pandas and scikit-learn.
- Basic knowledge of prompt engineering for LLMs.
- Understanding of model evaluation metrics and testing techniques.
- Exposure to vector databases (Pinecone, FAISS, or similar).

Good-to-Have Skills:
- Experience with fine-tuning smaller ML or LLM models.
- Familiarity with REST APIs / GraphQL and integrating ML models into production workflows.
- Knowledge of data preprocessing pipelines and automation scripts.

Soft Skills:
- Strong problem-solving and analytical mindset.
- Ability to work in a collaborative environment with cross-functional teams.
- Curiosity to explore and learn new AI/ML tools and frameworks.
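To make the "model evaluation and testing" responsibility concrete, here is a small sketch using scikit-learn's built-in utilities. The synthetic dataset and logistic-regression baseline are illustrative choices rather than anything specified by the posting.

```python
# Illustrative model evaluation with scikit-learn: train/test split, a simple
# baseline classifier, and standard metrics. Assumes: pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature table built with pandas.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]

print(classification_report(y_test, pred, digits=3))   # precision / recall / F1 per class
print("ROC AUC:", round(roc_auc_score(y_test, proba), 3))
```

Holding out a stratified test set and reporting both threshold-based metrics (precision/recall/F1) and a threshold-free one (ROC AUC) is a common minimum bar before promoting a model to production.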
Posted 5 days ago
12.0 - 15.0 years
0 Lacs
thane, maharashtra, india
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our client's Core Engineering team in Thane (Maharashtra, India). The ideal candidate should have 12-15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms.

Top 3 Daily Tasks:
- Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices.
- Design and orchestrate agentic AI workflows, enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops.
- Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed invisible intelligence within secure wearable systems.

Must have 12-15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.

Must have:
- Programming languages: Python, C/C++, Java (Android), Kotlin, JavaScript/Node.js, Swift, Objective-C, CUDA, shell scripting
- Expertise in TensorFlow, PyTorch, ONNX, Hugging Face; model optimization with TensorRT, TFLite
- Deep experience with LLMs, RAG pipelines, and vector DBs (FAISS, Milvus)
- Proficiency in agentic AI workflows: multi-agent orchestration, planning, feedback loops
- Strength in privacy-preserving AI (federated learning, differential privacy)
- Secure real-time communications (WebRTC, SIP, RTP)

Nice to have:
- Experience with MCP or similar protocol frameworks
- Background in wearables/XR or smart-glass AI platforms
- Expertise in platform security architectures (sandboxing, auditability)
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
As an AI/ML professional at Voxai, you will play a crucial role in architecting and implementing Large Language Model (LLM)-driven solutions that revolutionize customer experience in the contact center domain. Your responsibilities will include designing and optimizing Retrieval-Augmented Generation (RAG) pipelines, recommending AI/ML architectures, developing AI agents, experimenting with agentic AI systems, and building MLOps pipelines. You will collaborate with cross-functional teams to deliver real-world AI applications, analyze contact center data to derive actionable insights, and mentor junior engineers.

Your qualifications should include a strong foundation in Computer Science, proficiency in statistics and machine learning, hands-on experience deploying ML models, familiarity with LLMs and vector databases, and expertise in MLOps tools and cloud-native deployment. Excellent problem-solving, analytical, and communication skills are also essential for this role. Ideally, you should hold a Master's or Ph.D. in Computer Science or a related field, have experience in enterprise software or customer experience platforms, and be able to work effectively in a fast-paced, collaborative environment.

Join Voxai to be at the forefront of CX innovation and make a significant impact in the tech industry.
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
bengaluru, karnataka, india
Remote
Job Title: Sr. AI/ML Engineer - Gen AI
Location: Bangalore
Experience: 4+ years

About the Role: We are looking for an experienced AI/ML Engineer with a strong background in Generative AI (GenAI), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) implementation. You will play a key role in designing, developing, and deploying cutting-edge AI solutions that leverage state-of-the-art machine learning techniques to solve complex problems.

Key Responsibilities:
- Design, develop, and optimize LLM-based applications with a focus on RAG pipelines, fine-tuning, and model deployment.
- Implement GenAI solutions for text generation, summarization, code generation, and other NLP tasks.
- Develop and optimize ML models, leveraging classical and deep learning techniques.
- Build and integrate retrieval systems using vector databases like FAISS, ChromaDB, Pinecone, or Weaviate.
- Optimize and fine-tune LLMs (e.g., OpenAI GPT, LLaMA, Mistral, Falcon) for domain-specific use cases.
- Develop data pipelines for training, validation, and inference.
- Work on scalable AI solutions, ensuring performance and cost efficiency in deployment.
- Collaborate with cross-functional teams to integrate AI models into production applications.

Required Skills & Qualifications:
- 4+ years of experience in AI/ML development.
- Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Hugging Face Transformers.
- Hands-on experience with LLMs, GenAI, and RAG architectures.
- Experience working with vector databases (e.g., FAISS, Pinecone, ChromaDB).
- Knowledge of ML algorithms, deep learning architectures, and NLP techniques.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and AI/ML model deployment.
- Experience with LangChain or LlamaIndex for LLM applications is a plus.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

Perks & Benefits:
- Health and Wellness: healthcare policy covering your family and parents.
- Food: enjoy a scrumptious buffet lunch at the office every day (Bangalore).
- Hybrid work policy: beat the everyday traffic commute with a 3-day in-office and 2-day WFH policy, based on your role and responsibilities.
- Professional Development: learn and propel your career through workshops, funded online courses, and other learning opportunities based on individual needs.
- Rewards and Recognition: recognition and reward programs are in place to celebrate your achievements and contributions.

To find out more about us, head over to our Website and LinkedIn.
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As the Lead AI Application Developer, you will be responsible for architecting and overseeing the implementation of the Agentic AI platform. Your role will involve guiding the development process from conceptualization to deployment, ensuring a modular and scalable architecture, and providing mentorship to the developer team to deliver high-quality, production-ready AI agents.

Your key responsibilities will include leading sprint planning, conducting code reviews, and participating in architecture discussions. You will also need to ensure compliance with secure coding practices, performance standards, and test coverage requirements. With a background of 4-6 years in software development, particularly in Python, Node.js, or Java, you should also possess experience with LLMs such as OpenAI, Claude, or Mistral, as well as a strong understanding of agent frameworks like LangChain, AutoGen, or CrewAI. Proficiency in FastAPI, REST APIs, version control using Git, CI/CD processes, and vector databases like Pinecone or FAISS will be essential for this role.

Your leadership skills will play a crucial role in driving technical execution and taking ownership of projects. You will be expected to lead the delivery of 10 reusable AI agent modules per quarter, maintain a bug rate of less than 10 percent post-deployment, achieve 90 percent unit test coverage for core modules, and contribute to 100 percent uptime and reliability in production services. Experience in containerized deployment using Docker or Kubernetes, familiarity with design patterns and event-driven systems, and leadership in Agile/Scrum environments will be beneficial for this position.

Overall, as the Lead AI Application Developer, you will play a pivotal role in shaping and enhancing the AI capabilities of the organization, driving innovation, and ensuring the successful deployment of AI solutions to meet business objectives.
Posted 1 week ago
5.0 - 23.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Generative AI Architect with 5 to 10+ years of experience, you will be responsible for designing, developing, and deploying enterprise-grade GenAI solutions. The role requires in-depth expertise in LLMs, RAG, MLOps, cloud platforms, and scalable AI architecture.

You will architect and implement secure, scalable GenAI solutions using LLMs such as GPT, Claude, LLaMA, and Mistral, and build RAG pipelines with LangChain, LlamaIndex, FAISS, Weaviate, and ElasticSearch. Your responsibilities also include leading prompt engineering, setting up evaluation frameworks for accuracy and safety, and developing reusable GenAI modules for function calling, summarization, document chat, and Q&A.

Furthermore, you will deploy workloads on AWS Bedrock, Azure OpenAI, and GCP Vertex AI, ensuring monitoring and observability with Grafana, Prometheus, and OpenTelemetry. You will apply MLOps best practices such as CI/CD, model versioning, and rollback. Researching emerging trends like multi-agent systems, autonomous agents, and fine-tuning will also be part of your role, along with implementing data governance and compliance measures such as PII masking, audit logs, and encryption.

To be successful in this role, you should have 8+ years of experience in AI/ML, including 2-3 years specifically in LLMs/GenAI. Strong coding skills in Python with Hugging Face Transformers, LangChain, and OpenAI SDKs are essential. Expertise in vector databases such as Pinecone, FAISS, Qdrant, and Weaviate is required, along with hands-on experience with cloud AI platforms (AWS SageMaker/Bedrock, Azure OpenAI, GCP Vertex AI). Experience building RAG pipelines and chat-based applications, familiarity with agent and orchestration frameworks (LangGraph, AutoGen, CrewAI), knowledge of the MLOps stack (MLflow, Airflow, Docker, Kubernetes, FastAPI), and an understanding of prompt security and GenAI evaluation metrics (BERTScore, BLEU, GPTScore) are also important. Excellent communication and leadership skills for architecture discussions and mentoring are expected.
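As an illustration of the generation-quality metrics named above (BLEU, BERTScore), here is a small sketch using Hugging Face's evaluate library. The candidate and reference strings are invented examples, and BERTScore downloads a pretrained scoring model on first use.

```python
# Illustrative evaluation of generated text against references using the
# Hugging Face `evaluate` library.
# Assumes: pip install evaluate sacrebleu bert-score
import evaluate

predictions = ["Invoices must be retained for seven years according to policy."]
references = [["Invoices must be retained for seven years."]]

# BLEU measures n-gram overlap between generated and reference text.
bleu = evaluate.load("bleu")
print("BLEU:", round(bleu.compute(predictions=predictions, references=references)["bleu"], 3))

# BERTScore compares contextual embeddings rather than surface n-grams,
# so it is more tolerant of paraphrases.
bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions,
                           references=["Invoices must be retained for seven years."],
                           lang="en")
print("BERTScore F1:", round(scores["f1"][0], 3))
```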
Posted 1 week ago
8.0 - 13.0 years
35 - 45 Lacs
ahmedabad
Remote
We are looking for a highly skilled Lead AI/ML Engineer with 6+ years of hands-on experience in designing, deploying, and scaling AI/ML and Generative AI solutions. The ideal candidate will lead technical efforts across a wide spectrum of AI technologies, including large language models (LLMs), computer vision, machine learning, and data science, to solve high-impact business problems. This is a technical leadership role that involves mentoring team members, shaping architecture decisions, and driving end-to-end AI solution delivery across cloud-based environments.

Key Responsibilities:

Leadership & Strategy
- Drive the technical vision and roadmap for AI/ML and Generative AI initiatives.
- Lead and mentor a team of AI/ML engineers and data scientists, conducting regular code reviews and guiding solution architecture.
- Collaborate with cross-functional stakeholders to translate business problems into scalable AI/ML solutions.
- Own the AI/ML lifecycle, from prototyping and experimentation to production deployment and monitoring.

Generative AI & LLMs
- Architect and lead the development of scalable Generative AI solutions using AWS Cloud and modern frameworks.
- Deep experience working with large language models (LLMs), including fine-tuning, evaluation, and deployment.
- Expertise in prompt engineering, agentic workflows, and custom LLM development.
- Proficiency with LangChain, LangGraph, Langfuse, Crew AI, and other GenAI frameworks.
- Build and optimize Retrieval-Augmented Generation (RAG) pipelines for enterprise applications.

Machine Learning & Data Science
- Solve complex machine learning problems using supervised, unsupervised, and deep learning methods.
- Lead the implementation of ML pipelines, including automated data preprocessing, training, validation, and monitoring.
- Manage the ML lifecycle using AWS SageMaker, including model versioning, drift detection, and retraining strategies.
- Apply advanced analytics, time series forecasting, and predictive modeling techniques.

Computer Vision
- Design and deploy advanced computer vision models for tasks such as classification, object detection, segmentation, and image captioning.

Skills & Qualifications:
- 6+ years of experience in AI/ML, with at least 3 years in Generative AI and LLMs.
- Strong proficiency in Python and ML libraries such as scikit-learn, XGBoost, LightGBM, and CatBoost.
- Deep understanding of LLMs including OpenAI (GPT-4, GPT-4o), Claude, Gemini, LLaMA, DeepSeek, etc.
- Proficiency with GenAI tooling: LangChain, Langfuse, Crew AI, LlamaIndex, LangGraph, etc.
- Experience working with vector databases (Pinecone, FAISS, OpenSearch, Chroma) and indexing strategies.
- Hands-on experience with deep learning frameworks such as PyTorch and TensorFlow.
- Strong experience with AWS services: SageMaker, Bedrock, DynamoDB, S3, Lambda, etc.
- Proficiency in REST API development using FastAPI, Uvicorn, Flask, Docker, and related tools.
- Strong database knowledge, covering both SQL (PostgreSQL) and NoSQL (DynamoDB).
- Familiarity with caching systems like Redis and Memcached.
- Strong communication skills and the ability to lead discussions with both technical and non-technical stakeholders.

Preferred:
- Experience leading enterprise-scale AI/ML deployments.
- Contributions to open-source GenAI/ML projects.
- Certifications in AWS AI/ML or equivalent.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
Remote
Company Overview
At Zuora, we do Modern Business. We're helping people subscribe to new ways of doing business that are better for people, companies, and ultimately the planet. This shift to the Subscription Economy puts customers first by building recurring relationships rather than one-time sales, enabling long-term sustainable growth. With our leading multi-product suite and deep expertise, we're transforming industries and enabling the world's most innovative companies to monetize new business models, deepen subscriber relationships, and optimize digital experiences.

The Team & Role
Are you excited to bring ML to life in production? Do you enjoy working at the intersection of scalable engineering and cutting-edge AI? Join Zuora Platform Tech and help shape the future of monetization. As a Machine Learning Engineer on our 10X AI team (AI, ML, and Science), you'll design, implement, and maintain production-grade ML systems that power mission-critical business decisions and customer experiences. Your role will focus on building reliable pipelines, deploying models at scale, enabling experimentation, and creating the infrastructure that makes AI real across Zuora's product suite.

Our Tech Stack
- Core: Java, Spring, REST APIs, Microservices, Kafka, Spark, NodeJS, AWS, Kubernetes, Terraform, AngularJS
- AI/ML: AWS Bedrock, SageMaker, Athena, Python, LangGraph, LangChain, Streamlit, Claude, Kubernetes
- ML Infra: FAISS, Neo4j, MLflow, Docker, CI/CD, Airflow, vector DBs, graph DBs

What You'll Do
- Productionize ML models: deploy models (ML & GenAI) into robust production environments using modern ML infrastructure and MLOps practices.
- Optimize for scale & performance: build scalable, low-latency ML services and APIs with observability, testing, and failover mechanisms.
- Pipeline automation: design and implement automated training, testing, and deployment pipelines using tools like SageMaker Pipelines, Airflow, and MLflow.
- Model monitoring & maintenance: implement monitoring for model drift, data quality, and performance metrics; own retraining and rollback strategies.
- Partner with scientists & engineers: collaborate with data scientists to take notebooks to production, and with software engineers to integrate ML into customer-facing systems.
- Champion best practices: define best practices for the ML development lifecycle, including CI/CD for models, reproducibility, and secure deployment.

Qualifications
- 5+ years of experience in machine learning engineering or applied ML development
- Proven experience deploying ML models to production, maintaining APIs, and building CI/CD pipelines for ML
- Strong foundations in data engineering: ETL, batch/stream processing, and data quality practices
- Hands-on experience with MLOps tools like MLflow, SageMaker, Airflow, or similar
- Proficiency in Python and SQL; familiarity with Java or Spark is a plus
- Experience with infrastructure-as-code (e.g., Terraform) and container orchestration (Kubernetes)
- Familiarity with model monitoring, experimentation, and continuous training workflows
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline

Nice to Have
- Experience with GenAI deployment (Bedrock, LangChain, Claude, etc.)
- Familiarity with vector databases (FAISS, Pinecone) and graph databases (Neo4j)
- Exposure to A/B testing or online experimentation platforms
- Understanding of privacy, security, and governance in ML deployments

Why Join Us
We're scaling a high-impact team to shape the future of monetization using AI. We welcome applicants who bring unique perspectives, even if you don't check every box. Diverse teams lead to better ideas and faster innovation.

#ZEOLife at Zuora
As an industry pioneer, our work is constantly evolving and challenging us in new ways that require us to think differently, iterate often, and learn constantly - it's exciting. Our people, whom we refer to as "ZEOs", are empowered to take on a mindset of ownership and make a bigger impact here. Our teams collaborate deeply, exchange different ideas openly, and together we're making what's next possible for our customers, community, and the world.

As part of our commitment to building an inclusive, high-performance culture where ZEOs feel inspired, connected, and valued, we support ZEOs with:
- Competitive compensation, corporate bonus program and performance rewards, company equity and retirement programs
- Medical insurance
- Generous, flexible time off
- Paid holidays, wellness days, and a company-wide end-of-year break
- 6 months fully paid parental leave
- Learning & Development stipend
- Opportunities to volunteer and give back, including charitable donation match
- Free resources and support for your mental wellbeing

Specific benefits offerings may vary by country and can be viewed in more detail during your interview process.

Location & Work Arrangements
Organizations and teams at Zuora are empowered to design efficient and flexible ways of working, being intentional about scheduling, communication, and collaboration strategies that help us achieve our best results. In our dynamic, globally distributed company, this means balancing flexibility and responsibility: flexibility to live our lives to the fullest, and responsibility to each other, to our customers, and to our shareholders. For most roles, we offer the flexibility to work both remotely and at Zuora offices.

Our Commitment to an Inclusive Workplace
Think, be, and do you! At Zuora, different perspectives, experiences, and contributions matter. Everyone counts. Zuora is proud to be an Equal Opportunity Employer committed to creating an inclusive environment for all. Zuora does not discriminate on the basis of, and considers individuals seeking employment with Zuora without regard to, race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us by sending an email to assistance(at)zuora.com.
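To make the MLOps tooling listed above a bit more concrete, here is a minimal MLflow tracking sketch of the kind used to version models and metrics so runs stay reproducible and comparable. The experiment name, hyperparameters, and dataset are illustrative placeholders.

```python
# Minimal MLflow experiment-tracking sketch: log parameters, metrics, and a
# trained model artifact. Assumes: pip install mlflow scikit-learn
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("churn-model-demo")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")  # versioned model artifact
```

Runs logged this way can be compared in the MLflow UI and promoted through a model registry, which is the basis for the retraining and rollback strategies the role describes.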
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Description

Overview
As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products and space systems for customers in more than 150 countries. As a top U.S. exporter, the company leverages the talents of a global supplier base to advance economic opportunity, sustainability and community impact. Boeing's team is committed to innovating for the future, leading with sustainability, and cultivating a culture based on the company's core values of safety, quality and integrity.

Technology for today and tomorrow
The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture
At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts, enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people's careers and being thoughtful about employee wellbeing. With us, you can create and contribute to what matters most in your career, community, country, and world. Join us in powering the progress of global aerospace.

Position Overview
The Boeing Test and Evaluation team is currently looking for an Associate Data Scientist to join their team in Bengaluru, KA. Data Scientists at Boeing make sure that products at the world's largest aerospace company continue to meet the highest standards. From quality and reliability to safety and performance, their expertise is vital to the concept, design and certification of a wide variety of commercial and military systems. This role will be based out of Bengaluru, India.

Position Responsibilities:
- Design, develop, and deploy machine learning and deep learning models for a variety of business use cases.
- Design and develop NLP and generative AI models using architectures like transformers, GPT, BERT, etc.
- Fine-tune, prompt-engineer, or distill pre-trained LLMs for domain-specific tasks (e.g., summarization, Q&A, classification).
- Collaborate with cross-functional teams to identify business opportunities and translate them into data-driven solutions.
- Implement and maintain data pipelines for model training, evaluation, and deployment.
- Integrate external and internal data sources, including unstructured data, into knowledge graphs and vector databases (ChromaDB, FAISS, Neo4j).
- Evaluate and select appropriate frameworks and tools for AI/ML projects (e.g., TensorFlow, PyTorch, Hugging Face, LangChain).
- Monitor model performance and retrain models as necessary to ensure accuracy and relevance.
- Document processes, models, and code to ensure reproducibility and knowledge sharing.
- Stay up to date with the latest advancements in AI, ML, GenAI, and related technologies.

Employer will not sponsor applicants for employment visa status.

Basic Qualifications (Required Skills/Experience):
- Proficiency in Python and relevant data science libraries (e.g., NumPy, pandas, scikit-learn).
- Experience with deep learning frameworks such as TensorFlow or PyTorch.
- Hands-on experience with at least one generative AI framework (e.g., Hugging Face Transformers, LangChain, LangGraph).
- Strong understanding of machine learning algorithms, model evaluation, and deployment best practices.
- Experience with cloud platforms (OpenShift, Kubernetes, Docker) for model training and deployment.
- Excellent problem-solving, communication, and collaboration skills.

Preferred Qualifications (Desired Skills/Experience):
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Familiarity with unstructured data processing.
- Contributions to open-source AI/ML projects or research publications.
- Candidate must be a self-starter with a positive attitude, high ethics, and a track record of working independently in developing analytics solutions.
- Must be able to work collaboratively with very strong teaming skills.
- Must be willing to work flexible hours (early or late as needed) to interface with Boeing personnel around the world.
- Develop and maintain relationships and partnerships with customers, stakeholders, peers, and partners to develop collaborative plans and execute on projects.
- Proactively seek information and direction to successfully complete the statement of work.

Typical Education & Experience: Bachelor's or Master's degree in Computer Science or Engineering (Software, Instrumentation, Electronics, Electrical, Mechanical, or an equivalent discipline) with 5 to 8 years of experience (for example, a Master's with 4+ years of experience or a Bachelor's with 5-8 years).

Relocation: This position offers relocation based on candidate eligibility within India.

Applications for this position will be accepted until Sept. 12, 2025.

Export Control Requirements: This is not an Export Control position.

Visa Sponsorship: Employer will not sponsor applicants for employment visa status.

Shift: Not a Shift Worker (India)

Equal Opportunity Employer:
We are an equal opportunity employer. We do not accept unlawful discrimination in our recruitment or employment practices on any grounds including but not limited to race, color, ethnicity, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military and veteran status, or other characteristics covered by applicable law. We have teams in more than 65 countries, and each person plays a role in helping us become one of the world's most innovative, diverse and inclusive companies. We welcome applications from candidates with disabilities. Applicants are encouraged to share with our recruitment team any accommodations required during the recruitment process. Accommodations may include but are not limited to: conducting interviews in accessible locations that accommodate mobility needs, encouraging candidates to bring and use any existing assistive technology such as screen readers, and offering flexible interview formats such as virtual or phone interviews.
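As a sketch of the parameter-efficient fine-tuning mentioned in the responsibilities above, the snippet below attaches LoRA adapters to a small causal language model using Hugging Face's peft library. The base model and LoRA hyperparameters are illustrative assumptions, and the actual training loop (Trainer, dataset, etc.) is omitted.

```python
# Illustrative LoRA setup with Hugging Face PEFT: wrap a small causal LM so that
# only low-rank adapter weights are trained.
# Assumes: pip install transformers peft torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "distilgpt2"  # placeholder small model for illustration
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2-style models
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # only a small fraction of weights is trainable

# The adapted model can now be passed to a standard training loop; after
# training, only the adapter weights need to be saved and deployed.
```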
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Generative AI Engineer specializing in Natural Language Processing (NLP) and Large Language Models (LLMs) at Airtel Digital in Gurugram, you will be responsible for designing, developing, and deploying advanced AI solutions. Your role will involve creating cutting-edge pipelines, deploying fine-tuned models, and fostering innovation in AI-powered applications. Collaborating closely with data scientists, engineers, and product teams, you will revolutionize user interactions with AI-driven platforms by utilizing the latest tools and frameworks in the GenAI ecosystem.

Your key responsibilities will include designing and orchestrating language model pipelines using LangChain, integrating knowledge graphs with Neo4j or rdflib, fine-tuning and optimizing large language models with Hugging Face Transformers, and developing, training, and fine-tuning models using PyTorch or TensorFlow. You will also implement traditional machine learning algorithms, conduct statistical analysis using scikit-learn, manipulate and analyze data efficiently with NumPy and pandas, and create engaging data visualizations using matplotlib and Seaborn. Additionally, you will integrate FAISS for vector search optimization in Retrieval-Augmented Generation (RAG) architectures and use tools like Elasticsearch or Pinecone for scalable real-time indexing and retrieval systems. Text preprocessing, tokenization, and other NLP tasks will be carried out using spaCy or NLTK, and experience with LangGraph and agentic AI frameworks is a plus.

For this role, you are required to have a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, along with strong proficiency in Python and experience working with AI/ML frameworks. A solid understanding of NLP, LLMs, and machine learning algorithms is essential, as is experience building and deploying AI models using Hugging Face Transformers, PyTorch, or TensorFlow. Proficiency in data manipulation and analysis using NumPy and pandas, and hands-on experience with vector search tools like FAISS, Elasticsearch, or Pinecone are also necessary.

Preferred qualifications include experience with advanced AI frameworks such as LangGraph and agentic AI frameworks, knowledge of building and integrating knowledge graphs using Neo4j or rdflib, familiarity with spaCy and NLTK for advanced NLP tasks, a strong background in data visualization with matplotlib and Seaborn, and previous experience working with RAG architectures and real-time retrieval systems.

Joining our team means being part of a forward-thinking AI team at the forefront of innovation, collaborating with industry leaders on impactful AI-driven projects, and benefiting from a competitive salary, comprehensive benefits, and opportunities for professional growth.
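As an illustration of the spaCy-based text preprocessing mentioned above, here is a short sketch that tokenizes, lemmatizes, and extracts named entities from a sample sentence. It assumes the en_core_web_sm model has been downloaded, and the example text is invented.

```python
# Illustrative NLP preprocessing with spaCy: tokenization, lemmatization,
# stop-word filtering, and named-entity extraction.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = "Airtel customers in Gurugram reported slow 5G speeds on Monday evening."
doc = nlp(text)

# Keep lemmas of content words (drop stop words and punctuation).
tokens = [t.lemma_.lower() for t in doc if not (t.is_stop or t.is_punct)]
print("Tokens:", tokens)

# Named entities with their labels (e.g., ORG, GPE, DATE).
print("Entities:", [(ent.text, ent.label_) for ent in doc.ents])
```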
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Title: IBU Solution Services AI Senior Engineer
Job Function: Analytics and Data Sciences
Location: Bangalore, India
Hiring Manager: Shubhra Verma
Role Level: 7 to 10 years

Responsibilities:
- Lead the design, development, and deployment of advanced machine learning models and algorithms for various applications.
- Perform comprehensive data analysis, feature engineering, and model training with large and complex datasets.
- Collaborate with cross-functional teams to understand business requirements and translate them into sophisticated technical solutions.
- Architect and deploy scalable AI/ML models, including large language models (LLMs) and transformer-based architectures.
- Implement MLOps best practices for CI/CD, automated model retraining, and lifecycle management.
- Optimize AI/ML pipelines for distributed computing, leveraging cloud platforms and accelerators (GPUs/TPUs).
- Develop and maintain scalable advanced AI solutions such as retrieval-augmented generation (RAG) and fine-tuning techniques (LoRA, PEFT).
- Conduct thorough model evaluation, validation, and testing to ensure high performance and accuracy.
- Stay at the forefront of AI/ML advancements and integrate cutting-edge techniques into existing and new projects.
- Mentor and provide technical guidance to junior developers and team members.
- Author and maintain detailed documentation of processes, methodologies, and best practices.
- Implement and optimize cutting-edge deep learning models, such as Generative Adversarial Networks (GANs) and transformer architectures (e.g., BERT, GPT).
- Explore and implement federated learning approaches to build AI/ML models while preserving patient data privacy and security.
- Integrate explainable AI (XAI) techniques to ensure transparency and interpretability of machine learning models.
- Translate complex AI concepts into actionable insights for business stakeholders.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, ML, Data Science, or related fields.
- 7+ years of experience with AI/ML solutions.
- Extensive experience in developing and deploying machine learning models in a production environment.
- Expertise in Python (NumPy, Pandas, scikit-learn), and proficiency in Java, Scala, or similar languages.
- Deep understanding of deep learning frameworks (TensorFlow, PyTorch, JAX) and NLP libraries (Hugging Face Transformers).
- Hands-on experience with MLOps, including containerization (Docker, Kubernetes), CI/CD, and model monitoring.
- Well versed in agentic AI, with a deep understanding of the LangChain ecosystem.
- Experience with distributed computing (Apache Spark, Ray) and large-scale dataset processing.
- Proficiency in vector databases (FAISS, ChromaDB, Pinecone) for efficient similarity search.
- Strong expertise in fine-tuning transformer models, hyperparameter optimization, and reinforcement learning.
- In-depth experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying AI solutions.
- Strong analytical, problem-solving, and communication skills to bridge technical and business perspectives.

Preferred Qualifications:
- Experience in regulated industries such as healthcare or biopharma.
- Contributions to AI research, open-source projects, or top-tier AI conferences.
- Familiarity with generative AI, prompt engineering, and advanced AI topics like graph neural networks.
- Proven ability to scale AI teams and lead complex AI projects in high-growth environments.

Lilly is dedicated to helping individuals with disabilities actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form for further assistance. Please note this is for individuals to request an accommodation as part of the application process; any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.

#WeAreLilly
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for a skilled Senior AI Developer to join our dynamic team. The ideal candidate will have expertise in developing and deploying AI-driven applications with a focus on retrieval-augmented generation (RAG), conversational RAG (CRAG), LangChain, LangFlow, and LanceDB. You will be responsible for designing, building, and optimizing AI models and pipelines, ensuring scalable and efficient deployment using FastAPI, Kubernetes (K8s), and Docker. As a Senior AI Developer, your responsibilities will include developing and implementing AI solutions leveraging LangChain, LangFlow, and LanceDB. You will design and optimize retrieval-augmented generation (RAG) and conversational RAG (CRAG) pipelines, as well as build and deploy AI-driven applications using FastAPI for scalable and high-performance APIs. Containerizing applications using Docker and orchestrating deployments on Kubernetes (K8s) will also be part of your role. Additionally, you will integrate AI models with databases and vector stores to enhance search and retrieval capabilities. Collaborating with cross-functional teams to design and implement end-to-end AI solutions, ensuring code quality, scalability, and security of AI applications, and staying updated with the latest advancements in AI, ML, and NLP technologies are essential aspects of this position. The qualifications we are looking for include proficiency in LangChain and LangFlow, experience working with LanceDB or other vector databases, a strong understanding of RAG and CRAG methodologies, hands-on experience with FastAPI, expertise in containerization with Docker and orchestration with Kubernetes (K8s), experience in designing and deploying scalable AI solutions, proficiency in Python and relevant AI/ML frameworks, strong problem-solving skills, ability to work in an agile development environment, familiarity with cloud platforms (AWS, GCP, Azure), experience with FAISS, Pinecone, Weaviate, or other vector stores, knowledge of OpenAI, Hugging Face Transformers, LlamaIndex, or fine-tuning LLMs, familiarity with SupaBase and Jenkins, and understanding of API gateway and load balancer. Preferred qualifications include experience in deploying AI solutions in a production environment, knowledge of MLOps tools like MLflow, Kubeflow, or Airflow for managing AI pipelines, understanding of model fine-tuning and optimization techniques, experience with monitoring and maintaining AI applications post-deployment, familiarity with AWS SageMaker, GCP Vertex AI, or Azure AI Services, and understanding of data privacy, model security, and ethical AI practices. Founded in 1986, Chiposoft is a leading software services firm based in New Delhi, India, focused on AI powered solutions and products. With almost four decades of experience, we have built a reputation for technical excellence, deep business process understanding, and customer-centric solutions. Our expertise spans AI-driven solutions, enterprise applications, and digital platforms that empower organizations to streamline operations, enhance engagement, and drive innovation. At Chipsoft, we pride ourselves on delivering state of the art, reliable, and scalable solutions while maintaining a dynamic and collaborative work environment. Our growing team of skilled professionals thrives on innovation and problem-solving, working on cutting-edge technologies and global projects, including clients in India, the US, the UK, Middle East, and Africa. 
As we expand our reach and explore AI-powered solutions through strategic partnerships, we are looking for talented and passionate professionals to join our team. If you are eager to work on exciting projects and cutting-edge technologies and to make a real impact, we invite you to be part of our journey. Join us and shape the future of digital transformation!
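The posting above centres on building RAG/CRAG pipelines over vector stores and serving them through FastAPI. The sketch below is a minimal, illustrative retrieval service only, assuming langchain-community, sentence-transformers, faiss-cpu, and fastapi are installed; the import paths, embedding model name, toy corpus, and /retrieve route are assumptions, and the LLM generation and CRAG relevance-grading steps are deliberately left out.

# Minimal RAG retrieval service sketch (illustrative only, not a reference design).
# Assumes: langchain-community, sentence-transformers, faiss-cpu, fastapi, uvicorn.
# Import paths and class names vary across LangChain versions.
from fastapi import FastAPI
from pydantic import BaseModel
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Toy corpus standing in for the real document-ingestion step.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
]

# Chunk documents so each embedded unit stays within the embedding model's context.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text("\n".join(documents))

# Embed chunks and build an in-memory FAISS index.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector_store = FAISS.from_texts(chunks, embeddings)

app = FastAPI()

class Query(BaseModel):
    question: str
    k: int = 3

@app.post("/retrieve")
def retrieve(query: Query):
    # Return the top-k chunks; a full CRAG pipeline would grade these for
    # relevance and pass the survivors to an LLM with a grounded prompt.
    hits = vector_store.similarity_search(query.question, k=query.k)
    return {"contexts": [doc.page_content for doc in hits]}

Run with, for example, uvicorn app:app; a production version would likely swap the in-memory FAISS index for LanceDB or another managed vector store and add the generation step behind the same API.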
Posted 1 week ago
5.0 - 10.0 years
15 - 20 Lacs
noida
Hybrid
- GenAI-powered applications
- Retrieval-Augmented Generation (RAG) pipelines
- Prompt engineering and fine-tuning
- Cloud platforms
- Reusable components and APIs
- MLOps practices
- Integrate GenAI into business workflows
- Hallucination mitigation
Required Candidate profile
- 5+ years of experience in AI/ML, with 2+ years in LLMs/GenAI
- Vector databases
- Proficiency in Python
- RAG pipelines, embeddings, and chat-based solutions
- Cloud AI services
- Prompt safety
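Several of the points above (RAG pipelines, hallucination mitigation, prompt safety) reduce in practice to grounding the model in retrieved context and telling it when to refuse. The snippet below is a framework-free illustration of that pattern; the template wording, field names, and helper function are assumptions for the sake of the example, not any particular product's prompt.

# Illustrative grounded prompt template for hallucination mitigation in RAG.
# The wording and structure are assumptions, not a specific system's prompt.
GROUNDED_PROMPT = """Answer the question using ONLY the context below.
If the context does not contain the answer, reply exactly: "I don't know."

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(contexts: list[str], question: str) -> str:
    # Number the retrieved chunks so the model can cite which one it used.
    numbered = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(contexts))
    return GROUNDED_PROMPT.format(context=numbered, question=question)

print(build_prompt(["Returns are accepted within 30 days."], "What is the refund window?"))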
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
Nuum X Consulting is a leader in digital transformation, AI-driven solutions, and customer experience innovations. They empower businesses with cutting-edge AI applications to enhance operational efficiency, automate workflows, and drive intelligent decision-making. As part of the AI innovation team, you will play a significant role in developing and optimizing Generative AI solutions for real-world applications.
You will be responsible for designing, developing, and deploying state-of-the-art AI-driven applications that leverage Large Language Models (LLMs), NLP, and Deep Learning. The ideal candidate has hands-on experience developing AI-powered applications, fine-tuning pre-trained models, and integrating AI with existing business systems.
Your key responsibilities will include:
- Building and deploying AI-driven applications using GPT, LLaMA, Falcon, or similar LLMs
- Fine-tuning pre-trained models for domain-specific tasks
- Integrating AI solutions into CRMs, automation tools, and cloud-based platforms
- Creating robust APIs for seamless integration of generative AI capabilities
- Utilizing tools like FAISS, Pinecone, or Weaviate for retrieval-augmented generation (RAG) capabilities
- Establishing monitoring pipelines for AI model drift and continuous improvement
- Researching and experimenting with new AI architectures and generative AI innovations
- Collaborating with data scientists, engineers, and product teams to ensure seamless AI adoption
To be considered for this role, you should have a Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field, along with 2+ years of experience in AI/ML application development focused on Generative AI. Proficiency in Python, TensorFlow, PyTorch, Hugging Face Transformers, and LangChain is required, as is experience with LLM fine-tuning, prompt engineering, model optimization, NLP, vector databases, embeddings, and retrieval-augmented generation (RAG). Hands-on experience with cloud AI services (AWS, Azure, GCP) for model deployment, a strong understanding of AI ethics, bias mitigation, and responsible AI development, and excellent problem-solving, communication, and collaboration skills are also necessary.
Preferred qualifications include experience with MLOps, CI/CD pipelines, and model monitoring; understanding of multi-modal AI (text, image, audio generation); exposure to Reinforcement Learning with Human Feedback (RLHF); and prior experience in AI-driven automation, chatbots, or intelligent assistants.
Join Nuum X Consulting for innovative AI projects, growth opportunities, access to AI research and conferences, hands-on learning, and a collaborative culture working with top AI engineers, data scientists, and innovators.
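For the LLM fine-tuning work described above, one common approach is parameter-efficient fine-tuning with LoRA adapters via Hugging Face's peft library. The sketch below shows only the adapter setup, under the assumption that transformers and peft are installed; the base model name and target-module list are placeholders that depend on the architecture actually being tuned, and the training loop itself is omitted.

# Minimal LoRA adapter setup sketch (illustrative, not a full training recipe).
# Assumes: transformers, peft. Model name and target modules are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; use whichever base model applies
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Attach low-rank adapters to the attention projections; only these weights train.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # architecture-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights

# From here, a standard transformers Trainer (or TRL's SFTTrainer) would run
# supervised fine-tuning on the domain-specific dataset.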
Posted 2 weeks ago
0.0 - 1.0 years
0 - 0 Lacs
hyderabad
Work from Office
About the Role: We are seeking a highly motivated and technically proficient AI Agent Development Engineer to join our advanced AI team. In this role, you will design, build, and deploy intelligent, autonomous AI agents that leverage Generative AI, Reinforcement Learning, Retrieval-Augmented Generation (RAG), and Agentic AI principles to perform complex, dynamic tasks across diverse domains. You will work at the forefront of AI agent architecture, integrating reasoning, memory, tool usage, and multi-step decision-making capabilities using state-of-the-art libraries and frameworks.
Key Responsibilities:
- Design, develop, and deploy autonomous AI agents for real-world task automation, decision-making, and orchestration.
- Integrate and fine-tune LLMs (Large Language Models) for goal-oriented, tool-using agents.
- Implement memory-augmented reasoning, Retrieval-Augmented Generation (RAG) pipelines, and multi-agent coordination.
- Apply Agentic AI approaches to enable adaptive decision-making, dynamic planning, and autonomous tool usage (a schematic agent loop is sketched after this posting).
- Work with vector databases to store and retrieve contextual memory and knowledge.
- Optimize agent performance using Reinforcement Learning frameworks and human feedback.
- Collaborate with cross-functional teams to integrate AI agents into applications and services.
- Monitor, test, and maintain AI pipelines in production environments.
Required Skills & Experience:
- Strong programming skills in Python (C# or R is a plus).
- Proven experience in building and deploying AI agents or LLM-based applications.
- Hands-on expertise in:
  - AI Agent Frameworks: LangChain, AutoGPT, BabyAGI, CrewAI, AgentGPT
  - Agentic AI Concepts: reasoning loops, dynamic tool selection, adaptive workflows
  - LLMs & Generative AI: OpenAI (GPT-3.5/4), Hugging Face Transformers, Anthropic Claude, Cohere
  - RAG & Vector Search: Pinecone, FAISS, ChromaDB, Weaviate for context-aware generation
  - Reinforcement Learning: OpenAI Gym, Stable-Baselines3, Ray RLlib
  - NLP & Language Tools: spaCy, NLTK, TextBlob
  - Modeling & Deployment: TensorFlow, PyTorch, Keras, Scikit-learn, MLflow
  - APIs & UI Frameworks: Flask, FastAPI, Streamlit, Gradio
  - DevOps: Docker, Git, CI/CD workflows, Kubernetes, Terraform
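The agentic concepts listed above (reasoning loops, dynamic tool selection, adaptive workflows) usually come down to a control loop around an LLM that alternates between model decisions and tool calls. The sketch below is schematic only: call_llm is a hypothetical placeholder for whichever client the stack uses, the JSON decision format and toy tool registry are invented for illustration, and frameworks such as LangChain or CrewAI implement these details differently.

# Schematic ReAct-style agent loop (illustrative only).
import json

def search_kb(query: str) -> str:
    # Placeholder for a vector-store lookup (Pinecone/FAISS/ChromaDB in practice).
    return f"Top result for '{query}' (stubbed)."

def calculator(expression: str) -> str:
    # Demo only; eval is not safe for untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search_kb": search_kb, "calculator": calculator}

def call_llm(messages: list[dict]) -> str:
    # Hypothetical stand-in for an actual LLM client (OpenAI, Anthropic, local model, ...).
    raise NotImplementedError("Plug in the real LLM client here.")

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system",
         "content": ("You can use tools: search_kb(query), calculator(expression). "
                     'Reply with JSON: {"tool": ..., "input": ...} or {"answer": ...}.')},
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if "answer" in decision:                  # the model decided it is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]            # dynamic tool selection
        observation = tool(decision["input"])
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: step limit reached."

The step cap and explicit observation messages are the usual guards against runaway loops when the model never emits a final answer.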
Posted 2 weeks ago