12.0 - 20.0 years
0 Lacs
, India
On-site
Senior GenAI & Agentic AI Expert (Architect) - Relocation to Abu Dhabi, UAE
Location: Abu Dhabi | Client: Abu Dhabi Government

About The Role
Our client, a global consulting firm with distributed teams across the US, Canada, UAE, India, and PK, is hiring a high-caliber Senior Generative AI Expert with proven hands-on experience in building Agentic AI applications. This role is ideal for someone with a total of 12 to 20+ years of software engineering and AI/ML experience who is now focused on autonomous AI agents, tool-using LLMs, LangChain, AutoGPT, or similar frameworks.

Key Responsibilities
- Design and develop Agentic AI applications using LLM frameworks (LangChain, AutoGPT, CrewAI, Semantic Kernel, or similar)
- Architect and implement multi-agent systems for enterprise-grade solutions
- Integrate AI agents with APIs, databases, internal tools, and external SaaS products
- Lead and mentor a cross-functional team across global time zones
- Optimize performance, context retention, tool usage, and cost efficiency
- Build reusable pipelines and modules to support GenAI use cases at scale
- Ensure enterprise-grade security, privacy, and compliance standards in deployments
- Collaborate directly with clients and senior stakeholders

Ideal Candidate Profile
- 10 to 15+ years of professional experience in software engineering and AI/ML
- 3+ years of practical experience in LLM-based application development
- Strong track record of delivering Agentic AI systems (not just chatbot interfaces)
- Hands-on experience with: LangChain, AutoGPT, CrewAI, ReAct, Semantic Kernel; OpenAI, Claude, Gemini, Mistral, or Llama 2; embedding models and vector databases (FAISS, Pinecone, Weaviate, etc.); prompt engineering, RAG, memory/context management; serverless, Python, Node.js, and AWS/GCP/Azure cloud
- Experience leading engineering teams and working with enterprise clients
- Excellent communication, documentation, and stakeholder management skills
- Must be open to relocation to the UAE

Why Join
- Work on UAE Government project(s)
- Lead cutting-edge Agentic AI projects at enterprise scale
- Collaborate with senior teams across the US, Canada, UAE, India, and PK
- Competitive compensation + long-term career roadmap

Skills: memory/context management, API integration, enterprise-grade security, CrewAI, SaaS product integration, AWS, Semantic Kernel, prompt engineering, OpenAI, Node.js, multi-agent systems, Azure, database integration, Gemini, cost efficiency, embedding models, RAG, AutoGPT, performance optimization, LLM frameworks, agentic AI, generative AI, LangChain, GCP, Python, vector databases
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Mumbai, Maharashtra, India
On-site
- In-depth experience with the Eliza framework and its agent coordination capabilities
- In-depth experience with Agentic AI
- Practical implementation experience with vector databases (Pinecone, Weaviate, Milvus, or Chroma)
- Hands-on experience with embedding models (e.g., OpenAI, Cohere, or open-source alternatives)
- Deep knowledge of LangChain/LlamaIndex for agent memory and tool integration
- Experience designing and implementing knowledge graphs at scale
- Strong background in semantic search optimization and efficient RAG architectures
- Experience with Model Control Plane (MCP) for both LLM orchestration and enterprise system integration
- Advanced Python development with expertise in async patterns and API design
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
You will be responsible for building and deploying scalable LLM-based systems using technologies such as OpenAI, Claude, LLaMA, or Mistral for contract understanding and legal automation. You will design and implement Retrieval-Augmented Generation (RAG) pipelines utilizing vector databases like FAISS, Pinecone, and Weaviate, and fine-tune and evaluate foundation models for domain-specific tasks such as clause extraction, dispute classification, and document QA.

You will also create recommendation models that suggest similar legal cases, past dispute patterns, or clause templates through collaborative and content-based filtering. Developing inference-ready APIs and backend microservices using FastAPI/Flask and integrating them into production workflows will also be part of your responsibilities, as will optimizing model latency, prompt engineering, caching strategies, and accuracy using A/B testing and hallucination checks.

Collaboration with Data Engineers and QA team members to convert ML prototypes into production-ready pipelines will be essential, along with continuous error analysis, evaluation metric design (F1, BLEU, Recall@K), and prompt iteration. Participation in model versioning, logging, and reproducibility tracking using tools like MLflow or LangSmith is expected, and staying up to date with research on GenAI, prompting techniques, LLM compression, and RAG design patterns will be crucial.

Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 2+ years of experience in applied ML/NLP projects with real-world deployments.
- Experience with LLMs like GPT, Claude, Gemini, Mistral, and techniques like fine-tuning, few-shot prompting, and context window optimization.
- Strong knowledge of Python, PyTorch, Transformers, LangChain, and embedding models.
- Hands-on experience integrating vector stores and building RAG pipelines.
- Understanding of NLP techniques such as summarization, token classification, document ranking, and conversational QA.
- Bonus: Experience with Neo4j, recommendation systems, or graph embeddings.
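The RAG pipeline described in this posting reduces, at its core, to embedding documents, indexing them, and retrieving top-k context for a prompt. A minimal sketch of that retrieval step, assuming an open-source sentence-transformer and FAISS; the sample clauses, model name, and the final LLM hand-off are illustrative placeholders, not the employer's actual stack:

```python
# Minimal RAG sketch: embed clauses, index them in FAISS, retrieve top-k
# context for a question, then hand the assembled prompt to an LLM of choice.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works

clauses = [
    "Either party may terminate this agreement with 30 days written notice.",
    "Disputes shall be resolved by arbitration seated in Mumbai.",
    "The service provider shall maintain confidentiality of client data.",
]

# Build the vector index (inner product over normalized vectors = cosine similarity).
vectors = embedder.encode(clauses, normalize_embeddings=True)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [clauses[i] for i in ids[0]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How can the contract be terminated?"))
# The resulting prompt would then be sent to GPT/Claude/Llama via its API.
```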
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
, India
Remote
About The Role
Masai, in academic collaboration with a premier institute, is seeking a Teaching Assistant (TA) for its New Age Software Engineering program. This advanced 90-hour course equips learners with Generative AI foundations, production-grade AI engineering, serverless deployments, agentic workflows, and vision-enabled AI applications. The TA will play a key role in mentoring learners, resolving queries, sharing real-world practices, and guiding hands-on AI engineering projects. This role is perfect for professionals who want to contribute to next-generation AI-driven software engineering education while keeping their technical skills sharp.

Key Responsibilities (KRAs)
Doubt-Solving Sessions
- Conduct or moderate weekly sessions to clarify concepts across Generative AI and prompt engineering, AI lifecycle management and observability, serverless and edge AI deployments, and agentic workflows and vision-language models (VLMs).
- Share industry insights and practical examples to reinforce learning.
Q&A and Discussion Forum Support
- Respond to student questions through forums, chat, or email with detailed explanations and actionable solutions.
- Facilitate peer-to-peer discussions on emerging tools, frameworks, and best practices in AI engineering.
Research & Project Support
- Assist learners in capstone project design and integration, including vector databases, agent orchestration, and performance tuning.
- Collaborate with the academic team to research emerging AI frameworks like LangGraph, CrewAI, Hugging Face models, and WebGPU deployments.
Learner Engagement
- Drive engagement via assignment feedback, interactive problem-solving, and personalized nudges to keep learners motivated.
- Encourage learners to adopt best practices for responsible and scalable AI engineering.
Content Feedback Loop
- Collect learner feedback and recommend updates to curriculum modules for continuous course improvement.

Candidate Requirements
- 2+ years of experience in Software Engineering, AI Engineering, or Full-Stack Development.
- Strong knowledge of Python/Node.js, cloud platforms (AWS Lambda, Vercel, Cloudflare Workers), and modern AI tools.
- Hands-on experience with LLMs, vector databases (Pinecone, Weaviate), agentic frameworks (LangGraph, ReAct), and AI observability tools.
- Understanding of AI deployment, prompt engineering, model fine-tuning, and RAG pipelines.
- Excellent communication and problem-solving skills; mentoring experience is a plus.
- Familiarity with online learning platforms or LMS tools is advantageous.

Engagement Details
- Time Commitment: 6 to 8 hours per week
- Location: Remote (online)
- Compensation: ₹8,000 to ₹10,000 per month

Why Join Us: Benefits and Perks
- Contribute to a cutting-edge AI and software engineering program with a leading ed-tech platform.
- Mentor learners on next-generation AI applications and engineering best practices.
- Engage in flexible remote working while influencing future technological innovations.
- Access to continuous professional development and faculty enrichment programs.
- Network with industry experts and professionals in the AI and software engineering domain.

Skills: LLMs, RAG pipelines, AWS Lambda, Cloudflare Workers, Vercel, edge deployments, AI observability tools, vector databases, prompt engineering, model fine-tuning, agentic frameworks, Python, Node.js, mentoring, communication, problem-solving
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India.

We are seeking a Senior AI Engineer / Data Engineer to join our engineering team and help build the future of AI-powered business solutions. In this role, you'll be developing intelligent systems that leverage advanced large language models (LLMs), real-time AI interactions, and cutting-edge retrieval architectures. Your work will directly contribute to products that are reshaping how businesses operate, particularly in recruitment, data extraction, and intelligent decision-making. This is an exciting opportunity for someone who thrives in building production-grade AI systems and working across the full stack of modern AI technologies.

Responsibilities
- Design, build, and optimize AI-powered systems using multi-modal architectures (text, voice, visual).
- Integrate and deploy LLM APIs from providers such as OpenAI, Anthropic, and AWS Bedrock.
- Build and maintain RAG (Retrieval-Augmented Generation) systems with hybrid search, re-ranking, and knowledge graphs.
- Develop real-time AI features using streaming analytics and voice interaction tools (e.g., ElevenLabs).
- Build APIs and pipelines using FastAPI or similar frameworks to support AI workflows.
- Process and analyze unstructured documents with layout and semantic understanding.
- Implement predictive models that power intelligent business recommendations.
- Deploy and maintain scalable solutions using AWS services (EC2, S3, RDS, Lambda, Bedrock, etc.).
- Use Docker for containerization and manage CI/CD workflows and version control via Git.
- Debug, monitor, and optimize performance for large-scale data pipelines.
- Collaborate cross-functionally with product, data, and engineering teams.

Qualifications
- 5+ years of experience in AI/ML or data engineering with Python in production environments.
- Hands-on experience with LLM APIs and frameworks such as OpenAI, Anthropic, Bedrock, or LangChain.
- Production experience using vector databases like PGVector, Weaviate, FAISS, or Pinecone.
- Strong understanding of NLP, document extraction, and text processing.
- Proficiency in AWS cloud services including Bedrock, EC2, S3, Lambda, and monitoring tools.
- Experience with FastAPI or similar frameworks for building AI/ML APIs.
- Familiarity with embedding models, prompt engineering, and RAG systems.
- Asynchronous programming knowledge for high-throughput pipelines.
- Experience with Docker, Git workflows, CI/CD pipelines, and testing best practices.

Preferred
- Background in HRTech or ATS integrations (e.g., Greenhouse, Workday, Bullhorn).
- Experience working with knowledge graphs (e.g., Neo4j) for semantic relationships.
- Real-time AI systems (e.g., WebRTC, OpenAI Realtime API) and voice AI tools (e.g., ElevenLabs).
- Advanced Python development skills using design patterns and clean architecture.
- Large-scale data processing experience (1-2M+ records) with cost optimization techniques for LLMs.
- Event-driven architecture experience using AWS SQS, SNS, or EventBridge.
- Hands-on experience with fine-tuning, evaluating, and deploying foundation models.
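The API work described above typically wraps the model pipeline in a thin service layer. A hedged FastAPI sketch of such an endpoint; the route, schema, and extract_entities() stub are assumptions for illustration only, not the actual product code:

```python
# Sketch of a FastAPI service fronting an LLM-backed document-extraction step.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Document(BaseModel):
    doc_id: str
    text: str

async def extract_entities(text: str) -> dict:
    # Placeholder: a real pipeline would call an LLM API (OpenAI, Anthropic,
    # Bedrock) with a structured-output prompt and parse the response.
    return {"names": [], "dates": [], "amounts": []}

@app.post("/v1/extract")
async def extract(doc: Document) -> dict:
    entities = await extract_entities(doc.text)
    return {"doc_id": doc.doc_id, "entities": entities}

# Run locally with: uvicorn app:app --reload
```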
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Responsibilities
- Design and fine-tune LLMs (Large Language Models) for BFSI use cases: intelligent document processing, report generation, chatbots, advisory tools.
- Evaluate and apply prompt engineering, retrieval-augmented generation (RAG), and fine-tuning methods.
- Implement safeguards, red-teaming, and audit mechanisms for LLM usage in BFSI.
- Work with data privacy, legal, and compliance teams to align GenAI outputs with industry regulations.
- Collaborate with enterprise architects to integrate GenAI into existing digital platforms.

Qualifications
- 3-5 years in AI/ML; 1-3 years hands-on in GenAI/LLM-based solutions.
- BFSI-specific experience in document processing, regulatory reporting, or virtual agents using GenAI is highly preferred.
- Exposure to prompt safety, model alignment, and RAG pipelines is critical.

Essential Skills (Tech Stack)
- LLMs: GPT (OpenAI), Claude, LLaMA, Mistral, Falcon
- Tools: LangChain, LlamaIndex, Pinecone, Weaviate
- Frameworks: Transformers (Hugging Face), PEFT, DeepSpeed
- APIs: OpenAI, Cohere, Anthropic, Azure OpenAI
- Cloud: GCP GenAI Studio, GCP Vertex AI
- Others: Prompt engineering, RAG, vector databases, role-based guardrails

Experience
- 3-5 years in AI/ML; 1-3 years hands-on in GenAI/LLM-based solutions.
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
Karnataka
On-site
As a Python Developer specializing in Generative AI, you will play a key role in designing, developing, and deploying intelligent AI-powered systems during the night shift in Bangalore. Your primary responsibility will be building and maintaining Python-based APIs and backends integrated with cutting-edge Generative AI models. You will collaborate with global teams to implement prompt engineering, fine-tuning, and model deployment pipelines using tools such as GPT, Claude, LLaMA, DALL-E, and Stable Diffusion.

Your expertise in PyTorch, TensorFlow, Hugging Face, LangChain, or the OpenAI API will be crucial in optimizing model performance for latency, accuracy, and scalability. Additionally, you will deploy models using FastAPI, Flask, Docker, or cloud platforms while ensuring thorough testing, monitoring, and documentation of AI integrations.

To excel in this role, you should have at least 4 years of Python development experience along with 1 year of hands-on experience with Generative AI tools and models. Familiarity with vector databases such as FAISS, Pinecone, and Weaviate is also desirable. Exposure to GPU-based training or inference, MLOps tools like MLflow, Airflow, or Kubeflow, and a strong understanding of AI ethics, model safety, and bias mitigation are considered advantageous.

This full-time, permanent position offers health insurance and Provident Fund benefits and requires working in person during the night shift. If you are passionate about leveraging AI to address real-world challenges and thrive in a fast-paced environment, we encourage you to apply and contribute to innovative GenAI and ML projects.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be responsible for developing scalable web applications using Python (FastAPI), React.js, and cloud-native technologies. Specifically, you will work on building a low-code/no-code AI agent platform, designing an intuitive workflow UI, and integrating with LLMs, enterprise connectors, and role-based access controls.

As a Full-Stack Developer, your backend responsibilities will include developing and optimizing APIs using FastAPI and integrating with LangChain, Pinecone/Weaviate vector databases, and enterprise connectors like Airbyte/NiFi. On the frontend, you will build an interactive drag-and-drop workflow UI using React.js along with supporting libraries such as React Flow, D3.js, and TailwindCSS. You will also implement authentication mechanisms such as OAuth2 and Keycloak, plus role-based access controls for multi-tenant environments.

Database design will involve working with PostgreSQL for structured data, MongoDB for unstructured data, and Neo4j for knowledge graphs. Your role will extend to DevOps and deployment using Docker, Kubernetes, and Terraform across cloud platforms such as Azure, AWS, and GCP. Performance optimization will be crucial as you strive to enhance API performance and frontend responsiveness for an improved user experience. Collaboration with AI and Data Engineers will be essential to ensure seamless integration of AI models.

To excel in this role, you should have at least 5 years of experience with FastAPI, React.js, and cloud-native applications. A strong understanding of REST APIs, GraphQL, and WebSockets is required, as is experience with JWT authentication, OAuth2, and multi-tenant security. Proficiency in databases such as PostgreSQL, MongoDB, Neo4j, and Redis is expected. Knowledge of workflow automation tools like n8n, Node-RED, and Temporal.io will be beneficial, and familiarity with containerization tools (Docker, Kubernetes) and CI/CD pipelines is preferred. Any experience with Apache Kafka, WebSockets, or AI-driven chatbots would be considered a bonus.
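The OAuth2 and role-based access control requirements above can be pictured with a small FastAPI dependency chain. This is a hedged sketch using FastAPI's OAuth2 bearer scheme and PyJWT; the secret handling and role-claim layout are assumptions (in practice, tokens would be issued and validated through Keycloak):

```python
# Sketch of bearer-token checking with a role guard in FastAPI.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
SECRET_KEY = "change-me"  # assumption: in practice validated against Keycloak's keys

def current_user(token: str = Depends(oauth2_scheme)) -> dict:
    try:
        return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid token")

def require_role(role: str):
    # Returns a dependency that rejects users lacking the given role claim.
    def checker(user: dict = Depends(current_user)) -> dict:
        if role not in user.get("roles", []):
            raise HTTPException(status_code=403, detail="Forbidden")
        return user
    return checker

@app.get("/admin/workflows")
def list_workflows(user: dict = Depends(require_role("admin"))):
    return {"tenant": user.get("tenant"), "workflows": []}
```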
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Jaipur, Rajasthan
On-site
As an AI / ML Engineer, you will be responsible for utilizing your expertise in Artificial Intelligence and Machine Learning to develop innovative solutions. You should hold a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field. With a minimum of 6 years of experience in AI/ML, you are expected to demonstrate proficiency in Python and various ML libraries such as scikit-learn, XGBoost, pandas, NumPy, matplotlib, and seaborn.

In this role, you will need a strong understanding of machine learning algorithms and deep learning architectures, including CNNs, RNNs, and Transformers. Hands-on experience with TensorFlow, PyTorch, or Keras is essential, as is expertise in data preprocessing, feature selection, exploratory data analysis (EDA), and model interpretability. Familiarity with API development and deploying models using frameworks like Flask, FastAPI, or similar tools is required. Experience with MLOps tools such as MLflow, Kubeflow, DVC, and Airflow will be beneficial. Knowledge of cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML is preferred, and proficiency in version control using Git, CI/CD processes, and containerization with Docker is essential.

Bonus skills that would be advantageous include familiarity with NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers), computer vision experience using OpenCV or YOLO/Detectron, and knowledge of Reinforcement Learning or Generative AI (GANs, LLMs). Experience with vector databases such as Pinecone or Weaviate, as well as LangChain for AI agent building, is a plus, and familiarity with data labeling platforms and annotation workflows will also be beneficial.

In addition to technical skills, you should possess an analytical mindset, strong problem-solving abilities, and effective communication and collaboration skills. The ability to work independently in a fast-paced, agile environment is crucial, and a passion for AI/ML with a proactive approach to staying updated on the latest developments in the field is highly desirable.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Data Scientist (Gen AI Developer) with 5 to 7+ years of experience, you will be based in Hyderabad and employed full-time in a hybrid work mode, spending 4 days in the office and 1 day working from home. Your primary responsibility will be to tackle a Conversational AI challenge for our client by utilizing your expertise in Speech-to-Text and Text Generation technologies.

Your role will involve developing and fine-tuning Automatic Speech Recognition (ASR) models, implementing language models for industry-specific terminology, and incorporating speaker diarization to distinguish multiple voices in a conversation. Additionally, you will build conversation summarization models, apply Named Entity Recognition (NER), and use Large Language Models (LLMs) for deep conversation analysis and smart recommendations. You will also design Retrieval-Augmented Generation (RAG) pipelines leveraging external knowledge sources for enhanced performance.

Furthermore, you will be tasked with creating sentiment and intent classification systems, developing predictive models for next-best-action suggestions based on historical call data and engagement, and deploying AI models on cloud platforms like AWS, Azure, or GCP. Optimizing inference and establishing MLOps pipelines for continual learning and performance enhancement will also be part of your responsibilities.

To excel in this role, you must possess proven expertise in ASR, NLP, and Conversational AI systems, along with experience in tools such as Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google STT. Proficiency in Python, PyTorch, and TensorFlow, and familiarity with RAG, LangChain, and LLM fine-tuning are essential. Hands-on experience with vector databases and deploying AI solutions using Docker, Kubernetes, FastAPI, or Flask will be beneficial.

Apart from technical skills, you should have strong business acumen to translate AI insights into impact, be a fast-paced problem-solver with innovative thinking abilities, and possess excellent collaboration and communication skills for effective teamwork across functions. Preferred qualifications include experience in healthcare, pharma, or life sciences NLP projects, knowledge of multimodal AI and prompt engineering, and exposure to Reinforcement Learning (RLHF) techniques for conversational models.

Join us to work on impactful real-world projects in Conversational AI and Gen AI, collaborate with innovative teams and industry experts, leverage cutting-edge tools and cloud platforms, and enjoy a hybrid work environment that promotes flexibility and balance. Your ideas will be valued in our forward-thinking, AI-first culture.

To apply for this exciting opportunity, please share your updated resume at resumes@empglobal.ae or apply directly through the platform. Kindly note that while we appreciate all applications, only shortlisted candidates will be contacted. Thank you for your understanding!
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Senior Machine Learning Engineer on our AI/ML team, you will be responsible for designing and building intelligent search systems. Your focus will be on utilizing cutting-edge techniques in vector search, semantic similarity, and natural language processing to create innovative solutions.

Your key responsibilities will include designing and implementing high-performance vector search systems using tools like FAISS, Milvus, Weaviate, or Pinecone. You will develop semantic search solutions that leverage embedding models and similarity scoring for precise and context-aware retrieval. Additionally, you will research and integrate the latest advancements in ANN algorithms, transformer-based models, and embedding generation. Collaboration with cross-functional teams, including data scientists, backend engineers, and product managers, will be essential to bring ML-driven features from concept to production. Maintaining clear documentation of methodologies, experiments, and findings for technical and non-technical stakeholders will also be part of your role.

To qualify for this position, you should have at least 3 years of experience in Machine Learning, with a focus on NLP and vector search. A deep understanding of semantic embeddings and transformer models (e.g., BERT, RoBERTa, GPT), along with hands-on experience with vector search frameworks, is required. You should also possess a solid understanding of similarity search techniques such as cosine similarity, dot-product scoring, and clustering methods. Strong programming skills in Python and familiarity with libraries like NumPy, Pandas, Scikit-learn, and Hugging Face Transformers are necessary. Exposure to cloud platforms, preferably Azure, and container orchestration tools like Docker and Kubernetes is preferred.

This is a full-time position with benefits including health insurance, internet reimbursement, and Provident Fund. The work schedule consists of day, fixed, and morning shifts, and the work location is in person. The application deadline for this role is 18/04/2025.
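The similarity-scoring techniques this posting names (cosine similarity and dot-product scoring over embeddings) look roughly like the following sketch; the encoder model and the sample corpus are illustrative assumptions:

```python
# Cosine similarity over sentence embeddings: the scoring step behind semantic search.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any encoder would do

corpus = [
    "Reset your password from the account settings page.",
    "Invoices are generated on the first day of each month.",
    "Contact support to change the billing address.",
]
query = "How do I change my login credentials?"

corpus_vecs = model.encode(corpus, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

# With normalized vectors, the dot product equals cosine similarity.
scores = corpus_vecs @ query_vec
for text, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.3f}  {text}")
```

At scale, the same scoring is delegated to an ANN index (FAISS, Milvus, Weaviate, or Pinecone) rather than a brute-force dot product.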
Posted 1 month ago
2.0 - 4.0 years
4 - 8 Lacs
Pune
Hybrid
We seek a Senior AI Developer skilled in OpenAI (GPT-4/GPT-4o) and Google Gemini to build advanced AI apps and agents. You'll lead LLM integrations, RAG pipelines, and GenAI solutions across industries in a fast-paced, innovative product team. Required Candidate profile 4+ yrs dev exp in GenAI/LLMs. Proficient in OpenAI, Gemini, Python, LangChain, vector DBs. Strong in RAG, prompt engineering, chatbot, cloud. Bonus: experience with Whisper, Gemini Pro, Hugging Face
Posted 1 month ago
6.0 - 9.0 years
18 - 25 Lacs
Bengaluru
Hybrid
About the Role
We are seeking a BI Architect to advise the BI Lead of a global CPG organization and architect an intelligent, scalable Business Intelligence ecosystem. This includes an enterprise-wide KPI dashboard suite augmented by a GenAI-driven natural language interface for insight discovery. The ideal candidate will be responsible for end-to-end architecture: from scalable data models and dashboards to a conversational interface powered by Retrieval-Augmented Generation (RAG) and/or Knowledge Graphs. The solution must synthesize internal BI data with external (web-scraped and competitor) data to deliver intelligent, context-rich insights.

Key Responsibilities
• Architect BI Stack: Design and oversee a scalable and performant BI platform that serves as a single source of truth for key business metrics across functions (Sales, Marketing, Supply Chain, Finance, etc.).
• Advise BI Lead: Act as a technical thought partner to the BI Lead, aligning architecture decisions with long-term strategy and business priorities.
• Design GenAI Layer: Architect a GenAI-powered natural language interface on top of BI dashboards to allow business users to query KPIs, trends, and anomalies conversationally.
• RAG/Graph Approach: Select and implement appropriate architectures (e.g., RAG using vector stores, Knowledge Graphs) to support intelligent, context-aware insights.
• External Data Integration: Build mechanisms to ingest and structure data from public sources (e.g., competitor websites, industry reports, social sentiment) to augment internal insights.
• Security & Governance: Ensure all layers (BI + GenAI) adhere to enterprise data governance, security, and compliance standards.
• Cross-functional Collaboration: Work closely with Data Engineering, Analytics, and Product teams to ensure seamless integration and operationalization.

Qualifications
• 6-9 years of experience in BI architecture and analytics platforms, with at least 2 years working on GenAI, RAG, or LLM-based solutions.
• Strong expertise in BI tools (e.g., Power BI, Tableau, Looker) and data modeling.
• Experience with GenAI frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel) and vector databases (e.g., Pinecone, FAISS, Weaviate).
• Knowledge of graph-based data models and tools (e.g., Neo4j, RDF, SPARQL) is a plus.
• Proficiency in Python or a relevant scripting language for pipeline orchestration and AI integration.
• Familiarity with web scraping and structuring external/third-party datasets.
• Prior experience in the CPG domain or large-scale KPI dashboarding preferred.
Posted 1 month ago
2.0 - 5.0 years
2 - 2 Lacs
Kurnool
Work from Office
looking for GenAI support. WORK FROM HOME
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are an experienced Full-Stack Developer with 5+ years of experience in building scalable web applications using Python (FastAPI), React.js, and cloud-native technologies. In this role, you will be responsible for developing a low-code/no-code AI agent platform, implementing an intuitive workflow UI, and integrating with LLMs, enterprise connectors, and role-based access controls.

Your responsibilities will include backend development, where you will develop and optimize APIs using FastAPI, integrating with LangChain, vector databases (Pinecone/Weaviate), and enterprise connectors (Airbyte/NiFi). You will also work on frontend development, building an interactive drag-and-drop workflow UI using React.js (React Flow, D3.js, TailwindCSS), and you will implement OAuth2, Keycloak, and role-based access controls (RBAC) for multi-tenant environments.

Database design is a crucial part of this role, where you will work with PostgreSQL (structured data), MongoDB (unstructured data), and Neo4j (knowledge graphs). DevOps and deployment tasks will involve deploying with Docker, Kubernetes, and Terraform across multi-cloud environments (Azure, AWS, GCP) to ensure smooth operations. Performance optimization will be another key area, where you will focus on improving API performance and optimizing frontend responsiveness for a seamless user experience. Collaboration with AI and Data Engineers is essential, as you will work closely with the Data Engineering team to ensure smooth AI model integration.

To be successful in this role, you are required to have 5+ years of experience with FastAPI, React.js, and cloud-native applications. Strong knowledge of REST APIs, GraphQL, and WebSockets is essential, along with experience in JWT authentication, OAuth2, and multi-tenant security. Proficiency in PostgreSQL, MongoDB, Neo4j, and Redis is expected, and knowledge of workflow automation tools (n8n, Node-RED, Temporal.io) is also required. Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines is preferred. Bonus skills include experience with Apache Kafka, WebSockets, or AI-driven chatbots.
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
Jaipur, Rajasthan
On-site
You will be part of a team that is building an Agentic AI Platform aimed at enabling enterprises to solve real business problems using Agentic AI workflows. The platform covers a wide range of areas, from utility operations to legal document review. The ultimate goal is to empower AI agents to think, act, and deliver quickly, securely, and locally.

As an Agentic AI Engineer, you will have the opportunity to work with cutting-edge frameworks such as LangChain, CrewAI, LangGraph, Google ADK, and more. Your role will involve translating real enterprise challenges into intelligent multi-agent workflows. Your responsibilities will include building and deploying AI agents using open-source agentic frameworks, integrating models from providers such as OpenAI, Mistral, Gemini, Llama, and Claude, and utilizing tools such as Retrieval-Augmented Generation (RAG), knowledge graphs, and vector stores. Collaboration with product managers and domain experts to address real problems in enterprise domains like utilities, legal, marketing, and supply chain will be a key aspect of your role. Additionally, you will play a significant part in continuously testing and refining agent behavior and contributing to the enhancement of the proprietary DataInsightAI platform.

To excel in this role, you should possess at least 1+ years of hands-on experience implementing enterprise-level Gen AI projects that have been successfully deployed. Strong Python skills, familiarity with LLMs, agentic AI workflows, RAG, and vector databases, and the ability to simplify complex problems are essential. A curiosity-driven mindset, fast learning, and hands-on coding abilities are also crucial for success in this role.

While not mandatory, experience with multi-agent architectures, graph databases like Neo4j, deployment on cloud platforms such as Azure/AWS/GCP, and familiarity with LangGraph or Google ADK are considered advantageous.

Joining our team will offer you the opportunity to work on cutting-edge agentic AI projects daily, be part of a small team with significant ownership, ship solutions rapidly, and tackle real enterprise challenges. You will be supported by InTimeTec, a company with a strong AI/ML engineering culture.
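The multi-agent workflows referenced here generally reduce to a loop in which the model either calls a tool or returns a final answer; frameworks like LangChain, CrewAI, and LangGraph wrap variations of this loop. A framework-agnostic sketch, where the outage-lookup tool, feeder ID, and the call_llm() stub are illustrative assumptions rather than part of the actual platform:

```python
# Minimal tool-using agent loop: the LLM decides between a tool call and an answer.
import json

def lookup_outage(feeder_id: str) -> str:
    # Hypothetical enterprise tool the agent can call.
    return json.dumps({"feeder": feeder_id, "status": "restored"})

TOOLS = {"lookup_outage": lookup_outage}

def call_llm(messages: list[dict]) -> dict:
    # Placeholder: a real agent would send `messages` plus tool schemas to an
    # LLM API. Here we fake one tool call followed by a final answer.
    if any(m["role"] == "tool" for m in messages):
        return {"answer": "Feeder F-112 has been restored."}
    return {"tool": "lookup_outage", "args": {"feeder_id": "F-112"}}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(run_agent("Is feeder F-112 back online?"))
```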
Posted 1 month ago
2.0 - 10.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You should have 3 to 10 years of experience in AI development and be located in Coimbatore. Immediate joiners are preferred, and a minimum of 2 years of experience in core Gen AI is required.

As an AI Developer, your responsibilities will include designing, developing, and fine-tuning Large Language Models (LLMs) for various in-house applications. You will implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality, and develop and deploy Agentic AI systems capable of autonomous decision-making and task execution. Building and managing data pipelines for processing, transforming, and feeding structured/unstructured data into AI models will be part of your role, and it is essential to ensure the scalability, performance, and security of AI-driven solutions in production environments. Collaboration with cross-functional teams, including data engineers, software developers, and product managers, is expected. You will conduct experiments and evaluations to improve AI system accuracy and efficiency while staying updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

You should have strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT, along with hands-on experience with RAG architectures, including vector databases such as Pinecone, ChromaDB, Weaviate, OpenSearch, and FAISS. Experience building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks is preferred. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow is necessary, as is experience with Python web frameworks such as FastAPI, Django, or Flask. You should also have experience designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes) is essential. Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications is a plus, and a strong understanding of vector search, embedding models, and hybrid retrieval techniques is required. Experience optimizing inference and serving AI models in real-time production systems is beneficial.

Experience with multi-modal AI (text, image, audio) and familiarity with privacy-preserving AI techniques and responsible AI frameworks are desirable, as is an understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Skills required for this role include PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, Gen AI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, and Kafka.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Senior Data Science Lead is a pivotal role responsible for driving the research, development, and deployment of semi-autonomous AI agents to solve complex enterprise challenges. This role involves hands-on experience with LangGraph, leading initiatives to build multi-agent AI systems that operate with greater autonomy, adaptability, and decision-making capabilities. The ideal candidate will have deep expertise in LLM orchestration, knowledge graphs, reinforcement learning (RLHF/RLAIF), and real-world AI applications. As a leader in this space, they will be responsible for designing, scaling, and optimizing agentic AI workflows, ensuring alignment with business objectives while pushing the boundaries of next-gen AI automation.

Key Responsibilities:
1. Architecting & Scaling Agentic AI Solutions
- Design and develop multi-agent AI systems using LangGraph for workflow automation, complex decision-making, and autonomous problem-solving.
- Build memory-augmented, context-aware AI agents capable of planning, reasoning, and executing tasks across multiple domains.
- Define and implement scalable architectures for LLM-powered agents that seamlessly integrate with enterprise applications.
2. Hands-On Development & Optimization
- Develop and optimize agent orchestration workflows using LangGraph, ensuring high performance, modularity, and scalability.
- Implement knowledge graphs, vector databases (Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) techniques for enhanced agent reasoning.
- Apply reinforcement learning (RLHF/RLAIF) methodologies to fine-tune AI agents for improved decision-making.
3. Driving AI Innovation & Research
- Lead cutting-edge AI research in Agentic AI, LangGraph, LLM orchestration, and self-improving AI agents.
- Stay ahead of advancements in multi-agent systems, AI planning, and goal-directed behavior, applying best practices to enterprise AI solutions.
- Prototype and experiment with self-learning AI agents, enabling autonomous adaptation based on real-time feedback loops.
4. AI Strategy & Business Impact
- Translate Agentic AI capabilities into enterprise solutions, driving automation, operational efficiency, and cost savings.
- Lead Agentic AI proof-of-concept (PoC) projects that demonstrate tangible business impact and scale successful prototypes into production.
5. Mentorship & Capability Building
- Lead and mentor a team of AI Engineers and Data Scientists, fostering deep technical expertise in LangGraph and multi-agent architectures.
- Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be working as a Mid-Level Full Stack Developer at Soothsayer Analytics in Hyderabad. Your main responsibility will be to contribute to the development of Generative AI applications using Azure and OpenAI technologies. This role requires you to work on both the front-end and back-end components of AI applications, leveraging React and Python, and integrating Azure OpenAI models, Bing Search, and Azure ML Studio into custom-built applications.

Your key responsibilities include developing and maintaining front-end and back-end components of AI applications, integrating Azure OpenAI models and Azure services, working with databases such as Cosmos DB, SQL Server, and Weaviate, developing secure and scalable applications using VNet and Private Endpoint configurations, implementing APIs for interaction with large language models and document processing systems, collaborating with AI engineers to design and deploy Generative AI solutions, and implementing CI/CD automation using Azure DevOps and GitHub.

To be successful in this role, you should have 3+ years of professional experience in full-stack development, strong proficiency in front-end development using React and back-end development with Python, experience with Azure services, familiarity with integrating AI models, experience with cloud security practices, knowledge of containerization and orchestration, and familiarity with CI/CD pipelines and version control. Ideally, you will also have experience with Weaviate, LangChain, Hugging Face, or Neo4j databases.

If you are passionate about developing innovative AI applications and have the required skills and experience, we encourage you to apply for this exciting opportunity at Soothsayer Analytics.
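The Azure OpenAI integration mentioned above typically goes through the openai Python SDK's AzureOpenAI client. A hedged sketch; the environment variables, API version, and deployment name are assumptions, not the employer's actual configuration:

```python
# Sketch of a chat-completion call against an Azure OpenAI deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # the deployment name configured in Azure, not the model family
    messages=[
        {"role": "system", "content": "You answer questions about uploaded documents."},
        {"role": "user", "content": "Summarize the key risks in contract 42."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```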
Posted 1 month ago
4.0 - 6.0 years
18 - 22 Lacs
Pune
Work from Office
We are looking for a GenAI/ML Engineer to design, develop, and deploy cutting-edge AI/ML models and Generative AI applications. This role involves working on large-scale enterprise use cases, implementing Large Language Models (LLMs), building Agentic AI systems, and developing data ingestion pipelines. The ideal candidate should have hands-on experience with AI/ML development and Generative AI applications, along with a strong foundation in deep learning, NLP, and MLOps practices.

Key Responsibilities
- Design, develop, and deploy AI/ML models and Generative AI applications for various enterprise use cases.
- Implement and integrate Large Language Models (LLMs) using frameworks such as LangChain, LlamaIndex, and RAG pipelines.
- Develop Agentic AI systems capable of multi-step reasoning and autonomous decision-making.
- Create secure and scalable data ingestion pipelines for structured and unstructured data, enabling indexing, vector search, and advanced retrieval techniques.
- Collaborate with cross-functional teams (Data Engineers, Product Managers, Architects) to deploy AI solutions and enhance the AI stack.
- Build CI/CD pipelines for ML/GenAI workflows and support end-to-end MLOps practices.
- Leverage Azure and Databricks for training, serving, and monitoring AI models at scale.

Required Qualifications & Skills (Mandatory)
- 4+ years of hands-on experience in AI/ML development, including Generative AI applications.
- Expertise in RAG, LLMs, and Agentic AI implementations.
- Strong experience with LangChain, LlamaIndex, or similar LLM orchestration frameworks.
- Proficiency in Python and key ML/DL libraries: TensorFlow, PyTorch, Scikit-learn.
- Solid foundation in Deep Learning, Natural Language Processing (NLP), and Transformer-based architectures.
- Experience in building data ingestion, indexing, and retrieval pipelines for real-world enterprise use cases.
- Hands-on experience with Azure cloud services and Databricks.
- Proven track record in designing CI/CD pipelines and using MLOps tools like MLflow, DVC, or Kubeflow.

Soft Skills
- Strong problem-solving and critical thinking ability.
- Excellent communication skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Ability to collaborate effectively in agile, cross-functional teams.
- A growth mindset, eager to explore and learn emerging technologies.

Preferred Qualifications
- Familiarity with vector databases such as FAISS, Pinecone, or Weaviate.
- Experience with AutoGPT, CrewAI, or similar agent frameworks.
- Exposure to Azure OpenAI, Cognitive Search, or Databricks ML tools.
- Understanding of AI security, responsible AI, and model governance.

Role Dimensions
- Design and implement innovative GenAI applications to address complex business problems.
- Work on large-scale, complex AI solutions in collaboration with cross-functional teams.
- Take ownership of the end-to-end AI pipeline, from model development to deployment and monitoring.

Success Measures (KPIs)
- Successful deployment of AI and Generative AI applications.
- Optimization of data pipelines and model performance at scale.
- Contribution to the successful adoption of AI-driven solutions within enterprise use cases.
- Effective collaboration with cross-functional teams, ensuring smooth deployment of AI workflows.

Competency Alignment
- AI/ML Development: Expertise in building and deploying scalable and efficient AI models.
- Generative AI: Strong hands-on experience in Generative AI, LLMs, and RAG frameworks.
- MLOps: Proficiency in designing and maintaining CI/CD pipelines and implementing MLOps practices.
- Cloud Platforms: Experience with Azure and Databricks for AI model training and serving.
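For the MLflow-based MLOps practices listed above, a minimal tracking sketch that logs one evaluation run of a RAG workflow; the experiment name, parameters, and metric values are illustrative placeholders produced by whatever evaluation harness is actually in use:

```python
# Minimal MLflow tracking sketch for a RAG evaluation run.
import mlflow

mlflow.set_experiment("rag-enterprise-search")

with mlflow.start_run(run_name="prompt-v3-faiss"):
    # Parameters describe the configuration under test.
    mlflow.log_param("embedding_model", "all-MiniLM-L6-v2")
    mlflow.log_param("top_k", 5)
    # Metrics come from the evaluation harness (values here are placeholders).
    mlflow.log_metric("answer_relevance", 0.82)
    mlflow.log_metric("retrieval_recall_at_5", 0.91)
    # Store the prompt template alongside the run for reproducibility.
    mlflow.log_dict({"prompt_template": "Answer using only the context..."}, "prompt.json")
```

Runs logged this way can then be compared in the MLflow UI or promoted through a CI/CD pipeline.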
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Position: Senior Software Engineer - AI/ML Backend Developer
Experience: 4-6 years
Category: Software Development / Engineering
Location: Bangalore/Hyderabad/Chennai/Pune/Mumbai
Shift Timing: General Shift
Position ID: J0725-0150
Employment Type: Full Time
Education Qualification: Bachelor's degree or higher in computer science or a related field, with a minimum of 4 years of relevant experience.

We are seeking an experienced AI/ML Backend Developer to join our dynamic technology team. The ideal candidate will have a strong background in developing and deploying machine learning models, implementing AI algorithms, and managing backend systems and integrations. You will play a key role in shaping the future of our technology by integrating cutting-edge AI/ML techniques into scalable backend solutions.

Your future duties and responsibilities
- Develop, optimize, and maintain backend services for AI/ML applications.
- Implement and deploy machine learning models to production environments.
- Collaborate closely with data scientists and frontend engineers to ensure seamless integration of backend APIs and services.
- Monitor and improve the performance, reliability, and scalability of existing AI/ML services.
- Design and implement robust data pipelines and data processing workflows.
- Identify and solve performance bottlenecks and optimize AI/ML algorithms for production.
- Stay current with emerging AI/ML technologies and frameworks to recommend and implement improvements.

Required qualifications to be successful in this role
Must-have skills:
- Python, TensorFlow, PyTorch, scikit-learn
- Machine learning frameworks: TensorFlow, PyTorch, scikit-learn
- Backend development frameworks: Flask, Django, FastAPI
- Cloud technologies: AWS, Azure, Google Cloud Platform (GCP)
- Containerization and orchestration: Docker, Kubernetes
- Data management and pipeline tools: Apache Kafka, Apache Airflow, Spark
- Database technologies: SQL databases (PostgreSQL, MySQL), NoSQL databases (MongoDB, Cassandra)
- Vector databases: Pinecone, Milvus, Weaviate
- Version control: Git
- Continuous Integration/Continuous Deployment (CI/CD) pipelines: Jenkins, GitHub Actions, GitLab CI/CD
- Minimum of 4 years of experience developing backend systems, specifically in AI/ML contexts.
- Proven experience in deploying machine learning models and AI-driven applications in production.
- Solid understanding of machine learning concepts, algorithms, and deep learning techniques.
- Proficiency in writing efficient, maintainable, and scalable backend code.
- Experience working with cloud platforms (AWS, Azure, Google Cloud).
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.

Good-to-have skills:
- Java (preferred), Scala (optional)

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be working as an AI Engineer with expertise in Speech-to-Text and Text Generation to tackle a Conversational AI challenge for a client in EMEA. The project aims to transcribe conversations and utilize generative AI-powered text analytics to enhance engagement strategies and decision-making processes.

Your main responsibilities will include developing Conversational AI and call transcription solutions, creating NLP and Generative AI applications, performing sentiment analysis and decision-support tasks, and handling AI deployment and scalability. You will be expected to work on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools.

Key technical skills required for this role include a strong background in Speech-to-Text (ASR), NLP, and Conversational AI, along with hands-on experience in tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google Speech-to-Text, plus Python, PyTorch, TensorFlow, Hugging Face Transformers, LLM fine-tuning, RAG-based architectures, LangChain, and vector databases (FAISS, Pinecone, Weaviate, ChromaDB). Experience deploying AI models using Docker, Kubernetes, FastAPI, or Flask will be essential.

In addition to technical skills, soft skills such as translating AI insights into business impact, problem-solving abilities, and effective communication for collaborating with cross-functional teams will be crucial for success in this role.

Preferred qualifications include experience in healthcare, pharma, or life sciences NLP use cases, a background in knowledge graphs, prompt engineering, and multimodal AI, as well as familiarity with Reinforcement Learning from Human Feedback (RLHF) for enhancing conversation models.
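A minimal sketch of the transcription step named in this posting, using the open-source whisper package; the audio file name and model size are placeholders, and downstream steps (diarization, summarization, sentiment) would consume the timestamped segments:

```python
# Transcribe a call recording with Whisper and print timestamped segments.
import whisper

model = whisper.load_model("base")        # larger models trade speed for accuracy
result = model.transcribe("call.wav")     # optionally pass language="en"

print(result["text"])                     # full transcript
for seg in result["segments"]:            # segments feed downstream NLP steps
    print(f'[{seg["start"]:.1f}s - {seg["end"]:.1f}s] {seg["text"]}')
```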
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
We are looking for a highly motivated Mid-Level AI Engineer to join our growing AI team. Your main responsibility will be to develop intelligent applications using Python, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) systems. Working closely with data scientists, backend engineers, and product teams, you will build and deploy AI-powered solutions that provide real-world value.

Your key responsibilities will include designing, developing, and optimizing applications utilizing LLMs such as GPT, LLaMA, and Claude. You will also be tasked with implementing RAG pipelines to improve LLM performance using domain-specific knowledge bases and search tools. Developing and maintaining robust Python codebases for AI-driven solutions will be a crucial part of your role. Additionally, integrating vector databases like Pinecone, Weaviate, and FAISS, as well as embedding models for information retrieval, will be part of your daily tasks. You will work with APIs, frameworks like LangChain and Haystack, and various tools to create scalable AI workflows. Collaboration with product and design teams to define AI use cases and deliver impactful features will also be a significant aspect of your job. Conducting experiments to assess model performance, retrieval relevance, and system latency will be essential for continuous improvement, and staying up to date with the latest research and advancements in LLMs, RAG, and AI infrastructure is crucial for this role.

To be successful in this position, you should have 3-5 years of experience in software engineering or AI/ML engineering, with strong proficiency in Python. Experience working with LLMs such as OpenAI models and Hugging Face Transformers is required, along with hands-on experience in RAG architecture and vector-based retrieval techniques. Familiarity with embedding models like SentenceTransformers and OpenAI embeddings is also necessary, as is knowledge of API design, deployment, performance optimization, version control (e.g., Git), containerization (e.g., Docker), and cloud platforms (e.g., AWS, GCP, Azure).

Preferred qualifications include experience with LangChain, Haystack, or similar LLM orchestration frameworks. An understanding of NLP evaluation metrics, prompt engineering best practices, knowledge graphs, semantic search, and document parsing pipelines will be beneficial. Experience deploying models in production, monitoring system performance, and contributing to open-source AI/ML projects is considered advantageous for this role.
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Senior QA Engineer at our company, you will lead quality assurance for Generative AI (GenAI) solutions within our Digital Twin platform. Your role will focus on the evaluation, reliability, and guardrails of AI-powered systems in production, going beyond traditional QA practices.

Your responsibilities will include designing and implementing end-to-end QA strategies for applications using Node.js integrated with LLMs, RAG, and Agentic AI workflows. You will establish benchmarks and quality metrics for GenAI components, develop evaluation datasets for LLM behavior validation, and conduct data quality testing for RAG databases. You will also perform A/B testing, define testing methodologies, collaborate with developers and AI engineers, build QA automation, and lead internal capability development by mentoring QA peers on GenAI testing practices.

To be successful in this role, you should have at least 6 years of experience in software quality assurance, with a minimum of 3 years of experience in GenAI or LLM-based systems. You should possess a deep understanding of GenAI quality dimensions, experience creating and maintaining LLM evaluation datasets, and familiarity with testing retrieval pipelines and RAG architectures.

Preferred skills include experience with GenAI tools/platforms, exposure to evaluating LLMs in production settings, familiarity with prompt tuning and few-shot learning in LLMs, and basic scripting knowledge in Python, JavaScript, or TypeScript.

If you are a passionate and forward-thinking QA Engineer with a structured QA discipline, hands-on experience in GenAI systems, and a strong sense of ownership, we encourage you to apply for this high-impact role within our innovative team.
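LLM evaluation datasets like those described above are often exercised by a small harness that runs the system under test over golden examples and scores the answers. A hedged Python sketch; answer_question() stands in for the pipeline being tested, and the dataset rows and keyword metric are illustrative assumptions:

```python
# Tiny evaluation-harness sketch: score keyword coverage over a golden set.
eval_set = [
    {"question": "What is the notice period for termination?",
     "must_contain": ["30 days"]},
    {"question": "Where are disputes resolved?",
     "must_contain": ["arbitration", "Mumbai"]},
]

def answer_question(question: str) -> str:
    # Placeholder for the RAG/LLM pipeline under test.
    return "Termination requires 30 days written notice."

def keyword_coverage(answer: str, expected: list[str]) -> float:
    # Fraction of expected keywords present in the answer (case-insensitive).
    hits = sum(1 for kw in expected if kw.lower() in answer.lower())
    return hits / len(expected)

scores = [keyword_coverage(answer_question(r["question"]), r["must_contain"])
          for r in eval_set]
print(f"mean keyword coverage: {sum(scores) / len(scores):.2f}")
```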
Posted 2 months ago
2.0 - 4.0 years
12 - 15 Lacs
Pune
Work from Office
Lead and scale Django backend features, mentor 2 juniors, manage deployments, and ensure best practices. Expert in Django, PostgreSQL, Celery, Redis, Docker, CI/CD, and vector DBs. Own architecture, code quality, and production stability.
Posted 2 months ago