
181 Pinecone Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Join us at Barclays in the role of Strategic Adoption AI Engineer, where you will be tasked with enhancing existing processes, reporting, and controls to ensure flawless execution of BAU. Your responsibilities will include driving efficiencies, implementing process improvements, and standardizing processes across SBUs wherever feasible. At Barclays, we not only anticipate the future but also actively participate in creating it.

To excel in this role, you should possess the following skills:
- Proficient programming abilities in Python and hands-on experience with ML libraries such as scikit-learn, TensorFlow, and PyTorch.
- Familiarity with automation tools like Jenkins, GitHub Actions, or GitLab CI/CD for streamlining ML pipelines.
- Strong knowledge of Docker and Kubernetes for facilitating scalable deployments.
- Extensive experience with AWS services such as SageMaker, Bedrock, Lambda, CloudFormation, Step Functions, S3, and IAM.
- Experience managing infrastructure for training and inference using AWS S3, EC2, EKS, and Step Functions.
- Expertise in Infrastructure as Code tools such as Terraform and AWS CDK.
- Familiarity with model lifecycle management tools such as MLflow and SageMaker Model Registry.
- Solid understanding of applying DevOps principles to ML workflows.

Additionally, valuable skills may include:
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Knowledge of data engineering tools like Apache Airflow, Kafka, and Spark.
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Understanding of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector databases such as FAISS and Pinecone, and retrieval-augmented generation (RAG) pipelines.

Your assessment may focus on critical skills essential for success in this role, including risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology expertise. This position is located at our Pune office.

**Purpose of the role:** To design, develop, and enhance software solutions using various engineering methodologies to provide business, platform, and technology capabilities for our customers and colleagues.

**Accountabilities:**
- Develop and deliver high-quality software solutions using industry-aligned programming languages, frameworks, and tools. Ensure scalability, maintainability, and performance optimization of the code.
- Collaborate with product managers, designers, and engineers to define software requirements, devise solution strategies, and integrate them seamlessly with business objectives.
- Engage in peer collaboration, participate in code reviews, and promote a culture of code quality and knowledge sharing.
- Stay updated on industry technology trends, contribute to organizational technology communities, and foster technical excellence and growth.
- Adhere to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implement effective unit testing practices to ensure proper code design, readability, and reliability.

**Assistant Vice President Expectations:**
- Provide advice and influence decision-making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions and business divisions.
- Lead a team in performing complex tasks, using professional knowledge and skills to impact the entire business function. Set objectives, coach employees, appraise performance, and determine reward outcomes.
- Demonstrate clear leadership behaviours to create an environment for colleagues to thrive and deliver excellent results. The four LEAD behaviours are: Listen and be authentic, Energise and inspire, Align across the enterprise, Develop others.
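For illustration only: a minimal sketch of the model lifecycle tooling this posting names (scikit-learn plus MLflow tracking and registry). The experiment and registered-model names are hypothetical placeholders, and a real pipeline would use its own data and infrastructure.

```python
# Hypothetical sketch: train a small scikit-learn model and record it in MLflow,
# the kind of model lifecycle management tool the posting mentions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("strategic-adoption-demo")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("accuracy", accuracy)
    # Registering the model makes it visible in the MLflow Model Registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="adoption-classifier")
```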

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

hyderabad, telangana

On-site

We are seeking a highly experienced Lead Backend Engineer with a strong background in Node.js and deep expertise in Generative AI, LLMs (Large Language Models), and Chatbot development. You will have a significant role in the design, development, and scaling of backend systems that support AI-driven applications and conversational interfaces.

Your responsibilities will include designing and implementing scalable backend services and APIs using Node.js and modern frameworks such as Express and NestJS. You will lead the architecture and integration of LLMs (OpenAI, Anthropic, open-source models) into production applications, focusing on prompt engineering to fine-tune LLM outputs for specific business tasks. Additionally, you will be responsible for building and optimizing chatbot platforms and conversational AI pipelines. Collaboration with AI/ML engineers, frontend developers, product managers, and designers to deliver robust AI-driven features is a key aspect of this role. You will guide and mentor a team of engineers through best practices in backend engineering, code quality, and system design. Implementing observability, performance tuning, and API reliability measures will also be part of your responsibilities. Staying up to date with advancements in GenAI, NLP, and emerging LLM capabilities is essential.

To be successful in this role, you should have 7-10 years of backend development experience with a strong foundation in Node.js. Proven experience in building and deploying applications using LLMs and Generative AI (OpenAI, Claude, Llama, etc.) is required. A solid understanding of prompt engineering, prompt tuning, and context management for LLMs is crucial. Experience with chatbot frameworks such as Dialogflow, Rasa, Botpress, or custom solutions is necessary. Strong knowledge of RESTful APIs, WebSockets, and asynchronous programming is expected. Familiarity with vector databases (Pinecone, Weaviate, FAISS) and retrieval-augmented generation (RAG) is a plus. Proficiency in working with cloud platforms (AWS/GCP/Azure), a strong understanding of system design, architecture patterns, and scalability principles, as well as excellent problem-solving, communication, and leadership skills are important for this role.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

chandigarh

On-site

As an experienced AI Software Engineer, you will be responsible for designing, developing, and deploying Agentic AI applications and advanced software solutions leveraging Large Language Models (LLMs) and modern AI tools. Your primary focus will be on building autonomous AI agents, integrating them with business systems, and delivering production-grade applications that effectively solve complex problems.

Your key responsibilities will include designing and developing agentic AI systems capable of autonomous reasoning, planning, and execution. You will also integrate LLMs (such as OpenAI, Anthropic, Mistral, LLaMA, etc.) into scalable software applications, and build and optimize multi-agent workflows using frameworks like LangChain, AutoGen, CrewAI, or custom solutions. Additionally, you will implement vector databases (Pinecone, Weaviate, FAISS, Milvus) for semantic search and retrieval-augmented generation (RAG), fine-tune LLMs or use instruction tuning and prompt engineering for domain-specific tasks, develop APIs, microservices, and backend logic to support AI applications, collaborate with product teams to identify opportunities for AI automation, and deploy applications to cloud platforms (AWS/GCP/Azure) with a focus on security, scalability, and reliability. Monitoring, testing, and improving model performance through real-world feedback loops will also be part of your responsibilities.

To qualify for this role, you should have 2+ years of experience in software development using Python, Node.js, or similar technologies, along with 1+ years of hands-on experience in AI/ML development, specifically with LLMs. Additionally, you should have a minimum of 1 year of experience in building autonomous AI agents or agent-based applications, a strong understanding of AI orchestration frameworks like LangChain, AutoGen, CrewAI, etc., experience with vector databases and embedding models, and cloud deployment experience (AWS Lambda, EC2, ECS, GCP Cloud Run, Azure Functions). Strong problem-solving skills, the ability to work independently, and an ownership mindset are also required.

Preferred skills for this role include experience with multi-modal AI (text, image, audio), knowledge of data pipelines, ETL, and real-time event-driven architectures, a background in NLP, transformers, and deep learning frameworks (PyTorch, TensorFlow), familiarity with DevOps, CI/CD pipelines, and container orchestration (Docker, Kubernetes), and contributions to open-source AI projects or publications in AI/ML.

In return for your expertise, we offer a competitive salary, performance-based bonuses, flexible working hours, remote work opportunities, exposure to cutting-edge AI technologies and tools, and the opportunity to lead innovative AI projects with real-world impact. This is a full-time, permanent position with benefits including Provident Fund. If you meet the required qualifications and have the desired skills, we look forward to welcoming you to our team on the expected start date of 19/08/2025.
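A minimal sketch of the semantic retrieval step behind the RAG work this posting describes, using sentence-transformers embeddings with a FAISS index. The documents and query are toy placeholders; a production agent would add chunking, metadata filtering, and an LLM generation step on top.

```python
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices above 50,000 INR require director approval.",
    "Support tickets are triaged by an autonomous agent before routing.",
    "Vector databases store embeddings for semantic search.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(["Who approves large invoices?"], normalize_embeddings=True)
scores, ids = index.search(query, k=2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```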

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

At NiCE, we believe in pushing boundaries and challenging ourselves constantly. We are a team of ambitious individuals who are dedicated to being game changers and always strive to emerge victorious. If you are someone who shares our passion for excellence, we have an exciting career opportunity that will ignite a spark within you.

The role revolves around the development of the next-generation advanced analytical cloud platform within Actimize's AI and Analytics Team. This platform aims to leverage data to enhance the accuracy of our clients' Financial Crime programs. As part of the PaaS/SaaS development group, your responsibilities will include crafting this platform for Actimize's cloud-based solutions and working with cutting-edge cloud technologies.

NICE Actimize stands as the leading provider of financial crime, risk, and compliance solutions for financial institutions globally. We value the contributions of every employee as crucial to our company's growth and triumph. To attract top talent worldwide, we offer a dynamic work environment, competitive compensation, benefits, and promising career prospects. Join us to share, learn, and grow in a challenging yet enjoyable setting within a rapidly expanding and esteemed organization.

The primary objective of this role is to develop and execute advanced analytics projects comprehensively, covering aspects such as data collection, preprocessing, model development, evaluation, and deployment. You will be tasked with designing and implementing predictive and generative models to derive actionable insights from complex datasets. Your expertise in statistical techniques and quantitative analysis will be pivotal in identifying trends, patterns, and correlations in the data. Collaborating with Product, Engineering, and domain SMEs, you will translate business issues into analytical solutions. Moreover, mentoring junior data scientists and advocating best practices in code quality, experimentation, and documentation is an integral part of this role.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, with a preference for an advanced degree (Master's or Ph.D.). You must possess 2-4 years of hands-on experience in data science and machine learning, including a minimum of 2 years in Generative AI development. Proficiency in programming languages like Python or R, along with experience in data manipulation and analysis libraries, is essential. A strong grasp of machine learning techniques, algorithms, LLMs, NLP techniques, and evaluation methods for generative outputs is required. Your problem-solving skills, ability to work independently and collaboratively, excellent communication, and leadership qualities are key attributes we seek. Preferred qualifications include prior experience in finance or banking industries, familiarity with cloud computing platforms, knowledge of data visualization tools, and a track record of contributions to the data science community. Hands-on experience with vector databases and embedding techniques would be advantageous.

Join NiCE as we disrupt the market with our innovative solutions and global presence. Be part of a team that thrives in a fast-paced, collaborative, and creative environment, where learning and growth opportunities are endless. If you are passionate, innovative, and eager to excel, you might just be the perfect addition to our team.

Experience NiCE-FLEX, our hybrid work model that offers maximum flexibility: 2 days in the office and 3 days of remote work each week. Office days focus on face-to-face interactions, fostering teamwork, collaborative thinking, and a vibrant atmosphere that sparks innovation and new ideas. If you are ready to embark on a rewarding journey with NiCE, apply now and seize the opportunity to work with the best of the best in a company that leads the market in AI, cloud, and digital domains.

Requisition ID: 8296
Reporting into: Tech Manager
Role Type: Data Scientist

About NiCE: NICE Ltd. (NASDAQ: NICE) is a global leader in software products, serving over 25,000 businesses worldwide, including 85 of the Fortune 100 companies. Our solutions deliver exceptional customer experiences, combat financial crime, and ensure public safety. NiCE software manages more than 120 million customer interactions and monitors over 3 billion financial transactions daily. Renowned for our innovation in AI, cloud, and digital technologies, NiCE has a workforce of over 8,500 employees across 30+ countries and is consistently acknowledged as the market leader in its domains.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

navi mumbai, maharashtra

On-site

You will be joining a fast-growing enterprise AI & data science consultancy that caters to global clients in finance, healthcare, and enterprise software sectors. Your primary responsibility as a Senior LLM Engineer will involve designing, fine-tuning, and operationalizing large language models for real-world applications using PyTorch and Hugging Face tooling. You will also be tasked with architecting and implementing RAG pipelines, building scalable inference services and APIs, collaborating with data engineers and ML scientists, and establishing engineering best practices.

To excel in this role, you must have at least 4 years of experience in data science/ML engineering with a proven track record of delivering LLM-based solutions to production. Proficiency in Python programming, PyTorch, and Hugging Face Transformers is essential. Experience with RAG implementation, production deployment technologies such as Docker and Kubernetes, and cloud infrastructure (AWS/GCP/Azure) is crucial. Preferred qualifications include familiarity with orchestration frameworks like LangChain/LangGraph, ML observability, model governance, and mitigation techniques for bias.

Besides technical skills, you will benefit from a hybrid working model in India, opportunities to work on cutting-edge GenAI projects, and a collaborative consultancy culture that emphasizes mentorship and career growth. If you are passionate about LLM engineering and seek end-to-end ownership of projects, Zorba Consulting India offers an equal opportunity environment where you can contribute to diverse and inclusive teams. To apply for this role, submit your resume along with a brief overview of a recent LLM project you led, showcasing your expertise in models, infrastructure, and outcomes.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

delhi

On-site

You are a highly skilled GenAI Lead Engineer responsible for designing and implementing advanced frameworks for alternate data analysis in the investment management domain. You will leverage LLM APIs (GPT, LLaMA, etc.), build scalable orchestration pipelines, and architect cloud/private deployments to drive next-generation AI-driven investment insights. Your role includes leading a cross-functional team of Machine Learning Engineers and UI Developers to deliver robust, production-ready solutions.

Your responsibilities will include developing custom frameworks using GPT APIs or LLaMA for alternate data analysis and insights generation. You will optimize LLM usage for investment-specific workflows, including data enrichment, summarization, and predictive analysis. Additionally, you will design and implement document ingestion workflows using tools such as n8n (or similar orchestration frameworks) and build modular pipelines for structured and unstructured data. You are also expected to architect deployment strategies on cloud (AWS, GCP, Azure) or private compute environments (CoreWeave, on-premises GPU clusters), ensuring high availability, scalability, and security in deployed AI systems.

The ideal candidate should have strong proficiency in Python with experience in frameworks such as TensorFlow or PyTorch, plus 2+ years of experience in Generative AI and Large Language Models (LLMs), along with experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS) and document ingestion pipelines. Familiarity with data orchestration tools (e.g., n8n, Airflow, LangChain Agents), cloud deployments, GPU infrastructure (CoreWeave or equivalent), proven leadership skills, and experience managing cross-functional engineering teams are essential. Strong problem-solving skills, the ability to work in fast-paced, data-driven environments, experience with financial or investment data platforms, knowledge of RAG (Retrieval-Augmented Generation) systems, familiarity with frontend integration for AI-powered applications, and exposure to MLOps practices for continuous training and deployment are also required.
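As one illustration of the LLM-driven summarization workflow this posting describes, here is a minimal sketch using the OpenAI Python SDK. The model name and the sample alternate-data text are assumptions; a production pipeline would add retries, cost tracking, and structured output parsing.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

filing_excerpt = (
    "Quarterly shipping volumes at the port rose 12% year over year, "
    "while average container dwell time fell from 6.1 to 4.8 days."
)  # stand-in for ingested alternate data

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute the model your account exposes
    messages=[
        {"role": "system", "content": "You summarize alternate data for investment analysts in two sentences."},
        {"role": "user", "content": filing_excerpt},
    ],
)
print(response.choices[0].message.content)
```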

Posted 2 weeks ago

Apply

0.0 - 5.0 years

2 - 7 Lacs

hyderabad

Work from Office

About the Role: We are seeking a highly motivated and technically proficient AI Agent Development Engineer to join our advanced AI team. In this role, you will design, build, and deploy intelligent, autonomous AI agents that leverage Generative AI, Reinforcement Learning, and Natural Language Processing (NLP) to perform complex, dynamic tasks across diverse domains. You will work at the forefront of AI agent architecture, integrating reasoning, memory, tool usage, and multi-step decision-making capabilities using state-of-the-art libraries and frameworks.

Key Responsibilities:
- Design, develop, and deploy autonomous AI agents for real-world task automation, decision-making, and orchestration.
- Integrate and fine-tune LLMs (Large Language Models) for goal-oriented, tool-using agents.
- Implement memory-augmented reasoning, retrieval-augmented generation (RAG), and multi-agent coordination.
- Work with vector databases to store and retrieve contextual memory and knowledge.
- Optimize agent performance using Reinforcement Learning frameworks and human feedback.
- Collaborate with cross-functional teams to integrate AI agents into applications and services.
- Monitor, test, and maintain AI pipelines in production environments.

Required Skills & Experience:
- Strong programming skills in Python (C# or R is a plus).
- Proven experience in building and deploying AI agents or LLM-based applications.
- Hands-on expertise in:
  - AI Agent Frameworks: LangChain, AutoGPT, BabyAGI, CrewAI, AgentGPT
  - LLMs & Generative AI: OpenAI (GPT-3.5/4), Hugging Face Transformers, Anthropic Claude, Cohere
  - Vector Search & Memory: Pinecone, FAISS, ChromaDB, Weaviate
  - Reinforcement Learning: OpenAI Gym, Stable-Baselines3, Ray RLlib
  - NLP & Language Tools: spaCy, NLTK, TextBlob
  - Modeling & Deployment: TensorFlow, PyTorch, Keras, Scikit-learn, MLflow
  - APIs & UI Frameworks: Flask, FastAPI, Streamlit, Gradio
  - DevOps: Docker, Git, CI/CD workflows, Kubernetes, Terraform

Preferred Qualifications:
- Experience with agent orchestration tools like LangGraph, ReAct, or Toolformer.
- Exposure to real-time or asynchronous multi-agent systems.
- Familiarity with prompt engineering, context management, and RAG pipelines.
- Contributions to open-source agent/LLM projects or research publications in AI/ML.

Why Join Us?
- Be part of an AI-first product company solving real-world automation and decision-making challenges.
- Work with a team of AI innovators and contribute to cutting-edge research and applications.
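To make the agent loop concrete, here is a minimal, framework-free sketch of the plan-act-observe cycle this posting describes. The "LLM" is a stub that requests one tool call and then finishes; a real agent would call an actual model API and parse its output.

```python
import json
from typing import Callable, Dict

def get_weather(city: str) -> str:
    """Stand-in tool; a real agent would call an external API here."""
    return f"Sunny and 31 C in {city}"

TOOLS: Dict[str, Callable[..., str]] = {"get_weather": get_weather}

def stub_llm(context: str) -> str:
    # Placeholder for a real LLM call: decide on a tool, or finish once an
    # observation is already available in the context.
    if "Observation:" in context:
        return json.dumps({"final": "It is sunny and 31 C in Hyderabad."})
    return json.dumps({"tool": "get_weather", "args": {"city": "Hyderabad"}})

def run_agent(goal: str, max_steps: int = 3) -> str:
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = json.loads(stub_llm(context))
        if "final" in decision:
            return decision["final"]
        observation = TOOLS[decision["tool"]](**decision["args"])
        context += f"\nObservation: {observation}"
    return "Step limit reached without a final answer."

print(run_agent("What is the weather in Hyderabad?"))
```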

Posted 3 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

hyderabad, telangana, india

Remote

We're looking for a minimum of 2 years of experience and are currently accepting only India-based applicants.

About Us
We are a Stealth GenAI B2B Startup focused on revolutionizing the construction industry using Artificial Intelligence. Our team is composed of visionaries, researchers, and engineers dedicated to pushing the boundaries of AI to develop transformative solutions. We were founded by a repeat founder, an IIT Madras alumnus who previously built a company and exited successfully to a Fortune 1000 company. We are committed to using technology and innovation to disrupt the status quo and create the future.

About the Role
We're hiring a Founding AI Engineer to lead the design and implementation of our document intelligence systems. You'll be responsible for building scalable AI pipelines that extract structure, meaning, and insights from unstructured engineering design documents. This is a hands-on, high-impact role where you'll help execute the document intelligence roadmap and product direction from the ground up.

Key Responsibilities
- Build and optimize document processing pipelines for unstructured and semi-structured data
- Design and implement intelligent extraction systems using LLMs, OCR, and NLP techniques
- Design and refine pipelines that apply large language models (LLMs) to extract, structure, and validate information from construction documents and free-text project documents
- Prototype, evaluate, and optimize techniques for information extraction, entity linking, data normalization, and confidence scoring
- Apply RAG (retrieval-augmented generation) and embedding-based search across large document sets
- Work with tools such as LangChain, CrewAI, Hugging Face, PyMuPDF, Tesseract, or Amazon Textract
- Integrate and fine-tune LLMs (e.g., OpenAI GPT-4, Claude, Mistral) for domain-specific understanding
- Set up vector search and memory frameworks (e.g., FAISS, Pinecone) for contextual recall
- Deploy and iterate on systems for prompt tuning, relevance evaluation, and performance tracking across varied document types and project datasets
- Collaborate with product and engineering teams to embed document intelligence into core workflows
- Ensure accuracy, scalability, and observability of models in production
- Bridge the gap between research and production, transforming prototypes into scalable, production-ready AI components

Required Skills & Qualifications
- 2+ years of experience in AI/ML with a strong focus on information extraction from PDF documents
- Proven experience working with PDFs, scanned documents, and structured and unstructured formats
- Hands-on with LLMs, embeddings, prompt engineering, OCR, and NLP libraries
- Proficient in Python and frameworks such as PyTorch, Hugging Face Transformers, or LangChain
- Experience building scalable inference pipelines and optimizing LLM performance
- Excellent problem-solving skills and the ability to thrive in fast-paced, high-ownership environments
- Ability to work independently with attention to detail and take full ownership
- Strong communication and collaboration in agile cross-functional teams
- Passion for delivering high-quality, reliable, and scalable solutions

Beyond Technical Skills: What We're Looking For
We're building more than just software. We're building a team that thrives in a fast-paced, high-ownership environment. Here's what we value deeply beyond strong technical capabilities.

Startup Readiness & Ownership
- Bias for action: You're someone who ships fast, tests quickly, and iterates with purpose.
- Comfort with ambiguity: Ability to make decisions with limited information and adapt as things evolve.
- Ownership mindset: You treat the product as your own, not just a list of tickets to complete.
- Resourcefulness: You know when to hack something together to keep moving, and when it's time to build it right.

Product Thinking
- User-Centric Approach: You care about the why behind what you're building and understand the user's perspective.
- Collaborative in Shaping Product: You're comfortable challenging and refining product specs instead of just executing them.
- Strategic Trade-off Awareness: You can navigate choices (speed vs. scalability, UX vs. tech debt, MVP vs. V1) with clarity.

Collaboration & Communication
- Cross-Functional Comfort: You work well with product, design, and founders.
- Clear Communicator: You can explain technical concepts in simple terms when needed.
- Feedback Culture Fit: You give and receive feedback without ego.

Growth Potential
- Fast Learner: Startups change, and so do stacks. Willingness to learn is gold.
- Long-Term Mindset: Lots of opportunity to scale.
- Mentorship Readiness: If you can bring others up as the team scales, that's a win.

Startup Cultural Fit
- Mission-Driven: You care deeply about what you're building.
- Flexible Work Style: Especially if remote, please be flexible.
- No big-company baggage: No expectations of layered teams or polished specs. We move fast and build together.

Why Join Our Startup
- Shape the future of construction technology through intelligent automation and smart workflows
- Ownership & Impact: See the results of your work in a fast-paced, high-impact startup environment
- Competitive Package: Market-aligned salary and performance-based incentives
- Remote Flexibility: Hybrid/remote-friendly culture
- Work with Experts: Collaborate with leaders in AI, construction, and cloud-native development

How to Apply
Excited to drive innovation in one of the world's largest industries? Just fill out this quick form https://forms.gle/yCw6coGhjJzTJdZ69 to tell us about your experience and skills - it won't take long! We'll review your info and get in touch if it's a good match.
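A small sketch of the first stage of such a document intelligence pipeline: extracting text from a PDF with PyMuPDF and splitting it into overlapping chunks ready for embedding. The file name is hypothetical, and real construction drawings would usually also need OCR (Tesseract/Textract) for scanned pages.

```python
import fitz  # PyMuPDF

def extract_chunks(pdf_path: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Pull raw text from every page, then cut it into overlapping windows."""
    doc = fitz.open(pdf_path)
    text = "\n".join(page.get_text() for page in doc)
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = extract_chunks("sample_project_spec.pdf")  # hypothetical file name
print(f"{len(chunks)} chunks ready for embedding and retrieval")
```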

Posted 3 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

hyderabad, telangana, india

Remote

Hello Connections! This is a REMOTE position. We are seeking a Staff LLM/RAG Engineer to lead the development and optimization of enterprise-grade retrieval-augmented generation systems. You will architect scalable AI solutions, integrate large language models with advanced retrieval pipelines, and ensure production readiness. This role combines deep technical expertise with the ability to guide teams and deliver results on aggressive timelines.

Key Responsibilities
- Lead RAG Architecture Design: Define and implement best practices for retrieval-augmented generation systems, ensuring reliability, scalability, and low-latency performance.
- Full-Stack AI Development: Build and optimize multi-stage pipelines using LLM orchestration frameworks (LangChain, LangGraph, LlamaIndex, or custom).
- Programming & Integration: Develop services and APIs in Python and Golang to support AI workflows, document ingestion, and retrieval processes.
- Search & Retrieval Optimization: Implement hybrid search, vector embeddings, and semantic ranking strategies to improve contextual accuracy.
- Prompt Engineering: Design and iterate on few-shot, chain-of-thought, and tool-augmented prompts for domain-specific applications.
- Mentorship & Collaboration: Partner with cross-functional teams and guide engineers on RAG and LLM best practices.
- Performance Monitoring: Establish KPIs and evaluation metrics for RAG pipeline quality and model performance.

Qualifications
Must have:
- 8+ years in software engineering or applied AI/ML, with at least 2+ years focused on LLMs and retrieval systems.
- Strong proficiency in Python and Golang or Rust, with experience building high-performance services and APIs.
- Expertise in RAG frameworks (LangChain, LangGraph, LlamaIndex) and embedding models.
- Hands-on experience with vector databases (Databricks Vector Store, Pinecone, Weaviate, Milvus, Chroma).
- Strong understanding of hybrid search (semantic + keyword) and embedding optimization.
- Bachelor's degree required.

Preferred:
- LLM fine-tuning experience (LoRA, PEFT).
- Knowledge graph integration with LLMs.
- Familiarity with cloud ML deployment (AWS preferred, Databricks, Azure).
- Master's or PhD degree in CS.

Soft Skills
- Strong problem-solving and decision-making skills under tight timelines.
- Excellent communication for cross-functional collaboration.
- Ability to work independently while aligning with strategic goals.
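A toy sketch of the hybrid (keyword + semantic) search idea this posting calls out, blending BM25 scores with cosine similarity over sentence-transformer embeddings. The corpus, query, and the equal 0.5/0.5 blend weights are all assumptions to be tuned per dataset.

```python
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

docs = [
    "Invoice approval workflow for vendor payments",
    "Retrieval-augmented generation grounds LLM answers in retrieved context",
    "Kubernetes deployment guide for inference services",
]
query = "how does RAG keep LLM answers grounded?"

# Keyword signal: BM25 over simple whitespace tokens.
bm25 = BM25Okapi([d.lower().split() for d in docs])
keyword_scores = np.array(bm25.get_scores(query.lower().split()))

# Semantic signal: cosine similarity of normalized embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = model.encode(docs, normalize_embeddings=True)
query_embedding = model.encode([query], normalize_embeddings=True)[0]
semantic_scores = doc_embeddings @ query_embedding

# Blend both signals; equal weights are a starting assumption, not a recommendation.
hybrid = 0.5 * (keyword_scores / (keyword_scores.max() + 1e-9)) + 0.5 * semantic_scores
print("Best match:", docs[int(hybrid.argmax())])
```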

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Greetings from TAO Digital! We are currently looking for an AI/ML Engineer for our Bangalore location. Interested candidates can share their updated CV to [HIDDEN TEXT].

- 5+ years of professional experience in full-stack or backend development, with at least 5 years of recent hands-on experience working with Python and AI/ML-powered applications.
- Strong understanding of ML/NLP concepts, especially in areas such as text classification, summarization, and conversational AI.
- Proficiency working with LangChain, AWS Bedrock, OpenAI APIs, Hugging Face, and vector databases (e.g., Pinecone, FAISS, Weaviate).
- Experience with AWS services (especially Lambda, S3, Bedrock, Transcribe) in a SaaS or cloud-native environment.
- Solid experience in integrating AI features with frontend applications (React or similar).
- Ability to design clean, scalable APIs and integrate ML models into production systems.
- Exposure to data pipelines and unstructured text processing is a plus.
- Strong problem-solving, communication, and collaboration skills.
- Bachelor's or Master's degree in Computer Science or a related field.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

pune, maharashtra, india

On-site

Experience: 5-8 years
Work Mode: Pune & Hyderabad
Job Type: Full-time
Mandatory Skills: Python, SQL, ETL, NumPy/Scikit-learn/Pandas, AI/ML, Gen AI, TensorFlow, PyTorch, agentic AI frameworks (Autogen, LangGraph, CrewAI, Agentforce), Machine Learning, Predictive Machine Learning, and LLMs.

Job Description
We are seeking an experienced and driven Senior AI/ML Engineer with 5-8 years of experience in AI/ML, Predictive ML, GenAI, and Agentic AI. The ideal candidate should have a strong background in developing and deploying machine learning models, as well as a passion for innovation and problem-solving.

Required Qualifications & Skills
- Bachelor's or Master's degree in Computer Science, AI/ML, or Data Science.
- 5 to 8 years of overall experience and hands-on experience with the design and implementation of Machine Learning models, Deep Learning models, and LLM models for solving business problems.
- Proven experience working with Generative AI technologies, including prompt engineering, fine-tuning large language models (LLMs), embeddings, vector databases (e.g., FAISS, Pinecone), and Retrieval-Augmented Generation (RAG) systems.
- Expertise in Python (NumPy, Scikit-learn, Pandas), TensorFlow, PyTorch, transformers (e.g., Hugging Face), or MLlib.
- Experience working with agentic AI frameworks like Autogen, LangGraph, CrewAI, Agentforce, etc.
- Expertise in cloud-based data and AI solution design and implementation using GCP/AWS/Azure, including the use of their Gen AI services.
- Good experience in building complex and scalable ML and Gen AI solutions and deploying them into production environments.
- Experience with scripting in SQL, extracting large datasets, and designing ETL flows.
- Excellent problem-solving and analytical skills with the ability to translate business requirements into data science and Gen AI solutions.
- Effective communication skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders.

Key Responsibilities
- As an AI/ML Solution Engineer, build practical, in-depth solutions powered by AI/ML, Gen AI, and Agentic AI to solve customers' business problems.
- Design, develop, and deploy machine learning models and algorithms.
- Conduct research and stay up to date with the latest advancements in AI/ML, GenAI, and Agentic AI.
- Lead a team of junior AI engineers, providing direction and support.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

mumbai, maharashtra, india

On-site

Job Description

Role Summary
We are seeking a Generative AI/ML Engineer to design, build, and deploy intelligent AI solutions leveraging Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), agentic workflows, and edge computing. You will work hands-on with cloud-native AI services, LLMOps pipelines, and enterprise-grade deployment patterns to solve business problems.

Key Responsibilities
- Design, develop, and fine-tune LLM-powered applications for enterprise use cases.
- Evaluate LLM applications and develop observability frameworks.
- Implement RAG pipelines using vector databases, embeddings, and optimized retrieval strategies.
- Build agentic AI workflows with multi-step reasoning, tool calling, and integration with APIs.
- Integrate GenAI solutions into multi-cloud or hybrid cloud environments (AWS, Azure, GCP).
- Develop and optimize edge AI deployments for low-latency use cases.
- Create data strategy, ingestion, transformation, enrichment, validation, and quality checks via pipelines for AI ingestion, preprocessing, and governance.
- Implement AI safety, bias mitigation, and compliance measures.
- Work closely with LLMOps teams to enable continuous integration and deployment of AI models.
- Write well-documented, production-ready code in Python, Node.js, or Rust.
- Benchmark and evaluate model performance, latency, and cost-efficiency.

Required Skills
- Proficient in cloud AI services (e.g., AWS Bedrock/SageMaker, Azure AI Foundry, Google Vertex AI, Anthropic, OpenAI APIs).
- Strong proficiency with Python and LLM frameworks (e.g., PromptFlow, LangGraph, LlamaIndex, Hugging Face, PyTorch, TensorFlow).
- Hands-on experience with vector DBs (e.g., Pinecone, Weaviate, Milvus, FAISS, Azure Cognitive Search).
- Experience building RAG-based and agentic AI solutions.
- Familiarity with edge AI frameworks (e.g., NVIDIA Jetson, AWS IoT Greengrass, Azure Percept).
- Multi-modal AI (text, image, speech, video) experience.
- Strong grasp of APIs, microservices, and event-driven architectures.
- Knowledge of AI governance (data privacy, model explainability, security).
- Experience with containerized deployments (Docker, Kubernetes, serverless AI functions).

Preferred Skills
- Generative agents with memory and planning capabilities.
- Real-time AI streaming with WebSockets or Kafka.
- Prior contributions to open-source GenAI projects.
- Experience building, testing, and deploying various ML models.
- Experience building MCP and A2A protocol integrations.
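As a small illustration of the evaluation and observability work mentioned above, here is a recall@k check for a RAG retriever. The retrieved IDs and gold labels are stubs standing in for the output of a real retriever and a hand-labeled evaluation set.

```python
def recall_at_k(retrieved: list[list[str]], relevant: list[set[str]], k: int = 3) -> float:
    """Fraction of queries for which at least one relevant document appears in the top k."""
    hits = sum(1 for ret, rel in zip(retrieved, relevant) if rel & set(ret[:k]))
    return hits / len(relevant)

retrieved_ids = [["d3", "d7", "d1"], ["d2", "d9", "d4"]]  # per-query retriever output (stub)
relevant_ids = [{"d1"}, {"d5"}]                           # per-query gold labels (stub)
print(f"recall@3 = {recall_at_k(retrieved_ids, relevant_ids):.2f}")
```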

Posted 3 weeks ago

Apply

3.0 - 6.0 years

6 - 12 Lacs

gurugram

Hybrid

Build Gen AI applications using LLMs (OpenAI, LLaMA, Falcon). Develop retrieval-augmented generation (RAG) pipelines with vector databases such as ChromaDB, Pinecone, and LanceDB. Implement and optimize prompt engineering, embeddings, and semantic search. Fine-tune and adapt pre-trained LLMs.

Required Candidate profile: Strong programming skills in Python; LangChain and Transformers (Hugging Face, BERT, GPT models); vector databases (Chroma, Pinecone, LanceDB, Weaviate, FAISS); deep learning, neural networks, and NLP techniques.
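A minimal ChromaDB sketch of the RAG retrieval step this role involves; the collection name and documents are placeholders, and the in-memory client would be swapped for a persistent one (or Pinecone/LanceDB) in production.

```python
import chromadb

client = chromadb.Client()  # in-memory client for the sketch; persistent clients also exist
collection = client.create_collection(name="demo_docs")  # placeholder collection name

collection.add(
    documents=[
        "LLaMA and Falcon are open-weight large language models.",
        "Pinecone, ChromaDB, and LanceDB store embeddings for semantic search.",
    ],
    ids=["doc1", "doc2"],
)

# Chroma embeds the query with its default embedding function and returns nearest documents.
results = collection.query(query_texts=["Which databases hold embeddings?"], n_results=1)
print(results["documents"][0][0])
```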

Posted 3 weeks ago

Apply

9.0 - 14.0 years

30 - 45 Lacs

noida, south goa, hyderabad

Hybrid

Job Title: GenAI Architect
Experience Required: 10+ years in software architecture or machine learning roles, including at least 2+ years focused on Generative AI or Large Language Models (LLMs).

Role Overview
We are seeking a GenAI Architect to lead the design, development, and deployment of cutting-edge Generative AI solutions. This role requires deep expertise in LLMs, advanced AI architectures, and the ability to build scalable and production-ready systems leveraging the latest in AI technologies.

Key Responsibilities
- Architect and implement GenAI solutions, including LLMs, RAG pipelines, embedding models, and vector search.
- Design and optimize prompt engineering strategies and fine-tuning approaches, and mitigate hallucinations in AI outputs.
- Develop robust AI workflows using Python, LangChain, Hugging Face Transformers, and vector databases such as Pinecone or Weaviate.
- Integrate and leverage GenAI APIs (OpenAI, Azure OpenAI, Anthropic, Google Vertex AI).
- Deploy AI applications in cloud-native environments (AWS, GCP, Azure) using Kubernetes and containerized architectures.
- Lead model evaluation and performance tuning, and ensure scalable architecture for enterprise use cases.
- Collaborate with business stakeholders to translate complex AI concepts into actionable solutions and value-driven outcomes.

Required Skills & Qualifications
- 10+ years of experience in software architecture or machine learning, with 2+ years in GenAI/LLM-focused roles.
- Strong expertise in: LLMs and RAG pipelines; prompt engineering, fine-tuning, and hallucination mitigation; LangChain, Hugging Face Transformers, Python; vector databases (Pinecone, Weaviate, etc.).
- Proficiency in deploying solutions on AWS/GCP/Azure with Kubernetes.
- Experience with model evaluation frameworks and industry best practices.
- Ability to communicate AI strategies and outcomes to non-technical stakeholders.

Preferred Qualifications
- Hands-on experience with LLMOps tools (Weights & Biases, MLflow, TruLens, PromptLayer).
- Knowledge of domain-specific AI applications (real estate, finance, risk modeling).
- Exposure to multi-modal GenAI (text, image, document, voice).
- Contributions to open-source projects, research publications, or patents in AI/ML.
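One common tactic for the hallucination mitigation this role highlights is to ground generation strictly in retrieved context; the sketch below assembles such a prompt in plain Python. The question and chunk are invented examples, and the exact wording would be iterated on against an evaluation set.

```python
def build_grounded_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Instruct the model to answer only from numbered context, citing chunk numbers."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks))
    return (
        "Answer the question using ONLY the numbered context below and cite the chunk "
        "numbers you used. If the context does not contain the answer, reply exactly: "
        "'Not found in the provided documents.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What is the notice period for contract termination?",                    # invented question
    ["Clause 12.3: Either party may terminate with 60 days' written notice."],  # invented chunk
)
print(prompt)
```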

Posted 3 weeks ago

Apply

9.0 - 14.0 years

30 - 45 Lacs

pune, bengaluru, mumbai (all areas)

Hybrid

Job Title: GenAI Architect
Experience Required: 10+ years in software architecture or machine learning roles, including at least 2+ years focused on Generative AI or Large Language Models (LLMs).

Role Overview
We are seeking a GenAI Architect to lead the design, development, and deployment of cutting-edge Generative AI solutions. This role requires deep expertise in LLMs, advanced AI architectures, and the ability to build scalable and production-ready systems leveraging the latest in AI technologies.

Key Responsibilities
- Architect and implement GenAI solutions, including LLMs, RAG pipelines, embedding models, and vector search.
- Design and optimize prompt engineering strategies and fine-tuning approaches, and mitigate hallucinations in AI outputs.
- Develop robust AI workflows using Python, LangChain, Hugging Face Transformers, and vector databases such as Pinecone or Weaviate.
- Integrate and leverage GenAI APIs (OpenAI, Azure OpenAI, Anthropic, Google Vertex AI).
- Deploy AI applications in cloud-native environments (AWS, GCP, Azure) using Kubernetes and containerized architectures.
- Lead model evaluation and performance tuning, and ensure scalable architecture for enterprise use cases.
- Collaborate with business stakeholders to translate complex AI concepts into actionable solutions and value-driven outcomes.

Required Skills & Qualifications
- 10+ years of experience in software architecture or machine learning, with 2+ years in GenAI/LLM-focused roles.
- Strong expertise in: LLMs and RAG pipelines; prompt engineering, fine-tuning, and hallucination mitigation; LangChain, Hugging Face Transformers, Python; vector databases (Pinecone, Weaviate, etc.).
- Proficiency in deploying solutions on AWS/GCP/Azure with Kubernetes.
- Experience with model evaluation frameworks and industry best practices.
- Ability to communicate AI strategies and outcomes to non-technical stakeholders.

Preferred Qualifications
- Hands-on experience with LLMOps tools (Weights & Biases, MLflow, TruLens, PromptLayer).
- Knowledge of domain-specific AI applications (real estate, finance, risk modeling).
- Exposure to multi-modal GenAI (text, image, document, voice).
- Contributions to open-source projects, research publications, or patents in AI/ML.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

bengaluru, karnataka, india

Remote

Job Title: Sr. Data Scientist - Gen AI
Location: Bangalore
Experience: 4+ years

About the Role: We are looking for an experienced AI/ML Engineer with a strong background in Generative AI (GenAI), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) implementation. You will play a key role in designing, developing, and deploying cutting-edge AI solutions that leverage state-of-the-art machine learning techniques to solve complex problems.

Key Responsibilities:
- Design, develop, and optimize LLM-based applications with a focus on RAG pipelines, fine-tuning, and model deployment.
- Implement GenAI solutions for text generation, summarization, code generation, and other NLP tasks.
- Develop and optimize ML models, leveraging classical and deep learning techniques.
- Build and integrate retrieval systems using vector databases like FAISS, ChromaDB, Pinecone, or Weaviate.
- Optimize and fine-tune LLMs (e.g., OpenAI GPT, LLaMA, Mistral, Falcon) for domain-specific use cases.
- Develop data pipelines for training, validation, and inference.
- Work on scalable AI solutions, ensuring performance and cost efficiency in deployment.
- Collaborate with cross-functional teams to integrate AI models into production applications.

Required Skills & Qualifications:
- 4+ years of experience in AI/ML development.
- Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Hugging Face Transformers.
- Hands-on experience with LLMs, GenAI, and RAG architectures.
- Experience working with vector databases (e.g., FAISS, Pinecone, ChromaDB).
- Knowledge of ML algorithms, deep learning architectures, and NLP techniques.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and AI/ML model deployment.
- Experience with LangChain or LlamaIndex for LLM applications is a plus.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

Perks & Benefits:
- Health and wellness: Healthcare policy covering your family and parents.
- Food: Enjoy a buffet lunch at the office every day (Bangalore).
- Hybrid work policy: Beat the everyday traffic commute with a 3-day in-office and 2-day WFH policy, based on your roles and responsibilities.
- Professional development: Learn and propel your career with workshops, funded online courses, and other learning opportunities based on individual needs.
- Rewards and recognition: Programs in place to celebrate your achievements and contributions.

To find out more about us, head over to our website and LinkedIn.
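A minimal sketch of the summarization task listed above, using the Hugging Face Transformers pipeline API. The model choice and input text are assumptions; a GenAI deployment would more likely wrap an LLM endpoint with prompt templates and evaluation.

```python
from transformers import pipeline

# Distilled BART summarizer chosen only because it is small; any suitable model works.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "The quarterly review covered model drift across three credit-risk models, "
    "retraining schedules, a migration of feature pipelines to the new platform, "
    "and an initiative to add RAG-based document search for analysts."
)
summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```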

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

bengaluru, karnataka, india

Remote

Job Title: Sr. Data Scientist - Gen AI
Location: Bangalore
Experience: 4+ years
No. of openings: 2

About The Role
We are looking for an experienced AI/ML Engineer with a strong background in Generative AI (GenAI), Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) implementation. You will play a key role in designing, developing, and deploying cutting-edge AI solutions that leverage state-of-the-art machine learning techniques to solve complex problems.

Key Responsibilities
- Design, develop, and optimize LLM-based applications with a focus on RAG pipelines, fine-tuning, and model deployment.
- Implement GenAI solutions for text generation, summarization, code generation, and other NLP tasks.
- Develop and optimize ML models, leveraging classical and deep learning techniques.
- Build and integrate retrieval systems using vector databases like FAISS, ChromaDB, Pinecone, or Weaviate.
- Optimize and fine-tune LLMs (e.g., OpenAI GPT, LLaMA, Mistral, Falcon) for domain-specific use cases.
- Develop data pipelines for training, validation, and inference.
- Work on scalable AI solutions, ensuring performance and cost efficiency in deployment.
- Collaborate with cross-functional teams to integrate AI models into production applications.

Required Skills & Qualifications
- 4+ years of experience in AI/ML development.
- Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Hugging Face Transformers.
- Hands-on experience with LLMs, GenAI, and RAG architectures.
- Experience working with vector databases (e.g., FAISS, Pinecone, ChromaDB).
- Knowledge of ML algorithms, deep learning architectures, and NLP techniques.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and AI/ML model deployment.
- Experience with LangChain or LlamaIndex for LLM applications is a plus.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

Perks & Benefits
- Health and wellness: Healthcare policy covering your family and parents.
- Food: Enjoy a buffet lunch at the office every day (Bangalore).
- Hybrid work policy: Beat the everyday traffic commute with a 3-day in-office and 2-day WFH policy, based on your roles and responsibilities.
- Professional development: Learn and propel your career with workshops, funded online courses, and other learning opportunities based on individual needs.
- Rewards and recognition: Programs in place to celebrate your achievements and contributions.

To find out more about us, head over to our website and LinkedIn.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About the Role
We are seeking a skilled Full-Stack AI Engineer to design, build, and deploy advanced AI/ML solutions. You will work on Retrieval-Augmented Generation (RAG), semantic search optimization, and production-scale ML systems.

Core Skills
- RAG implementation
- Vector database expertise
- Semantic search optimization
- Building production ML systems

Technical Requirements
- Experience with LangChain/LlamaIndex
- Hands-on with Pinecone/Weaviate/Qdrant
- Knowledge of OpenAI, Sentence Transformers
- Strong coding in Python, NumPy
- Familiarity with FAISS and prompt engineering
- Document chunking and Redis/caching
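A small sketch combining two items from the requirements list: document chunk embedding with a Redis cache in front of it. The Redis connection details are assumptions, and the cache key is simply a hash of the text.

```python
import hashlib
import json

import redis
from sentence_transformers import SentenceTransformer

cache = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
model = SentenceTransformer("all-MiniLM-L6-v2")

def cached_embedding(text: str) -> list[float]:
    """Return the embedding for `text`, reusing a cached copy when available."""
    key = "emb:" + hashlib.sha256(text.encode("utf-8")).hexdigest()
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    vector = model.encode(text).tolist()
    cache.set(key, json.dumps(vector))
    return vector

print(len(cached_embedding("Chunked paragraph from a product manual.")))
```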

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Primary Role Title: Senior LLM Engineer

About The Opportunity
We are a fast-growing enterprise AI & data science consultancy serving global clients across finance, healthcare, and enterprise software. The team builds production-grade LLM-driven products (RAG systems, intelligent assistants, and custom inference pipelines) that deliver measurable business outcomes.

Location: India (Hybrid)

Role & Responsibilities
- Design, fine-tune, and productionize large language models (instruction tuning, LoRA/PEFT) using PyTorch and Hugging Face tooling for real-world applications.
- Architect and implement RAG pipelines: embeddings generation, chunking strategies, vector search integration (FAISS/Pinecone/Milvus), and relevance tuning for high-quality retrieval.
- Build scalable inference services and APIs (FastAPI/Falcon), containerize (Docker), and deploy to cloud/Kubernetes with low-latency and cost-optimized inference (quantization, ONNX/Triton).
- Collaborate with data engineers and ML scientists to productionize data pipelines and automate retraining, monitoring, evaluation, and drift detection.
- Drive prompt engineering, evaluation frameworks, and safety/guardrail implementation to ensure reliable, explainable LLM behavior in production.
- Establish engineering best practices (Git workflows, CI/CD, unit tests, observability) and mentor junior engineers to raise team delivery standards.

Skills & Qualifications
Must-Have
- 4+ years in data science/ML engineering with demonstrable experience building and shipping LLM-based solutions to production.
- Strong Python engineering background and hands-on experience with PyTorch and Hugging Face Transformers (fine-tuning, tokenizers, model optimization).
- Practical experience implementing RAG: embeddings, vector DBs (FAISS/Pinecone/Weaviate/Milvus), chunking, and retrieval tuning.
- Production deployment experience: Docker, Kubernetes, cloud infrastructure (AWS/GCP/Azure), and inference optimization (quantization, batching, ONNX/Triton).
Preferred
- Experience with LangChain/LangGraph or similar orchestration frameworks, and building agentic workflows.
- Familiarity with ML observability, model governance, safety/bias mitigation techniques, and cost/performance trade-offs for production LLMs.

Benefits & Culture Highlights
- Hybrid working model in India with flexible hours, focused on outcomes and work-life balance.
- Opportunity to work on cutting-edge GenAI engagements for enterprise customers and accelerate your LLM engineering career.
- Collaborative consultancy culture with mentorship, a learning stipend, and clear growth paths into technical leadership.

This role is with Zorba Consulting India. If you are an experienced LLM practitioner who enjoys end-to-end ownership, from research experiments to robust production systems, apply with your resume and a short note on a recent LLM project you led (models, infra, and outcomes). Zorba Consulting India is an equal opportunity employer committed to diversity and inclusion.
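A compact sketch of the LoRA/PEFT fine-tuning setup named in the responsibilities, wrapping a small GPT-2-style model with the Hugging Face peft library. The base model and LoRA hyperparameters are illustrative defaults, not a recommended configuration, and the training loop itself is omitted.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "distilgpt2"  # small stand-in; real work would use a larger instruction-tuned base
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
# From here, the wrapped model drops into a standard transformers Trainer loop.
```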

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

india

Remote

Job Opportunity: Artificial Intelligence Developer (Remote | 0-1 Year Experience)
Location: Work from Home
Experience: 0-1 year (full-time internship or junior developer experience required)

About Us
Debales.ai is a fast-growing B2B technology company focused on building intelligent, scalable, and user-centric solutions for businesses across the globe. Our product suite includes custom Shopify applications, browser-based widgets, and npm packages, empowering digital transformation and automation for modern enterprises. We are passionate about innovation, collaboration, and solving real-world problems with cutting-edge technology.

Key Responsibilities:
1. Collaborate with our team of AI experts to develop and deploy AI models within end-to-end AI/ML systems.
2. Assist in analyzing and interpreting complex data sets to drive business insights.
3. Contribute to the design and implementation of machine learning algorithms.
4. Proficiency in programming languages like Python, with experience in frameworks like LangChain.
5. Hands-on experience with LLMs (e.g., OpenAI, Hugging Face, etc.).
6. Strong understanding of retrieval systems and vector databases (e.g., Pinecone, Weaviate).
7. Experience deploying solutions on cloud platforms (AWS, Azure, or Google Cloud).
8. Familiarity with chatbot frameworks and APIs for integration.
9. Experience in building conversational AI solutions for specific industries or applications.
10. Knowledge of prompt engineering and fine-tuning LLMs.
11. Knowledge of n8n and Langflow.
12. Knowledge of, or experience working with, agents and tools, and building multi-agent architectures.

The ideal candidate is a creative problem solver who will work in coordination with cross-functional teams to design, develop, and maintain our next-generation websites and web tools. You must be comfortable working as part of a team while taking the initiative to lead new innovations and projects.

Qualifications
- Bachelor's degree or equivalent experience in Computer Science, with 6 to 12 months of experience.
- Previous experience using HTML, CSS, and JavaScript.
- Proficiency in at least one server-side technology (Java, PHP, NodeJS, Python, Ruby).
- Ability to multi-task, organize, and prioritize work.

Preferred Skills:
- Artificial Intelligence
- Data Science
- Data Structures
- Deep Learning
- Machine Learning
- Natural Language Processing (NLP)
- Neural Networks
- Python

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

mumbai, maharashtra, india

On-site

About the Role
We are seeking an Agentic AI Developer with 3-5 years of total software/AI experience and proven hands-on work in Agentic AI. The ideal candidate has built LLM-powered agents using frameworks like LangChain, AutoGen, CrewAI, or Semantic Kernel, and can design, deploy, and optimize autonomous AI systems for real-world business use cases.

Key Responsibilities
- Architect, build, and deploy LLM-driven agents that can plan, reason, and execute multi-step workflows.
- Work with agent orchestration frameworks (LangChain, AutoGen, CrewAI, Semantic Kernel, Haystack, etc.).
- Develop and maintain tools, APIs, and connectors for extending agent capabilities.
- Implement RAG pipelines with vector databases (Pinecone, Weaviate, FAISS, Chroma, etc.).
- Optimize prompts, workflows, and decision-making for accuracy, cost, and reliability.
- Collaborate with product and engineering teams to design use-case-specific agents (e.g., copilots, data analysts, support agents).
- Ensure monitoring, security, and ethical compliance of deployed agents.
- Stay ahead of emerging trends in multi-agent systems and autonomous AI research.

Required Skills
- 3-5 years of professional experience in AI/ML, software engineering, or backend development.
- Demonstrated hands-on experience in building agentic AI solutions (not just chatbots).
- Proficiency in Python (TypeScript/JavaScript is a plus).
- Direct experience with LLM APIs (OpenAI, Anthropic, Hugging Face, Cohere, etc.).
- Strong knowledge of vector databases and embeddings.
- Experience integrating APIs, external tools, and enterprise data sources into agents.
- Solid understanding of prompt engineering and workflow optimization.
- Strong problem-solving, debugging, and system design skills.

Nice to Have
- Experience with multi-agent systems (agents collaborating on tasks).
- Prior contributions to open-source agentic AI projects.
- Cloud deployment knowledge (AWS/GCP/Azure) and MLOps practices.
- Background in reinforcement learning or agent evaluation.
- Familiarity with AI safety, monitoring, and guardrails.

What We Offer
- Work on cutting-edge AI agent projects with direct real-world impact.
- Collaborative environment with a strong emphasis on innovation and experimentation.
- Competitive salary and growth opportunities.
- Opportunity to specialize in one of the fastest-growing areas of AI.
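For illustration, a minimal tool-calling sketch with the OpenAI Python SDK, one way the "plan, reason, and execute" loop above typically starts. The model name, tool schema, and order-lookup example are assumptions, and a full agent would execute the returned call and feed the result back to the model.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_order",  # hypothetical tool
        "description": "Look up an order's status by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your account exposes
    messages=[{"role": "user", "content": "Where is order 8841?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)  # the agent would now run this tool
```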

Posted 3 weeks ago

Apply

5.0 - 7.0 years

12 - 15 Lacs

noida

Work from Office

We are looking for an AI & Agentic Developer to join our product team. This role is for someone who has worked with AI tools, LLMs (like ChatGPT, Claude), and agent frameworks (LangChain, CrewAI, AutoGen). If you love building smart AI systems and applications that can work on their own, integrate with APIs, and solve real problems, this is the right role for you.

Responsibilities:
- Build and integrate AI-powered applications using LLMs and AI APIs (OpenAI, Claude, Hugging Face).
- Create and manage AI agents using tools like LangChain, CrewAI, and AutoGen.
- Use prompt engineering to improve AI results and user experience.
- Work with vector databases (Pinecone, FAISS, Chroma) for AI search and knowledge retrieval.
- Collaborate with developers, designers, and researchers to build smart solutions.
- Deploy applications using Docker, Kubernetes, Vercel, Railway, and Supabase.
- Ensure applications run smoothly, are scalable, and perform well.

Required Skills:
- 4-5 years of experience in AI/ML or full-stack development with AI integration.
- Strong knowledge of Python, JavaScript, React.js, Next.js, Node.js, and TypeScript.
- Experience with AI APIs (OpenAI, Claude, Hugging Face).
- Hands-on with LangChain, AutoGen, CrewAI, and similar frameworks.
- Understanding of prompt engineering and AI search (vector databases).
- Knowledge of PostgreSQL, Supabase, Railway, Vercel.
- Comfortable using AI coding assistants like GitHub Copilot, Cursor, Tabnine.
- Good problem-solving, teamwork, and communication skills.

Tech Stack You'll Use:
- Frontend: React, Next.js
- Backend: Node.js, Python
- Database: PostgreSQL, Supabase
- AI Tools: OpenAI, Claude, LangChain, CrewAI, AutoGen
- Vector DBs: Pinecone, FAISS, Chroma
- Deployment Tools: Docker, Kubernetes, Vercel, Railway, n8n
- Dev Tools: GitHub, Copilot, Cursor, Tabnine

Why Join Us?
- Work on real AI innovation with the latest tools.
- Be part of a fast-growing and creative team.
- Build applications that make an impact.
- Grow your skills in the exciting world of AI.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

You are a senior professional with at least 6 years of hands-on experience in Python, focusing on AI technologies. Your proficiency lies in Python, and you have worked with libraries such as NumPy, Pandas, scikit-learn, TensorFlow, or PyTorch. Your expertise includes a solid understanding of machine learning algorithms, model evaluation, and data preprocessing techniques (a short illustrative pipeline follows this posting). You have experience dealing with both structured and unstructured data sets. Additionally, you are familiar with version control systems like Git and collaborative development practices. Your strong problem-solving skills enable you to work effectively both independently and as part of a team.

Desired Skills:
- You have experience in natural language processing (NLP), computer vision, or time-series forecasting.
- You are familiar with MLOps tools and practices for model deployment and monitoring.
- Your background includes working with Generative AI models such as GPT, LLaMA, or Stable Diffusion.
- You have knowledge of prompt engineering, fine-tuning, or embedding techniques for large language models (LLMs).
- Hands-on experience with LLM frameworks like LangChain, Haystack, or Transformers (Hugging Face) is a plus.
- You understand vector databases (e.g., Postgres, Pinecone) for retrieval-augmented generation (RAG) pipelines.
- Any contributions to open-source AI/ML projects or published research would be beneficial.

Locations: Hyderabad / Chennai / Bangalore / Pune.

If you meet the above criteria and are interested in this opportunity, please share your resume at hr@neev.global or call 9429691736. This is a full-time position that requires you to work in person.
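For reference, a minimal sketch of the preprocessing-and-evaluation workflow the role expects, using scikit-learn; the dataset and model choice are illustrative assumptions.

```python
# Small end-to-end example: preprocessing, training, and evaluation in one pipeline.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Scaling lives inside the pipeline, so it is fit only on the training split.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```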

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

chennai, tamil nadu

On-site

Embark on a transformative journey with SwaaS, where innovation meets opportunity. Explore thrilling career prospects at the cutting edge of technology. Join our dynamic team, dedicated to shaping the future of IT. At SwaaS, we offer more than just jobs; we provide a platform for growth, collaboration, and impactful contributions. Discover a workplace where your aspirations align with limitless possibilities. Your journey towards a rewarding career in technology begins here, with SwaaS as your guide.

**Perks and Benefits:**
We go beyond salaries and provide guaranteed benefits that speak to SwaaS's values and culture. Our employees receive common benefits as well as performance-based individual benefits.
- Performance-based benefits: We promote a culture of equity. Accept the challenge, deliver the results, and get rewarded.
- Healthcare: Our comprehensive medical insurance helps you cover your urgent medical needs.
- Competitive salary: We assure with pride that we are on par with the industry leaders in terms of our salary package.
- Employee engagement: A break is always needed from regular, monotonous work assignments. Our employee engagement program helps our employees enhance their team bonding.
- Upskilling: We believe in fostering a culture of learning and harnessing the untapped potential in our employees. Everyone is encouraged and rewarded for acquiring new skills and certifications.

Junior AI/ML Developer (Entry-Level), Experience: 0-2 years

Tech Stack: Python, Node.js (JavaScript), LangChain, LlamaIndex, OpenAI API, Perplexity.ai API, Neo4j, PostgreSQL

**Responsibilities:**
- Assist in developing AI-driven solutions using LLMs (ChatGPT, Perplexity.ai) and RAG (Retrieval-Augmented Generation).
- Work on intent extraction and chatbot development, integrating APIs like OpenAI or Llama (a minimal chatbot endpoint sketch follows this posting).
- Support the design and testing of AI-enhanced workflows.
- Implement database interactions (MySQL or PostgreSQL for structured data).
- Write and optimize Python/Node.js scripts for the applications.
- Debug and refine LLM-powered chatbots.

**Requirements:**
- Strong programming skills in Python (FastAPI, Flask) or Node.js.
- Exposure to NLP, LLMs, AI APIs (ChatGPT, Perplexity.ai, LangChain).
- Familiarity with RESTful APIs and graph databases (Neo4j).
- Basic understanding of cloud platforms (AWS, Azure, GCP).
- Passion for AI, NLP, and chatbot development.
- Bonus: Knowledge of UI frameworks (React, Next.js).
- Good to have: Pinecone or equivalent vector databases.
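As a rough sketch of the kind of LLM-powered chatbot endpoint this junior role would help build, here is a minimal FastAPI wrapper around an LLM API call; the route name, model, and system prompt are assumptions for illustration.

```python
# Minimal chatbot endpoint: FastAPI in front of an LLM API call.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # expects OPENAI_API_KEY in the environment

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[
            {"role": "system", "content": "You are a concise helpdesk assistant."},
            {"role": "user", "content": req.message},
        ],
    )
    return {"reply": completion.choices[0].message.content}

# Run locally with: uvicorn main:app --reload
```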

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Generative AI Specialist, you will be responsible for developing GenAI LLM model-driven solutions using state-of-the-art models such as OpenAI, Gemini, and Claude, as well as open-source models like Llama and Mistral. Your main role will involve fine-tuning and training models, with a focus on implementing projects involving Agents, Tools, and RAG solutions. You should have hands-on experience in integrating LLMs with VectorDBs like ChromaDB, FAISS, and Pinecone.

To excel in this role, you must demonstrate expertise in PEFT and quantization of models, and have experience working with tools such as TensorFlow, PyTorch, Python, Hugging Face, and Transformers (a brief PEFT/LoRA sketch follows this posting). Proficiency in data preparation, analysis, and deep learning model development is highly preferred. Additionally, familiarity with deploying models in AWS is desired but not mandatory.

Key skills for this role include OpenAI, Gemini, LangChain, Transformers, Hugging Face, Python, PyTorch, TensorFlow, and VectorDBs (ChromaDB, FAISS, Pinecone). You should have a track record of at least 1-2 live implementations of Generative AI-driven solutions, with extensive experience in deploying chatbots, knowledge search, and NLP solutions. A solid background in implementing machine learning and deep learning solutions for a minimum of 2 years is also expected.

This position is based in Chennai, with the work shift from 11 AM to 8 PM. The mode of work is from the office, and the office address is 4th Floor, Techno Park, 10, Rajiv Gandhi Salai, Customs Colony, Sakthi Nagar, Thoraipakkam, Chennai 600097.
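For orientation on the PEFT requirement, here is a hedged sketch of attaching a LoRA adapter to an open-weight causal LM with the Hugging Face PEFT library; the base model, target modules, and hyperparameters are illustrative assumptions, not a prescribed setup.

```python
# Parameter-efficient fine-tuning sketch: wrap a causal LM with a LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_id = "facebook/opt-350m"  # assumption: any causal LM can be substituted
tokenizer = AutoTokenizer.from_pretrained(base_model_id)  # used later to tokenize the training data
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory footprint
)

lora_config = LoraConfig(
    r=8,                                   # low-rank dimension of the adapter
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; module names vary by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
# The wrapped model can then be fine-tuned with the usual Trainer or a custom loop.
```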

Posted 1 month ago

Apply