653 Mistral Jobs - Page 9

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Calcutta

On-site

Join our Team

About this opportunity: We are seeking a highly skilled, hands-on AI Architect - GenAI to lead the design and implementation of production-grade, cloud-native AI and NLP solutions that drive business value and enhance decision-making processes. The ideal candidate will have a robust background in machine learning, generative AI, and the architecture of scalable production systems. As an AI Architect, you will play a key role in shaping the direction of advanced AI technologies and leading teams in the development of cutting-edge solutions.

What you will do: Architect and design AI and NLP solutions to address complex business challenges and support strategic decision-making. Lead the design and development of scalable machine learning models and applications using Python, Spark, NoSQL databases, and other advanced technologies. Spearhead the integration of generative AI techniques in production systems to deliver innovative solutions such as chatbots, automated document generation, and workflow optimization. Guide teams in conducting comprehensive data analysis and exploration to extract actionable insights from large datasets, ensuring these findings are communicated effectively to stakeholders. Collaborate with cross-functional teams, including software engineers and data engineers, to integrate AI models into production environments, ensuring scalability, reliability, and performance. Stay at the forefront of advancements in AI, NLP, and generative AI, incorporating emerging methodologies into existing models and developing new algorithms to solve complex challenges. Provide thought leadership on best practices for AI model architecture, deployment, and continuous optimization. Ensure that AI solutions are built with scalability, reliability, and compliance in mind.

The skills you bring: Minimum experience in AI, machine learning, or a similar role, with a proven track record of delivering AI-driven solutions. Hands-on experience in designing and implementing end-to-end GenAI-based solutions, particularly in chatbots, document generation, workflow automation, and other generative use cases. Expertise in Python programming and extensive experience with AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and vector databases. Deep understanding of and experience with distributed data processing using Spark. Proven experience in architecting, deploying, and optimizing machine learning models in production environments at scale. Expertise in working with open-source generative AI models (e.g., GPT-4, Mistral, Code Llama, StarCoder) and applying them to real-world use cases. Expertise in designing cloud-native architectures and microservices for AI/ML applications.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata || Bangalore || Pune || Chennai. Req ID: 768858

Posted 3 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Scientist. Job Type: Full-time. Location: Bengaluru. Notice Period: 15 days or immediate joiner. Experience: 0-2 years.

Job Summary: We seek a highly skilled Data Scientist (LLM) to join our AI and Machine Learning team. The ideal candidate will have a strong foundation in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs), along with hands-on experience in building and deploying conversational AI/chatbots. The role requires expertise in LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. You will work closely with cross-functional teams to drive the development and enhancement of AI-powered applications.

Key Responsibilities: Develop, fine-tune, and deploy Large Language Models (LLMs) for various applications, including chatbots, virtual assistants, and enterprise AI solutions. Build and optimize conversational AI solutions, with at least 1 year of experience in chatbot development. Implement and experiment with LLM agent development frameworks such as LangChain, LlamaIndex, AutoGen, and LangGraph. Design and develop ML/DL-based models to enhance natural language understanding capabilities. Work on retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) to enhance LLM-based applications. Optimize and fine-tune transformer-based models such as GPT, LLaMA, Falcon, Mistral, Claude, etc., for domain-specific tasks. Develop and implement prompt engineering techniques and fine-tuning strategies to improve LLM performance. Work on AI agents, multi-agent systems, and tool-use optimization for real-world business applications. Develop APIs and pipelines to integrate LLMs into enterprise applications. Research and stay up to date with the latest advancements in LLM architectures, frameworks, and AI trends.

Required Skills & Qualifications: 0-2 years of experience in Machine Learning (ML), Deep Learning (DL), and NLP-based model development. Hands-on experience in developing and deploying conversational AI/chatbots is a plus. Strong proficiency in Python and experience with ML/DL frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience with LLM agent development frameworks like LangChain, LlamaIndex, AutoGen, and LangGraph. Knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate, ChromaDB) and embedding models. Understanding of prompt engineering and fine-tuning of LLMs. Familiarity with cloud services (AWS, GCP, Azure) for deploying LLMs at scale. Experience working with APIs, Docker, and FastAPI for model deployment. Strong analytical and problem-solving skills. Ability to work independently and collaboratively in a fast-paced environment.

Good to Have: Experience with multi-modal AI models (text-to-image, text-to-video, speech synthesis, etc.). Knowledge of knowledge graphs and symbolic AI. Understanding of MLOps and LLMOps for deploying scalable AI solutions. Experience in automated evaluation of LLMs and bias mitigation techniques. Research experience or published work in LLMs, NLP, or Generative AI is a plus.
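The responsibilities above center on retrieval-augmented generation backed by vector databases. The sketch below illustrates the basic retrieve-then-prompt pattern with FAISS; the embed() helper, the 384-dimensional vectors, and the sample documents are assumptions for illustration only, not part of the listing.

```python
# Minimal RAG retrieval sketch (illustrative only). embed() is a hypothetical
# placeholder returning fixed-size vectors; a real pipeline would use a
# sentence-transformers or hosted embedding model.
import numpy as np
import faiss

def embed(texts):
    # Placeholder: swap in a real embedding model here.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), 384), dtype=np.float32)

docs = ["Refund policy ...", "Shipping times ...", "Warranty terms ..."]
index = faiss.IndexFlatL2(384)            # exact L2 index over 384-d vectors
index.add(embed(docs))                    # add document embeddings

query = "How long does shipping take?"
_, ids = index.search(embed([query]), 2)  # top-2 nearest documents
context = "\n".join(docs[i] for i in ids[0])

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# prompt would then be sent to the chosen LLM (GPT, LLaMA, Mistral, ...)
```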

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra

On-site

Pune, Maharashtra, India +3 more. Job ID 768858

Join our Team

About this opportunity: We are seeking a highly skilled, hands-on AI Architect - GenAI to lead the design and implementation of production-grade, cloud-native AI and NLP solutions that drive business value and enhance decision-making processes. The ideal candidate will have a robust background in machine learning, generative AI, and the architecture of scalable production systems. As an AI Architect, you will play a key role in shaping the direction of advanced AI technologies and leading teams in the development of cutting-edge solutions.

What you will do: Architect and design AI and NLP solutions to address complex business challenges and support strategic decision-making. Lead the design and development of scalable machine learning models and applications using Python, Spark, NoSQL databases, and other advanced technologies. Spearhead the integration of generative AI techniques in production systems to deliver innovative solutions such as chatbots, automated document generation, and workflow optimization. Guide teams in conducting comprehensive data analysis and exploration to extract actionable insights from large datasets, ensuring these findings are communicated effectively to stakeholders. Collaborate with cross-functional teams, including software engineers and data engineers, to integrate AI models into production environments, ensuring scalability, reliability, and performance. Stay at the forefront of advancements in AI, NLP, and generative AI, incorporating emerging methodologies into existing models and developing new algorithms to solve complex challenges. Provide thought leadership on best practices for AI model architecture, deployment, and continuous optimization. Ensure that AI solutions are built with scalability, reliability, and compliance in mind.

The skills you bring: Minimum experience in AI, machine learning, or a similar role, with a proven track record of delivering AI-driven solutions. Hands-on experience in designing and implementing end-to-end GenAI-based solutions, particularly in chatbots, document generation, workflow automation, and other generative use cases. Expertise in Python programming and extensive experience with AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and vector databases. Deep understanding of and experience with distributed data processing using Spark. Proven experience in architecting, deploying, and optimizing machine learning models in production environments at scale. Expertise in working with open-source generative AI models (e.g., GPT-4, Mistral, Code Llama, StarCoder) and applying them to real-world use cases. Expertise in designing cloud-native architectures and microservices for AI/ML applications.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Posted 3 weeks ago

Apply

8.0 - 11.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION

Roles & responsibilities: Here are some of the key responsibilities of a Sr Generative AI Engineer:

Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities. Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments. Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance. Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures. Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection. Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks. Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills: Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms. Familiarity with frameworks like LangGraph, CrewAI, or AutoGen to develop, deploy, and evaluate AI agents. Ability to test and deploy open-source LLMs from Hugging Face (Meta LLaMA 3.1, BLOOM, Mistral AI, etc.). Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential, with a strong foundation in the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry), or AWS (SageMaker). Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.

Preferred technical & functional skills: Strong oral and written communication skills, with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision and escalate when needed.

Key behavioral attributes/requirements: Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs. #KGS

QUALIFICATIONS: This role is for you if you have the below educational qualifications: PhD or equivalent degree in Computer Science, Applied Mathematics, Applied Statistics, or Artificial Intelligence. Preference given to research scholars from IITs, NITs, and IIITs (research scholars who have submitted their thesis). Work experience: 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals.
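The skills list above includes testing and deploying open-source LLMs from Hugging Face. Below is a minimal sketch of that workflow using the transformers pipeline API; the model id is an assumption, and gated models (e.g., LLaMA) additionally require license acceptance and an access token.

```python
# Minimal sketch of loading and querying an open-source LLM with Hugging Face
# transformers. Model id is illustrative; adjust for the model actually used.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed model id
    device_map="auto",                           # requires accelerate installed
)

out = generator(
    "Summarize the benefits of RAG in two sentences.",
    max_new_tokens=80,
    do_sample=False,
)
print(out[0]["generated_text"])
```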

Posted 3 weeks ago

Apply

8.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JOB DESCRIPTION

Roles & responsibilities: Here are some of the key responsibilities of a Sr Generative AI Engineer:

Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities. Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments. Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance. Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures. Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection. Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks. Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.

Mandatory technical & functional skills: Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.), and NLP algorithms. Familiarity with frameworks like LangGraph, CrewAI, or AutoGen to develop, deploy, and evaluate AI agents. Ability to test and deploy open-source LLMs from Hugging Face (Meta LLaMA 3.1, BLOOM, Mistral AI, etc.). Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential, with a strong foundation in the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry), or AWS (SageMaker). Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.

Preferred technical & functional skills: Strong oral and written communication skills, with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision and escalate when needed.

Key behavioral attributes/requirements: Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs. #KGS

QUALIFICATIONS: This role is for you if you have the below educational qualifications: PhD or equivalent degree in Computer Science, Applied Mathematics, Applied Statistics, or Artificial Intelligence. Preference given to research scholars from IITs, NITs, and IIITs (research scholars who have submitted their thesis). Work experience: 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas. Design and implement backend systems and APIs to serve generative AI models. Work with LLMs (GPT, LLaMA, Mistral, etc.), including fine-tuning and prompt engineering. Develop orchestration pipelines and agent frameworks for GenAI applications. Monitor model performance and governance, and ensure ethical and secure use of GenAI systems. Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users. Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business processes, system processes, and industry standards, and make evaluative judgements. Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. Ensure essential procedures are followed and help define operating standards and processes. Serve as advisor or coach to new or lower-level analysts. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications: 8-12 years of relevant experience. Experience in systems analysis and programming of software applications. Experience in managing and implementing successful projects. Working knowledge of consulting/project management techniques/methods. Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
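The Citi listing above includes designing backend APIs that serve generative AI models. As a rough illustration of that pattern, here is a minimal sketch; the /generate route and the generate_text() helper are hypothetical placeholders, not anything specified by the listing.

```python
# Illustrative sketch of a backend endpoint that serves a generative model.
# generate_text() stands in for whatever LLM client or in-process model is used.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

def generate_text(prompt: str, max_tokens: int) -> str:
    # Placeholder: call the deployed LLM (GPT, LLaMA, Mistral, ...) here.
    return f"[model output for: {prompt[:40]}...]"

@app.post("/generate")
def generate(req: GenerateRequest):
    # Auth, rate limiting, and logging would wrap this in a production system.
    return {"completion": generate_text(req.prompt, req.max_tokens)}
```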

Posted 3 weeks ago

Apply

0 years

3 - 9 Lacs

Hyderābād

On-site

We are seeking a skilled Agentic AI Developer to design and implement intelligent agent systems powered by Large Language Models (LLMs). This role involves developing LLM-based pipelines that can ingest transcripts, documents, or business narratives and generate structured artifacts such as workflows, decision trees, action plans, or contextual recommendations. You will collaborate with cross-functional teams to deploy autonomous AI agents capable of reasoning, planning, memory, and tool usage in enterprise environments — primarily within the Microsoft ecosystem (Azure, Power Platform, Copilot, and M365 integrations).

Key Responsibilities: Build and deploy autonomous agent systems using frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel. Develop pipelines to process natural language input and generate structured outputs tailored to business needs. Implement agentic features such as task orchestration, memory storage, tool integration, and feedback loops. Fine-tune LLMs or apply prompt engineering to optimize accuracy, explainability, and responsiveness. Integrate agents with Microsoft 365 services (Teams, Outlook, SharePoint) and Power Platform components (Dataverse, Power Automate). Collaborate with business and product teams to define use cases, test scenarios, and performance benchmarks. Participate in scenario-based UAT testing, risk evaluation, and continuous optimization.

Must-Have Skills: Proficiency in Python and hands-on experience with ML/AI libraries and frameworks (Transformers, PyTorch, LangChain). Strong understanding of LLMs (e.g., GPT, Claude, LLaMA, Mistral) and prompt engineering principles. Experience developing agent workflows using ReAct, AutoGen, CrewAI, or OpenAI function calling. Familiarity with vector databases (FAISS, Pinecone, Qdrant) and RAG-based architectures. Skills in Natural Language Processing (NLP): summarization, entity recognition, intent classification. Integration experience with APIs, SDKs, and enterprise tools (preferably the Microsoft stack).

Preferred Certifications (candidates with the following certifications will have a strong advantage):
✅ Microsoft Certified: Azure AI Engineer Associate (AI-102)
✅ Microsoft Certified: Power Platform App Maker (PL-100)
✅ Microsoft 365 Certified: Developer Associate (MS-600)
✅ OpenAI Developer Certifications or Prompt Engineering Badge
✅ Google Cloud Certified: Professional Machine Learning Engineer
✅ NVIDIA Deep Learning Institute Certifications
✅ Databricks Generative AI Pathway (optional)
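The listing above describes agent systems that combine reasoning, tool usage, and feedback loops. The plain-Python sketch below shows the core tool-calling loop that frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel wrap with more structure; call_llm() and both tools are hypothetical stand-ins.

```python
# Minimal agentic tool-use loop (illustrative only).
import json

TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "create_ticket": lambda summary: {"ticket_id": "T-001", "summary": summary},
}

def call_llm(messages):
    # Placeholder: a real model would choose a tool and its arguments, or
    # return a final answer, e.g. {"tool": ..., "args": {...}, "final": ...}.
    return {"tool": "get_order_status", "args": {"order_id": "A123"}, "final": None}

def run_agent(user_input, max_steps=3):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if decision.get("final"):
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped after max_steps without a final answer."

print(run_agent("Where is order A123?"))
```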

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

🚀 About the Role
We are seeking a highly skilled and innovative AI/ML Engineer with 4–6 years of hands-on experience in Machine Learning, Generative AI, and Large Language Models (LLMs). The ideal candidate should have deep expertise in modern AI frameworks, vector databases, open-source LLMs, and prompt engineering. You will play a pivotal role in designing, building, and deploying LLM-powered applications, including RAG pipelines, agentic AI, and MCP workflows.

🧩 Key Responsibilities
● Design, develop, and deploy AI/ML models for real-world applications.
● Implement Generative AI solutions using platforms like OpenAI, Anthropic Claude, and Hugging Face Transformers.
● Work on LLM fine-tuning using PEFT, LoRA, QLoRA, and similar parameter-efficient methods.
● Develop intelligent agents using the LangChain and LangGraph frameworks.
● Build and optimize RAG pipelines integrated with vector databases like Pinecone, Weaviate, etc.
● Apply prompt engineering techniques to improve model performance and outputs.
● Collaborate cross-functionally with product, data, and engineering teams to deliver scalable AI systems.
● Work with cloud infrastructure, especially AWS, for model training, deployment, and orchestration.
● Stay up to date with the latest research and trends in AI, LLMs, and agent-based systems.
● Implement and maintain MCP protocols and agentic workflows for advanced decision-making pipelines.

Must-Have Skills & Experience
● 4–6 years of strong experience in Machine Learning, Deep Learning, Generative AI, and Agentic AI.
● Hands-on experience with Generative AI, LLM architectures, and open-source models like Mistral, Meta LLaMA, etc.
● Proficiency in LangChain, LangGraph, and prompt engineering.
● Familiarity with vector databases such as Pinecone, Weaviate, FAISS, etc.
● Experience with OpenAI, Hugging Face, Anthropic Claude, and model APIs.
● In-depth understanding of LLM fine-tuning techniques (LoRA, QLoRA, PEFT).
● Strong Python programming skills and experience with ML frameworks (PyTorch, TensorFlow).
● Good understanding of AWS services for model training, hosting, and storage.
● Solid understanding of RAG pipelines, agentic AI frameworks, and MCP protocols.
● Experience working in a fast-paced, research-driven environment.
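The listing above emphasizes parameter-efficient fine-tuning with PEFT, LoRA, and QLoRA. Below is a minimal LoRA setup sketch using Hugging Face PEFT; the base model id and target modules are assumptions that vary by architecture.

```python
# Minimal LoRA fine-tuning setup sketch with Hugging Face PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"             # assumed model id
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()              # only a tiny fraction trains
# Training would then proceed with the transformers Trainer or TRL SFTTrainer;
# QLoRA adds 4-bit quantization of the frozen base model to save memory.
```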

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

📍 Location: Ahmedabad (Hybrid Preferred)
🕒 Type: Full-time / Internship

About Vakta AI
Vakta AI is building a bold new interface to knowledge — through hyper-realistic AI avatars that people can talk to, learn from, and take guidance from in real time. From intelligent tutors and language coaches to career, legal, and health advisors — we're rethinking how India learns and solves problems, one conversation at a time. We're assembling a world-class AI team to push the frontiers of LLMs, speech technology, and virtual agent design — blending cutting-edge research with real-world scale. As part of this journey, we're hiring across core AI/ML roles — a rare chance to shape the future of avatar-based learning and expert access for millions across Bharat.

What You'll Work On
Core R&D: Fine-tune and optimize LLMs (Mistral, Gemma, Phi, etc.) for interactive, role-based conversations. Build voice-driven interfaces using models like Whisper, Bark, and OpenVoice. Experiment with LoRA, QLoRA, RAG, quantization, and other performance techniques.
Engineering & Deployment: Convert research into production-grade APIs or on-device models. Optimize models for latency, memory, and multilingual support. Integrate models with 3D avatar systems and real-time user experiences.
Internal Knowledge Sharing: Maintain clean experiment logs and model documentation. Collaborate in internal demos, peer reviews, and quick iteration cycles.

You Should Have
1–2 years of experience in AI/ML, NLP, or speech tech — or a strong project portfolio. Proficiency with PyTorch, Transformers, Hugging Face, and model fine-tuning. Familiarity with LLM training techniques like LoRA, DPO, RAG, or quantization. Hands-on experience with speech technologies (ASR/TTS) is a strong plus. Clear communication and the ability to own projects end-to-end.

Bonus If You've Worked On
Indian language support / multilingual AI. ONNX, model compression, or edge deployment. Unity, WebXR, or avatar integration pipelines. Open-source contributions or public demos/projects.

Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or related fields. Strong academic or project background in NLP, deep learning, or speech processing. Alternatively, a solid portfolio of personal/independent work in LLMs or applied ML can substitute for formal education.

Why Join Vakta AI
Build at the intersection of LLMs, speech, and 3D avatars. Early-stage team with high ownership and creative freedom. Solve real, Bharat-scale education and access problems. Work out of Ahmedabad with a hybrid and flexible setup. Your work goes live in weeks — not quarters.
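The role above involves voice-driven interfaces built on models like Whisper. As a small illustration of the ASR building block, here is a sketch using the open-source openai-whisper package; the audio file name and model size are illustrative assumptions.

```python
# Minimal ASR sketch with openai-whisper, one building block of a
# voice-driven avatar interface.
import whisper

model = whisper.load_model("base")              # small multilingual model
result = model.transcribe("user_question.wav")  # assumed local audio file
print(result["text"])                           # transcript fed to the LLM
# Language detection is automatic; pass language="hi" etc. to force a language.
```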

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Summary
We are seeking an experienced AI/ML Engineer with a solid foundation in machine learning and artificial intelligence and at least one year of hands-on experience in Generative AI. The ideal candidate will have strong proficiency in Python, LLMs, and emerging techniques such as Retrieval-Augmented Generation (RAG), model fine-tuning, and agentic AI systems.

Key Responsibilities
Design, develop, and deploy AI/ML solutions with an emphasis on generative models and LLMs. Implement and optimize RAG pipelines for knowledge-aware AI systems. Fine-tune and customize models (e.g., LLaMA, Mistral, GPT) for specific domains or applications. Build and manage agentic AI systems capable of autonomous planning and decision-making. Work closely with cross-functional teams to identify use cases and deliver scalable AI-powered features. Stay up to date with the latest developments in AI/ML and contribute to internal knowledge sharing.

Required Skills & Qualifications
3+ years of experience in AI/ML development. Minimum 1 year of hands-on experience in Generative AI projects. Proficient in Python and common ML libraries (e.g., PyTorch, scikit-learn). Strong understanding of Large Language Models (LLMs) and transformer architectures. Experience building RAG pipelines and integrating vector search systems. Hands-on experience with model fine-tuning using LoRA, PEFT, or Hugging Face Transformers. Experience developing agentic AI systems using frameworks like LangChain, AutoGen, or custom orchestration logic. Experience working with cloud platforms.

Tools & Technologies
Frameworks & Libraries: LangChain, LlamaIndex, AutoGen, PyTorch, TensorFlow. Model Providers: OpenAI, Anthropic, Llama, Mistral. Vector Stores: FAISS, Pinecone, Milvus. APIs & Services: REST, GraphQL. DevOps & Infra: Docker, AWS/GCP/Azure.

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: AI Lead – Generative AI & ML Systems

Key Responsibilities
Generative AI Development: Design and implement LLM-powered solutions and generative AI models for use cases such as predictive analytics, automation workflows, anomaly detection, and intelligent systems.
RAG & LLM Applications: Build and deploy Retrieval-Augmented Generation (RAG) pipelines, structured generation systems, and chat-based assistants tailored to business operations.
Full AI Lifecycle Management: Lead the complete AI lifecycle, from data ingestion and preprocessing to model design, training, testing, deployment, and continuous monitoring.
Optimization & Scalability: Develop high-performance AI/LLM inference pipelines, applying techniques like quantization, pruning, batching, and model distillation to support real-time and memory-constrained environments.
MLOps & CI/CD Automation: Automate training and deployment workflows using Terraform, GitLab CI, GitHub Actions, or Jenkins, integrating model versioning, drift detection, and compliance monitoring.
Cloud & Deployment: Deploy and manage AI solutions using AWS, Azure, or GCP with containerization tools like Docker and Kubernetes.
AI Governance & Compliance: Ensure model/data governance and adherence to regulatory and ethical standards in production AI deployments.
Stakeholder Collaboration: Work cross-functionally with product managers, data scientists, and engineering teams to align AI outputs with real-world business goals.

Required Skills & Qualifications
Bachelor's degree (B.Tech or higher) in Computer Science, IT, or a related field is required. 8-12 years of overall experience in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) solution development. Minimum 2+ years of hands-on experience in Generative AI and LLM-based solutions, including prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) pipelines with full CI/CD integration, monitoring, and observability, with 100% independent contribution. Proven expertise in both open-source and proprietary Large Language Models (LLMs), including LLaMA, Mistral, Qwen, GPT, Claude, and BERT. Expertise in C/C++ and Python programming with relevant ML/DL libraries, including TensorFlow, PyTorch, and Hugging Face Transformers. Experience deploying scalable AI systems in containerized environments using Docker and Kubernetes. Deep understanding of the MLOps/LLMOps lifecycle, including model versioning, deployment automation, performance monitoring, and drift detection. Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and DevOps for ML workflows. Working knowledge of Infrastructure-as-Code (IaC) tools like Terraform for cloud resource provisioning and reproducible ML pipelines. Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Experience designing and documenting High-Level Design (HLD) and Low-Level Design (LLD) for ML/GenAI systems, covering data pipelines, model serving, vector search, and observability layers, with documentation including component diagrams, network architecture, CI/CD workflows, and tabulated system designs. Experience provisioning and managing ML infrastructure using Terraform, including compute clusters, vector databases, and LLM inference endpoints across AWS, GCP, and Azure. Experience beyond notebooks: shipped models with logging, tracing, rollback mechanisms, and cost-control strategies, with hands-on ownership of production-grade LLM workflows, not limited to experimentation.

Preferred Qualifications (Good To Have)
Experience with LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI APIs, or building modular LLM agent workflows. Exposure to multi-agent orchestration, tool-augmented reasoning, or autonomous AI agents and agentic communication patterns with orchestration. Experience deploying ML/GenAI systems in regulated environments, with established governance, compliance, and Responsible AI frameworks. Familiarity with AWS data and machine learning services, including Amazon SageMaker, AWS Bedrock, ECS/EKS, and AWS Glue, for building scalable, secure data pipelines and deploying end-to-end AI/ML workflows.
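Among the optimization techniques the listing above names (quantization, pruning, batching, distillation), 4-bit quantization is one of the most common for memory-constrained inference. A minimal sketch, assuming the bitsandbytes-backed path in Hugging Face transformers and an illustrative model id:

```python
# Sketch of loading an LLM with 4-bit quantization to cut inference memory.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",   # assumed model id
    quantization_config=bnb,
    device_map="auto",
)
# Further gains typically come from batching requests and serving through a
# dedicated inference stack (e.g. vLLM or TGI) rather than raw transformers.
```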

Posted 3 weeks ago

Apply

25.0 years

0 Lacs

Tondiarpet, Tamil Nadu, India

On-site

The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Description Summary: What you need to know about the role- Data scientists are highly motivated team players with strong analytical skills who specialize in creating, driving and executing initiatives to mitigate fraud on PayPal’s platform and improve the experience for PayPal’s hundreds of millions of customers, while guaranteeing compliance with regulations. Meet our team Data scientists in the Fraud Risk team are problem solvers suited to approach varied challenges in complex big data environments. Our core goals are to enable seamless and delightful experiences to our customers, while preventing threat actors from accessing customers’ financial instruments and personal information. As part of our day-to-day job, we are collaborating with a wide variety of partners: product owners, data scientists, security experts, legal consults, and engineers, to bring our data science insights to life, impacting the experience and security of millions of customers around the globe. Job Description: Your way to impact Data scientists deeply understand PayPal’s business objectives, as their impact on PayPal’s top and bottom lines is immense. 
As a data scientist, you will develop key AI/ML capabilities, tools, and insights with the aim of adapting PayPal's advanced proprietary fraud prevention and experience mechanisms and enabling growth.

Your day to day: Day-to-day duties include data analysis, monitoring and forecasting, creating the logic for and implementing risk rules and strategies, providing requirements to data scientists and technology teams on attribute, model, and platform requirements, and communicating with global stakeholders to ensure we deliver the best possible customer experience while meeting loss-rate targets.

What do you need to bring: Strong proficiency in Python for data analysis, machine learning, and automation. Solid understanding of supervised and unsupervised AI/machine learning methods (e.g., XGBoost, LightGBM, Random Forest, clustering, isolation forests, autoencoders, neural networks, transformer-based architectures). Experience in payment fraud, AML, KYC, or broader risk modeling within fintech or financial institutions. Experience developing and deploying ML models in production using frameworks such as scikit-learn, TensorFlow, PyTorch, or similar. Hands-on experience with LLMs (e.g., OpenAI, LLaMA, Claude, Mistral), including use of prompt engineering, retrieval-augmented generation (RAG), and agentic AI to support internal automation and risk workflows. Ability to work cross-functionally with engineering, product, compliance, and operations teams. Proven track record of translating complex ML insights into business actions or policy decisions. BS/BA degree with 5+ years of related professional experience, or master's degree with 4+ years of related experience.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset—you. That's why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee share options, health and life insurance, and more. To learn more about our benefits please visit https://www.paypalbenefits.com

Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx

Commitment to Diversity and Inclusion: PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For general consideration of your skills, please join our Talent Community.

REQ ID R0127046
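The PayPal listing above names isolation forests among the unsupervised methods used in fraud risk work. The following sketch illustrates that technique on synthetic data; the feature columns and contamination rate are illustrative assumptions only.

```python
# Illustrative unsupervised fraud-screening sketch with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic features: transaction amount and transaction velocity.
normal = rng.normal(loc=[50, 1], scale=[20, 0.5], size=(1000, 2))
fraudy = rng.normal(loc=[900, 8], scale=[100, 2], size=(10, 2))
X = np.vstack([normal, fraudy])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = clf.decision_function(X)       # lower score = more anomalous
flags = clf.predict(X)                  # -1 marks likely outliers
print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions")
```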

Posted 3 weeks ago

Apply


0 years

4 - 9 Lacs

Chennai

On-site

Job Summary: Strategic and leadership-level GenAI skills spanning AI solution architecture (designing scalable GenAI systems such as RAG pipelines and multi-agent systems, choosing between hosted APIs and open-source models, and architecting hybrid LLM-plus-traditional-software systems) and model evaluation and selection (benchmarking models such as GPT-4, Claude, Mistral, and LLaMA; understanding trade-offs in latency, cost, accuracy, and context length; and using tools like LM Evaluation Harness), as detailed below.

Responsibilities

Strategic & Leadership-Level GenAI Skills
1. AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs and open-source models. Architecting hybrid systems (LLMs + traditional software).
2. Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Understanding trade-offs: latency, cost, accuracy, context length. Using tools like LM Evaluation Harness, the OpenLLM Leaderboard, etc.
3. Enterprise-Grade RAG Systems: Designing Retrieval-Augmented Generation pipelines. Using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex. Optimizing chunking, embedding strategies, and retrieval quality.
4. Security, Privacy & Governance: Implementing data privacy, access control, and audit logging. Understanding risks: prompt injection, data leakage, model misuse. Aligning with frameworks like the NIST AI RMF, the EU AI Act, or ISO/IEC 42001.
5. Cost Optimization & Monitoring: Estimating and managing GenAI inference costs. Using observability tools (e.g., Arize, WhyLabs, PromptLayer). Token usage tracking and prompt optimization.

Advanced Technical Skills
6. Model Fine-Tuning & Distillation: Fine-tuning open-source models using PEFT, LoRA, and QLoRA. Knowledge distillation for smaller, faster models. Using tools like Hugging Face, Axolotl, or DeepSpeed.
7. Multi-Agent Systems: Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph). Task decomposition, memory, and tool orchestration.
8. Toolformer & Function Calling: Integrating LLMs with external tools, APIs, and databases. Designing tool-use schemas and managing tool routing.

Team & Product Leadership
9. GenAI Product Thinking: Identifying use cases with high ROI. Balancing feasibility, desirability, and viability. Leading GenAI PoCs and MVPs.
10. Mentoring & Upskilling Teams: Training developers on prompt engineering, LangChain, etc. Establishing GenAI best practices and code reviews. Leading internal hackathons or innovation sprints.
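Item 3 above calls out chunking and embedding strategies for enterprise-grade RAG. The sketch below shows the simplest possible strategy, fixed-size chunks with overlap; the sizes and sample text are illustrative assumptions, and production pipelines often split on sentence or section boundaries instead.

```python
# Minimal fixed-size chunking with overlap, a baseline RAG chunking strategy.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100):
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk.strip():
            chunks.append(chunk)
    return chunks

doc = "Retrieval-Augmented Generation combines search with generation. " * 40
pieces = chunk_text(doc, chunk_size=300, overlap=50)
print(len(pieces), "chunks;", len(pieces[0]), "chars in the first chunk")
```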

Posted 3 weeks ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Noida

On-site

Position: Senior Data Scientist / Data Analyst (Noida) (CE58SF RM 3385) Education Required: Bachelor's / Master's / PhD: Bachelor's degree in Computer Science, Data Science, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts.
Must-have skills: Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP using Python. Strong knowledge and experience in Generative AI / LLM-based development. Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex). Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure. Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns. Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.) and expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalents) and related frameworks (e.g., pandas, NumPy, scikit-learn), AWS Glue crawlers, and ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit).
Good to have: Knowledge and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
Job Description: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives.
Job Category: Embedded HW_SW Job Type: Full Time Job Location: Noida Experience: 5 - 8 Years Notice period: 0-15 days
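
For illustration only: a minimal sketch of fixed-size chunking with overlap, one of the simplest text-chunking strategies referenced above; real pipelines usually chunk on sentence or token boundaries instead.

```python
# Minimal sketch of fixed-size character chunking with overlap, used to split
# large documents before embedding and indexing them for retrieval.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap          # slide the window, keeping some shared context
    return chunks

document = "Lorem ipsum dolor sit amet. " * 100
pieces = chunk_text(document, chunk_size=400, overlap=80)
print(len(pieces), "chunks; first chunk starts with:", pieces[0][:40], "...")
```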

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Job Title: AI Research Engineer – Private LLM & Cognitive Systems Location: Delhi Type: Full-Time Experience Level: Senior / Expert Start Date: Immediate
About Brainwave Science: Brainwave Science is a leader in cognitive technologies, specializing in solutions for the security and intelligence sectors. Our flagship product, iCognative™, leverages real-time cognitive response analysis using Artificial Intelligence and Machine Learning techniques to redefine the future of investigations, defense, counterterrorism, and counterintelligence operations. Beyond security, Brainwave Science is at the forefront of healthcare innovation, applying our cutting-edge technology to the early identification of diseases, neurological conditions, and mental health challenges, to real-time detection of stress and anxiety, and to non-medical, science-backed interventions. Together, we are shaping a future where advanced technology strengthens security, promotes wellness, and creates a healthier, safer world for individuals and communities worldwide.
About the Role: We are seeking an experienced and forward-thinking AI/ML Engineer – LLM & Deep Learning Expert to design, develop, and deploy Large Language Models (LLMs) and intelligent AI systems. You will work on cutting-edge projects at the intersection of natural language processing, edge AI, and biosignal intelligence, helping drive innovation across defense, security, and healthcare use cases. This role is ideal for someone who thrives in experimental environments, understands private and local LLM deployments, and is passionate about solving real-world challenges using advanced AI.
Responsibilities: Design, train, fine-tune, and deploy Large Language Models using frameworks like PyTorch, TensorFlow, or Hugging Face Transformers. Integrate LLMs for local/edge deployment using tools like Ollama, LangChain, LM Studio, or llama.cpp. Build NLP applications for intelligent automation, investigative analytics, and biometric interpretation. Optimize models for low latency, token efficiency, and on-device performance. Work on prompt engineering, embedding tuning, and vector search integration (FAISS, Qdrant, Weaviate). Collaborate with technical and research teams to deliver scalable AI-driven features. Stay current with developments in open-source and closed-source LLM ecosystems (e.g., Meta, OpenAI, Mistral).
Must-Have Requirements: B.Tech/M.Tech in Computer Science (CSE), Electronics & Communication (ECE), or Electrical & Electronics (EEE) from IIT, NIT, or BITS. Minimum 3-4 years of hands-on experience in AI/ML, deep learning, and LLM development. Deep experience with Transformer models (e.g., GPT, LLaMA, Mistral, Falcon, Claude). Hands-on with tools like LangChain, Hugging Face, Ollama, Docker, or Kubernetes. Proficiency in Python and strong knowledge of Linux environments. Strong understanding of NLP, attention mechanisms, and model fine-tuning.
Preferred Qualifications: Experience with biosignals, especially EEG or time-series data. Experience deploying custom-trained LLMs on proprietary datasets. Familiarity with RAG pipelines and multi-modal models (e.g., CLIP, LLaVA). Knowledge of cloud platforms (AWS, GCP, Azure) for scalable model training and serving. Published research, patents, or open-source contributions in AI/ML communities. Excellent communication, analytical, and problem-solving skills.
What We Offer: Competitive compensation based on experience. Flexible working hours and a remote-friendly culture. Access to high-performance compute infrastructure. Opportunities to work on groundbreaking AI projects in healthcare, security, and defense.
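
For illustration only: a minimal sketch of calling a locally hosted model for the private/edge deployments this role describes, assuming an Ollama server running on its default port with a model already pulled (e.g. via `ollama pull llama3`); the model tag is an example.

```python
# Minimal sketch of private, on-device inference against a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                 # example model tag
        "prompt": "Summarize: EEG signals were stable across all three sessions.",
        "stream": False,                   # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])             # generated text never leaves the local machine
```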

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Current Openings: Lead Engineer - AI/ML Experience: 8 - 12 years Bhubaneswar, Delhi - NCR, Remote Working Apply About The Job Featured As a Lead AI/ML Engineer, you spearhead the design, development, and implementation of advanced AI and machine learning models. Your role involves guiding a team of engineers and ensuring the successful deployment of projects that leverage AI/ML technologies to solve complex problems. You collaborate closely with stakeholders to understand business needs, translate them into technical requirements, and drive innovation. Your responsibilities include optimizing model performance, conducting rigorous testing, and maintaining up-to-date knowledge of the latest industry trends. Additionally, you mentor team members, promote best practices, and contribute to strategic decision-making within the organization.
Core Responsibilities: Client Interaction: Discuss client requirements and develop proposals tailored to their needs. Demonstrations and Workshops: Conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates in line with customer budgets and organizational financial guidelines. Model Oversight: Oversee the development and deployment of AI models, especially those generating content such as text, images, or other media. AI Solutions: Engage in coding, designing, developing, implementing, and deploying advanced AI solutions. Expertise Utilization: Utilize your expertise in NLP, Python programming, LLMs, Deep Learning, and AI principles to drive the development of transformative technologies. Leadership and Initiative: Actively lead projects and contribute to both unit-level and organizational initiatives to provide high-quality, value-adding solutions to customers. Strategic Development: Develop value-creating strategies and models to help clients innovate, drive growth, and increase profitability. Technology Awareness: Stay informed about the latest technologies and industry trends. Problem-Solving and Collaboration: Employ logical thinking and problem-solving skills, and collaborate effectively. Client Interfacing: Demonstrate strong client interfacing skills. Project and Team Management: Manage projects and teams efficiently.
Required Skills: Skills: Hands-on expertise in NLP, Computer Vision, programming, and related concepts. Leadership: Capable of leading and mentoring a team of AI engineers and researchers, setting strategies for AI model development and deployment, and ensuring these align with business goals. Technical Proficiency: Proficient in implementing and optimizing advanced AI solutions using Deep Learning and NLP, with tools such as TensorFlow, PyTorch, Spark, and Keras. LLM Experience: Experience with Large Language Models such as GPT-3.5, GPT-4, Llama, Gemini, and Mistral, along with experience in LLM integration frameworks like LangChain, LlamaIndex, and AgentGPT. Deep Learning OCR: Extensive experience implementing solutions using Deep Learning OCR algorithms. Neural Networks: Working knowledge of Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs). Python Expertise: Strong coding skills in Python, including related frameworks, best practices, and design patterns. Preferred Knowledge: Familiarity with word embeddings, transformer models, and image/text generation and processing. Deployment: Experience deploying AI/ML solutions as a service or REST API endpoints on Cloud or Kubernetes. Development Methodologies: Proficient in development methodologies and writing unit tests in Python. Cloud: Knowledge of cloud computing platforms and services, such as AWS, Azure, or Google Cloud. Experience with information security and secure development best practices.
Qualifications: Bachelor's or higher degree in Computer Science, Engineering, Mathematics, Statistics, Physics, or a related field. 8+ years in IT with a focus on AI/ML practices and background. Apply
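
For illustration only: a minimal sketch of exposing a trained model as a REST endpoint with FastAPI, as the deployment requirement above describes; the tiny inline model is a stand-in for a real persisted artifact, and the module name in the run command is a placeholder.

```python
# Minimal sketch of serving an ML model over HTTP with FastAPI.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

# Stand-in model trained on toy data; in practice you would load a saved artifact instead.
model = LogisticRegression().fit(np.array([[0.0], [1.0], [2.0], [3.0]]), [0, 0, 1, 1])

app = FastAPI()

class Features(BaseModel):
    inputs: list[float]          # one row of feature values

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.inputs])[0]
    return {"prediction": int(prediction)}

# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000   (module name "serve" is a placeholder)
```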

Posted 3 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Current Openings: Software Engineer - AI/ML Experience: 2 - 4 years Bhubaneswar Apply About The Job Featured As an AI/ML Engineer, you will be responsible for designing, validating, and integrating cutting-edge machine learning models and algorithms. Collaborate closely with cross-functional teams, including data scientists, to recognize and establish project objectives. Oversee data infrastructure maintenance, ensuring streamlined and scalable data operations. Stay updated with advancements in AI and propose their integration for operational enhancement. Effectively convey detailed data insights to non-technical stakeholders. Uphold stringent data privacy and security protocols. Engage in the full lifecycle of AI projects, spanning from ideation through deployment and continuous upkeep.
Core Responsibilities: Develop, validate, and implement machine learning models and algorithms. Collaborate with data scientists and other stakeholders to understand and define project goals. Maintain data infrastructure and ensure scalability and efficiency of data-related operations. Stay abreast of the latest developments in the field of AI/ML and recommend ways to implement them in our operations. Communicate complex data findings in a clear and understandable manner to non-technical stakeholders. Adhere to data privacy and security guidelines. Participate in the entire AI project lifecycle, from concept to deployment and maintenance.
Required Skills: Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals. Solid theoretical understanding of machine learning. Experience with mapping NLP models (BERT and GPT) to accelerators and awareness of trade-offs across memory, bandwidth, and compute. Experience with vector databases like ChromaDB, Pinecone, PGVector, or similar. Experience with Large Language Models such as GPT-3.5, GPT-4, GPT-4o, Llama, Gemini, Mistral, etc. Experience with LLM integration frameworks like LangChain, LlamaIndex, AgentGPT, etc. Experience with ML models from definition to deployment, including training, quantization, sparsity, model preprocessing, and deployment. Proficiency in Python development in a Linux environment and using standard development tools. Experience with deep learning frameworks (such as PyTorch, TensorFlow, Keras, Spark). Working knowledge of Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs). Experience in training, tuning, and deploying ML models for Computer Vision (e.g., ResNet) and/or Recommendation Systems (e.g., DLRM). Experience deploying ML workloads on distributed systems. Self-motivated team player with a strong sense of ownership and leadership. Strong verbal, written, and organizational skills for effective communication and documentation. Research background with a publication record. Work experience at a cloud provider or AI compute/sub-system company. Knowledge of cloud computing platforms and services, such as AWS, Azure, or Google Cloud. Experience with information security and secure development best practices.
Qualifications: Bachelor's or higher degree in Computer Science, Engineering, Mathematics, Statistics, Physics, or a related field. 2-4 years of hands-on experience in AI/ML. Apply
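
For illustration only: a minimal sketch of storing and querying documents in a local Chroma collection, assuming the chromadb package and its default embedding function; the sample tickets are placeholders.

```python
# Minimal sketch of semantic lookup over a small in-memory Chroma collection.
import chromadb

client = chromadb.Client()                              # in-memory instance
collection = client.create_collection(name="tickets")

collection.add(
    ids=["t1", "t2"],
    documents=[
        "Login page times out on mobile devices.",
        "Export to CSV drops the header row.",
    ],
)

results = collection.query(query_texts=["mobile login issue"], n_results=1)
print(results["documents"][0])                          # nearest matching document(s)
```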

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

GE Healthcare Healthcare Information Technology Category Digital Technology / IT Mid-Career Job Id R4026614 Relocation Assistance No Location Bengaluru, Karnataka, India, 560066
Job Description Summary: We are seeking a highly skilled and innovative AI Engineer with expertise in both traditional Artificial Intelligence and emerging Generative AI technologies. In this role, you will be responsible for designing, developing, and deploying intelligent systems that leverage machine learning, deep learning, and generative models to solve complex problems. You will work across the AI lifecycle—from data engineering and model development to deployment and monitoring—while also exploring GenAI applications and Agentic AI and developing agentic platforms. The ideal candidate combines strong technical acumen with a passion for experimentation, rapid prototyping, and delivering scalable AI solutions in real-world environments. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.
Job Description: Roles and Responsibilities: In this role, you will: Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models). Design and implement machine learning models for classification, regression, clustering, and recommendation tasks. Build and maintain scalable AI pipelines for data ingestion, training, evaluation, and deployment. Collaborate with cross-functional teams to understand business needs and translate them into AI solutions. Ensure model performance, fairness, and explainability through rigorous testing and validation. Deploy models to production using MLOps tools and monitor their performance over time. Stay current with the latest research and trends in AI/ML and GenAI and evaluate their applicability to business problems. Document models, experiments, and workflows for reproducibility and knowledge sharing.
Technical Skill Set: Cloud & Infrastructure (AWS): Amazon SageMaker – model training, tuning, deployment, and MLOps. Amazon Bedrock – serverless GenAI model access and orchestration. SageMaker JumpStart – pre-trained models and GenAI templates. Prompt engineering and fine-tuning of LLMs using SageMaker or Bedrock. Programming & Scripting: Python – primary language for AI/ML development, data processing, and automation.
Education Qualification: Bachelor's degree in engineering with a minimum of four years of experience in relevant technologies.
Desired Characteristics: Technical Expertise: GenAI Platforms & Models: Familiarity with LLMs such as Claude (Anthropic), LLaMA (Meta), Gemini (Google), Mistral, and Falcon. Experience with APIs: Amazon Bedrock. Understanding of model types: encoder-decoder, decoder-only, diffusion models. Design, develop, and deploy agent-based AI systems that exhibit autonomous decision-making. Integrate Generative AI (LLMs, diffusion models) into real-world applications. Prompt Engineering & Fine-Tuning: Prompt design for zero-shot, few-shot, and chain-of-thought reasoning. Fine-tuning and parameter-efficient tuning (LoRA, PEFT). Retrieval-Augmented Generation (RAG) design and implementation. System Integration & Architecture: Event-driven and serverless architectures (e.g., AWS Lambda, EventBridge). Development Frameworks: LangChain, LlamaIndex. Vector databases: FAISS, Pinecone, Weaviate, Amazon OpenSearch. LangGraph, LangChain. Cloud & DevOps: AWS (Bedrock, SageMaker, Lambda, S3), Azure (OpenAI, Functions), GCP (Vertex AI). CI/CD pipelines for GenAI workflows. Security & Compliance: Data privacy and governance (GDPR, HIPAA). Model safety: content filtering, moderation, hallucination control. Monitoring & Optimization: Model performance tracking (latency, cost, accuracy). Logging and observability (CloudWatch, Prometheus, Grafana). Cost optimization strategies for GenAI inference. Collaboration & Business Alignment: Working with product, legal, and compliance teams. Translating business requirements into GenAI use cases. Creating PoCs and scaling to production.
Business Acumen: Demonstrates the initiative to explore alternate technology and approaches to solving problems. Skilled in breaking down problems, documenting problem statements, and estimating efforts. Has the ability to analyze the impact of technology choices. Skilled in negotiation to align stakeholders and communicate a single synthesized perspective to the scrum team. Balances value propositions for competing stakeholders. Demonstrates knowledge of the competitive environment. Demonstrates knowledge of technologies in the market to help make buy-vs-build recommendations, scope MVPs, and drive market timing decisions.
Leadership: Influences through others; builds direct and "behind the scenes" support for ideas. Pre-emptively sees downstream consequences and effectively tailors influencing strategy to support a positive outcome. Able to verbalize what is behind decisions and downstream implications. Continuously reflects on successes and failures to improve performance and decision-making. Understands when change is needed. Participates in technical strategy planning.
Personal Attributes: Able to effectively direct and mentor others in critical thinking skills. Proactively engages with cross-functional teams to resolve issues and design solutions using critical thinking, analysis skills, and best practices. Finds important patterns in seemingly unrelated information. Influences and energizes others toward the common vision and goal. Maintains excitement for a process and drives toward new ways of meeting the goal even when odds and setbacks render one path impassable. Innovates and integrates new processes and/or technology to significantly add value to GE Healthcare. Identifies how the cost of change weighs against the benefits and advises accordingly. Proactively learns new solutions and processes to address seemingly unanswerable problems.
Inclusion and Diversity: GE Healthcare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus; and drive ownership – always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you'd expect from an organization with global strength and scale, and you'll be surrounded by career opportunities in a culture that fosters care, collaboration and support.
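
For illustration only: a minimal sketch of invoking a foundation model through Amazon Bedrock, assuming boto3 with valid AWS credentials and that the example model ID is enabled in the target account and region; the Converse API shape shown here is an assumption that may vary with SDK version.

```python
# Minimal sketch of a single Bedrock chat turn via the Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "List three HIPAA-relevant risks of logging LLM prompts."}],
    }],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```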
#LI-MA6 Additional Information Relocation Assistance Provided: No

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Title: AI Engineer / Developer Location: Gurugram Job Type: Full-Time Shift: EST shift (7pm IST - 4am IST)
Company Overview: Serigor Inc is a Maryland-based, CMMI Level 3, Woman-Owned Small Business (WOSB) specializing in IT Services, IT Staff Augmentation, Government Solutions, and Global Delivery. Founded in 2009, we are a leading IT services firm that delivers deep expertise, objective insights, a tailored approach, and unparalleled collaboration to help US government agencies and Fortune 500 companies confidently face the future while increasing the efficiency of their current operations. Our professional services primarily focus on our ITS services portfolio, including but not limited to Managed IT Services, Enterprise Application Development, Testing and Management Consulting, Salesforce, Cloud and Infrastructure Consulting, DevOps Consulting, Migration Consulting, Service Management, Custom Implementation, IT Operations & Maintenance, and Remote Application & Infrastructure Monitoring and Management practices.
Position Overview: We are seeking a talented and self-driven AI Engineer / Developer to join our team and contribute to cutting-edge projects involving large language models (LLMs) and document intelligence. This role offers the flexibility to work remotely and can be structured as full-time or part-time, depending on your availability and interest. You will play a critical role in leveraging generative AI to extract structured insights from unstructured content, refine prompt engineering strategies, and build functional prototypes that bridge AI outputs with real-world applications. If you're passionate about NLP, LLMs, and building AI-first solutions, we want to hear from you.
Key Responsibilities: Document Intelligence: Leverage large language models (e.g., OpenAI GPT, Anthropic Claude) to analyze and extract meaningful information from various types of documents, including PDFs, contracts, compliance records, and reports. Data Structuring: Convert natural language outputs into structured data formats such as JSON, tables, custom templates, or semantic tags for downstream integration. Prompt Engineering: Design, write, and iterate on prompts to ensure high-quality, repeatable, and reliable responses from AI models. Tooling & Prototyping: Develop lightweight tools, scripts, and workflows (using Python or similar) to automate, visualize, and test AI interactions. Model Evaluation: Run controlled experiments to evaluate the performance of AI-generated outputs, identifying gaps, edge cases, and potential improvements. Pipeline Integration: Collaborate with software engineers and product teams to integrate LLM pipelines into broader applications and systems. Traceability & Transparency: Ensure each piece of extracted information can be traced back to its original source within the document for auditing and validation purposes.
Required Skills & Qualifications: Experience: Minimum of 3 years in AI/ML development, with a strong focus on natural language processing (NLP), document analysis, or conversational AI. LLM Expertise: Hands-on experience working with large language models (e.g., GPT-4, Claude, Mistral) and prompt-based interactions. Programming Skills: Proficient in Python and experienced with modern AI frameworks such as LangChain, Hugging Face Transformers, or spaCy. Document Processing: Knowledge of embeddings, chunking strategies, and vectorization techniques for efficient document indexing and retrieval. Vector Databases: Familiarity with FAISS, Chroma, Pinecone, or similar vector DBs for storing and querying embedding data. Analytical Mindset: Strong ability to design, run, and interpret structured tests to measure and enhance the accuracy of AI outputs.
Preferred Qualifications: RAG Workflows: Experience implementing Retrieval-Augmented Generation (RAG) systems for dynamic document querying and synthesis. Domain Exposure: Familiarity with legal, regulatory, or compliance-based documents and the unique challenges they pose. LLMOps & Deployment: Exposure to deploying AI models or pipelines, including experience with web APIs, LLMOps tooling, or cloud-native AI environments.
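
For illustration only: a minimal sketch of the extract-to-structured-JSON-with-traceability pattern described above; `call_llm` is a placeholder for whichever model client is used, and the fields in the template are hypothetical.

```python
# Minimal sketch: build an extraction prompt that demands JSON plus verbatim
# evidence quotes, then validate the model's reply before downstream use.
import json

PROMPT_TEMPLATE = """Extract the contract parties and effective date from the text below.
Respond with JSON only, using this shape:
{{"parties": [string], "effective_date": string, "evidence": [string]}}
Each "evidence" entry must be a verbatim quote from the text (for traceability).

Text:
{document}
"""

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")   # placeholder

def extract(document: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(document=document))
    data = json.loads(raw)                                        # fails fast on malformed output
    missing = {"parties", "effective_date", "evidence"} - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    # traceability check: every quoted evidence string must appear in the source document
    if not all(quote in document for quote in data["evidence"]):
        raise ValueError("evidence quote not found in source document")
    return data
```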

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: Strategic and leadership-level GenAI skills spanning AI solution architecture, model evaluation and selection, enterprise-grade RAG systems, security and governance, cost optimization, fine-tuning, multi-agent systems, and team leadership (detailed under Responsibilities below).
Responsibilities (Strategic & Leadership-Level GenAI Skills): AI Solution Architecture: Designing scalable GenAI systems (e.g., RAG pipelines, multi-agent systems). Choosing between hosted APIs vs. open-source models. Architecting hybrid systems (LLMs + traditional software). Model Evaluation & Selection: Benchmarking models (e.g., GPT-4, Claude, Mistral, LLaMA). Understanding trade-offs: latency, cost, accuracy, context length. Using tools like LM Evaluation Harness, the OpenLLM Leaderboard, etc. Enterprise-Grade RAG Systems: Designing Retrieval-Augmented Generation pipelines. Using vector databases (Pinecone, Weaviate, Qdrant) with LangChain or LlamaIndex. Optimizing chunking, embedding strategies, and retrieval quality. Security, Privacy & Governance: Implementing data privacy, access control, and audit logging. Understanding risks: prompt injection, data leakage, model misuse. Aligning with frameworks like NIST AI RMF, the EU AI Act, or ISO/IEC 42001. Cost Optimization & Monitoring: Estimating and managing GenAI inference costs. Using observability tools (e.g., Arize, WhyLabs, PromptLayer). Token usage tracking and prompt optimization.
Advanced Technical Skills: Model Fine-Tuning & Distillation: Fine-tuning open-source models using PEFT, LoRA, QLoRA. Knowledge distillation for smaller, faster models. Using tools like Hugging Face, Axolotl, or DeepSpeed. Multi-Agent Systems: Designing agent workflows (e.g., AutoGen, CrewAI, LangGraph). Task decomposition, memory, and tool orchestration. Toolformer & Function Calling: Integrating LLMs with external tools, APIs, and databases. Designing tool-use schemas and managing tool routing.
Team & Product Leadership: GenAI Product Thinking: Identifying use cases with high ROI. Balancing feasibility, desirability, and viability. Leading GenAI PoCs and MVPs. Mentoring & Upskilling Teams: Training developers on prompt engineering, LangChain, etc. Establishing GenAI best practices and code reviews. Leading internal hackathons or innovation sprints.

Posted 3 weeks ago

Apply

5.0 years

9 - 15 Lacs

Ahmedabad, Gujarat, India

On-site

Job Overview: We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Required Skills & Qualifications: Bachelor's or Master's in Computer Science, AI, Machine Learning, or a related field. 5+ years of hands-on experience in AI/ML solution development. Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, and PEFT. Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG). Proficient in key AI libraries and frameworks: LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers. NLP: spaCy, NLTK. Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2. MLOps: MLflow, FastAPI, Docker, Git. Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation. Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure. Strong communication skills and the ability to convert business problems into technical solutions.
Preferred Qualifications: Experience building multimodal systems (text + image, etc.). Practical experience with agent frameworks for autonomous or goal-directed AI. Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Key Responsibilities: Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications. Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality. Build NLP solutions for Q&A, summarization, information extraction, text classification, and more. Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks. Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines. Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features. Optimize models for performance, cost-efficiency, and low latency in production. Continuously evaluate new AI research, tools, and frameworks and apply them where relevant. Mentor junior AI engineers and contribute to internal AI best practices and documentation.
Skills: Artificial Intelligence (AI), Generative AI, Machine Learning (ML), Large Language Models (LLM), Prompt Engineering, Retrieval-Augmented Generation (RAG), Natural Language Processing (NLP), Computer Vision, AI Agents, Vector Databases, Python, Docker, and APIs
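
For illustration only: a minimal sketch of attaching LoRA adapters with Hugging Face PEFT, as referenced above; the base checkpoint and target modules are examples and vary by architecture.

```python
# Minimal sketch of parameter-efficient fine-tuning setup with LoRA adapters.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"                 # example base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,                                              # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],              # attention projections (model-specific)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                    # only the small adapter weights will train
# From here, fine-tune with transformers.Trainer or TRL's SFTTrainer as usual.
```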

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Summary: AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms. Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions.
Job Title: Generative AI Developer
Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop, and deploy AI and Generative AI models that generate high-quality content, such as text, images, chatbots, etc. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.
Key Responsibilities: Deliver large-scale AI/Gen AI projects across multiple industries and domains. Liaise with on-site and client teams to understand various business problem statements and project requirements. Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation. Brainstorm, build, and improve AI/Gen AI models developed by the team and identify scope for model improvements and best practices. Assist and participate in pre-sales, client pursuits, and proposals. Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications: 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP. Bachelor's or Master's degree in a quantitative field. Must have strong hands-on experience with programming languages like Python, CUDA, and SQL, and frameworks such as TensorFlow, PyTorch, and Keras. Hands-on experience with top LLM models like OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows. Well versed in GANs and the Transformer architecture, knowledgeable about diffusion models, and up to date with new research and progress in the field of Gen AI. Should follow research papers, and comprehend and innovate/present the best approaches/solutions related to Generative AI components. Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI). Knowledge of Vector DBs and Neo4j/relevant Graph DBs. Familiar with Docker containerization, Git, etc. AI/Cloud certification from a premier institute is preferred.
Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303628

Posted 3 weeks ago

Apply

3.0 - 6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Position Summary: AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms. Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions.
Job Title: Generative AI Developer
Job Summary: We are looking for a Generative AI Developer with hands-on experience to design, develop, and deploy AI and Generative AI models that generate high-quality content, such as text, images, chatbots, etc. The ideal candidate will have expertise in deep learning, natural language processing, and computer vision.
Key Responsibilities: Deliver large-scale AI/Gen AI projects across multiple industries and domains. Liaise with on-site and client teams to understand various business problem statements and project requirements. Work with a team of Data Engineers, ML/AI Engineers, Prompt Engineers, and other Data & AI professionals to deliver projects from inception to implementation. Brainstorm, build, and improve AI/Gen AI models developed by the team and identify scope for model improvements and best practices. Assist and participate in pre-sales, client pursuits, and proposals. Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications: 3-6 years of relevant hands-on experience in Generative AI, Deep Learning, or NLP. Bachelor's or Master's degree in a quantitative field. Must have strong hands-on experience with programming languages like Python, CUDA, and SQL, and frameworks such as TensorFlow, PyTorch, and Keras. Hands-on experience with top LLM models like OpenAI GPT-3.5/4, Google Gemini, AWS Bedrock, LLaMA 3.0, and Mistral, along with RAG and agentic workflows. Well versed in GANs and the Transformer architecture, knowledgeable about diffusion models, and up to date with new research and progress in the field of Gen AI. Should follow research papers, and comprehend and innovate/present the best approaches/solutions related to Generative AI components. Knowledge of hyperscaler offerings (NVIDIA, AWS, Azure, GCP, Oracle) and Gen AI tools (Copilot, Vertex AI). Knowledge of Vector DBs and Neo4j/relevant Graph DBs. Familiar with Docker containerization, Git, etc. AI/Cloud certification from a premier institute is preferred.
Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits To Help You Thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 303628

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

DXFactor is looking for an AI/ML Engineer (4 to 6 years). DXFactor is a US-based tech company working with Fortune 500 customers across the globe. We have our presence in: US, India (Ahmedabad, Bangalore). Website: www.DXFactor.com Location: SG Highway, Ahmedabad (Work from Office) Employment Type: Full Time
Job Summary: We are seeking an experienced AI/ML Engineer with a solid foundation in machine learning and artificial intelligence and at least one year of hands-on experience in Generative AI. The ideal candidate will have strong proficiency in Python and LLMs, and in emerging techniques such as Retrieval-Augmented Generation (RAG), model fine-tuning, and agentic AI systems. You will be responsible for building innovative solutions leveraging the latest in AI technology to solve real-world problems.
Key Responsibilities: Design, develop, and deploy AI/ML solutions with an emphasis on generative models and LLMs. Implement and optimize RAG pipelines for knowledge-aware AI systems. Fine-tune and customize models (e.g., LLaMA, Mistral, GPT, etc.) for specific domains or applications. Build and manage agentic AI systems capable of autonomous planning and decision-making. Work closely with cross-functional teams to identify use cases and deliver scalable AI-powered features. Stay up to date with the latest developments in AI/ML and contribute to internal knowledge sharing.
Required Skills & Qualifications: 4+ years of experience in AI/ML development. Minimum 1 year of hands-on experience in Generative AI projects. Proficient in Python and common ML libraries (e.g., PyTorch, scikit-learn). Strong understanding of Large Language Models (LLMs) and transformer architectures. Experience building RAG pipelines and integrating vector search systems. Hands-on experience with model fine-tuning using LoRA, PEFT, or Hugging Face Transformers. Experience developing agentic AI systems using frameworks like LangChain, AutoGen, or custom orchestration logic. Experience working with cloud platforms.
Tools & Technologies: Frameworks & Libraries: LangChain, LlamaIndex, AutoGen, PyTorch, TensorFlow. Model Providers: OpenAI, Anthropic, Llama, Mistral. Vector Stores: FAISS, Pinecone, Milvus. APIs & Services: REST, GraphQL. DevOps & Infra: Docker, AWS/GCP/Azure.
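
For illustration only: a minimal sketch of running a small open-weight model locally with the Hugging Face pipeline API; the checkpoint is an example and inference is slow without a GPU.

```python
# Minimal sketch of local text generation with an open-weight checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

out = generator(
    "Explain retrieval-augmented generation in one sentence.",
    max_new_tokens=60,
    do_sample=False,            # deterministic output for a quick sanity check
)
print(out[0]["generated_text"])
```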

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies