
8 BLEU Jobs

Set Up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

4.0 - 8.0 years

0 Lacs

Haryana

On-site

LuminAI is seeking an experienced AI Engineer to join their team on a full-time basis with an immediate start. As a healthcare-focused AI startup in stealth mode, our mission is to enhance the accessibility and universality of healthcare through cutting-edge AI technology. We are dedicated to developing the next generation of AI models that will redefine standards and revolutionize the healthcare industry. Based in Germany with offices in India, we are now expanding our operations in India and looking for a skilled AI Engineer to join us promptly. If you have 4-5 years of experience, can build transformer models hands-on, and have experience with agentic workflows, we would like to connect with you!

In this role, you will develop state-of-the-art transformer architectures that drive AI innovation in the healthcare sector. Initially, you will be deeply involved in coding, debugging, and optimizing models, as this is a hands-on position. As we advance, you will have the opportunity to recruit, mentor, and lead a high-caliber team of engineers and researchers, fostering a culture of innovation and excellence. Your contributions will establish the company as a trailblazer in AI-driven healthcare solutions, shaping the future of the industry.

**Tasks:**
- Develop cutting-edge transformer architectures to advance AI innovation in healthcare.
- Engage in hands-on coding, debugging, and model optimization in the initial stages.
- Lead and mentor a team of engineers and researchers to promote innovation and excellence.
- Contribute to positioning the company as a pioneer in AI-driven healthcare solutions.

**Requirements (MUST HAVE):**
- Practical experience in fine-tuning LLMs, such as LLaMA, and adapting them for specific use cases.
- Proficiency in vector databases, tokenization, embeddings, transformer blocks, and creating LLM heads.
- Ability to optimize models using techniques like quantization and distillation for efficient deployment.
- Proficiency in PyTorch and TensorFlow.
- Familiarity with metrics like BLEU, ROUGE, etc., for assessing model performance.
- Expertise in building solid Retrieval-Augmented Generation (RAG) systems on top of fine-tuned models.

**Bonus:**
- Experience in distributed training, working with healthcare datasets, or deploying models in production environments.

**Benefits:**
- Work at the forefront of AI technology as part of a mission-driven team reshaping healthcare.
- Join an international startup in its early stages with ambitious growth plans.
- Potential opportunities for travel or relocation to support global expansion.
- Competitive compensation and benefits package.
- Hybrid work flexibility allowing a blend of remote and in-office work.

We are seeking candidates with strong technical abilities who are available to start immediately. If you do not meet our mandatory requirements, we kindly request that you refrain from applying. Our recruitment process moves swiftly and includes a qualifying call, a business discussion, and an in-depth technical assessment.
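The hands-on LLM fine-tuning this listing asks for often takes the form of parameter-efficient adaptation of an open model. Below is a minimal, illustrative sketch using Hugging Face Transformers and PEFT; the base model name and LoRA hyperparameters are assumptions for demonstration, not details from the posting.

```python
# Minimal LoRA fine-tuning setup (illustrative only; model name and
# hyperparameters are placeholders, not taken from the job posting).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed example; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with low-rank adapters so only a small fraction
# of parameters is trained during domain-specific fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training itself would then use transformers.Trainer or a custom loop
# over a tokenized domain dataset (omitted here).
```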

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 27 Lacs

Hyderabad

Work from Office

NLP & LLM Specialist (Immediate Joiners only) - transformer-based models (GPT, BERT, T5, RoBERTa) and NLP tasks (focus on text generation, summarization, question answering, classification); Python NLP libraries (Hugging Face, SpaCy, NLTK); chain-of-thought (CoT) prompting; prompt engineering; prompt templates. Required candidate profile: fine-tuning pre-trained models, model expertise, prompt engineering, instruction-based prompting, zero-shot/few-shot/many-shot learning, model deployment, CoT, model evaluation, bias awareness, collaboration.
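As an illustration of the prompt-template and few-shot skills in this profile, here is a small, framework-free sketch; the task, labels, and example tickets are invented for demonstration.

```python
# Illustrative few-shot prompt template for a classification task.
# The task, labels, and examples below are hypothetical, not from the posting.
FEW_SHOT_TEMPLATE = """Classify the support ticket as one of: billing, technical, other.

Ticket: "I was charged twice this month."
Label: billing

Ticket: "The app crashes when I upload a file."
Label: technical

Ticket: "{ticket}"
Label:"""

def build_prompt(ticket: str) -> str:
    """Fill the reusable template with a new input so the same few-shot
    examples guide the model across different tickets."""
    return FEW_SHOT_TEMPLATE.format(ticket=ticket)

if __name__ == "__main__":
    print(build_prompt("How do I reset my password?"))
```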

Posted 2 weeks ago

Apply

7.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: Senior LLM Engineer
Experience: 7+ years (with a minimum of 2+ years in transformer-based models & NLP)
Location: Hyderabad
Job Type: Contract-to-Hire

Job Description

We are seeking a highly skilled Senior LLM Engineer with strong expertise in Large Language Models (LLMs) and advanced Natural Language Processing (NLP) tasks. The ideal candidate will have hands-on experience working with transformer-based architectures and the ability to design, fine-tune, and deploy state-of-the-art NLP solutions.

Key Responsibilities
- Design, develop, and optimize LLM-powered applications for text generation, summarization, question answering, classification, and translation.
- Fine-tune and adapt pre-trained transformer models (GPT, BERT, T5, RoBERTa, etc.) to domain-specific datasets.
- Implement prompt engineering techniques including zero-shot, few-shot, and many-shot learning to maximize model accuracy.
- Create reusable prompt templates for diverse use cases and ensure adaptability across contexts.
- Apply chain-of-thought (CoT) prompting for complex reasoning and problem-solving tasks.
- Continuously evaluate, test, and refine prompts to ensure output consistency and reliability.
- Mitigate issues related to biases, hallucinations, and knowledge cutoffs in model outputs.
- Collaborate with cross-functional teams to design and deploy scalable NLP/LLM solutions into production.
- Ensure high-quality deliverables using standard NLP evaluation metrics (BLEU, ROUGE, etc.).

Must-Have Skills
- 7+ years of overall experience, with at least 2+ years in transformer-based LLMs & NLP tasks.
- Deep expertise in transformer architectures, attention mechanisms, tokenization, embedding layers, and context windows.
- Proficiency in Python and strong hands-on experience with NLP libraries/frameworks (Hugging Face, SpaCy, NLTK).
- Strong understanding of instruction-based prompting and zero/few/many-shot learning techniques.
- Proven track record in training, fine-tuning, and deploying LLMs for production use cases.
- Experience with text evaluation metrics (BLEU, ROUGE, etc.).
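For the evaluation-metric requirement (BLEU, ROUGE), a minimal sketch with the Hugging Face `evaluate` library might look like the following; the predictions and references are toy examples, not real model output.

```python
# Minimal BLEU/ROUGE evaluation sketch using the Hugging Face `evaluate` library.
# Predictions and references below are toy examples, not real model output.
import evaluate

predictions = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # BLEU allows multiple references per prediction

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

bleu_score = bleu.compute(predictions=predictions, references=references)
rouge_score = rouge.compute(predictions=predictions,
                            references=[r[0] for r in references])

print("BLEU:", bleu_score["bleu"])
print("ROUGE-L:", rouge_score["rougeL"])
```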

Posted 2 weeks ago

Apply

7.0 - 23.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Generative AI Lead, you will be responsible for spearheading the design, development, and implementation of cutting-edge GenAI solutions within enterprise-grade applications. Your role will draw on your expertise in Large Language Models (LLMs), prompt engineering, and scalable AI system architecture, coupled with hands-on experience in MLOps, cloud technologies, and data engineering.

Your primary responsibilities will include designing and deploying scalable and secure GenAI solutions utilizing LLMs such as GPT, Claude, LLaMA, or Mistral. You will lead the architecture of Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, Weaviate, FAISS, or ElasticSearch. Additionally, you will work on prompt engineering and evaluation frameworks, and collaborate with cross-functional teams to integrate GenAI into existing workflows and applications.

You will also develop reusable GenAI modules for functions such as summarization, Q&A bots, and document chat, while leveraging cloud-native platforms such as AWS Bedrock, Azure OpenAI, and Vertex AI for deployment and optimization. You will ensure robust monitoring and observability across GenAI deployments, apply MLOps practices for CI/CD, model versioning, and validation, and research emerging GenAI trends.

To be successful in this role, you must possess at least 8 years of overall AI/ML experience, with at least 3 years focused on LLMs/GenAI. Strong programming skills in Python and proficiency in cloud platforms like AWS, Azure, and GCP are essential. You should also have experience in designing and deploying RAG pipelines, summarization engines, and chat-based applications, along with familiarity with MLOps tools and evaluation metrics for GenAI systems.

Preferred qualifications include experience with fine-tuning open-source LLMs, knowledge of multi-modal AI, familiarity with domain-specific LLMs, and a track record of published work or contributions in the GenAI field.

In summary, as a Generative AI Lead, you will play a pivotal role in driving innovation and excellence in the development and deployment of advanced GenAI solutions, making a significant impact on enterprise applications and workflows.
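To ground the RAG-pipeline responsibility, here is a compact sketch of the retrieval step using FAISS and sentence-transformers (one possible stack among those the listing names); the documents, embedding model, and query are illustrative placeholders.

```python
# Compact retrieval step of a RAG pipeline: embed documents, index them in FAISS,
# and fetch the passages most relevant to a query. Documents, model, and query
# are illustrative placeholders.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Patients should fast for eight hours before the blood test.",
    "The clinic is open Monday through Friday, 9am to 5pm.",
    "Insurance claims are processed within ten business days.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vectors = encoder.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])  # exact L2 search over embeddings
index.add(doc_vectors)

query = "How long before the test should I stop eating?"
query_vec = encoder.encode([query], convert_to_numpy=True).astype("float32")
_, ids = index.search(query_vec, 2)  # top-2 nearest documents

context = "\n".join(documents[i] for i in ids[0])
print(context)  # this context would then be inserted into the LLM prompt for generation
```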

Posted 1 month ago

Apply

3.0 - 5.0 years

20 - 35 Lacs

Pune

Hybrid

Role Overview:
Monitor, evaluate, and optimize AI/LLM workflows in production environments. Ensure reliable, efficient, and high-quality AI system performance by building out an LLM Ops platform that is self-serve for the engineering and data science departments.

Key Responsibilities:
- Collaborate with data scientists and software engineers to integrate an LLM Ops platform (Opik by CometML) for existing AI workflows.
- Identify valuable performance metrics (accuracy, quality, etc.) for AI workflows and create ongoing sampling evaluation processes using the LLM Ops platform that alert when metrics drop below thresholds.
- Collaborate across teams to create datasets and benchmarks for new AI workflows.
- Run experiments on datasets and optimize performance via model changes and prompt adjustments.
- Debug and troubleshoot AI workflow issues.
- Optimize inference costs and latency while maintaining accuracy and quality.
- Develop automations for LLM Ops platform integration to empower data scientists and software engineers to self-serve integration with the AI workflows they build.

Requirements:
- Strong Python programming skills.
- Experience with generative AI models and tools (OpenAI, Anthropic, Bedrock, etc.).
- Knowledge of fundamental statistical concepts and tools in data science, such as heuristic and non-heuristic measurements in NLP (BLEU, WER, sentiment analysis, LLM-as-judge, etc.), standard deviation, and sampling rate, plus a high-level understanding of how modern AI models work (knowledge cutoffs, context windows, temperature, etc.).
- Familiarity with AWS.
- Understanding of prompt engineering concepts.
- People skills: you will be expected to frequently collaborate with other teams to help perfect their AI workflows.

Experience Level:
3-5 years of experience in LLM/AI Ops, MLOps, Data Science, or MLE.
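One responsibility above is a sampled evaluation loop that alerts when a quality metric drops below a threshold. The listing names Opik by CometML for this; as a neutral illustration of the underlying logic only, here is a framework-free sketch in which the metric, threshold, sample rate, and traffic data are all invented.

```python
# Framework-free sketch of a sampled evaluation with a threshold alert.
# In practice this logic would sit behind an LLM Ops platform such as the one
# named in the listing; the metric, threshold, and records here are invented.
import random

QUALITY_THRESHOLD = 0.80   # assumed minimum acceptable average score
SAMPLE_RATE = 0.10         # evaluate roughly 10% of production traffic

def score_response(record: dict) -> float:
    """Placeholder scorer; a real system might call an LLM-as-judge
    or compute a heuristic metric like ROUGE against a reference."""
    return record["score"]

def sample_and_check(records: list[dict]) -> None:
    sampled = [r for r in records if random.random() < SAMPLE_RATE]
    if not sampled:
        return
    avg = sum(score_response(r) for r in sampled) / len(sampled)
    if avg < QUALITY_THRESHOLD:
        # A real integration would page on-call or post to a channel here.
        print(f"ALERT: sampled quality {avg:.2f} fell below {QUALITY_THRESHOLD}")
    else:
        print(f"OK: sampled quality {avg:.2f}")

if __name__ == "__main__":
    fake_traffic = [{"score": random.uniform(0.6, 1.0)} for _ in range(500)]
    sample_and_check(fake_traffic)
```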

Posted 2 months ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru, Karnataka, India

On-site

Roles and Responsibilities:
- Model Expertise: Work with transformer models such as GPT, BERT, T5, RoBERTa, and others for a variety of NLP tasks, including text generation, summarization, classification, and translation.
- Model Fine-Tuning: Fine-tune pre-trained models on domain-specific datasets to improve performance for specific applications such as summarization, text generation, and question answering.
- Prompt Engineering: Craft clear, concise, and contextually relevant prompts to guide transformer-based models towards generating desired outputs for specific tasks. Iterate on prompts to optimize model performance.
- Instruction-Based Prompting: Implement instruction-based prompting to guide the model toward achieving specific goals, ensuring that the outputs are contextually accurate and aligned with task objectives.
- Zero-shot, Few-shot, Many-shot Learning: Utilize zero-shot, few-shot, and many-shot learning techniques to improve model performance without the need for full retraining.
- Chain-of-Thought (CoT) Prompting: Implement Chain-of-Thought (CoT) prompting to guide models through complex reasoning tasks, ensuring that the outputs are logically structured and provide step-by-step explanations.
- Model Evaluation: Use evaluation metrics such as BLEU, ROUGE, and other relevant metrics to assess and improve the performance of models for various NLP tasks.
- Model Deployment: Support the deployment of trained models into production environments and integrate them into existing systems for real-time applications.
- Bias Awareness: Be aware of and mitigate issues related to bias, hallucinations, and knowledge cutoffs in LLMs, ensuring high-quality and reliable outputs.
- Collaboration: Collaborate with cross-functional teams including engineers, data scientists, and product managers to deliver efficient and scalable NLP solutions.

Must-Have Skills:
- Overall 7 years, with at least 5+ years of experience working with transformer-based models and NLP tasks, with a focus on text generation, summarization, question answering, classification, and similar tasks.
- Expertise in transformer models like GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), RoBERTa, and similar models.
- Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
- Experience in fine-tuning pre-trained models on domain-specific datasets for tasks such as text generation, summarization, question answering, classification, and translation.
- Familiarity with concepts like attention mechanisms, context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs.
- Experience in instruction-based prompting.
- Use of zero-shot, few-shot, and many-shot learning techniques for maximizing model performance without retraining.
- Experience in iterating on prompts to refine outputs, test model performance, and ensure consistent results.
- Crafting prompt templates for repetitive tasks, ensuring prompts are adaptable to different contexts and inputs.
- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.
- Proficiency in Python and experience with NLP libraries (e.g., Hugging Face, SpaCy, NLTK).
- Experience with transformer-based models (e.g., GPT, BERT, T5) for text generation tasks.
- Experience in training, fine-tuning, and deploying machine learning models in an NLP context.
- Understanding of model evaluation metrics (e.g., BLEU, ROUGE).

Qualification: BE/B.Tech or equivalent degree in Computer Science or a related field. Excellent communication skills in English, both verbal and written.
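For the chain-of-thought prompting item in this listing, a small illustrative prompt builder follows; the instruction wording and example question are hypothetical.

```python
# Illustrative chain-of-thought (CoT) prompt builder.
# The task and instruction wording are hypothetical examples.
def build_cot_prompt(question: str) -> str:
    """Ask the model to reason step by step before committing to an answer,
    which tends to help on multi-step arithmetic or logical questions."""
    return (
        "Answer the question below. Think through the problem step by step, "
        "showing each intermediate step, then state the final answer on a "
        "separate line starting with 'Answer:'.\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )

if __name__ == "__main__":
    print(build_cot_prompt(
        "A clinic sees 12 patients per hour and is open for 7 hours. "
        "How many patients does it see in a day?"
    ))
```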

Posted 2 months ago

Apply

3.0 - 5.0 years

9 - 12 Lacs

Bengaluru

Work from Office

Responsibilities:
- Collaborate with the dev team on API testing using Git and the CI/CD pipeline.
- Develop automated tests with Python, PyTest, and related frameworks.
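As a small illustration of the kind of automated API test this role involves: a PyTest sketch using the `requests` library; the base URL, endpoints, and payload are placeholders, not details from the posting.

```python
# Minimal PyTest-style API test sketch; the base URL, endpoints, and expected
# fields are placeholders, not details from the job posting.
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

@pytest.fixture
def session():
    """Share one HTTP session across tests to reuse connections."""
    with requests.Session() as s:
        yield s

def test_health_endpoint_returns_ok(session):
    response = session.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_create_item_echoes_payload(session):
    payload = {"name": "widget", "quantity": 3}
    response = session.post(f"{BASE_URL}/items", json=payload, timeout=5)
    assert response.status_code == 201
    assert response.json()["name"] == payload["name"]
```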

Posted 3 months ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Bengaluru

Work from Office

Role & responsibilities
- Overall 7 years, with at least 5+ years of experience working with transformer-based models and NLP tasks, with a focus on text generation, summarization, question answering, classification, and similar tasks.
- Expertise in transformer models like GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), T5 (Text-to-Text Transfer Transformer), RoBERTa, and similar models.
- Familiarity with model architectures, attention mechanisms, and self-attention layers that enable LLMs to generate human-like text.
- Experience in fine-tuning pre-trained models on domain-specific datasets for tasks such as text generation, summarization, question answering, classification, and translation.
- Familiarity with concepts like attention mechanisms, context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs that can affect LLM performance and output quality.
- Expertise in crafting clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs.
- Experience in instruction-based prompting.
- Use of zero-shot, few-shot, and many-shot learning techniques for maximizing model performance without retraining.
- Experience in iterating on prompts to refine outputs, test model performance, and ensure consistent results.
- Crafting prompt templates for repetitive tasks, ensuring prompts are adaptable to different contexts and inputs.
- Expertise in chain-of-thought (CoT) prompting to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns.
- Proficiency in Python and experience with NLP libraries (e.g., Hugging Face, SpaCy, NLTK).
- Experience with transformer-based models (e.g., GPT, BERT, T5) for text generation tasks.
- Experience in training, fine-tuning, and deploying machine learning models in an NLP context.
- Understanding of model evaluation metrics (e.g., BLEU, ROUGE).
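For the zero-shot learning technique this listing mentions, one concrete form is zero-shot classification with a Hugging Face pipeline; a minimal sketch follows, where the model checkpoint, input text, and candidate labels are illustrative assumptions.

```python
# Zero-shot classification sketch with the Hugging Face pipeline API.
# The model checkpoint, input text, and candidate labels are illustrative.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # common NLI-based zero-shot checkpoint
)

text = "The invoice total does not match the amount I was quoted."
labels = ["billing", "technical issue", "general inquiry"]

result = classifier(text, candidate_labels=labels)
# `result` ranks the labels by score; result["labels"][0] is the top prediction.
print(result["labels"][0], round(result["scores"][0], 3))
```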

Posted Date not available

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
