8.0 - 13.0 years
15 - 25 Lacs
hyderabad
Work from Office
AI/ML Engineer with expertise in Google LLMs, prompt design, fine-tuning, RLHF, and A2A orchestration. Experience Level: 8+ years. Responsibilities: Design and optimize prompt engineering strategies. Fine-tune LLMs using domain-specific datasets. Apply RLHF to improve agent decision-making. Implement agents using Vertex AI, PaLM, Gemini. Collaborate on agent architecture and evaluation. Use MCP to standardize agent interaction. Proficient in Python, TensorFlow, JAX, RLHF, PyTorch. Hands-on with Vertex AI, Gemini, AutoML. Knowledge of transformers, LoRA, quantization, PEFT. Experience with ADK, A2A, MCP protocols. Prompt tuning and chain-of-thought expertise.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
bangalore, karnataka
On-site
You are a Machine Learning & Generative AI Engineer responsible for designing, building, and deploying advanced ML and GenAI solutions. This role presents an exciting opportunity to work on cutting-edge AI systems, such as LLM fine-tuning, Transformer architectures, and RAG pipelines, while also contributing to traditional ML model development for decision-making and automation. With a minimum of 3 years of experience in Machine Learning, Deep Learning, and AI model development, you are expected to demonstrate a strong proficiency in Python, PyTorch, TensorFlow, Scikit-Learn, and MLflow. Your expertise should encompass Transformer architectures (such as BERT, GPT, T5, LLaMA, Falcon) and attention mechanisms. Additionally, experience with Generative AI, including LLM fine-tuning, instruction tuning, and prompt optimization is crucial. You should be familiar with RAG (Retrieval-Augmented Generation) embeddings, vector databases (FAISS, Pinecone, Weaviate, Chroma), and retrieval workflows. A solid foundation in statistics, probability, and optimization techniques is essential for this role. You should have experience working with cloud ML platforms like Azure ML / Azure OpenAI, AWS SageMaker / Bedrock, or GCP Vertex AI. Familiarity with Big Data & Data Engineering tools like Spark, Hadoop, Databricks, and SQL/NoSQL databases is required. Proficiency in CI/CD, MLOps, and automation pipelines (such as Airflow, Kubeflow, MLflow) is expected, along with hands-on experience with Docker and Kubernetes for scalable ML/LLM deployment. It would be advantageous if you have experience in NLP & Computer Vision areas, including Transformers, BERT/GPT models, YOLO, and OpenCV. Exposure to vector search & embeddings for enterprise-scale GenAI solutions, multimodal AI, Edge AI / federated learning, RLHF (Reinforcement Learning with Human Feedback) for LLMs, real-time ML applications, and low-latency model serving is considered a plus. Your responsibilities will include designing, building, and deploying end-to-end ML pipelines covering data preprocessing, feature engineering, model training, and deployment. You will develop and optimize LLM-based solutions for enterprise use cases, leveraging Transformer architectures. Implementing RAG pipelines using embeddings and vector databases to integrate domain-specific knowledge into LLMs will also be part of your role. Fine-tuning LLMs on custom datasets for domain-specific tasks and ensuring scalable deployment of ML & LLM models on cloud environments are critical responsibilities. Collaborating with cross-functional teams comprising data scientists, domain experts, and software engineers to deliver AI-driven business impact is expected from you.
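As context for the RAG responsibilities described above, here is a minimal retrieval sketch in Python, assuming the faiss-cpu and sentence-transformers packages are installed; the embedding model name, sample documents, and the final LLM call are illustrative placeholders rather than details taken from the posting.

```python
# Minimal RAG-style retrieval sketch (illustrative; not any employer's actual pipeline).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are archived for seven years.",
    "Refunds are processed within five business days.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = encoder.encode(documents, normalize_embeddings=True)

# Inner-product search over normalized vectors behaves like cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "How long do refunds take?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 1)

# The retrieved passage is stitched into the prompt that a downstream LLM would answer.
context = documents[ids[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

In a production pipeline the in-memory index would typically be replaced by a managed vector database such as Pinecone, Weaviate, or Chroma, with the same embed-retrieve-prompt flow.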
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Location - Airoli, Mumbai. Contract Duration - 9 months (extendable to a year). Key Responsibilities: Design and implement scalable data intelligence platforms that integrate structured and unstructured data sources. Develop and fine-tune Generative AI models (LLMs, GANs, Diffusion Models) for tasks such as text generation, synthetic data creation, and content automation. Collaborate with data engineers, ML engineers, and product teams to deploy AI solutions into production environments. Build and maintain data pipelines, feature stores, and model monitoring systems. Apply prompt engineering techniques to optimize LLM performance for business use cases. Conduct experiments and A/B testing to evaluate model performance and business impact. Ensure compliance with data governance, privacy, and ethical AI standards. Expertise in AKS is required, along with the ability to integrate the solution into the current landscape. Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, AI, or related field. 3-5 years of experience in data science, machine learning, or AI engineering. Hands-on experience with LLMs (e.g., GPT, Claude, LLaMA) and frameworks like Hugging Face, LangChain, or OpenAI APIs. Proficiency in Python, SQL, and cloud platforms (AWS, Azure, or GCP). Strong understanding of data modeling, ETL processes, and MLOps. Experience with vector databases, embedding models, and retrieval-augmented generation (RAG) is a plus. Preferred Skills: Experience with knowledge graphs, semantic search, or data fabric architectures. Familiarity with AutoML, RLHF, or multi-modal AI. Excellent communication and stakeholder management skills.
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
haryana
On-site
We are seeking a skilled Machine Learning Engineer / Data Scientist with specialized knowledge in ML, NLP, and Large Language Models (LLMs). The ideal candidate should possess over 4 years of experience in ML/NLP and a minimum of 1 year of practical exposure to LLMs, particularly in fine-tuning, RAG, and AI-driven agents. Your primary responsibilities will include refining and optimizing LLMs such as GPT, LLaMA, and Mistral, as well as creating AI agents utilizing CrewAI, LangChain, and AutoGPT. Additionally, you will be tasked with constructing and improving RAG pipelines using FAISS, Pinecone, and Weaviate, working on various NLP models like text classification, summarization, and NER, and staying abreast of the latest advancements in LLM technology. To excel in this role, you should have at least 4 years of experience in ML/NLP and a minimum of 1 year in LLMs, be proficient in Python, PyTorch, and TensorFlow, possess expertise in LLM fine-tuning methods such as LoRA, PEFT, and QLoRA, and have practical experience with AI agents and RAG tools. Desirable qualifications for this position include familiarity with multi-modal AI applications like text-to-image and speech-to-text, knowledge of contrastive learning and RLHF, and a background in open-source contributions or AI publications. This opportunity has been presented by Aarushi Madhani on behalf of FiftyFive Technologies.
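Since the posting emphasizes LoRA/PEFT-style fine-tuning, the sketch below shows how a causal language model can be wrapped with LoRA adapters using the Hugging Face peft library; the base checkpoint, target modules, and hyperparameters are assumptions for illustration, not requirements from the role.

```python
# LoRA wrapping sketch with Hugging Face peft (hyperparameters are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # assumed small open checkpoint, used only as a placeholder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the LoRA updates
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable

# From here the wrapped model is trained on the domain-specific dataset with a
# standard Trainer or custom loop, and the adapters are saved or merged for serving.
```

QLoRA follows the same pattern but first loads the frozen base model in 4-bit precision, which is what makes fine-tuning of 7B+ models feasible on a single GPU.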
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
As a skilled AI Engineer, you will demonstrate proficiency in Generative AI and LLMs, including GPT-3 and LangChain. Your experience in prompt engineering, few-shot/zero-shot learning, RLHF, and model optimization will be essential in developing and implementing generative AI models. Your expertise in Machine Learning & NLP, particularly in Python utilizing frameworks like Hugging Face, TensorFlow, Keras, and PyTorch, will play a crucial role in optimizing LLM performance through fine-tuning and hyperparameter evaluation. A strong understanding of embeddings and data modeling is required for this role. Experience with various databases such as vector databases, RDBMS, MongoDB, and NoSQL databases (HBase, Elasticsearch), as well as data integration from multiple sources, will be beneficial. Your knowledge in Software Engineering, encompassing data structures, algorithms, system architecture, and distributed systems, will enable you to build and maintain robust distributed systems. Hands-on experience with Platform & Cloud technologies like Kubernetes, Spark ML, and Databricks, along with developing scalable platforms, is necessary. Proficiency in API Development, including API design, RESTful services, and data mapping to schemas, will be essential for integrating AI models into scalable web applications. Your responsibilities will include designing, developing, and implementing generative AI models, optimizing LLM performance, collaborating with ML and integration teams, building and maintaining distributed systems, integrating data from diverse sources, and enhancing response quality and performance using advanced techniques. To qualify for this role, you should hold a B.Tech / B.E. / M.Tech / M.E. degree and have a proven track record in building and deploying AI/ML models, particularly LLMs. Prior experience in developing public cloud services or open-source ML software would be advantageous. Knowledge of big-data tools will be considered a strong asset for this position.
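To illustrate the few-shot prompting this role calls for, the snippet below assembles a small sentiment-classification prompt in plain Python; the task, labels, and examples are invented for illustration, and the completion call is left abstract since the posting does not name a specific provider.

```python
# Few-shot prompt construction sketch (task and examples are invented for illustration).
FEW_SHOT_EXAMPLES = [
    ("The delivery was late and the box was damaged.", "negative"),
    ("Setup took two minutes and it works perfectly.", "positive"),
]

def build_prompt(review: str) -> str:
    """Build a few-shot classification prompt for an LLM completion endpoint."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for example_text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {example_text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt("The screen flickers constantly.")
print(prompt)  # this string would be sent to the chosen LLM; the next token is the label
```

Zero-shot prompting drops the worked examples entirely and relies on the instruction alone, which is typically the baseline against which few-shot variants are evaluated.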
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Lead Machine Learning Engineer at Ramboll Tech, you will play a pivotal role in creating cutting-edge AI solutions while nurturing a collaborative and growth-oriented culture. In this position, you will work closely with product owners, Chapter leads, and global ML Engineers to define technical roadmaps, implement best practices, and deliver impactful AI solutions. Your responsibilities will include defining architectural patterns for scalable LLM pipelines, integrating external knowledge bases, researching and developing effective RAG architectures, exploring state-of-the-art LLMs, evaluating and optimizing models, and driving continuous improvement through experimentation. Additionally, you will establish coding standards, mentor ML engineers, and ensure the success of MLOps for your Pod. To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science or a related field, along with a minimum of 5 years of experience in implementing machine learning projects. Strong leadership skills, expertise in LLM and RAG architectures, proficiency in modern Transformer-based LLMs, and experience with programming tools like Python, PyTorch, TensorFlow, and deployment tools such as Docker and Kubernetes are essential. At Ramboll Tech, our team culture values curiosity, optimism, ambition, and empathy. We prioritize open communication, collaboration, and continuous learning, while fostering diversity and inclusivity within our workplace. As part of a global architecture, engineering, and consultancy firm, we are committed to sustainability, innovation, and creating a livable world where individuals can thrive in harmony with nature. If you are passionate about driving innovation, leading technical projects, and contributing to a dynamic team environment, we encourage you to apply for this role. Submit your CV and a cover letter through our application tool to begin the conversation about joining our team at Ramboll Tech. For any inquiries or requests regarding the application process, please contact us at [email protected].
Posted 2 weeks ago
0.0 years
0 Lacs
india
On-site
WHAT YOU DO AT AMD CHANGES EVERYTHING We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_ THE ROLE: The AI Models team is looking for exceptional machine learning scientists and engineers to explore and innovate on training and inference techniques for large language models (LLMs), large multimodal models (LMMs), image/video generation and other foundation models. You will be part of a world-class research and development team focusing on efficient and scalable pre-training, instruction tuning, alignment and optimization. As an early member of the team, you can help us shape the direction and strategy to fulfill this important charter. THE PERSON: This role is for you if you are passionate about reading through the latest literature, coming up with novel ideas, and implementing those through high-quality code to push the boundaries on scale and performance. The ideal candidate will have both theoretical expertise and hands-on experience with developing LLMs, LMMs, and/or diffusion models. We are looking for someone who is familiar with hyper-parameter tuning methods, data preprocessing & encoding techniques and distributed training approaches for large models. KEY RESPONSIBILITIES: Pre-train and post-train models over large GPU clusters while optimizing for various trade-offs. Improve upon the state-of-the-art in Generative AI model architectures, data and training techniques. Accelerate the training and inference speed across AMD accelerators. Build agentic frameworks to solve various kinds of problems. Publish your research at top-tier conferences, workshops and/or through technical blogs. Engage with academia and open-source ML communities. Drive continuous improvement of infrastructure and development ecosystem. PREFERRED EXPERIENCE: Strong development and debugging skills in Python. Experience in deep learning frameworks (like PyTorch or TensorFlow) and distributed training tools (like DeepSpeed or PyTorch Distributed). Experience with fine-tuning methods (like RLHF & DPO) as well as parameter-efficient techniques (like LoRA & DoRA). Solid understanding of various types of transformers and state space models. Strong publication record in top-tier conferences, workshops or journals. Solid communication and problem-solving skills. Passionate about learning new developments in this domain as well as innovating on top of them. ACADEMIC CREDENTIALS: Advanced degree (Master's or PhD) in machine learning, computer science, artificial intelligence, or a related field is expected. Exceptional Bachelor's degree candidates may also be considered. #LI-MK1 AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
On-site
ABOUT US While the world races to automate the future, we're focused on preparing the generation that will lead it. At ADIIVA, we believe the most powerful time to shape a child's mind is early on, when curiosity is boundless and values are just beginning to form. We offer parents a way to nurture critical thinking, resilience and independence from the very start. Founded by Kalika Vajpayee, committed to rethinking early learning, ADIIVA is a lean, high-impact startup working at the intersection of two major technological innovations - LLMs and Robotics. With our first office in Bangalore, we are looking to hire an AI Engineering Lead that will help us build a revolutionary product that transforms how children learn. ABOUT THE ROLE Working closely with the founder, you will define and build the technical foundation of the company from the ground up, influencing every aspect of architecture, tooling and product direction. This role offers unparalleled autonomy, significant equity and the chance to co-create a category-defining AI device for children in a 0-to-1 environment. This is a full-time opportunity based in Bengaluru (on-site). WHAT YOU WILL DO Design the system architecture, covering cloud services, databases, and DevOps workflows. Design data architectures specifically optimized for AI agent operations and large language model integration. Fine-tune and optimize large language models (LLMs) for child-safe interactions. Train LLMs on proprietary multimodal datasets, including voice/text transcripts, behavioral cues and developmental content. Evaluate open-source and proprietary models for performance, safety and scale, across cloud and edge environments. Prototype and iterate on multi-step conversational behaviors, memory and alignment strategies. Implement trustworthy AI practices, including hallucination mitigation, toxicity filtering and age-appropriate guardrails. Design systems with data security/privacy by design, ensuring compliance with regulations. Build production-grade ML pipelines with MLOps best practices: CI/CD, monitoring, versioning and feedback loops. Collaborate cross-functionally with product, child development experts and design teams. Manage and mentor a team of 2-3 engineers. WHAT WE LOOK FOR Bachelor's degree in Computer Science, or an equivalent discipline. Ability to work independently in a fast-paced, ambiguous early-stage environment. Growth mindset, strong communication skills and a collaborative spirit. 5+ years of experience in AI/ML/NLP, with 1+ years focused on LLMs. Proven track record of fine-tuning, training and optimizing LLMs. Deep understanding of transformer architectures, attention mechanisms and generative models. Strong Python skills with experience in PyTorch, Hugging Face, TensorFlow, Weights & Biases. Familiarity with LLM orchestration frameworks (LangGraph, LangChain, or Haystack). Experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services. Understanding of privacy-preserving techniques (differential privacy, PII redaction). Exposure to alignment techniques (RLHF, prompt tuning) and safety filters. Comfort with MLOps tooling, containerization (Docker) and orchestration (Kubernetes, Airflow). Knowledge of RAG and vector databases (e.g., FAISS, Weaviate). Ability to explain complex AI concepts to non-technical stakeholders. Passion for building safe, magical AI experiences for children and families. BONUS POINTS FOR Experience building conversational agents, interactive storytelling tools or child-facing applications. Familiarity with speech-to-text, voice synthesis, or multimodal learning. Understanding of regulatory frameworks such as COPPA, GDPR, and responsible AI principles. WHY JOIN US Define the future of child-safe AI from day one. Join at ground zero and influence every aspect of the product and technical stack. Collaborate with a small, passionate, and visionary team. Opportunity for ownership and long-term impact.
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
mumbai, delhi / ncr, bengaluru
Work from Office
Hiring a Python Code Reviewer for a 6-month remote contractual position. The ideal candidate should have 4-8 years of experience in Python development, QA, or code review, with a strong grasp of Python syntax, edge cases, debugging, and testing. The role involves reviewing annotator evaluations of AI-generated Python code to ensure quality, functional accuracy, and alignment with prompt instructions. Experience with Docker, code execution tools, and structured QA workflows is mandatory. Strong written communication skills and adherence to quality assurance guidelines (Project Atlas) are required. Familiarity with LLM evaluation, RLHF pipelines, or annotation platforms is a plus. Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You should have at least 3+ years of relevant experience with the following skills: - Proficiency in Python, machine learning, deep learning, and NLP. - Experience in developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs. - Proficiency in LangChain and LLMs. - Ability to prompt and optimize few-shot techniques to enhance LLM performance on specific tasks. - Evaluate LLM zero-shot and few-shot capabilities, fine-tuning hyperparameters, ensuring task generalization, and exploring model interpretability for robust web app integration. - Collaborate with ML and Integration engineers to leverage the LLM's pre-trained potential, delivering contextually appropriate responses in a user-friendly web app. - Solid understanding of data structures, algorithms, and principles of software engineering. - Experience with vector databases, RDBMS, MongoDB, and NoSQL databases. - Proficiency in working with embeddings. - Strong distributed systems and system architecture skills. - Experience in building and running a large platform at scale. - Hands-on experience with Python, Hugging Face, TensorFlow, Keras, PyTorch, Spark, or similar statistical tools. - Experience as a data modeling ML/NLP scientist, including performance tuning, fine-tuning, RLHF, and performance optimization. - Proficient with the integration of data from multiple sources and API design. - Good knowledge of Kubernetes and RESTful design. - Prior experience in developing public cloud services or open-source ML software is an advantage. You should also have a validated background with ML toolkits such as PyTorch, TensorFlow, Keras, LangChain, LlamaIndex, SparkML, or Databricks. Your experience and strong knowledge of AI/ML, and particularly LLMs, will be beneficial in this role.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: We are looking for a Lead Generative AI Engineer with 3-5 years of experience to spearhead development of cutting-edge AI systems involving Large Language Models (LLMs), Vision-Language Models (VLMs), and Computer Vision (CV). You will lead model development, fine-tuning, and optimization for text, image, and multi-modal use cases. This is a hands-on leadership role that requires a deep understanding of transformer architectures, generative model fine-tuning, prompt engineering, and deployment in production environments. Roles and Responsibilities: Lead the design, development, and fine-tuning of LLMs for tasks such as text generation, summarization, classification, Q&A, and dialogue systems. Develop and apply Vision-Language Models (VLMs) for tasks like image captioning, VQA, multi-modal retrieval, and grounding. Work on Computer Vision tasks including image generation, detection, segmentation, and manipulation using SOTA deep learning techniques. Leverage frameworks like Transformers, Diffusion Models, and CLIP to build and fine-tune multi-modal models. Fine-tune open-source LLMs and VLMs (e.g., LLaMA, Mistral, Gemma, Qwen, MiniGPT, Kosmos, etc.) using task-specific or domain-specific datasets. Design data pipelines, model training loops, and evaluation metrics for generative and multi-modal AI tasks. Optimize model performance for inference using techniques like quantization, LoRA, and efficient transformer variants. Collaborate cross-functionally with product, backend, and ML ops teams to ship models into production. Stay current with the latest research and incorporate emerging techniques into product pipelines. Requirements: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or related field. 3-5 years of hands-on experience in building, training, and deploying deep learning models, especially in LLM, VLM, and/or CV domains. Strong proficiency with Python, PyTorch (or TensorFlow), and libraries like Hugging Face Transformers, OpenCV, Datasets, LangChain, etc. Deep understanding of transformer architecture, self-attention mechanisms, tokenization, embedding, and diffusion models. Experience with LoRA, PEFT, RLHF, prompt tuning, and transfer learning techniques. Experience with multi-modal datasets and fine-tuning vision-language models (e.g., BLIP, Flamingo, MiniGPT, Kosmos, etc.). Familiarity with MLOps tools, containerization (Docker), and model deployment workflows (e.g., Triton Inference Server, TorchServe). Strong problem-solving, architectural thinking, and team mentorship skills.
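As a hedged illustration of the quantization-based inference optimization listed above, the sketch below loads a causal LLM in 4-bit NF4 precision through Hugging Face Transformers and bitsandbytes; the checkpoint name is a placeholder, and a CUDA GPU with the bitsandbytes and accelerate packages installed is assumed.

```python
# 4-bit quantized loading sketch for lower-memory inference (checkpoint is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-v0.1"  # assumed open checkpoint, for illustration only

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit NF4 format
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16 for quality/speed
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_cfg,
    device_map="auto",                      # place layers on the available GPU(s)
)

inputs = tokenizer("Summarize: the quarterly report shows", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Combining a 4-bit base model like this with LoRA adapters is the usual QLoRA recipe when the same memory savings are needed during fine-tuning rather than inference.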
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Machine Learning Engineer at Skyfall, you will play a crucial role in deploying and optimizing large language models (LLMs) in production environments. Your responsibilities will include deploying post-trained LLMs, optimizing inference for cost and latency, and building scalable training pipelines using cutting-edge technologies like DeepSpeed, Accelerate, and Ray. The Skyfall team, comprising industry pioneers from Maluuba, is committed to revolutionizing the AI ecosystem by creating the first world model for the enterprise. By overcoming the limitations of existing LLMs, the Enterprise World Model aims to provide enterprises with a comprehensive understanding of the intricate relationships between data, people, and processes within organizations. You will be part of a dynamic team spread across New York, Toronto, and Bangalore, working on designing distributed training infrastructure, managing multi-cloud ML deployments, and implementing advanced model compression techniques. Additionally, you will develop internal tools to facilitate multi-GPU training and large-scale experimentation, ensuring efficient resource allocation and continuous model evaluation. To excel in this role, you should have a minimum of 3 years of experience in ML engineering, model deployment, and large-scale training. Proficiency in vector databases such as FAISS, Pinecone, and Weaviate for retrieval-augmented generation (RAG) is essential. Experience with multi-cloud ML deployment, hands-on deployment of large-scale models in production, and expertise in multi-GPU training and inference optimizations are key requirements. Your strong knowledge of ML system performance tuning, latency optimization, and cost reduction strategies will be instrumental in developing cluster management tools for external compute infrastructure and implementing state-of-the-art model compression techniques. By staying abreast of LLM fine-tuning techniques, RLHF, and model evaluation metrics, you will contribute to Skyfall's mission of disrupting the AI landscape and providing enterprises with significant value through innovative solutions.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
delhi
On-site
You will be joining a technology-driven publishing and legal-tech organization that is currently enhancing its capabilities through advanced AI initiatives. As an experienced AI Solution Architect, your role will involve leading the design, development, and deployment of transformative AI projects. You will work closely with a proficient internal team skilled in .NET, MERN, MEAN stacks, and recent chatbot development using OpenAI and LLaMA. Your responsibilities will include architecting, planning, and delivering enterprise-grade AI solutions utilizing Vector Databases like FAISS and Chroma, RAG Pipelines (Retrieval-Augmented Generation), and LLM techniques such as Pre-training, Fine-tuning, and RLHF (Reinforcement Learning with Human Feedback). You will lead AI solution design discussions, collaborate with engineering teams to integrate AI modules into systems, and guide junior AI engineers and developers on technical aspects. To excel in this role, you should hold a Bachelor's or Master's in Computer Science, Engineering, AI/ML, or related fields and have at least 5 years of experience in software architecture with a focus on AI/ML system design and project delivery. Hands-on expertise in Vector Search Engines, LLM Tuning & Deployment, RAG systems, RLHF techniques, and working with multi-tech teams is essential. Strong command over architecture principles, scalability, microservices, API-based integration, and excellent team leadership skills are also required. It would be advantageous if you have experience with tools like MLflow, Weights & Biases, Docker, Kubernetes, Azure/AWS AI services, exposure to data pipelines, MLOps, AI governance, familiarity with enterprise software lifecycle, and DevSecOps practices. Prior experience in delivering a production-grade AI/NLP project with measurable impact is a plus. Stay updated on emerging AI trends and assess their relevance to business objectives to drive AI adoption across products and internal processes.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
delhi
On-site
You will be joining a technology-driven publishing and legal-tech organization that is currently expanding its capabilities through advanced AI initiatives. With a strong internal team proficient in .NET, MERN, MEAN stacks, and recent chatbot development using OpenAI and LLaMA, we are seeking an experienced AI Solution Architect to lead the design, development, and deployment of transformative AI projects. As an AI Solution Architect, you will play a critical role in designing and delivering enterprise-grade AI solutions. Your responsibilities will include architecting, planning, and implementing AI solutions using Vector Databases, RAG Pipelines, and LLM techniques. You will lead AI solution design discussions, collaborate with engineering teams to integrate AI modules, evaluate AI frameworks, and mentor junior AI engineers and developers. To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, or related fields, along with at least 5 years of experience in software architecture with a focus on AI/ML system design and project delivery. Hands-on expertise in Vector Search Engines, LLM Tuning & Deployment, RAG systems, and RLHF techniques is essential. You should also have experience working with multi-tech teams, strong command over architecture principles, and excellent team leadership skills. Experience with tools like MLflow, Weights & Biases, Docker, Kubernetes, and Azure/AWS AI services would be a plus. Exposure to data pipelines, MLOps, and AI governance, as well as familiarity with enterprise software lifecycle and DevSecOps practices, are also desirable. If you are passionate about AI technology, have a proven track record of delivering AI projects, and enjoy leading and mentoring technical teams, we encourage you to apply for this exciting opportunity to drive AI adoption across products and internal processes.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
The ideal candidate for this role should have a solid understanding of the technical aspects of Large Language Models (LLMs) and possess proficient skills in prompt engineering. You will be responsible for running tests, fine-tuning various LLM models and data to enhance model performance and output quality. Additionally, you will be required to explore cutting-edge AI/GenAI technology stacks under the guidance of the manager to enhance LLUMO's product offerings. A basic understanding of data engineering is also essential for this position. Your expertise should include a deep understanding of the architecture and functioning of LLMs such as GPT, LLAMA, BISON, among others. You should be familiar with application development frameworks like LangChain, LlamaIndex, and have knowledge of vector index and vector databases. A strong grasp of underlying architecture and algorithms like RLHF, transformers, attention mechanism, and DNNs is crucial. Proficiency in Python and frameworks like Hugging Face transformers, PyTorch, etc., is required. Prior experience in prompt engineering techniques to customize the behavior of LLMs for specific tasks is preferred. You should be skilled in the fine-tuning process to adapt models to particular domains or tasks. Knowledge of tools and techniques for interpreting the behavior of LLMs and proficiency in version control systems such as Git for tracking code and model configuration changes are necessary. Moreover, familiarity with ethical and legal considerations in language model development, including bias-hallucination mitigation, is essential. The qualifications for this position include a Bachelor's or advanced degree in Computer Science, Natural Language Processing (NLP), or a related field. This is a full-time job with a work schedule from Monday to Friday, and the work location is in-person. Job Type: Full-time Schedule: Monday to Friday Work Location: In person.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior Applied Researcher at Genloop, you will lead cutting-edge work in small language model (SLM) training, LLM customization, and domain adaptation. We believe that domain intelligence is crucial to bring GenAI into enterprise production, especially in cases where a 1-year employee outperforms a 1st-day hire significantly. Your responsibilities will include designing and conducting experiments, customizing LLMs for specific enterprise domains, evaluating model performance, collaborating with engineering and product teams, contributing to research documentation and open-source releases, and staying updated on LLM architectures and pretraining objectives. Additionally, you will mentor junior researchers and provide guidance to the broader team based on insights from the frontier. To qualify for this role, you should have at least 5 years of experience in ML research or applied deep learning, with a focus on NLP, foundation models, or multi-modal systems. A deep understanding of transformers, language modeling, and sequence generation is required, along with hands-on experience in pretraining, SFT, RLHF, and efficient fine-tuning methods. Strong publications or open-source contributions would be advantageous. Graduation from a Tier 1 institution or demonstrated exceptional skill is preferred. Genloop is a research-first AI company that specializes in building customized, continuously learning AI systems. Our team comprises researchers and engineers from prestigious institutions and tech companies, working on models with real-world, production-grade impact. In terms of compensation and benefits, we offer a competitive salary, meaningful startup equity, and industry-leading benefits. The exact compensation will be based on your experience, expertise, and location. Genloop is an Equal Opportunity Employer that values diversity and is dedicated to creating an inclusive and respectful workplace for all.
Posted 2 months ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Data Scientist at Objectways located in Chennai, you will have the opportunity to be part of a team that is driving AI innovation and solving real-world problems by leveraging cutting-edge machine learning and reasoning technologies. Our projects are ambitious and diverse, ranging from agent trajectory prediction to complex reasoning systems, multimodal intelligence, and preference-based learning. We are looking for a talented individual like you who is eager to explore the boundaries of applied AI. In this role, you will be responsible for designing and developing machine learning models for both structured and unstructured data. You will work on agent trajectory prediction, complex reasoning, preference ranking, and reinforcement learning. Additionally, you will handle multimodal datasets and develop reasoning pipelines across text, image, and audio modalities. Your responsibilities will also include validating and optimizing prompts for large language model performance and translating research into scalable, production-level implementations. Collaboration with cross-functional teams such as Engineering, Product, and Research is essential to ensure the success of our projects. To qualify for this position, you should have a minimum of 4 years of hands-on experience in Data Science or Machine Learning roles. Proficiency in Python, PyTorch/TensorFlow, scikit-learn, and ML lifecycle tools is required. You should also demonstrate expertise in at least one of the following areas: trajectory modeling, preference ranking, or multimodal systems. Experience with LLM prompt engineering, complex reasoning algorithms, graph-based methods, and causal inference is highly beneficial. Strong problem-solving, analytical thinking, and communication skills are essential for success in this role. Preferred skills for this position include familiarity with tools like LangChain, Hugging Face, or OpenAI APIs, exposure to RLHF (Reinforcement Learning from Human Feedback) or prompt-tuning, and experience with deploying ML models in production environments using technologies such as Docker and MLflow. By joining Objectways, you will have the opportunity to work on high-impact, next-generation AI challenges, collaborate with top talent from various domains, and enjoy a competitive salary with benefits and a learning budget.
Posted 2 months ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
The Implementation Technical Architect role focuses on designing, developing, and deploying cutting-edge Generative AI (GenAI) solutions using the latest Large Language Models (LLMs) and frameworks. Your responsibilities include creating scalable and modular architecture for GenAI applications, leading Python development for GenAI applications, building tools for automated data curation, integrating solutions with cloud platforms like Azure, GCP, and AWS, applying advanced fine-tuning techniques to optimize LLM performance, establishing LLMOps pipelines, ensuring ethical AI practices, implementing Reinforcement Learning with Human Feedback and Retrieval-Augmented Generation techniques, collaborating with front-end developers, and more. Key Responsibilities: - Design and Architecture: Create scalable and modular architecture for GenAI applications using frameworks like Autogen, Crew.ai, LangGraph, LlamaIndex, and LangChain. - Python Development: Lead the development of Python-based GenAI applications, ensuring high-quality, maintainable, and efficient code. - Data Curation Automation: Build tools and pipelines for automated data curation, preprocessing, and augmentation to support LLM training and fine-tuning. - Cloud Integration: Design and implement solutions leveraging Azure, GCP, and AWS LLM ecosystems, ensuring seamless integration with existing cloud infrastructure. - Fine-Tuning Expertise: Apply advanced fine-tuning techniques such as PEFT, QLoRA, and LoRA to optimize LLM performance for specific use cases. - LLMOps Implementation: Establish and manage LLMOps pipelines for continuous integration, deployment, and monitoring of LLM-based applications. - Responsible AI: Ensure ethical AI practices by implementing Responsible AI principles, including fairness, transparency, and accountability. - RLHF and RAG: Implement Reinforcement Learning with Human Feedback (RLHF) and Retrieval-Augmented Generation (RAG) techniques to enhance model performance. - Modular RAG Design: Develop and optimize Modular RAG architectures for complex GenAI applications. - Open Source Collaboration: Leverage Hugging Face and other open-source platforms for model development, fine-tuning, and deployment. - Front-End Integration: Collaborate with front-end developers to integrate GenAI capabilities into user-friendly interfaces. Required Skills: - Python Programming: Deep expertise in Python for building GenAI applications and automation tools. - LLM Frameworks: Proficiency in frameworks like Autogen, Crew.ai, LangGraph, LlamaIndex, and LangChain. - Large-Scale Data Handling & Architecture: Design and implement architectures for handling large-scale structured and unstructured data. - Multi-Modal LLM Applications: Familiarity with text chat completion, vision, and speech models. - Fine-tune SLMs (Small Language Models) for domain-specific data and use cases. - Prompt injection fallback and RCE tools such as Pyrit and the HAX toolkit. - Anti-hallucination and anti-gibberish tools such as BLEU. - Cloud Platforms: Extensive experience with Azure, GCP, and AWS LLM ecosystems and APIs. - Fine-Tuning Techniques: Mastery of PEFT, QLoRA, LoRA, and other fine-tuning methods. - LLMOps: Strong knowledge of LLMOps practices for model deployment, monitoring, and management. - Responsible AI: Expertise in implementing ethical AI practices and ensuring compliance with regulations. - RLHF and RAG: Advanced skills in Reinforcement Learning with Human Feedback and Retrieval-Augmented Generation. - Modular RAG: Deep understanding of Modular RAG architectures and their implementation. - Hugging Face: Proficiency in using Hugging Face and similar open-source platforms for model development. - Front-End Integration: Knowledge of front-end technologies to enable seamless integration of GenAI capabilities. - SDLC and DevSecOps: Strong understanding of secure software development lifecycle and DevSecOps practices for LLMs.
Posted 2 months ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Hiring a Python Code Reviewer for a 6-month remote contractual position. The ideal candidate should have 4-8 years of experience in Python development, QA, or code review, with a strong grasp of Python syntax, edge cases, debugging, and testing. The role involves reviewing annotator evaluations of AI-generated Python code to ensure quality, functional accuracy, and alignment with prompt instructions. Experience with Docker, code execution tools, and structured QA workflows is mandatory. Strong written communication skills and adherence to quality assurance guidelines (Project Atlas) are required. Familiarity with LLM evaluation, RLHF pipelines, or annotation platforms is a plus. Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 2 months ago
4.0 - 8.0 years
4 - 15 Lacs
Pune, Maharashtra, India
Remote
About Us: We are a new-age AI solutions firm with an ultra-high-quality talent pool out of India for RLHF. Our experts in the specific domain of their expertise help curate hyper-specific LLM datasets. We have operations across the US and India and are working with the top AI firms globally to take their large language models (LLMs) to the next level of quality and consistency. We are looking for skilled freelance translators who are proficient in Malayalam. This role involves translation and content creation. If you are fluent in reading, writing/typing, and communicating in Malayalam and are seeking a flexible, remote role, this opportunity is for you! Key Responsibilities: Translate content to Malayalam. Create high-quality, culturally relevant written content as needed. Maintain consistency, accuracy, and context in all tasks. Requirements: Fluency in reading, writing/typing, and understanding of Malayalam. Strong translation and communication skills. Previous experience in content creation, transcription, or translation is a plus.
Posted 2 months ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata, West Bengal, India
Remote
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data, AI-first scaled operations services, and more. We are seeking an SME Mathematics (Freelancer) to join us and contribute to impactful AI training projects. If you have a deep understanding of mathematics and a passion for problem-solving, we want you to be part of our team! Who you are & how you can contribute: We are seeking a Physics expert with an MSc degree. The role involves working on data annotation and model refinement for AI through RLHF and SFT techniques. This fully remote position allows for flexible work hours while contributing to AI's advancement. What you will be doing: Annotate Physics-related data to improve AI comprehension. Provide feedback on AI-generated responses to ensure scientific accuracy. Assist in developing problem sets and solutions for AI training. Collaborate with AI researchers to refine AI capabilities in the field of Physics. Required Traits: Strong academic background in Physics. Experience in Physics education, research, or content creation preferred. Familiarity with AI, machine learning, or data annotation is a plus. Excellent problem-solving and analytical skills.
Posted 2 months ago
4.0 - 8.0 years
2 - 6 Lacs
Bengaluru, Karnataka, India
Remote
Who you are & how you can contribute: We are seeking a Chemistry expert with a PhD degree. The role involves working on data annotation and model refinement for AI through RLHF and SFT techniques. This fully remote position allows for flexible work hours while contributing to AI's advancement. What you will be doing: Annotate Chemistry-related data to improve AI comprehension. Provide feedback on AI-generated responses to ensure scientific accuracy. Assist in developing problem sets and solutions for AI training. Collaborate with AI researchers to refine AI capabilities in the field of Chemistry. Required Traits: Strong academic background in Chemistry. Experience in Chemistry education, research, or content creation preferred. Familiarity with AI, machine learning, or data annotation is a plus. Excellent problem-solving and analytical skills.
Posted 2 months ago
1.0 - 4.0 years
4 - 8 Lacs
Delhi, India
Remote
Job Summary: Who you are & how you can contribute: We are seeking a Physics expert with an MSc degree. The role involves working on data annotation and model refinement for AI through RLHF and SFT techniques. This fully remote position allows for flexible work hours while contributing to AI's advancement. What you will be doing: Annotate Physics-related data to improve AI comprehension. Provide feedback on AI-generated responses to ensure scientific accuracy. Assist in developing problem sets and solutions for AI training. Collaborate with AI researchers to refine AI capabilities in the field of Physics. Required Traits: Strong academic background in Physics. Experience in Physics education, research, or content creation preferred. Familiarity with AI, machine learning, or data annotation is a plus. Excellent problem-solving and analytical skills.
Posted 2 months ago
4.0 - 8.0 years
4 - 8 Lacs
Hyderabad, Telangana, India
Remote
Job Summary: Who you are & how you can contribute: We are seeking a Physics expert with an MSc degree. The role involves working on data annotation and model refinement for AI through RLHF and SFT techniques. This fully remote position allows for flexible work hours while contributing to AI's advancement. What you will be doing: Annotate Physics-related data to improve AI comprehension. Provide feedback on AI-generated responses to ensure scientific accuracy. Assist in developing problem sets and solutions for AI training. Collaborate with AI researchers to refine AI capabilities in the field of Physics. Required Traits: Strong academic background in Physics. Experience in Physics education, research, or content creation preferred. Familiarity with AI, machine learning, or data annotation is a plus. Excellent problem-solving and analytical skills.
Posted 2 months ago