3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
About the Team
Kensho Link is a machine learning service that lets users map entities in their datasets to unique IDs drawn from S&P Global's world-class company database with precision and speed. Link began as an internal Kensho project to help S&P Global Market Intelligence and CapIQ integrate datasets into their platform more quickly. It uses ML-based algorithms trained to return high-quality links even when the input data is incomplete or contains errors, and leverages a variety of NLP and ML techniques to process and link millions of company entities in hours. As a team, we have expertise in classical ML algorithms as well as modern LLM-based stacks such as RAG systems.

About the Role
As a Machine Learning Engineer, you will have end-to-end ownership of the Link application, driving its development and success. This role is a fit if you thrive on creating ML models and have a keen interest in the software engineering side of machine learning (MLOps). Your responsibilities will include:
- Model deployment and optimization
- Debugging performance discrepancies between online and offline environments
- Ensuring feature synchronization
- Optimizing memory and compute resources
- Scaling ML systems for maximum efficiency

Why Join Us?
We are continuously expanding our portfolio of projects and are eager to find talented engineers who are excited about building and deploying state-of-the-art ML systems. We are seeking a mid-level Machine Learning Engineer who can help accelerate and refine our ML development cycle, setting new standards for prototyping, building, and maintaining cutting-edge ML solutions.

What You'll Do
- Develop advanced ML models: create innovative machine learning models to address complex business challenges and drive value.
- Enhance model performance: identify and resolve performance gaps in existing models with creative and effective solutions.
- Leverage unique data: work with proprietary unstructured and structured datasets, applying advanced NLP techniques to extract insights and build impactful solutions.
- Optimize application scaling: efficiently scale ML applications to maximize compute resource utilization and meet high customer demand.
- Address technical debt: proactively identify and propose solutions to reduce technical debt within the tech stack.
- Drive the ML lifecycle: engage in all phases, from problem framing and data exploration to model deployment and production monitoring, ensuring continuous improvement.
- Collaborate across teams: partner with cross-functional teams, including Data, Product Management, Design, and Engineering, to ensure smooth operations and contribute to the future product vision.
- Lead ML projects: oversee the development of core capabilities by scoping and planning ML projects effectively.
- Enhance user experiences: collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals.

Who You Are
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- 3+ years of significant, hands-on industry experience with machine learning, natural language processing (NLP), and information retrieval systems, including designing, shipping, and maintaining production systems.
- Strong proficiency in Python.
- Proven experience building ML pipelines for data processing, training, inference, maintenance, evaluation, versioning, and experimentation.
- Demonstrated effective coding, documentation, collaboration, and communication habits.
- Strong problem-solving skills and a proactive approach to addressing challenges.
- Ability to adapt to a fast-paced and dynamic work environment.
- [Optional] Experience developing search or recommender systems
- [Optional] Experience working with databases and other datastores
- [Optional] Experience with machine learning libraries/frameworks for Large Language Model (LLM) orchestration, such as LangChain, Semantic Kernel, LlamaIndex, etc.

Technologies We Love
- Traditional ML: scikit-learn, XGBoost, LightGBM
- ML/Deep Learning: PyTorch, Transformers, Hugging Face, LangChain
- Deployment: Airflow, Docker, Kubernetes, Jenkins, AWS
- EDA/Visualization: Pandas, Matplotlib, Jupyter, Weights & Biases
- Tools/Toolkits: DVC, MosaicML, NVIDIA NeMo, Labelbox
- Techniques: RAG, prompt engineering, information retrieval, data embedding
- Datastores: Postgres, OpenSearch, SQLite, S3
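The entity-linking task this posting describes, matching noisy company names to canonical database records, can be sketched with stdlib fuzzy matching. The candidate records, names, and threshold below are hypothetical; a production linker like the one described would use learned models rather than edit-distance ratios:

```python
from difflib import SequenceMatcher

# Hypothetical canonical records: (entity_id, canonical_name)
CANONICAL = [
    ("SP001", "International Business Machines Corporation"),
    ("SP002", "Microsoft Corporation"),
    ("SP003", "Alphabet Inc."),
]

def normalize(name: str) -> str:
    """Lowercase and strip punctuation and common corporate suffixes."""
    name = name.lower().replace(".", "").replace(",", "")
    for suffix in (" corporation", " corp", " inc", " ltd"):
        name = name.removesuffix(suffix)
    return name.strip()

def link(query: str, threshold: float = 0.6):
    """Return the best-matching entity ID, or None if no match clears the threshold."""
    q = normalize(query)
    best_id, best_score = None, 0.0
    for entity_id, canonical in CANONICAL:
        score = SequenceMatcher(None, q, normalize(canonical)).ratio()
        if score > best_score:
            best_id, best_score = entity_id, score
    return best_id if best_score >= threshold else None

print(link("Microsoft Corp"))   # links despite the differing suffix
print(link("Unknown Startup"))  # no confident match -> None
```

The normalization step is what lets incomplete or error-laden inputs still link: both sides are reduced to a comparable core before scoring.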
Posted 2 months ago
13 - 15 years
50 - 55 Lacs
Chennai, Bengaluru, Noida
Work from Office
Dear Candidate,

We are looking for an experienced Machine Learning Engineer to develop and deploy ML models for real-world applications. If you have expertise in Python, TensorFlow, and data science pipelines, we'd love to hear from you!

Key Responsibilities:
- Design, develop, and deploy machine learning models for predictive analytics.
- Work with large datasets to train and fine-tune models.
- Optimize model performance and scalability.
- Implement MLOps practices for model deployment and monitoring.
- Collaborate with data scientists and engineers on AI/ML projects.

Required Skills & Qualifications:
- Strong programming skills in Python and ML libraries (TensorFlow, PyTorch, scikit-learn).
- Experience with deep learning, NLP, or computer vision.
- Knowledge of cloud-based ML services (AWS SageMaker, Azure ML, Google AI).
- Familiarity with data preprocessing, feature engineering, and model evaluation.
- Understanding of MLOps and CI/CD for machine learning pipelines.

Soft Skills:
- Strong analytical and problem-solving abilities.
- Ability to work with cross-functional teams.
- Excellent communication and presentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 2 months ago
2 - 6 years
25 - 40 Lacs
Bengaluru
Hybrid
About the Position
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will span both operational and engineering work, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side, you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.

Role: AI/ML Engineer
Location: Bengaluru, Hyderabad
Experience: 2-6 years

What You'll Do
- Fine-tune generative AI models to enhance performance.
- Design AI agents for conversational AI applications.
- Experiment with new techniques to develop models for observability use cases.
- Build and maintain inference pipelines for efficient model deployment.
- Manage deployment and model versioning pipelines for seamless updates.
- Develop tooling to continuously validate models in production environments.

What We're Looking For
- 2-6 years of experience with demonstrated proficiency in software engineering design practices.
- Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, or a related field; an advanced degree (Master's or Ph.D.) is preferred.
- Experience working with transformer models and text embeddings.
- Proven track record of deploying and managing ML models in production environments.
- Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, HuggingFace Transformers, and spaCy.
- Experience developing production-grade applications in Python (preferred).
- Proficiency with Kubernetes and containers.
- Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon.
- Expertise in Python, C++, Kotlin, or similar programming languages.
- Experience designing, developing, and testing scalable distributed systems.
- Familiarity with message broker systems (e.g., Kafka, RabbitMQ).
- Knowledge of application instrumentation and monitoring practices.
- Experience with ML workflow management tools such as Airflow, SageMaker, etc.
- Familiarity with the AWS ecosystem.
- Past projects involving the construction of agentic language chains.

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Note: We prefer candidates from premium institutes and product firms.
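The RAG system this role mentions hinges on a retrieval step: score stored passages against a query and prepend the best matches to the LLM prompt. A minimal bag-of-words sketch of that scoring follows; the documents and query are invented, and a production system would use learned text embeddings rather than raw term counts:

```python
import math
from collections import Counter

# Hypothetical passages an observability RAG system might index.
DOCS = [
    "Latency spiked after the cache deployment on Tuesday.",
    "The observability platform ingests traces, logs, and metrics.",
    "Quarterly revenue grew across all regions.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts: a crude stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

context = retrieve("why did latency spike after the deployment?")
print(context[0])  # the latency/deployment passage ranks first
```

The retrieved `context` would then be interpolated into the LLM prompt; the generation step itself is omitted here.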
Posted 2 months ago
2 - 6 years
25 - 40 Lacs
Hyderabad
Hybrid
About the Position
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will span both operational and engineering work, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side, you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.

Role: AI/ML Engineer
Location: Bengaluru, Hyderabad
Experience: 2-6 years

What You'll Do
- Fine-tune generative AI models to enhance performance.
- Design AI agents for conversational AI applications.
- Experiment with new techniques to develop models for observability use cases.
- Build and maintain inference pipelines for efficient model deployment.
- Manage deployment and model versioning pipelines for seamless updates.
- Develop tooling to continuously validate models in production environments.

What We're Looking For
- 2-6 years of experience with demonstrated proficiency in software engineering design practices.
- Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, or a related field; an advanced degree (Master's or Ph.D.) is preferred.
- Experience working with transformer models and text embeddings.
- Proven track record of deploying and managing ML models in production environments.
- Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, HuggingFace Transformers, and spaCy.
- Experience developing production-grade applications in Python (preferred).
- Proficiency with Kubernetes and containers.
- Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon.
- Expertise in Python, C++, Kotlin, or similar programming languages.
- Experience designing, developing, and testing scalable distributed systems.
- Familiarity with message broker systems (e.g., Kafka, RabbitMQ).
- Knowledge of application instrumentation and monitoring practices.
- Experience with ML workflow management tools such as Airflow, SageMaker, etc.
- Familiarity with the AWS ecosystem.
- Past projects involving the construction of agentic language chains.

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Note: We prefer candidates from premium institutes and product firms.
Posted 2 months ago
2 - 6 years
14 - 20 Lacs
Bengaluru
Work from Office
We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, Internet of Things, endpoint security, big data analytics, and software product engineering services.

About the Position:
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will span both operational and engineering work, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side, you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.

Role: AI/ML Engineer
Location: Bengaluru
Experience: 2-6 years

What You'll Do
- Fine-tune generative AI models to enhance performance.
- Design AI agents for conversational AI applications.
- Experiment with new techniques to develop models for observability use cases.
- Build and maintain inference pipelines for efficient model deployment.
- Manage deployment and model versioning pipelines for seamless updates.
- Develop tooling to continuously validate models in production environments.

What We're Looking For
- 2-6 years of experience with demonstrated proficiency in software engineering design practices.
- Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, or a related field; an advanced degree (Master's or Ph.D.) is preferred.
- Experience working with transformer models and text embeddings.
- Proven track record of deploying and managing ML models in production environments.
- Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, HuggingFace Transformers, and spaCy.
- Experience developing production-grade applications in Python (preferred).
- Proficiency with Kubernetes and containers.
- Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon.
- Expertise in Python, C++, Kotlin, or similar programming languages.
- Experience designing, developing, and testing scalable distributed systems.
- Familiarity with message broker systems (e.g., Kafka, RabbitMQ).
- Knowledge of application instrumentation and monitoring practices.
- Experience with ML workflow management tools such as Airflow, SageMaker, etc.
- Familiarity with the AWS ecosystem.
- Past projects involving the construction of agentic language chains.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Posted 2 months ago
5 - 8 years
1 - 2 Lacs
Bengaluru
Hybrid
We are hiring a Generative AI Specialist with 5+ years of experience! You will develop and implement Generative AI solutions using Python, Hugging Face, and LLMs to drive AI adoption in healthcare and finance. JD: https://tinyurl.com/tencysai

Perks and benefits: annual bonus, life insurance, performance bonus.
Posted 3 months ago
7 - 12 years
20 - 35 Lacs
Bengaluru
Work from Office
Location: Bangalore / Hybrid
Department: Data & AI
Company: Resolve Tech Solutions / Juno Labs

About Juno Labs:
Juno Labs is at the forefront of AI-driven cloud solutions, helping businesses unlock the power of data with scalable, intelligent, and high-performance architectures. We specialize in building next-gen data platforms, leveraging cloud technologies, AI/ML, vector databases, and advanced frameworks to drive real-time insights and intelligent decision-making.

Job Description:
We are looking for an experienced MLOps Engineer to join our Data & AI team. This role focuses on building, deploying, and optimizing end-to-end machine learning systems, with an emphasis on LLMOps (operationalization of large language models). The ideal candidate has strong expertise in MLOps, LLMOps, and DevOps, with hands-on experience managing and deploying large-scale models, particularly LLMs, in both cloud and on-premises environments. The role involves not only building robust MLOps pipelines but also self-hosting models, optimizing GPU usage, and performing quantization to reduce deployment cost.

Key Responsibilities:
- Design and implement scalable MLOps pipelines to deploy, monitor, and manage machine learning models, with a particular focus on LLMOps.
- Integrate, fine-tune, and optimize Hugging Face models (e.g., Transformers, BART, GPT-2/3) for diverse NLP tasks such as text generation, text classification, and NER, and deploy them in production-scale systems.
- Use LangChain to build sophisticated LLM-driven applications, enabling seamless model workflows for NLP and decision-making tasks.
- Optimize and manage LLMOps pipelines for large-scale models using technologies such as the OpenAI API, Amazon Bedrock, DeepSpeed, and the Hugging Face Hub.
- Develop and scale self-hosted LLM solutions (e.g., fine-tuning and serving models on-premises or in a hybrid cloud environment) to meet performance, reliability, and cost-effectiveness goals.
- Leverage cloud-native tools on AWS and GCP, such as Amazon SageMaker and Vertex AI, for scaling large language models, and ensure their optimization in distributed cloud environments.
- Use GPU-based optimization for large-scale model training and deployment, ensuring high performance and efficient resource allocation in cloud or on-premises environments.
- Deploy models via containerized solutions using Docker, Kubernetes, and Helm, allowing for seamless scaling and management in both cloud and on-premises infrastructures.
- Implement model quantization and pruning techniques to reduce the resource footprint of deployed models while maintaining high performance.
- Monitor model performance in production using Prometheus, Grafana, the ELK Stack, and other observability tools to track metrics such as inference latency, accuracy, and throughput.
- Automate the end-to-end model development and deployment workflow via CI/CD pipelines with tools like GitLab CI, Jenkins, and CircleCI.
- Integrate vector databases (e.g., Pinecone, FAISS, Milvus) for efficient storage, retrieval, and querying of model-generated embeddings.
- Stay up to date with the latest advancements in MLOps, LLMOps, and machine learning technologies, ensuring the adoption of best practices in model development, deployment, and optimization.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in MLOps, LLMOps, DevOps, or related roles, with a focus on deploying and managing machine learning models in production environments.
- Experience with cloud platforms such as AWS, GCP, and Azure, and services like Amazon SageMaker, Vertex AI, TensorFlow Serving, DeepSpeed, and Amazon Bedrock.
- Expertise in Hugging Face models and the Transformers library, including model fine-tuning, deployment, and optimizing NLP models for large-scale production.
- Experience with LangChain for building and deploying LLM-based applications that handle dynamic and real-time tasks.
- Strong experience with self-hosting LLMs in cloud or on-premises environments using GPU-based infrastructure for training and inference (e.g., NVIDIA GPUs, CUDA).
- Expertise in GPU utilization and optimization for large-scale model training, inference, and cost-effective deployment.
- Hands-on experience with model quantization techniques to reduce memory footprint and inference time, such as TensorFlow Lite, ONNX, or DeepSpeed.
- Familiarity with distributed ML frameworks like Kubeflow, Ray, Dask, and MLflow for managing end-to-end ML workflows and large-scale model training and evaluation.
- Proficiency with containerization and orchestration tools such as Kubernetes, Docker, Helm, and Terraform for infrastructure automation.
- Knowledge of vector databases like Pinecone, Milvus, or FAISS to facilitate fast and scalable retrieval of model-generated embeddings.
- Expertise in setting up and managing CI/CD pipelines for model training, validation, testing, and deployment with tools like Jenkins, GitLab CI, and CircleCI.
- Strong programming skills in Python, Bash, and shell scripting.
- Solid understanding of monitoring and logging tools such as Prometheus, Grafana, and the ELK Stack to ensure high system performance, error detection, and model health tracking.

Preferred Qualifications:
- Proven experience deploying and managing large-scale LLMs like GPT-3, BERT, T5, and BLOOM in production environments using cloud-native solutions and on-premises hosting.
- Deep expertise in quantization, model compression, and pruning to optimize deployed models for lower latency and reduced resource consumption.
- Strong understanding of NLP tasks and deep learning concepts such as transformers, attention mechanisms, and pretrained model fine-tuning.
- Experience with Kedro for building reproducible ML pipelines with a focus on data engineering, workflow orchestration, and modularity.
- Familiarity with Apache Spark and Hadoop for handling big data processing needs, especially in real-time AI workloads.
- Familiarity with advanced data engineering pipelines and data lakes for the effective management of large datasets required for training LLMs.

Why Join Us:
- Work with cutting-edge technologies in AI, MLOps, and LLMOps, including self-hosting and optimizing large-scale language models.
- Be part of an innovative, fast-growing team working on the future of AI-driven cloud solutions.
- Flexibility in work style with a hybrid work environment that promotes work-life balance.
- Competitive salary and benefits package, with opportunities for personal and professional growth.
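The quantization this role calls for can be illustrated with a dependency-free sketch of symmetric int8 weight quantization, the core idea behind the int8 paths in tools like ONNX and TensorFlow Lite. The weight values below are made up:

```python
# Symmetric per-tensor int8 quantization: store float weights as 8-bit
# integers plus one float scale, cutting memory roughly 4x vs float32.
weights = [0.42, -1.37, 0.05, 2.8, -0.91]

scale = max(abs(w) for w in weights) / 127      # map the largest |w| to 127
q = [round(w / scale) for w in weights]         # int8 codes in [-127, 127]
dequant = [qi * scale for qi in q]              # approximate reconstruction

max_err = max(abs(w - d) for w, d in zip(weights, dequant))
assert all(-127 <= qi <= 127 for qi in q)
assert max_err <= scale / 2                     # rounding error <= half a step
print(q, round(max_err, 4))
```

Real deployments layer calibration, per-channel scales, and quantization-aware fine-tuning on top of this, but the memory/precision trade-off is exactly this arithmetic.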
Posted 3 months ago
5 - 10 years
50 - 90 Lacs
Chennai
Hybrid
Short Description:
Ford Artificial Intelligence Advancement Center is looking for professionals experienced in NLP/LLM/GenAI who are hands-on and can employ many NLP/prompt engineering techniques, from traditional statistical/ML NLP to DL-based sequence models and transformers, in their day-to-day work.

Description:
You'll be working alongside leading technical experts from around the world on a variety of products involving sequence/token classification, QA/chatbots, translation, semantic search, and summarization, among others.

Responsibilities:
- Design NLP/LLM/GenAI applications/products by following robust coding practices.
- Explore state-of-the-art models/techniques so that they can be applied to automotive industry use cases.
- Conduct ML experiments to train and infer models; if need be, build models that abide by memory and latency restrictions.
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.).
- Converge multiple bots into super apps using LLMs with multimodalities.
- Develop agentic workflows using AutoGen, Agent Builder, LangGraph.
- Build modular AI/ML products that can be consumed at scale.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.

Technical Requirements:
- Soft skills: strong communication skills and excellent teamwork through Git/Slack/email/calls with multiple team members across geographies.
- GenAI skills: experience with LLM models like PaLM, GPT-4, and Mistral (open-source models); working through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring; developing and maintaining AI pipelines with multiple modalities such as text, image, and audio; real-world implementation of chatbots or conversational agents at scale, handling different data sources; experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion or InstructPix2Pix; expertise in handling large-scale structured and unstructured data; efficient handling of large-scale generative AI datasets and outputs.
- ML/DL skills: high familiarity with DL theory and practice in NLP applications; comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas; comfort using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others.
- NLP skills: knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.); real-world implementation of BERT or other fine-tuned transformer models (sequence classification, NER, or QA), from data preparation and model creation through inference and deployment.
- Python project management skills: familiarity with Docker tools and pipenv/conda/poetry environments; comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.); familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
- Cloud skills and computing: use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI; good working knowledge of other open-source packages for benchmarking and summarization; experience using GPUs/CPUs on cloud and on-prem infrastructure; the skill set to leverage cloud platforms for data engineering, big data, and ML needs.
- Deployment skills: use of Docker (experience with experimental Docker features, docker-compose, etc.); familiarity with orchestration tools such as Airflow and Kubeflow; experience with CI/CD and infrastructure-as-code tools like Terraform; Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.; ability to develop APIs with compliant, ethical, secure, and safe AI tooling.
- UI: good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.; a deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
- Miscellaneous skills: data engineering, including distributed computing (specifically parallelism and scalability in data processing, modeling, and inference through Spark, Dask, RAPIDS AI, or RAPIDS cuDF); ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django); experience with Elasticsearch, Apache Solr, and vector databases is a plus.
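The "fundamental text data processing" asked for above can be sketched with stdlib regex tooling. The cleaning rules and sample string are illustrative only; real pipelines tune these rules per corpus:

```python
import re

def clean(text: str) -> list[str]:
    """Naive NLP preprocessing: strip noise, then whitespace-tokenize."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)     # drop URLs
    text = re.sub(r"[^a-z0-9'\s]", " ", text)     # drop punctuation/symbols
    text = re.sub(r"\s+", " ", text).strip()      # collapse whitespace
    return text.split()

tokens = clean("Check https://example.com -- the NEW model's F1 hit 0.92!!")
print(tokens)
# → ['check', 'the', 'new', "model's", 'f1', 'hit', '0', '92']
```

Note how the naive punctuation rule splits "0.92" into two tokens: exactly the kind of edge case ("segmenting noisy sentences at the right places") the posting expects candidates to reason about.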
Posted 3 months ago
5 - 10 years
5 - 11 Lacs
Gurgaon
Work from Office
Role & Responsibilities
- Develop and maintain backend applications using Node.js and TypeScript
- Build and optimize RESTful APIs and microservices using Express, Fastify, or AdonisJS
- Design and manage MongoDB databases, ensuring efficiency and scalability
- Work with Linux and Bash for server management, automation, and deployment tasks
- Implement LLM LoRA fine-tuning and optimize LLM models using the Hugging Face Transformers Trainer or other advanced techniques (preferred)
- Set up and manage CI/CD pipelines, ensuring smooth deployment and integration processes
- Deploy and optimize cloud infrastructure using AWS Elastic Beanstalk or Kubernetes-based infrastructure (a plus)
- Work on AI-driven coding workflows, integrating Vibe Coding methodologies
- Utilize generative AI agents like Cursor or Cline to enhance development efficiency

Required Skills and Experience
- 5+ years of experience in backend development with Node.js and TypeScript
- Proficiency in Express, Fastify, or AdonisJS for building scalable applications
- Strong expertise in MongoDB, including schema design, indexing, and optimization
- Hands-on experience with Linux and Bash scripting for system automation
- Familiarity with the Hugging Face Transformers Trainer or LoRA fine-tuning for LLMs (preferred)
- Experience setting up CI/CD pipelines using GitHub Actions, Jenkins, or similar tools
- Knowledge of AWS Elastic Beanstalk, Kubernetes (K8s), or cloud infrastructure deployment
- Exposure to Vibe Coding methodologies and AI-assisted development tools like Cursor or Cline (huge plus)
- Strong debugging and troubleshooting skills for backend applications

Preferred Qualifications
- Experience with GraphQL or tRPC for efficient API communication
- Background in distributed systems and microservices architecture
- Familiarity with serverless computing and cloud-native technologies
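The LoRA fine-tuning this posting mentions replaces full weight updates with a low-rank correction. A dependency-free sketch of the arithmetic, with toy sizes and made-up values (real LLM layers are orders of magnitude larger):

```python
# LoRA idea: instead of updating a d_out x d_in weight matrix W directly,
# learn two small matrices B (d_out x r) and A (r x d_in) with r << d,
# and serve W + B @ A. Trainable parameters drop from d_out*d_in
# to r*(d_out + d_in).
d_out, d_in, r = 8, 8, 2

W = [[0.0] * d_in for _ in range(d_out)]           # frozen base weights
B = [[0.01 * (i + 1)] * r for i in range(d_out)]   # trainable low-rank factor
A = [[0.1] * d_in for _ in range(r)]               # trainable low-rank factor

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

delta = matmul(B, A)                                # rank-r update B @ A
W_adapted = [[W[i][j] + delta[i][j] for j in range(d_in)]
             for i in range(d_out)]

full_params = d_out * d_in                          # 64
lora_params = r * (d_out + d_in)                    # 32
print(f"full: {full_params} params, LoRA: {lora_params} params")
```

Libraries such as Hugging Face's PEFT manage exactly this bookkeeping (plus a scaling factor and per-layer placement) so only the small factors are trained and shipped.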
Posted 3 months ago
4 - 9 years
7 - 17 Lacs
Hyderabad
Work from Office
About the Role:
Wells Fargo is seeking a Senior Business Execution Consultant. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow.

In this role, you will:
- Lead support functions or operations for multiple business groups and contribute to large-scale strategic initiatives
- Ensure efficiency, quality, cost-effectiveness of solutions, and pipeline management relating to assigned operations
- Research moderately complex business, operational, and strategic initiatives that require analytical skills, basic knowledge of organizational strategy and Business Execution, and an understanding of international business
- Work independently to make recommendations for the support function by providing support and leadership
- Assist in the planning and execution of a variety of programs and initiatives that may include risk mitigation, efficiency, and customer experience
- Collaborate and consult with team leaders in developing project plans, policies, and procedures
- Provide leadership in the management of relationships and implementation of programs, services, and initiatives with cross-functional business partners

Required Qualifications:
- 4+ years of Business Execution, Implementation, or Strategic Planning experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Expertise in Python and major frameworks like TensorFlow, PyTorch, and Hugging Face.
- Experience using MLOps tools (e.g., MLflow, Kubeflow) for scaling and deploying models in production.
- Experience working on Google Cloud Platform and expertise in using GCP services.
- Expertise in Scala for processing large-scale datasets.
- Proficiency in Java and JavaScript (Node.js) for back-end integrations and for building interactive AI-based web applications or APIs.
- Experience with SQL and NoSQL languages for managing structured, unstructured, and semi-structured data for AI and generative AI applications.
- Critical thinking and strong problem-solving skills.
- Ability to learn the latest technologies, keep up with trends in the GenAI space, and apply them to business problems quickly.
- Ability to multi-task, prioritize between projects, and work independently and as part of a team.
- A graduate degree from a top-tier university (e.g., IIT, ISI, IIIT, IIM, etc.) is preferred.

Job Expectations:
- Work individually or as part of a team on multiple AI and generative AI projects, working closely with business partners across the organization.
- Mentor and coach budding data scientists on developing and implementing AI solutions.
- Perform various complex activities related to neural networks and transformer-based models.
- Provide analytical support for developing, evaluating, implementing, monitoring, and executing models across business verticals using emerging technologies.
- Apply expert knowledge of working with large datasets using SQL or NoSQL and present conclusions to key stakeholders.
- Establish a consistent, collaborative framework with the business and act as a primary point of contact in delivering solutions.
- Build quick prototypes to check feasibility and value to the business.
- Develop and maintain a modular codebase for reusability.
- Review and validate models and help improve model performance under the purview of regulatory requirements.
- Work closely with technology teams to deploy models to production.
- Prepare detailed project documentation for both internal and external use that complies with regulatory and internal audit requirements.
Posted 3 months ago
5 - 8 years
5 - 8 Lacs
Navi Mumbai, Thane, Mumbai (All Areas)
Work from Office
Develop and enhance OCR and object detection systems; use DocTR by Mindee for end-to-end document analysis; integrate LayoutLM and YOLOv8 for visual document understanding; design and apply neural networks; develop tactical automations using the Python Robot Framework; employ scikit-learn and PyTorch.
Posted 3 months ago
5 - 8 years
8 - 15 Lacs
Noida
Work from Office
Assist in fine-tuning AI models, data preparation, and research & development. Support the design and implementation of scalable pipelines for AI model training and deployment. Integrate AI models into end-user applications.
Posted 3 months ago
5 - 7 years
15 - 18 Lacs
Pune
Work from Office
Summary of the Role: We are seeking an AI Prompt Engineer with expertise in Large Language Models (LLMs) and prompt engineering to support legal tech solutions, specifically in contract lifecycle management and contract-related AI implementations. The ideal candidate will have a strong background in AI, NLP, and structured prompt engineering, with a keen understanding of legal tech applications. Prior experience in the legal domain is a plus but not mandatory. What you will do: Design, develop, and refine AI prompts for legal tech applications, ensuring accuracy and contextual relevance in contract-related workflows. Work with Large Language Models (LLMs) to enhance AI-driven solutions for contract analysis, review, and automation. Optimize prompt structures and techniques to improve AI-generated outputs in contract drafting, compliance, and negotiations. Research, test, and implement best practices in prompt engineering to maximize efficiency and accuracy in contract delivery. Evaluate and refine AI-generated legal documents to align with compliance standards and client requirements. Stay updated with advancements in LLMs, prompt engineering methodologies, and AI-driven legal tech solutions. What you bring: Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field. Minimum 5 years of overall professional experience, with at least 2 years in AI prompt engineering. Strong understanding of LLMs (GPT, Claude, Gemini, Llama, etc.) and their application in legal or enterprise use cases. Proven expertise in prompt design, optimization, and fine-tuning for AI-driven text generation. Hands-on experience with AI tools, frameworks, and APIs (OpenAI, Hugging Face, LangChain, etc.). Strong problem-solving skills and ability to work in a fast-paced environment. Excellent communication and collaboration skills to work with cross-functional teams. Bonus Points: Basic understanding of contracts, legal terminology, and compliance standards. 
Applications must be submitted exclusively through Execo's official job postings located on the following platforms: Execo Careers Website: https://www.execo.com/careers LinkedIn: https://www.linkedin.com/company/execogroup/jobs/ Indeed: US & Kenya: https://www.indeed.com/cmp/Execo-Group-Inc India: https://in.indeed.com/cmp/Execo-Group-Inc UK: https://uk.indeed.com/cmp/Execo-Group-Inc Philippines: https://ph.indeed.com/cmp/Execo-Group-Inc Singapore: https://sg.indeed.com/cmp/Execo-Group-Inc Naukri: https://www.naukri.com/
Posted 3 months ago
4 - 7 years
35 - 40 Lacs
Mumbai
Remote
Designation: Full Stack Developer with Gen AI Deployment Experience (4-7 Years). Work Mode: Remote. Interview Process: L1 - External Technical Interview | L2 - Internal Techno-Managerial Interview | L3 - Client Interview. Notice Period: Immediate Joiners / Serving Notice Period.

Job Description: Job Title: Full Stack Developer with Gen AI Deployment Experience (primary focus on back-end, secondary focus on front-end/UI). Job Summary: We are seeking a highly skilled Full Stack Developer with a strong emphasis on backend development and direct experience deploying enterprise production systems. The primary focus will be on putting an enterprise GenAI platform into production, with additional responsibilities in UI development using Angular and React. The ideal candidate will have 4-5 years of experience in designing, developing, and deploying large-scale applications, particularly with Python and Gen AI frameworks, and a preference for Azure cloud deployment experience.

Key Responsibilities: - Design, develop, and deploy scalable and reliable backend services using Python, Flask/Django, and other relevant frameworks. - Collaborate with data scientists to integrate Gen AI models into production environments using frameworks such as Hugging Face Transformers, PyTorch, and TensorFlow. - Develop and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and Docker to deploy Gen AI models. - Ensure high-quality code through unit tests, integration tests, and code reviews. - Work with data engineering teams to design and implement data pipelines using technologies like Apache Beam and Apache Spark. - Collaborate with DevOps teams to ensure smooth deployment and operation of backend services, with a preference for Azure cloud environments. - Contribute to front-end development using Angular and React for UI components. - Troubleshoot and resolve complex technical issues, providing technical guidance to junior team members.
- Stay updated with industry trends and emerging Gen AI technologies to enhance our software systems.

Requirements: - 4-5 years of experience in software development, with a focus on backend development using Python. - Strong experience with Gen AI frameworks such as Hugging Face Transformers, PyTorch, and TensorFlow. - Experience with DevOps tools like Jenkins, GitLab CI/CD, Docker, and cloud technologies, preferably Azure. - Proficiency in database concepts, including data modeling, normalization, and query optimization, with experience in systems like MySQL, PostgreSQL, or MongoDB. - Understanding of software design patterns, principles, and best practices. - Excellent problem-solving skills and the ability to troubleshoot complex technical issues. - Strong communication and collaboration skills for effective teamwork. - Bachelor's degree in Computer Science, Engineering, or a related field. Sincerely, Varsha L TS
Posted 3 months ago
2 - 6 years
25 - 35 Lacs
Gurgaon
Work from Office
Research, design, and implement advanced NLP models to enhance the platform's multilingual capabilities. Develop and fine-tune statistical and neural models for optimal performance. Build scalable pipelines to train, evaluate, and deploy multilingual NLP solutions. Required candidate profile: RAG frameworks integrating contextual data into models. Skilled in Python/R and ML frameworks (PyTorch, TensorFlow, Hugging Face). Strong foundation in machine learning and deep learning principles.
Posted 3 months ago
3 - 7 years
14 - 18 Lacs
Bengaluru
Work from Office
As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders. In your role, you may be responsible for: Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects. Writing programs to cleanse and integrate data in an efficient and reusable manner. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions. Evaluating modelling results and communicating the results to technical and non-technical audiences.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs. Document solution architectures, design decisions, implementation details, and lessons learned.
Create technical documentation, white papers, and best-practice guides. Preferred technical and professional experience: Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face. Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms. Experience and working knowledge in COBOL & Java would be preferred. Experience in Python and PySpark would be an added advantage.
Posted 3 months ago
5 - 10 years
6 - 16 Lacs
Bengaluru
Work from Office
Hiring for Gen AI. Minimum experience: 5 years. Location: Bengaluru. CTC: 20 LPA. For more details: 9205018536 (Prabhsimer), prabhsimer.imaginators@gmail.com. Required candidate profile: Proficiency in Python, Node.js, C#, HTML, and JavaScript. Experience with AI libraries/packages such as TensorFlow, Keras, PyTorch, and Generative AI frameworks.
Posted 3 months ago
2 - 7 years
0 - 2 Lacs
Gurgaon, Jaipur
Work from Office
We are looking for a Senior Machine Learning Engineer with deep expertise in Transformers, Large Language Models (LLMs), and Natural Language Processing (NLP). This role involves designing, training, and fine-tuning state-of-the-art AI models for real-world applications. The ideal candidate will have a strong research background and hands-on experience in deploying scalable NLP solutions. Key Responsibilities Research, develop, and optimize Transformer-based architectures (e.g., BERT, GPT, T5, LLaMA) for various NLP tasks. Fine-tune LLMs on domain-specific datasets to improve accuracy and performance. Work on text generation, summarization, named entity recognition (NER), and semantic search applications. Implement and optimize embedding techniques for retrieval-augmented generation (RAG). Apply self-supervised and reinforcement learning techniques to enhance model performance. Deploy and scale ML models using cloud platforms (AWS, GCP, Azure) and containerized solutions like Docker and Kubernetes. Improve inference efficiency using quantization, distillation, and model optimization techniques. Collaborate with data engineers, software developers, and research scientists to integrate ML models into production. Stay updated with the latest advancements in AI, NLP, and Deep Learning, applying innovative techniques to solve business challenges. Required Skills & Qualifications Expertise in NLP & LLMs: Strong understanding of transformer-based models (e.g., BERT, GPT, T5, LLaMA). Programming Skills: Proficiency in Python and deep learning frameworks like PyTorch, TensorFlow, and Hugging Face Transformers. Model Optimization: Experience with quantization, pruning, and distillation to improve model efficiency. Data Handling: Strong experience in preprocessing, tokenization, and vectorization of large text datasets. Deployment & Scalability: Hands-on experience with MLOps, API development, cloud services (AWS, GCP, Azure), and containerization (Docker, Kubernetes). 
Information Retrieval & RAG: Knowledge of vector databases (FAISS, Pinecone, Weaviate) and embedding techniques. Mathematical Foundation: Strong background in linear algebra, probability, and deep learning architectures. Collaboration: Ability to work with cross-functional teams and communicate technical concepts effectively. Preferred Qualifications Experience in low-rank adaptation (LoRA) and fine-tuning LLMs with limited resources. Exposure to multimodal learning (text, images, audio). Research publications or contributions to open-source NLP projects. Familiarity with prompt engineering and fine-tuning for AI assistants. What We Offer Opportunity to work on cutting-edge AI and NLP projects with a talented team. Ability to shape the development of next-generation AI applications. Access to latest ML research, conferences, and learning resources. Flexible work arrangements (remote/hybrid options available). Competitive salary and performance-based incentives.
Posted 3 months ago
4 - 9 years
7 - 17 Lacs
Mumbai
Work from Office
We are seeking a proficient AI/ML Engineer responsible for testing, deploying, and hosting various Large Language Models (LLMs). The role involves monitoring deployed agentic services, ensuring their optimal performance, and staying abreast of the latest advancements in AI technologies.

Key Responsibilities: LLM Deployment & Hosting: Evaluate, test, and deploy diverse LLMs on cloud-based and on-premise infrastructures. Optimize model performance for scalability and efficiency. Implement secure API endpoints and model-serving pipelines. Strong CI/CD skills and knowledge of CI/CD tools like Jenkins, GitHub Actions, CircleCI, etc. Agentic AI Services: Deploy and maintain AI agents using frameworks such as CrewAI, Agno (Phidata), AutoGen, etc. Integrate LLMs into business workflows and automation tools. Design, monitor, and enhance agentic services for real-world applications. Monitoring & Optimization: Utilize advanced observability tools to monitor model performance, latency, and cost efficiency. Implement tools like AgentOps, OpenLIT, Langfuse, and Langtrace for comprehensive monitoring and debugging of LLM applications. Develop logging, tracing, and alerting systems for deployed models and AI agents. Conduct A/B testing and gather user feedback to refine AI behavior. Address model drift and retrain models as necessary. Best Practices & Research: Stay updated with the latest advancements in AI, LLMs, and agentic systems. Implement best practices for prompt engineering, reinforcement learning from human feedback (RLHF), and fine-tuning methodologies. Optimize compute costs and infrastructure usage for AI applications. Collaborate with researchers and ML engineers to integrate state-of-the-art AI techniques. Strong knowledge of version control tools like Git/GitHub.

Qualifications & Skills: Proficiency with LLM frameworks such as Hugging Face Transformers, the OpenAI API, or Meta AI models.
Strong programming skills in Python and experience with deep learning libraries (PyTorch, TensorFlow, JAX). Experience with cloud platforms (AWS, Azure, GCP) and model deployment tools (Docker, Kubernetes, FastAPI, Ray Serve). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG) techniques. Hands-on experience with monitoring tools such as AgentOps, OpenLIT, Langfuse, and Langtrace. Understanding of prompt engineering, LLM fine-tuning, and agent-based automation. Excellent problem-solving skills and the ability to work in a dynamic AI research and deployment team. Preferred Qualifications: Experience in reinforcement learning, fine-tuning LLMs, or training custom models. Knowledge of security best practices for AI applications. Contributions to open-source AI/ML projects or research publications in the field.
Posted 1 month ago
5 - 10 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Chennai
Work from Office
We are looking for an AI Engineer (Senior Software Engineer). Interested candidates, email resumes to mayura.joshi@lionbridge.com or WhatsApp 9987538863.

Responsibilities: Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks. Implement and fine-tune models to improve response relevance and contextual accuracy. Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding. Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities. Integrate AI models with enterprise applications and APIs. Optimize model inference for performance and scalability. Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives. Ensure ethical AI implementation, addressing bias, explainability, and data security. Stay updated with the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements: 8+ years of experience in software development in accordance with development standards. Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain. Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies. Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate. Solid understanding of NLP, deep learning, and transformer architectures. Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.). Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows. Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments. Strong problem-solving and debugging skills. Excellent communication and teamwork abilities. Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
Posted 1 month ago
2 - 5 years
15 - 18 Lacs
Pune
Work from Office
Summary of the Role: We are seeking an AI Prompt Engineer with expertise in Large Language Models (LLMs) and prompt engineering to support legal tech solutions, specifically in contract lifecycle management and contract-related AI implementations. The ideal candidate will have a strong background in AI, NLP, and structured prompt engineering, with a keen understanding of legal tech applications. Prior experience in the legal domain is a plus but not mandatory. What you will do: Design, develop, and refine AI prompts for legal tech applications, ensuring accuracy and contextual relevance in contract-related workflows. Work with Large Language Models (LLMs) to enhance AI-driven solutions for contract analysis, review, and automation Optimize prompt structures and techniques to improve AI-generated outputs in contract drafting, compliance, and negotiations Research, test, and implement best practices in prompt engineering to maximize efficiency and accuracy in contract delivery Evaluate and refine AI-generated legal documents to align with compliance standards and client requirements Stay updated with advancements in LLMs, prompt engineering methodologies, and AI-driven legal tech solutions. What you bring: Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field. Minimum 5 years of overall professional experience, with at least 2 years in AI prompt engineering Strong understanding of LLMs (GPT, Claude, Gemini, Llama, etc.) and their application in legal or enterprise use cases Proven expertise in prompt design, optimization, and fine-tuning for AI-driven text generation Hands-on experience with AI tools, frameworks, and APIs (OpenAI, Hugging Face, LangChain, etc.) Strong problem-solving skills and ability to work in a fast-paced environment Excellent communication and collaboration skills to work with cross-functional teams. Bonus Points: Basic understanding of contracts, legal terminology, and compliance standards. 
Posted 1 month ago