5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Data Scientist at AuxoAI, you will lead the design, deployment, and scaling of production-grade ML systems, with a focus on AI, machine learning engineering (MLE), and generative AI. The role combines cutting-edge research with production rigor, and you will mentor others while building impactful AI applications.

Key Responsibilities:
- Own the full ML lifecycle, including model design, training, evaluation, and deployment
- Design production-ready ML pipelines integrating CI/CD, testing, monitoring, and drift detection
- Fine-tune Large Language Models (LLMs) and implement retrieval-augmented generation (RAG) pipelines (see the illustrative sketch after this listing)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop real-time and batch inference systems using Docker, Kubernetes, and Spark
- Utilize state-of-the-art architectures such as transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and advocate for MLOps, scalable architecture, and responsible AI best practices

Qualifications Required:
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficiency in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development, including ReAct agents, function calling, and orchestration
- Strong understanding of MLOps tools such as Docker, Kubernetes, Spark, model registries, and deployment workflows
- Solid software engineering background with expertise in testing, version control, and APIs
- Demonstrated ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field

Join AuxoAI and be part of a team that values innovation, scalability, and responsible AI practices to make a significant impact in the field of AI and machine learning.
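For illustration only, a minimal sketch of the kind of LoRA fine-tuning setup mentioned above, assuming the Hugging Face transformers and peft libraries; the base model name and hyperparameters are placeholder assumptions, not details from the listing.

```python
# Illustrative only: attach a LoRA adapter to a causal LM with Hugging Face peft.
# The model name and hyperparameters are placeholder assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                    # rank of the low-rank update matrices
    lora_alpha=16,          # scaling factor applied to the adapter output
    lora_dropout=0.05,
    task_type="CAUSAL_LM",  # tells peft how to wrap the model
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```

The adapted model can then be passed to a standard training loop or the Hugging Face Trainer; only the adapter weights are updated, which keeps fine-tuning memory and storage costs low.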
Posted 4 days ago
7.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Manager, AI/ML Engineering at Coschool, located in Nanakramguda, Hyderabad, you will join a team dedicated to revolutionizing learning through AI. Coschool is building a next-generation EdTech platform that empowers educators and students to achieve their best through intelligent learning systems.

Your role will involve leading and mentoring a high-performing team of AI/ML engineers; researching, prototyping, and developing robust ML models; overseeing the full AI/ML project lifecycle from data preprocessing to deployment and monitoring; and guiding the team in training, fine-tuning, and deploying LLMs using various methodologies.

To excel in this position, you should have 7 to 12 years of experience building AI/ML solutions, with at least 2 years in a leadership or managerial role. A strong foundation in machine learning, deep learning, and computer science fundamentals is essential, along with hands-on experience deploying AI models in production using frameworks such as PyTorch, TensorFlow, and Scikit-learn. Proficiency in cloud ML platforms such as AWS SageMaker, Google AI, and Azure ML is also required, as is familiarity with MLOps tools and practices.

In addition to technical skills, excellent problem-solving, communication, and people management skills are crucial for success in this role. A proactive mindset and a passion for innovation and mentorship will make you a great fit for the team at Coschool. Preferred skills include experience with generative AI frameworks, a portfolio of creative AI/ML applications, and familiarity with tools for LLM orchestration and retrieval-augmented generation (RAG).

At Coschool, you will have the opportunity to make a real impact by building solutions that directly affect the lives of millions of students. You will enjoy the autonomy to innovate and execute with a clear mission in mind, work in a fast-paced, learning-focused environment with top talent, and be part of a purpose-driven company that combines educational excellence with cutting-edge AI technology.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Bangalore, Karnataka
On-site
You are a Machine Learning & Generative AI Engineer responsible for designing, building, and deploying advanced ML and GenAI solutions. This role offers the opportunity to work on cutting-edge AI systems, such as LLM fine-tuning, Transformer architectures, and RAG pipelines, while also contributing to traditional ML model development for decision-making and automation.

With a minimum of 3 years of experience in machine learning, deep learning, and AI model development, you are expected to demonstrate strong proficiency in Python, PyTorch, TensorFlow, Scikit-Learn, and MLflow. Your expertise should cover Transformer architectures (such as BERT, GPT, T5, LLaMA, Falcon) and attention mechanisms. Experience with generative AI, including LLM fine-tuning, instruction tuning, and prompt optimization, is crucial. You should be familiar with RAG (Retrieval-Augmented Generation), embeddings, vector databases (FAISS, Pinecone, Weaviate, Chroma), and retrieval workflows. A solid foundation in statistics, probability, and optimization techniques is essential for this role.

You should have experience working with cloud ML platforms such as Azure ML / Azure OpenAI, AWS SageMaker / Bedrock, or GCP Vertex AI. Familiarity with big data and data engineering tools such as Spark, Hadoop, Databricks, and SQL/NoSQL databases is required. Proficiency in CI/CD, MLOps, and automation pipelines (such as Airflow, Kubeflow, MLflow) is expected, along with hands-on experience with Docker and Kubernetes for scalable ML/LLM deployment.

Experience in NLP and computer vision (Transformers, BERT/GPT models, YOLO, OpenCV) is advantageous. Exposure to vector search and embeddings for enterprise-scale GenAI solutions, multimodal AI, edge AI / federated learning, RLHF (Reinforcement Learning from Human Feedback) for LLMs, real-time ML applications, and low-latency model serving is considered a plus.

Your responsibilities will include designing, building, and deploying end-to-end ML pipelines covering data preprocessing, feature engineering, model training, and deployment. You will develop and optimize LLM-based solutions for enterprise use cases, leveraging Transformer architectures, and implement RAG pipelines that use embeddings and vector databases to integrate domain-specific knowledge into LLMs (a brief retrieval sketch follows this listing). Fine-tuning LLMs on custom datasets for domain-specific tasks and ensuring scalable deployment of ML and LLM models in cloud environments are critical responsibilities, as is collaborating with cross-functional teams of data scientists, domain experts, and software engineers to deliver AI-driven business impact.
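As a rough illustration of the retrieval step described above, the sketch below indexes a few documents with sentence-transformers embeddings and FAISS and fetches the closest chunks for a query; the model name and example texts are assumptions, not details from the listing.

```python
# Illustrative RAG retrieval step: embed documents, index them in FAISS,
# and fetch the most similar chunks for a user query.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days.",
    "Premium support is available 24/7 for enterprise customers.",
    "Shipping usually takes 3-5 business days.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model
doc_vectors = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])    # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

query = "How long do refunds take?"
query_vec = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")  # retrieved context to be passed into the LLM prompt
```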
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
Techvantage Analytics is a fast-growing AI services and product engineering company specializing in analytics, machine learning, and AI-based solutions. We are seeking a forward-thinking and highly experienced Gen AI Architect to design and implement cutting-edge AI solutions. The role involves leveraging the latest advancements in artificial intelligence and data science, including foundation models, multimodal AI, federated learning, and emerging AI frameworks. The ideal candidate will drive the development and deployment of scalable AI systems that power innovative applications across diverse domains.

Responsibilities:
- AI Architecture Design: develop robust architectures for generative AI solutions, leveraging the latest advancements in foundation models, large language models (LLMs), and multimodal AI.
- Model Development: build and fine-tune advanced generative models, such as GPT-based models, diffusion models, and transformers, for applications in text, image, video, and audio generation.
- Framework Integration: integrate modern AI frameworks, such as LangChain and LlamaIndex, to enhance the capabilities of generative models in production environments.
- Data Strategy: implement synthetic data generation, federated learning, and secure data sharing protocols for effective model training.
- Design and oversee the deployment of scalable AI systems using technologies such as MLOps pipelines, container orchestration (Docker, Kubernetes), and serverless architectures.
- Ethics & Compliance: ensure AI models comply with ethical guidelines and responsible AI principles, addressing fairness, transparency, and bias mitigation.
- Stay updated with the latest AI trends, such as reinforcement learning with human feedback (RLHF), AI explainability, and continual learning, and apply them to real-world problems.
- Collaboration: work closely with cross-functional teams, including data engineers, scientists, and product managers, to align AI systems with business goals.
- Leadership & Mentorship: provide guidance to junior AI practitioners and foster a culture of innovation within the team.
- Performance Monitoring: define KPIs for model performance, monitor systems in production, and implement mechanisms for continuous improvement and retraining.

The ideal candidate should have a Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Data Science, or a related field, along with 5+ years of experience in AI/ML with a focus on generative AI technologies. Proven expertise in deploying large-scale generative models in production environments, hands-on experience with AI/ML frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers, and proficiency in Python, R, or Julia for data science and AI model development are required. Expertise in cloud-native AI solutions, familiarity with advanced data strategies such as synthetic data creation and federated learning, and hands-on experience with MLOps practices are preferred. Strong analytical skills, the ability to thrive in a dynamic, fast-paced environment, and a proven track record of working effectively within cross-functional teams are also desired.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Kochi, Kerala
On-site
We are looking for a Senior AI/Machine Learning Engineer to join our prestigious client's team as a key member. With at least 8 years of experience, you will be based in Cochin or work in a hybrid model. The focus will be on advanced AI model development, with a competitive salary package based on your experience.

As a Senior AI/ML Engineer, you will play a crucial role in leading the development, fine-tuning, and deployment of advanced LLMs (Large Language Models). You should excel in end-to-end AI solution development, from data engineering to model optimization and API integration. Your responsibilities will include:
- Training and fine-tuning LLMs such as Llama for various business applications
- Building automated AI training pipelines for text and web-based datasets
- Managing AI infrastructure on Mac Pro, GPU-enabled servers, and cloud platforms
- Developing AI assistants for customer support, compliance, automation, and more
- Optimizing and deploying models for high-performance computing environments
- Implementing API integrations to enhance AI-driven task automation

The ideal candidate should have expertise in Python, TensorFlow, PyTorch, MLX (macOS), LLM fine-tuning, AI model training, GPU computing, MLOps best practices, web UI and front-end integration with AI models, containerization, API development, and cloud deployment, along with experience with open-source AI tools such as Llama, Ollama, and OpenWEB. A Master's degree in Computer Science, Machine Learning, or a related field, a proven track record of successfully deploying AI models, and strong problem-solving and communication skills are preferred qualifications.

Joining this opportunity will allow you to lead AI-driven transformation and work with the latest advancements in LLMs, automation, and AI integrations.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AI Engineer Position

We are seeking a highly skilled and experienced AI Engineer to join our innovative team in Bengaluru. This role is perfect for someone with a passion for artificial intelligence and machine learning who wants to apply their skills in LLMs, RAG, GPT, and LLaMA within a dynamic work environment. With an experience range of 3-5 years, the ideal candidate will have a proven track record in developing and implementing AI models that drive efficiency and innovation.

Key Responsibilities
- Design, develop, and deploy AI models using core skills such as LLMs, RAG, GPT, and LLaMA.
- Collaborate with cross-functional teams to understand business requirements and integrate AI solutions that meet those needs.
- Stay abreast of developments in AI technologies and continuously evaluate them for potential application within our projects.
- Analyze large data sets to identify patterns, trends, and insights that can be leveraged through AI.
- Ensure the scalability and reliability of AI systems while adhering to best practices in data security and privacy.
- Provide technical guidance and mentorship to junior team members on AI-related projects.
- Troubleshoot complex issues arising during the development lifecycle of AI models.

Required Skills
- Expertise in LLM fine-tuning.
- Expertise in artificial intelligence methodologies, with specific skills in LLMs, RAG, GPT, and LLaMA.
- Strong programming skills in Python or other relevant languages used in AI development.
- Demonstrated experience with machine learning libraries and frameworks.
- A solid understanding of data structures, algorithms, and software engineering principles relevant to AI.
- The ability to work effectively in a fast-paced environment while managing multiple priorities.

Experience Range
2-5 years

Job Timing
This is a full-time position.

Job Type
The role is an in-office position located in Bengaluru. Candidates should be prepared for a dynamic work environment that fosters creativity, innovation, and growth.

If you are passionate about leveraging artificial intelligence to solve complex problems and drive innovation, we would love to hear from you. Please submit your resume along with any relevant project examples or portfolios for consideration. Join us to embark on exciting projects that push the boundaries of what's possible with AI!
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Pune, Maharashtra
On-site
As part of Cowbell's innovative team in the field of cyber insurance, you will play a crucial role in designing and implementing RAG-based systems, integrating LLMs with vector databases, search pipelines, and knowledge retrieval frameworks. Your responsibilities will include developing intelligent AI agents that automate tasks, retrieve relevant information, and enhance user interactions. You will work with APIs, embeddings, and multi-modal retrieval techniques to improve the performance of AI applications, and you will optimize inference pipelines and enhance LLM serving, fine-tuning, and distillation for efficiency. Staying abreast of the latest advancements in generative AI and retrieval techniques will be essential, along with collaborating with stakeholders and cross-functional teams to address business needs and develop impactful ML models and AI-driven automation solutions.

The ideal candidate should hold a Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field (or a Bachelor's degree with significant experience). You should have at least 5 years of experience applying machine learning, deep learning, and NLP to real-world applications, as well as a minimum of 1 year of hands-on experience with LLMs and generative AI. Expertise in RAG architectures, vector search, and retrieval methods is required, along with proficiency in Python and experience with LLM APIs such as OpenAI, Hugging Face, and Anthropic. Experience integrating LLMs into real-world applications, a solid foundation in machine learning, statistical modeling, and AI-driven software development, and knowledge of prompt engineering, few-shot learning, and prompt chaining techniques are also valued. Strong software engineering skills, including experience with cloud platforms such as AWS, and excellent problem-solving abilities, communication skills, and the capacity to work independently are crucial for this role.

Preferred qualifications include proficiency in PyTorch or TensorFlow for deep learning model development, experience with LLM fine-tuning, model compression, and optimization, familiarity with frameworks such as LangChain, LlamaIndex, or Ollama, experience with multi-modal retrieval systems (text, image, structured data), and contributions to open-source AI projects or published research in AI/ML.

At Cowbell, employees are offered an equity plan, a wealth enablement plan for select customer-facing roles, a comprehensive wellness program, meditation app subscriptions, lunch-and-learn sessions, a book club, happy hours, and more, along with professional development and growth opportunities. The company is committed to fostering a collaborative and dynamic work environment where transparency and resilience are valued, and every employee is encouraged to contribute and thrive. Cowbell is an equal opportunity employer, promoting diversity and inclusivity in the workplace and providing competitive compensation, comprehensive benefits, and continuous opportunities for professional development.

To learn more about Cowbell and its mission in the cyber insurance industry, please visit https://cowbell.insure/.
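As a minimal illustration of the generation half of a RAG system of the kind described above, the sketch below stitches retrieved text chunks into a prompt and calls a chat-completion LLM; it assumes the OpenAI Python client (v1+), and the model name and example chunks are placeholders.

```python
# Illustrative RAG generation step: combine retrieved context with the user
# question and ask an LLM to answer grounded in that context.
# Assumes the OpenAI Python client >= 1.0; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_context(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content

# Example usage with hypothetical chunks returned by a vector search step:
print(answer_with_context(
    "What does the policy cover?",
    ["Cyber policies typically cover breach response costs.",
     "Coverage limits vary by plan."],
))
```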
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
Reimagine Travel with AI

At our organization, we are actively shaping the future of travel through AI-driven innovation. We leverage LLMs and AI agents to provide personalized and seamless travel experiences for our customers. By joining our team, you will have the opportunity to enhance Myra, our conversational AI chatbot, and play a pivotal role in redefining how travelers plan their journeys.

The ideal candidate for the Senior Data Scientist position should have 4-6 years of relevant experience. In this role, you will develop and fine-tune LLMs and AI agents tailored for trip planning and dynamic personalization, and build generative AI models to enhance intelligent travel assistants, virtual experiences, and pricing engines. You will optimize real-time decision-making by leveraging advanced NLP techniques and high-quality data, and collaborate with cross-functional teams to deploy scalable AI solutions for millions of travelers.

We are seeking individuals with a strong background in PyTorch, TensorFlow, and transformer models such as GPT-3/4, BERT, and T5. Proficiency in NLP, LLM fine-tuning, and data-centric ML is highly desirable. A proven track record in areas such as search relevance, personalization engines, and scalable AI pipelines will be advantageous, as will the ability to approach problem-solving proactively and thrive in fast-paced environments.

By joining our team, you will become part of a dynamic group that is revolutionizing the travel industry through cutting-edge AI and generative AI technologies. Your work will directly impact millions of users worldwide, offering a unique opportunity to make a difference in travel technology.

Candidates applying for this position are required to have an educational background in BE/BTech from Tier 1 institutes (IITs/IIITs/NITs). An MS or Ph.D. in CS/ECE/AI/ML or an equivalent field, demonstrating a strong academic foundation in relevant areas, is preferred.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an AI Full Stack Developer at CodeSpire Solutions India Pvt. Ltd., you will play a crucial role in designing, developing, and maintaining scalable full-stack applications using the MERN stack (React.js, Node.js, MongoDB, and Express.js). Your responsibilities will also involve integrating and optimizing AI/LLM solutions such as the OpenAI SDK, LLaMA, and RAG for enhanced performance. You will build and manage cloud-native applications on AWS/Azure, implement CI/CD pipelines, and optimize backend performance, API efficiency, and AI token usage. Additionally, you will participate in client-facing sales calls and technical discussions with international clients, primarily from the US and EU, while collaborating with cross-functional teams to develop AI-powered solutions for sectors including finance, ed-tech, and enterprise SaaS.

To excel in this role, you should possess strong technical skills in frontend technologies such as React.js, TypeScript, Tailwind, and Bootstrap, as well as backend technologies such as Node.js, Express.js, RESTful APIs, and JWT authentication. Proficiency in databases such as MongoDB and Mongoose, along with experience in AI/ML tools such as the OpenAI SDK, LLM fine-tuning, RAG pipelines, and prompt engineering, will be highly beneficial. You are expected to leverage your expertise in cloud and DevOps platforms such as AWS (EC2, S3, API Gateway), Azure, GitHub Actions, and Terraform, while using tools such as JIRA, Confluence, GitHub, and Agile workflows for efficient project management. Strong communication skills are essential for engaging with clients and conducting technical demos effectively.

The ideal candidate should have a minimum of 2 years of full-stack development experience with exposure to AI/LLM-based projects, a solid understanding of integrating AI models into real-world applications, and prior experience handling international clients, preferably from the US. A bachelor's degree in Computer Science, IT, or a related field is required, along with a willingness to relocate to Noida and work from the office at least 4 days a week. The ability to work independently and lead client discussions is also valued.

At CodeSpire, we offer a competitive salary package with performance-based bonuses, the opportunity to work on cutting-edge AI projects for global clients, exposure to US/EU client interactions and international SaaS markets, and fast-track career growth in a scaling AI startup. If you are ready to take on this exciting challenge, apply now by sending your resume to hr@codespiresolutions.com.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
The role requires you to design, develop, and maintain complex, high-performance, and scalable MLOps systems that interact with AI models and systems. You will collaborate with cross-functional teams, including data scientists, AI researchers, and AI/ML engineers, to understand requirements, define project scope, and ensure alignment with business goals. Your expertise will be crucial in selecting, evaluating, and implementing software technologies, tools, and frameworks within a cloud-native (Azure + AML) environment. Troubleshooting and resolving intricate software issues to ensure optimal performance and reliability when interfacing with AI/ML systems is an essential part of your responsibilities. You will also contribute to software development project planning and estimation, ensuring efficient resource allocation and timely solution delivery.

Your role involves contributing to the development of continuous integration and continuous deployment (CI/CD) pipelines, high-performance data pipelines, storage systems, and data processing solutions. You will drive the integration of GenAI models, such as LLMs and foundation models, into production workflows, including overseeing orchestration and evaluation pipelines. You will also support edge deployment use cases through model optimization, conversion (e.g., to ONNX, TFLite), and containerization for edge runtimes (a brief export sketch follows this listing). Creating and maintaining technical documentation, including design specifications, API documentation, data models, data flow diagrams, and user manuals, will be vital for effective communication within the team.

**Required Qualifications:**
- Bachelor's degree in software engineering/computer science or a related discipline
- Minimum of 6 years of experience in machine learning operations or software/platform development
- Strong familiarity with Azure ML, Azure DevOps, Blob Storage, and containerized model deployments on Azure
- Proficiency in programming languages commonly used in AI/ML, such as Python, R, or C++
- Experience with the Azure cloud platform, machine learning services, and industry best practices

**Preferred Qualifications:**
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or Keras
- Familiarity with version control systems such as Git and CI/CD tools such as Jenkins, GitLab CI/CD, or Azure DevOps
- Knowledge of containerization technologies such as Docker and Kubernetes, along with infrastructure-as-code tools such as Terraform or Azure Resource Manager (ARM) templates
- Exposure to generative AI workflows, including prompt engineering, LLM fine-tuning, and retrieval-augmented generation (RAG)
- Understanding of GenAI frameworks such as LangChain, LlamaIndex, and Hugging Face Transformers, and OpenAI API integration
- Experience deploying optimized models on edge devices using ONNX Runtime, TensorRT, OpenVINO, or TFLite
- Hands-on experience with monitoring LLM outputs, feedback loops, and LLMOps best practices
- Familiarity with edge inference hardware such as NVIDIA Jetson, Intel Movidius, or ARM Cortex-A/NPU devices

This is a permanent position requiring in-person work.
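As a rough, assumption-laden sketch of the ONNX conversion step mentioned above (not this team's actual workflow), the snippet below exports a tiny PyTorch model to ONNX and runs it with ONNX Runtime; the architecture and file name are placeholders.

```python
# Illustrative only: export a small PyTorch model to ONNX and verify it with ONNX Runtime.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "tiny_model.onnx",
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},  # variable batch size
)

# Run the exported graph with ONNX Runtime (e.g., on an edge device's CPU).
session = ort.InferenceSession("tiny_model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"features": dummy_input.numpy()})
print(outputs[0].shape)  # (1, 4)
```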
Posted 3 weeks ago
3.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Founding Principal Engineer on the new Applied AI team within Autodesk's Data and Process Management (DPM) group, you will play a crucial role in designing, building, and scaling AI-powered experiences that provide essential Product Lifecycle Management (PLM) and Product Data Management (PDM) capabilities to customers. Your work will involve creating production-grade AI applications that are scalable, resilient, and secure, while also shaping the AI strategy for DPM by identifying opportunities, evaluating emerging technologies, and guiding long-term direction.

You will be responsible for fine-tuning, evaluating, and deploying large language models (LLMs) in production environments, balancing performance, cost, and user experience against real-world data and constraints. Additionally, you will collaborate with other engineering teams to define best practices for AI experimentation, evaluation, and optimization, and design frameworks and tools that help other teams build AI-powered experiences.

To be successful in this role, you must hold a Master's degree in Computer Science, AI, Machine Learning, Data Science, or a related field, and have at least 10 years of experience building scalable cloud-native applications, including at least 3 years focused on production AI/ML systems. A deep understanding of LLMs, VLMs, foundation models, and related technologies is essential, along with experience with AWS cloud services and SageMaker Studio. Proficiency in Python or TypeScript, a passion for tackling complex challenges, and the ability to communicate technical concepts clearly to both technical and non-technical audiences are also required.

Preferred qualifications include experience in the CAD or manufacturing domain, building AI applications, designing evaluation pipelines for LLM-based systems, and familiarity with tools and frameworks for LLM fine-tuning and orchestration. A passion for mentoring engineering talent, experience with emerging agentic AI solutions, and contributions to open-source AI projects or publications in the field are considered a plus.

Join Autodesk's innovative team to shape the future of AI applications and contribute to building a better world through technology.
Posted 1 month ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
At CommBank, we are dedicated to enhancing the financial well-being of individuals and businesses by assisting them in making informed financial decisions and achieving their goals and aspirations. Regardless of your role within our organization, your initiative, talent, ideas, and energy all contribute to the positive impact we strive to make through our work. Together, we can accomplish remarkable things.

We are currently seeking a Principal Software Engineer - GenAI to join our team in Bengaluru at Manyata Tech Park. As part of the Gen AI domain, you will play a pivotal role in developing tools and capabilities that leverage generative AI technology to address key needs within the group. Specifically, you will lead the Transformer Banking initiative.

In this role, your primary responsibility will be to apply your advanced technical expertise in engineering principles and practices within the Gen AI platform to drive business outcomes. By providing core technology and domain knowledge, you will support the technical strategy of the team and lead the design of solutions for complex challenges within Gen AI. This position offers the opportunity to be at the forefront of AI innovation within Australia's largest bank and fintech sector, shaping the future of banking with cutting-edge Gen AI solutions.

Key Responsibilities:
- Act as a thought and technology leader, providing technical guidance and overseeing engineering projects across AWS, Azure, AI, and ML development.
- Champion strategic practice development within the Gen AI domain and mentor team members on design and technical aspects.
- Build Gen AI capability among engineers through Knowledge Engineering, Prompt Engineering, and Platform Engineering.
- Stay updated on advancements in generative AI, recommend best practices for infrastructure design, and ensure clear documentation of AI/ML/LLM processes.
- Collaborate with other teams to integrate AI solutions into existing workflows and systems, ensuring scalability, reliability, and high availability.
- Implement security best practices, monitoring systems, and Responsible AI guardrails to safeguard the platform and ensure data privacy and governance.

Essential Skills:
- Minimum 12 years of experience with a strong background in tech delivery and cross-cultural communication.
- Proficiency in IT SDLC processes and written documentation, and key capabilities such as Prompt Engineering, Platform Engineering, and Knowledge Engineering.
- Track record of building and leading engineering teams, and exposure to cloud solutions, Gen AI frameworks, databases, and machine learning.

Education Qualifications:
- Bachelor's/Master's degree in Engineering in Computer Science/Information Technology.

If you are part of the Commonwealth Bank Group and interested in this opportunity, please apply through Sidekick. We are committed to supporting you in advancing your career. For accessibility support, please contact HR Direct at 1800 989 696. Join us in shaping the future of banking through innovative Gen AI solutions!
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Data Scientist (Gen AI Developer) with 5 to 7+ years of experience, you will be based in Hyderabad and employed full-time in a hybrid work mode, spending 4 days in the office and 1 day working from home. Your primary responsibility will be to tackle a conversational AI challenge for our client by applying your expertise in speech-to-text and text generation technologies.

Your role will involve developing and fine-tuning Automatic Speech Recognition (ASR) models, implementing language models for industry-specific terminology, and incorporating speaker diarization to distinguish multiple voices in a conversation. Additionally, you will build conversation summarization models, apply Named Entity Recognition (NER), and use Large Language Models (LLMs) for deep conversation analysis and smart recommendations. You will also design Retrieval-Augmented Generation (RAG) pipelines that leverage external knowledge sources for enhanced performance.

Furthermore, you will create sentiment and intent classification systems, develop predictive models for next-best-action suggestions based on historical call data and engagement, and deploy AI models on cloud platforms such as AWS, Azure, or GCP. Optimizing inference and establishing MLOps pipelines for continual learning and performance enhancement will also be part of your responsibilities.

To excel in this role, you must have proven expertise in ASR, NLP, and conversational AI systems, along with experience in tools such as Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google STT. Proficiency in Python, PyTorch, and TensorFlow, and familiarity with RAG, LangChain, and LLM fine-tuning are essential. Hands-on experience with vector databases and deploying AI solutions using Docker, Kubernetes, FastAPI, or Flask will be beneficial.

Beyond technical skills, you should have strong business acumen to translate AI insights into impact, be a fast-paced problem-solver with innovative thinking, and possess excellent collaboration and communication skills for effective teamwork across functions. Preferred qualifications include experience in healthcare, pharma, or life sciences NLP projects, knowledge of multimodal AI and prompt engineering, and exposure to Reinforcement Learning from Human Feedback (RLHF) techniques for conversational models.

Join us to work on impactful real-world projects in conversational AI and Gen AI, collaborate with innovative teams and industry experts, leverage cutting-edge tools and cloud platforms, and enjoy a hybrid work environment that promotes flexibility and balance. Your ideas will be valued in our forward-thinking, AI-first culture.

To apply for this opportunity, please share your updated resume at resumes@empglobal.ae or apply directly through the platform. Please note that while we appreciate all applications, only shortlisted candidates will be contacted. Thank you for your understanding!
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will develop React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will build scalable pipelines for data extraction, cleaning, and transformation, create and manage ETL workflows for training data and RAG pipelines, and drive full-stack LLM feature development from prototype to production.

To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus.

Candidates with experience in Ollama, LangChain, or LlamaIndex will earn bonus points. Exposure to real-time LLM applications such as chatbots, copilots, or internal assistants, as well as prior work on enterprise or SaaS AI integrations, is highly valued. This role offers a remote-friendly environment with flexible working hours and a high-ownership opportunity.

Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Software Engineer specializing in AI/ML/LLM/Data Science at Entra Solutions, a FinTech company in the mortgage industry, you will play a crucial role in designing, developing, and deploying AI-driven solutions using cutting-edge technologies such as machine learning, NLP, and Large Language Models (LLMs). Your primary focus will be on building and optimizing retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies using Python (a brief retrieval sketch follows this listing). You will develop scalable AI pipelines that ensure high performance and seamless integration with both cloud and on-premises environments. The role also involves implementing MLOps best practices, optimizing AI model performance, and deploying intelligent applications.

In this role, you will:
- Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases.
- Build and optimize retrieval-augmented generation (RAG) systems using vector databases such as ChromaDB, Pinecone, and FAISS.
- Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance.
- Create end-to-end AI solutions with APIs using frameworks like FastAPI, Flask, or similar technologies.
- Establish and maintain scalable data pipelines for training and inferencing AI models.
- Deploy and manage models using MLOps best practices on cloud platforms such as AWS or Azure.
- Optimize AI model performance for low-latency inference and scalability.
- Collaborate with cross-functional teams, including Product, Engineering, and Data Science, to integrate AI capabilities into applications.

Qualifications:
Must Have:
- Proficiency in Python
- Strong hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs

Good to Have:
- Experience with LLM fine-tuning, embeddings, and transformers
- Knowledge of NLP and vector search technologies (ChromaDB, Pinecone, FAISS, Milvus)
- Experience building scalable AI models and data pipelines with Spark, Kafka, or Dask
- Familiarity with MLOps tools such as Docker, Kubernetes, and CI/CD for AI models
- Hands-on experience with cloud-based AI deployment using platforms such as AWS Lambda, SageMaker, GCP Vertex AI, or Azure ML
- Knowledge of prompt engineering, GPT models, or knowledge graphs

What's in it for you:
- Competitive salary and full benefits package
- PTO / medical insurance
- Exposure to cutting-edge AI/LLM projects in an innovative environment
- Career growth opportunities in AI/ML leadership
- A collaborative, AI-driven work culture

Entra Solutions is an equal employment opportunity employer, and we welcome applicants from diverse backgrounds. Join us and be a part of our dynamic team driving innovation in the FinTech industry.
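As an illustration of the vector-database retrieval mentioned above, here is a minimal sketch that stores a few snippets in a local ChromaDB collection and queries it; the collection name and documents are placeholder assumptions.

```python
# Illustrative only: index a few text snippets in ChromaDB and run a similarity query.
import chromadb

client = chromadb.Client()  # in-memory client; a persistent client could also be used
collection = client.create_collection(name="mortgage_docs")

collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "Borrowers must provide proof of income for loan approval.",
        "Escrow accounts hold funds for property taxes and insurance.",
        "Refinancing may lower the interest rate on an existing mortgage.",
    ],
)

results = collection.query(
    query_texts=["What is an escrow account used for?"],
    n_results=2,
)
for doc in results["documents"][0]:
    print(doc)  # top-matching snippets, which would feed a RAG prompt downstream
```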
Posted 1 month ago
2.0 - 10.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You should have 3 to 10 years of experience in AI development and be located in Coimbatore. Immediate joiners are preferred, and a minimum of 2 years of experience in core Gen AI is required.

As an AI Developer, your responsibilities will include designing, developing, and fine-tuning Large Language Models (LLMs) for various in-house applications, implementing and optimizing Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality, and developing and deploying agentic AI systems capable of autonomous decision-making and task execution. Building and managing data pipelines for processing, transforming, and feeding structured and unstructured data into AI models will be part of your role, and it is essential to ensure the scalability, performance, and security of AI-driven solutions in production environments. Collaboration with cross-functional teams, including data engineers, software developers, and product managers, is expected. You will conduct experiments and evaluations to improve AI system accuracy and efficiency while staying updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

You should have strong experience in LLM fine-tuning using frameworks such as Hugging Face, DeepSpeed, or LoRA/PEFT, and hands-on experience with RAG architectures, including vector databases such as Pinecone, ChromaDB, Weaviate, OpenSearch, and FAISS. Experience building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks is preferred. Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow is necessary, as is experience with Python web frameworks such as FastAPI, Django, or Flask. You should also have experience designing and managing data pipelines using tools such as Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes) is essential. Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration into applications is a plus, as is a strong understanding of vector search, embedding models, and hybrid retrieval techniques. Experience optimizing inference and serving AI models in real-time production systems is beneficial. Experience with multi-modal AI (text, image, audio), familiarity with privacy-preserving AI techniques and responsible AI frameworks, and an understanding of MLOps best practices, including model versioning, monitoring, and deployment automation, are desirable.

Skills relevant to this role include PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, Gen AI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, and Kafka.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You are a highly skilled Lead Backend Engineer with 7-10 years of experience, with a strong command of Java and Python and a deep understanding of GenAI and Large Language Models (LLMs). At Team Geek Solutions, we are innovators driven by cutting-edge technology, aiming to solve real-world problems using scalable backend systems and next-gen AI. Our collaborative and forward-thinking culture values every engineer's role in building impactful products.

As a Lead Backend Engineer, you will lead a team of developers and data scientists to design scalable backend architectures and AI-driven solutions that leverage the latest advancements in AI. Your key responsibilities will include leading and mentoring a team of backend and AI engineers, architecting and developing robust backend solutions using Java and Python, solving complex problems using structured and unstructured data, and implementing state-of-the-art LLMs such as OpenAI and HuggingFace models. You will also use techniques such as Retrieval-Augmented Generation (RAG) to enhance the performance and capabilities of LLM-based systems, and own the development of end-to-end ML pipelines encompassing training, fine-tuning, evaluation, deployment, and monitoring (MLOps). Collaboration with business and product teams to identify use cases and deliver AI-powered solutions is another crucial aspect of the role.

The required skills for this position include proficiency in Java and Python, a solid understanding of Data Structures & Algorithms, deep experience with Transformer-based models, LLM fine-tuning, and deployment, hands-on experience with PyTorch, LangChain, and Python web frameworks such as Flask and FastAPI, strong database skills with SQL, and experience deploying and managing ML models in production environments. Leadership experience managing small to mid-sized engineering teams is also essential.

Preferred or good-to-have skills include experience with LLMOps tools and techniques, exposure to cloud platforms such as AWS, GCP, or Azure for model deployment, strong written and verbal communication skills for both technical and non-technical audiences, and a passion for innovation and building AI-first products. Keeping yourself updated with the latest advancements in AI and integrating best practices into development workflows will be a key aspect of your role at Team Geek Solutions.
Posted 1 month ago
2.0 - 7.0 years
0 Lacs
Karnataka
On-site
Job Description: We are seeking a highly skilled AI Audio/Speech Developer who can work independently and build AI/DL models from the ground up. The ideal candidate will be able to implement relevant research papers autonomously and will have a solid foundation in NLP, audio, and ML/DL.

The successful candidate should have 2-7 years of overall development experience, with a minimum of 2 years dedicated to NLP, audio, and ML/DL. Essential responsibilities will include training, fine-tuning, and optimizing various transformer model variants. Hands-on familiarity with LLMs, including fine-tuning and optimization, will be crucial for this role.

Required Skills:
- Proficiency and hands-on experience in audio and NLP AI
- Strong programming knowledge of Python and C++
- Hands-on experience with ASR frameworks such as Kaldi, DeepSpeech, or Wav2Vec
- Understanding of acoustic models, language models, and their integration
- Experience using pre-trained models such as Wav2Vec 2.0, HuBERT, or Whisper
- Competence in speech corpora and dataset preparation for ASR training and evaluation
- Knowledge of model optimization techniques for real-time ASR applications
- Practical experience with LLM fine-tuning, optimization, and performance enhancement

Preferred Skills:
- Certification in AI audio-related areas
- Previous experience with ASR

In this role, you will have the opportunity to apply your expertise in AI audio/speech development to cutting-edge projects and make a significant impact in the field. If you are passionate about innovation and possess the necessary skills, we encourage you to apply for this exciting opportunity.
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
Haryana
On-site
We are seeking a Gen AI Engineer - LLM to focus on designing and optimizing ChatGPT agents and other LLM models tailored for marketing, sales, and content creation. Your primary responsibilities will include developing AI-driven bots that elevate content generation and customer engagement and streamline business operations.

Your tasks will involve:
- Designing and fine-tuning ChatGPT agents and various LLM models specifically for marketing and sales applications.
- Setting context and refining models to align with the unique content and interaction requirements of the business.
- Collaborating with cross-functional teams to seamlessly integrate LLM-powered bots into marketing strategies and Python-based applications.

The ideal candidate should possess:
- Proficiency in Python development with 1-4 years of experience, including 1-2 years dedicated to LLM fine-tuning.
- Hands-on expertise in using ChatGPT and other LLMs for marketing and content-focused use cases.
- The ability to optimize models to cater to specific business needs such as marketing and sales.

If you are enthusiastic about pioneering innovations and deep learning systems, and eager to engage with cutting-edge technologies in this domain, we invite you to submit your application!

Flixstock is dedicated to fostering a workplace that celebrates diversity, promotes inclusivity, and empowers every team member to excel. We recognize that a range of perspectives and backgrounds fosters innovation and drives success. As an equal-opportunity employer, we welcome talented individuals from all walks of life. Our goal is to nurture an environment where everyone feels valued, supported, and motivated to advance. Come aboard to contribute your unique talents and become part of a cohesive team that prioritizes your personal and professional growth. Join us today!

**Employment Type:** Full-time
**Job Location:** Gurugram, Haryana, India
**Date posted:** November 20, 2024
Posted 1 month ago
3.0 - 12.0 years
0 Lacs
Kochi, Kerala
On-site
As a talented Full Stack Developer with expertise in generative AI and natural language processing, you will be a key member of our team, contributing to the design, development, and scaling of cutting-edge LLM and generative AI applications that enhance user experiences.

Your responsibilities will include developing backend logic and intelligent workflows using pre-trained AI models such as large language models (LLMs) and natural language understanding (NLU) engines. You will integrate and operationalise NLP and generative AI models in production environments, including speech processing pipelines such as automatic speech recognition (ASR) and text-to-speech (TTS). Applying techniques such as LLM fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG) will be crucial for enhancing AI system performance. Moreover, you will design and deploy scalable full-stack solutions supporting AI-driven applications, working with various data sources to enable contextual AI retrieval and responses. Using cloud platforms such as AWS/Azure effectively for hosting, managing, and scaling AI-enabled services will also be part of your role.

If you are passionate about combining full-stack development with AI and LLM technologies to create innovative text and voice applications, we look forward to hearing from you.

Qualifications:
- 3+ years of hands-on experience in full-stack application development, with a strong understanding of frontend and backend technologies.
- 12 years of proven experience in designing and implementing AI-driven conversational systems.
- Deep knowledge of integrating Speech-to-Text (STT) and Natural Language Processing (NLP) components into production-ready systems.

Nice-to-Have Skills:
- Exposure to MLOps practices, including model deployment, monitoring, lifecycle management, and performance optimization in production environments.

What You'll Get:
- The opportunity to work on one of the most advanced AI systems.
- A high-performing, fast-paced startup culture with a deep tech focus.
Posted 2 months ago
7.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Lead Backend Engineer at Team Geek Solutions, you will be a key member of our innovative team dedicated to leveraging cutting-edge technology to solve real-world problems. With a focus on scalable backend systems and next-gen AI, you will lead a group of developers and data scientists in creating impactful products that push the boundaries of AI capabilities. If you are passionate about leading with purpose and driving innovation, we invite you to join us on this exciting journey.

Your primary responsibilities will include leading and mentoring a team of backend and AI engineers, designing and implementing robust backend solutions using Java and Python, and applying state-of-the-art techniques such as Large Language Models (LLMs) and Transformer models. By leveraging tools like PyTorch, LangChain, and Flask, you will develop end-to-end ML pipelines, from training to deployment, while collaborating with cross-functional teams to deliver AI-powered solutions that address specific use cases.

To excel in this role, you must possess strong proficiency in Java and Python, along with a solid understanding of Data Structures & Algorithms. Experience with Transformer-based models, LLM fine-tuning, and deployment is essential, as is familiarity with SQL and database management. Leadership skills and the ability to manage engineering teams effectively are also key requirements for this position.

Preferred skills include knowledge of LLMOps tools and cloud platforms (AWS/GCP/Azure) for model deployment, as well as excellent written and verbal communication abilities. A passion for innovation and a commitment to building AI-first products will set you apart as a valuable contributor to our team. Stay informed about the latest advancements in AI and integrate best practices into your work to drive continuous improvement and growth in our organization.
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, building, and deploying scalable NLP/ML models for real-world applications. Your role will involve fine-tuning and optimizing Large Language Models (LLMs) using techniques such as LoRA, PEFT, or QLoRA. You will work with transformer-based architectures such as BERT, GPT, LLaMA, and T5, and develop GenAI applications using frameworks such as LangChain, Hugging Face, the OpenAI API, or RAG (Retrieval-Augmented Generation). Writing clean, efficient, and testable Python code will be a crucial part of your tasks. Collaboration with data scientists, software engineers, and stakeholders to define AI-driven solutions will also be an essential aspect of your work, as will evaluating model performance and iterating rapidly based on user feedback and metrics.

The ideal candidate should have a minimum of 3 years of experience in Python programming with a strong understanding of ML pipelines. A solid background in NLP, including text preprocessing, embeddings, NER, and sentiment analysis, is required (see the short pipeline sketch after this listing). Proficiency in ML libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and spaCy is essential. Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases such as FAISS and ChromaDB, will be beneficial. Strong problem-solving and communication skills are highly valued, along with the ability to learn new tools and work both independently and collaboratively in a fast-paced environment. Attention to detail and accuracy is crucial for this role.

Preferred skills include theoretical knowledge or experience in data engineering, data science, AI, ML, RPA, or related domains. Certification in business analysis or project management from a recognized institution is a plus, as is experience working with agile methodologies such as Scrum or Kanban. Additional experience in deep learning, transformer architectures and models, prompt engineering, training LLMs, and GenAI pipeline preparation is advantageous. Practical experience integrating LLM models such as ChatGPT, Gemini, or Claude with context-aware capabilities using RAG or fine-tuned models is a plus, as is knowledge of model evaluation and alignment and of metrics for measuring model accuracy. Experience curating data from sources for RAG preprocessing and developing LLM pipelines is an added advantage. Proficiency in scalable deployment and logging tooling, including Flask, Django, FastAPI, APIs, Docker containerization, and Kubeflow, is preferred, as is familiarity with LangChain, LlamaIndex, vLLM, and HuggingFace Transformers, experience with LoRA, and a basic understanding of cost-to-performance tradeoffs.
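Purely as an illustration of the sentiment analysis and NER tasks named above, here is a minimal sketch using Hugging Face pipelines; the default checkpoints the library downloads and the sample text are assumptions.

```python
# Illustrative only: quick sentiment analysis and named entity recognition
# with Hugging Face pipelines (default checkpoints are downloaded on first use).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")  # merge sub-word tokens into entities

text = "Acme Corp launched its new assistant in Hyderabad last week, and users love it."

print(sentiment(text))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
# e.g. ORG Acme Corp, LOC Hyderabad
```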
Posted 2 months ago
7.0 - 11.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You will be working as a Lead Backend Engineer at Team Geek Solutions, a company based in Noida/Indore, with a mission to solve real-world problems using scalable backend systems and next-gen AI technologies. As part of our collaborative and forward-thinking culture, you will play a crucial role in building impactful products driven by cutting-edge technology.

Your primary responsibility will be to lead a team of backend and AI engineers, guiding them in developing robust backend solutions using Java and Python. You will leverage your expertise in GenAI and Large Language Models (LLMs) to architect scalable backend architectures and AI-driven solutions, pushing the boundaries of AI capabilities.

Key Responsibilities:
- Lead and mentor a team of backend and AI engineers to deliver innovative solutions.
- Architect and develop robust backend solutions using Java and Python.
- Utilize state-of-the-art LLMs such as OpenAI and HuggingFace models to build solutions using LangChain, Transformer models, and PyTorch.
- Implement advanced techniques such as Retrieval-Augmented Generation (RAG) to enhance LLM-based systems.
- Drive the development of end-to-end ML pipelines, including training, fine-tuning, evaluation, deployment, and monitoring (MLOps).
- Collaborate with business and product teams to identify use cases and deliver AI-powered solutions.
- Stay abreast of the latest advancements in AI and integrate best practices into development workflows.

Required Skills:
- Proficiency in Java and Python, with hands-on experience in both languages.
- Strong understanding of Data Structures & Algorithms.
- Deep expertise in Transformer-based models, LLM fine-tuning, and deployment.
- Hands-on experience with PyTorch, LangChain, and Python web frameworks (e.g., Flask, FastAPI).
- Solid database skills, particularly with SQL.
- Experience deploying and managing ML models in production environments.
- Leadership experience managing small to mid-sized engineering teams.

Preferred / Good-to-Have Skills:
- Familiarity with LLMOps tools and techniques.
- Exposure to cloud platforms such as AWS/GCP/Azure for model deployment.
- Excellent written and verbal communication skills suitable for technical and non-technical audiences.
- A strong passion for innovation and building AI-first products.

If you are a tech enthusiast with a knack for problem-solving and a drive to innovate, we welcome you to join our team at Team Geek Solutions and contribute to shaping the future of AI-driven solutions.
Posted 2 months ago
7.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Lead Backend Engineer at Team Geek Solutions, you will be responsible for leading a team of developers and data scientists to build scalable backend architectures and AI-driven solutions. You will need a strong command of Java and a deep understanding of GenAI and Large Language Models (LLMs) to push the boundaries of what's possible with today's AI capabilities.

You will be tasked with mentoring the team, architecting and developing robust backend solutions using Java and Python, and solving complex problems using structured and unstructured data. Your role will also involve applying state-of-the-art LLMs such as OpenAI and HuggingFace models, utilizing techniques like Retrieval-Augmented Generation (RAG) to enhance performance, and owning the development of end-to-end ML pipelines, including training, fine-tuning, evaluation, deployment, and monitoring (MLOps).

Collaborating with business and product teams to identify use cases and deliver AI-powered solutions, staying updated with the latest advancements in AI, and integrating best practices into development workflows are key aspects of this role. Additionally, you should have proficiency in Java and Python, a solid understanding of Data Structures & Algorithms, and hands-on experience with Transformer-based models, LLM fine-tuning, PyTorch, LangChain, Python web frameworks (e.g., Flask, FastAPI), and SQL. Experience deploying and managing ML models in production environments, leadership skills in managing small to mid-sized engineering teams, and a passion for innovation and building AI-first products are also essential requirements.

Preferred skills include familiarity with LLMOps tools and techniques, exposure to cloud platforms such as AWS/GCP/Azure for model deployment, and strong written and verbal communication skills for technical and non-technical audiences.
Posted 2 months ago
1.0 - 5.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Position: Junior Software Developer
Location: Vashi, Navi Mumbai

Responsibilities
- Assist in designing, developing, and testing machine learning models.
- Work with data engineering teams to collect, clean, and format data for analysis.
- Implement current machine learning algorithms and experiment with new techniques.
- Document and present model development processes and results to key stakeholders.
- Contribute to improving existing AI functionalities within our products.
- Stay updated with AI/ML advancements and suggest potential integrations.

Who We're Looking For
- Individuals with a strong background in Python and SQL and a passion for machine learning, NLP, and generative AI.
- Proficiency in interacting with databases (MySQL) using Python.
- Experience with sentiment analysis, topic modeling, Hugging Face models, RAG, LLM fine-tuning, and Stability AI models.
- Familiarity with creating and managing APIs, with experience in tools like Swagger for documentation.
- Fully committed individuals not currently pursuing any academic programs.
Posted 2 months ago