12.0 - 16.0 years
0 Lacs
karnataka
On-site
At CommBank, we are dedicated to enhancing the financial well-being of individuals and businesses by helping them make informed financial decisions and achieve their goals and aspirations. Whatever your role within our organization, your initiative, talent, ideas, and energy all contribute to the positive impact we strive to make through our work. Together, we can accomplish remarkable things.

We are currently seeking a Principal Software Engineer - GenAI to join our team in Bengaluru at Manyata Tech Park. As part of the Gen AI Domain, you will play a pivotal role in developing tools and capabilities that leverage Generative AI technology to address key needs within the group, including leading the Transformer Banking initiative. Your primary responsibility will be to apply your advanced technical expertise in engineering principles and practices within the Gen AI platform to drive business outcomes. By providing core technology and domain knowledge, you will support the team's technical strategy and lead the design of solutions for complex challenges within Gen AI. This position offers the opportunity to be at the forefront of AI innovation within Australia's largest bank and fintech, shaping the future of banking with cutting-edge Gen AI solutions.

Key Responsibilities:
- Act as a thought and technology leader, providing technical guidance and overseeing engineering projects across AWS, Azure, AI, and ML development.
- Champion strategic practice development within the Gen AI domain and mentor team members on design and technical aspects.
- Build Gen AI capability among engineers through Knowledge Engineering, Prompt Engineering, and Platform Engineering.
- Stay updated on advancements in generative AI, recommend best practices for infrastructure design, and ensure clear documentation of AI/ML/LLM processes.
- Collaborate with other teams to integrate AI solutions into existing workflows and systems, ensuring scalability, reliability, and high availability.
- Implement security best practices, monitoring systems, and Responsible AI guardrails to safeguard the platform and ensure data privacy and governance.

Essential Skills:
- Minimum 12 years of experience with a strong background in tech delivery and cross-cultural communication.
- Proficiency in IT SDLC processes, written documentation, and key capabilities such as Prompt Engineering, Platform Engineering, and Knowledge Engineering.
- Track record of building and leading engineering teams, and exposure to cloud solutions, Gen AI frameworks, databases, and machine learning.

Education Qualifications:
- Bachelor's/Master's degree in Engineering (Computer Science/Information Technology).

If you are part of the Commonwealth Bank Group and interested in this opportunity, please apply through Sidekick. We are committed to supporting you in advancing your career. For accessibility support, please contact HR Direct at 1800 989 696. Join us in shaping the future of banking through innovative Gen AI solutions!
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Scientist (Gen AI Developer) with 5 to 7+ years of experience, you will be based in Hyderabad and employed full-time in a hybrid work mode, spending 4 days in the office and 1 day working from home. Your primary responsibility will be to tackle a Conversational AI challenge for our client by applying your expertise in Speech-to-Text and Text Generation technologies.

Your role will involve developing and fine-tuning Automatic Speech Recognition (ASR) models, implementing language models for industry-specific terminology, and incorporating speaker diarization to distinguish multiple voices in a conversation. Additionally, you will build conversation summarization models, apply Named Entity Recognition (NER), and use Large Language Models (LLMs) for deep conversation analysis and smart recommendations. You will also design Retrieval-Augmented Generation (RAG) pipelines that leverage external knowledge sources for enhanced performance. Furthermore, you will create sentiment and intent classification systems, develop predictive models for next-best-action suggestions based on historical call data and engagement, and deploy AI models on cloud platforms like AWS, Azure, or GCP. Optimizing inference and establishing MLOps pipelines for continual learning and performance enhancement will also be part of your responsibilities.

To excel in this role, you must possess proven expertise in ASR, NLP, and Conversational AI systems, along with experience in tools such as Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google STT. Proficiency in Python and frameworks like PyTorch and TensorFlow, together with familiarity with RAG, LangChain, and LLM fine-tuning, is essential. Hands-on experience with vector databases and deploying AI solutions using Docker, Kubernetes, FastAPI, or Flask will be beneficial. Beyond technical skills, you should have strong business acumen to translate AI insights into impact, be a fast-paced problem-solver with innovative thinking, and possess excellent collaboration and communication skills for effective teamwork across functions. Preferred qualifications include experience in healthcare, pharma, or life sciences NLP projects, knowledge of multimodal AI and prompt engineering, and exposure to Reinforcement Learning from Human Feedback (RLHF) techniques for conversational models.

Join us to work on impactful real-world projects in Conversational AI and Gen AI, collaborate with innovative teams and industry experts, leverage cutting-edge tools and cloud platforms, and enjoy a hybrid work environment that promotes flexibility and balance. Your ideas will be valued in our forward-thinking, AI-first culture. To apply for this exciting opportunity, please share your updated resume at resumes@empglobal.ae or apply directly through the platform. Kindly note that while we appreciate all applications, only shortlisted candidates will be contacted. Thank you for your understanding!
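The speech-to-text and summarization stack described above can be prototyped end to end with off-the-shelf components. Below is a minimal sketch, assuming the Hugging Face `transformers` library, ffmpeg for audio decoding, and a local audio file; the model checkpoints and file name are illustrative placeholders, not the client's actual stack:

```python
# Hypothetical sketch: transcribe a call recording, then summarize the transcript.
from transformers import pipeline

# Speech-to-text with a pretrained Whisper checkpoint (placeholder model size)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
transcript = asr("sample_call.wav")["text"]  # very long audio may need chunking

# Abstractive summarization of the transcript (placeholder summarization model;
# long transcripts should be split before summarizing)
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(transcript, max_length=120, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
```

In a fuller pipeline, the same transcript would also feed the NER, sentiment, and RAG components mentioned in the posting.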
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As an LLM Engineer at HuggingFace, you will play a crucial role in bridging the gap between advanced language models and real-world applications. Your primary focus will be on fine-tuning, evaluating, and deploying LLMs using frameworks such as HuggingFace and Ollama. You will be responsible for developing React-based applications with seamless LLM integrations through REST, WebSockets, and APIs. Additionally, you will work on building scalable pipelines for data extraction, cleaning, and transformation, as well as creating and managing ETL workflows for training data and RAG pipelines. Your role will also involve driving full-stack LLM feature development from prototype to production.

To excel in this position, you should have at least 2 years of professional experience in ML engineering, AI tooling, or full-stack development. Strong hands-on experience with HuggingFace Transformers and LLM fine-tuning is essential. Proficiency in React, TypeScript/JavaScript, and back-end integration is required, along with comfort working with data engineering tools such as Python, SQL, and Pandas. Familiarity with vector databases, embeddings, and LLM orchestration frameworks is a plus.

Candidates with experience in Ollama, LangChain, or LlamaIndex will be given bonus points. Exposure to real-time LLM applications like chatbots, copilots, or internal assistants, as well as prior work with enterprise or SaaS AI integrations, is highly valued. This role offers a remote-friendly environment with flexible working hours and a high-ownership opportunity. Join our small, fast-moving team at HuggingFace and be part of building the next generation of intelligent systems. If you are passionate about working on impactful AI products and have the drive to grow in this field, we would love to hear from you.
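To give a concrete flavour of the back-end side of such an integration, here is a minimal sketch assuming the `transformers`, `fastapi`, and `pydantic` packages; the model, route, and field names are illustrative and not part of the original posting:

```python
# Hypothetical sketch: expose a Hugging Face text-generation model behind a REST
# endpoint that a React front end could call.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="distilgpt2")  # placeholder model

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(prompt: Prompt):
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Assuming this file is saved as app.py, run `uvicorn app:app --reload`,
# then call POST /generate from the React client.
```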
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Software Engineer specializing in AI/ML/LLM/Data Science at Entra Solutions, a FinTech company within the mortgage industry, you will play a crucial role in designing, developing, and deploying AI-driven solutions using cutting-edge technologies such as Machine Learning, NLP, and Large Language Models (LLMs). Your primary focus will be on building and optimizing retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies using Python. You will be responsible for developing scalable AI pipelines that ensure high performance and seamless integration with both cloud and on-premises environments. Additionally, this role will involve implementing MLOps best practices, optimizing AI model performance, and deploying intelligent applications.

In this role, you will:
- Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases.
- Build and optimize retrieval-augmented generation (RAG) systems using vector databases such as ChromaDB, Pinecone, and FAISS.
- Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance.
- Create end-to-end AI solutions with APIs using frameworks like FastAPI, Flask, or similar technologies.
- Establish and maintain scalable data pipelines for training and inferencing AI models.
- Deploy and manage models using MLOps best practices on cloud platforms like AWS or Azure.
- Optimize AI model performance for low-latency inference and scalability.
- Collaborate with cross-functional teams including Product, Engineering, and Data Science to integrate AI capabilities into applications.

Qualifications:
Must Have:
- Proficiency in Python.
- Strong hands-on experience in AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs.
Good to Have:
- Experience with LLM fine-tuning, embeddings, and transformers.
- Knowledge of NLP and vector search technologies (ChromaDB, Pinecone, FAISS, Milvus).
- Experience in building scalable AI models and data pipelines with Spark, Kafka, or Dask.
- Familiarity with MLOps tools like Docker, Kubernetes, and CI/CD for AI models.
- Hands-on experience in cloud-based AI deployment using platforms like AWS Lambda, SageMaker, GCP Vertex AI, or Azure ML.
- Knowledge of prompt engineering, GPT models, or knowledge graphs.

What's in it for you:
- Competitive salary & full benefits package
- PTOs / medical insurance
- Exposure to cutting-edge AI/LLM projects in an innovative environment
- Career growth opportunities in AI/ML leadership
- Collaborative & AI-driven work culture

Entra Solutions is an equal employment opportunity employer, and we welcome applicants from diverse backgrounds. Join us and be a part of our dynamic team driving innovation in the FinTech industry.
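As an illustration of the RAG retrieval step mentioned above, here is a minimal sketch assuming the `chromadb` package and its default embedding function; the collection name and documents are invented for the example:

```python
# Hypothetical sketch: index a few mortgage-domain snippets and retrieve the
# most relevant one for a user question (the "R" in RAG).
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient for disk storage
collection = client.create_collection("mortgage_docs")

collection.add(
    documents=[
        "Escrow accounts hold funds for property taxes and homeowners insurance.",
        "A rate lock fixes the interest rate for a set period before closing.",
    ],
    ids=["doc1", "doc2"],
)

results = collection.query(query_texts=["What does an escrow account do?"], n_results=1)
print(results["documents"][0])  # retrieved chunks, to be passed to the LLM as context
```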
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You should have 3 to 10 years of experience in AI development and be located in Coimbatore. Immediate joiners are preferred. A minimum of 2 years of experience in core Gen AI is required.

As an AI Developer, your responsibilities will include designing, developing, and fine-tuning Large Language Models (LLMs) for various in-house applications. You will implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality. Additionally, you will develop and deploy Agentic AI systems capable of autonomous decision-making and task execution. Building and managing data pipelines for processing, transforming, and feeding structured/unstructured data into AI models will be part of your role. It is essential to ensure scalability, performance, and security of AI-driven solutions in production environments. Collaboration with cross-functional teams, including data engineers, software developers, and product managers, is expected. You will conduct experiments and evaluations to improve AI system accuracy and efficiency while staying updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

You should have strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT. Hands-on experience with RAG architectures, including vector databases such as Pinecone, ChromaDB, Weaviate, OpenSearch, and FAISS, is required. Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks is preferred. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow is necessary, as is experience with Python web frameworks such as FastAPI, Django, or Flask. You should also have experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes) is essential. Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications is a plus. A strong understanding of vector search, embedding models, and hybrid retrieval techniques is required. Experience with optimizing inference and serving AI models in real-time production systems is beneficial. Experience with multi-modal AI (text, image, audio), familiarity with privacy-preserving AI techniques and responsible AI frameworks, and an understanding of MLOps best practices, including model versioning, monitoring, and deployment automation, are desirable.

Skills required for this role include: PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, Gen AI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, and Kafka.
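Since the posting calls out LoRA/PEFT-based fine-tuning explicitly, here is a minimal sketch of attaching LoRA adapters to a causal language model, assuming the `transformers` and `peft` packages; the base checkpoint and target modules are illustrative and differ per architecture:

```python
# Hypothetical sketch: wrap a small causal LM with LoRA adapters before fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder base checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's attention projection; differs for other models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# The wrapped model can then be trained with transformers.Trainer as usual.
```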
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
indore, madhya pradesh
On-site
You are a highly skilled Lead Backend Engineer with 7-10 years of experience, possessing a strong command of Java and Python along with a deep understanding of GenAI and Large Language Models (LLMs). At Team Geek Solutions, we are innovators driven by cutting-edge technology, aiming to solve real-world problems using scalable backend systems and next-gen AI. Our collaborative and forward-thinking culture values every engineer's role in building impactful products. As a Lead Backend Engineer, you will be responsible for leading a team of developers and data scientists to design scalable backend architectures and AI-driven solutions, leveraging the latest advancements in AI capabilities.

Your key responsibilities will include leading and mentoring a team of backend and AI engineers, architecting and developing robust backend solutions using Java and Python, solving complex problems using structured and unstructured data, and implementing state-of-the-art LLMs such as OpenAI and HuggingFace models. You will also be involved in utilizing techniques like Retrieval-Augmented Generation (RAG) to enhance the performance and capabilities of LLM-based systems, as well as owning the development of end-to-end ML pipelines encompassing training, fine-tuning, evaluation, deployment, and monitoring (MLOps). Collaboration with business and product teams to identify use cases and deliver AI-powered solutions is another crucial aspect of your role.

The required skills for this position include proficiency in Java and Python, a solid understanding of Data Structures & Algorithms, deep experience in Transformer-based models, LLM fine-tuning, and deployment, hands-on experience with PyTorch, LangChain, and Python web frameworks such as Flask and FastAPI, strong database skills with SQL, and experience in deploying and managing ML models in production environments. Leadership experience in managing small to mid-sized engineering teams is also essential.

Preferred or good-to-have skills for this role include experience with LLMOps tools and techniques, exposure to cloud platforms like AWS, GCP, or Azure for model deployment, strong written and verbal communication skills for both technical and non-technical audiences, and a passion for innovation and building AI-first products. Keeping yourself updated with the latest advancements in AI and integrating best practices into development workflows will be a key aspect of your role at Team Geek Solutions.
Posted 1 week ago
2.0 - 7.0 years
0 Lacs
karnataka
On-site
Job Description: We are seeking a highly skilled AI Audio/Speech Developer who can work independently and build AI/DL models from the ground up. The ideal candidate will be able to implement relevant research papers in code autonomously and possess a solid foundation in NLP & Audio and ML/DL. The successful candidate should have 2-7 years of overall development experience, with a minimum of 2 years dedicated to NLP & Audio and ML/DL. Essential responsibilities will include training, fine-tuning, and optimizing various transformer model variations. Hands-on familiarity with LLMs, including fine-tuning and optimization, will be crucial for this role.

Required Skills:
- Proficiency in Audio & NLP AI
- Strong programming knowledge in Python and C++
- Hands-on experience with ASR frameworks like Kaldi, DeepSpeech, or Wav2Vec
- Understanding of acoustic models, language models, and their integration
- Experience in utilizing pre-trained models such as Wav2Vec 2.0, HuBERT, or Whisper
- Competence in speech corpora and dataset preparation for ASR training and evaluation
- Knowledge of model optimization techniques for real-time ASR applications
- Practical experience in LLM fine-tuning, optimization, and performance enhancement

Preferred Skills:
- Certification in AI Audio-related areas
- Previous experience in ASR

In this role, you will have the opportunity to leverage your expertise in AI Audio/Speech development to contribute to cutting-edge projects and make a significant impact in the field. If you are passionate about innovation and possess the necessary skills, we encourage you to apply for this exciting opportunity.
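For context on the pre-trained ASR models named above, here is a minimal sketch of greedy CTC decoding with a Wav2Vec 2.0 checkpoint, assuming the `transformers`, `torch`, and `librosa` packages; the audio file name is a placeholder and the input is resampled to 16 kHz mono:

```python
# Hypothetical sketch: transcribe a short utterance with a pretrained Wav2Vec 2.0 model.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech, _ = librosa.load("utterance.wav", sr=16_000)  # resample to 16 kHz mono
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)     # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])
```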
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
haryana
On-site
We are seeking a Gen AI Engineer - LLM to focus on designing and optimizing ChatGPT agents and other LLM models tailored for marketing, sales, and content creation purposes. Your primary responsibilities will include developing AI-driven bots to elevate content generation and customer engagement and to streamline business operations.

Your tasks will involve:
- Designing and fine-tuning ChatGPT agents and various LLM models specifically for marketing and sales applications.
- Setting context and refining models to align with the unique content and interaction requirements of the business.
- Collaborating with cross-functional teams to seamlessly integrate LLM-powered bots into marketing strategies and Python-based applications.

The ideal candidate should possess:
- 1-4 years of Python development experience, with 1-2 years dedicated to LLM fine-tuning.
- Hands-on expertise in utilizing ChatGPT and other LLMs for marketing and content-focused use cases.
- The capability to optimize models to cater to specific business needs such as marketing and sales.

If you are enthusiastic about pioneering innovations and deep learning systems, and eager to engage with cutting-edge technologies in this domain, we invite you to submit your application!

Flixstock is dedicated to fostering a workplace that celebrates diversity, promotes inclusivity, and empowers every team member to excel. We recognize that a range of perspectives and backgrounds fosters innovation and drives success. As an equal-opportunity employer, we welcome talented individuals from all walks of life. Our goal is to nurture an environment where everyone feels valued, supported, and motivated to advance. Come aboard to contribute your unique talents and become part of a cohesive team that prioritizes your personal and professional growth. Join us today!

**Employment Type:** Full-time
**Job Location:** Gurugram, Haryana, India
**Date posted:** November 20, 2024
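By way of illustration of "setting context" for such an agent, here is a minimal sketch using the OpenAI Python SDK with a fixed marketing system prompt; the model name, system prompt, and brief are invented for the example, and an `OPENAI_API_KEY` is assumed in the environment:

```python
# Hypothetical sketch: a ChatGPT-style marketing copy agent with a fixed system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a marketing copy assistant for a consumer brand. "
    "Write concise, benefit-led copy and always end with a call to action."
)

def draft_copy(brief: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": brief},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(draft_copy("Announce a 48-hour sale on our summer collection."))
```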
Posted 2 weeks ago
3.0 - 12.0 years
0 Lacs
kochi, kerala
On-site
As a talented Full Stack Developer with expertise in Generative AI and Natural Language Processing, you will be a key member of our team, contributing to the design, development, and scaling of cutting-edge LLM and Generative AI applications to enhance user experiences. Your responsibilities will include developing backend logic and intelligent workflows using pre-trained AI models such as large language models (LLMs) and natural language understanding (NLU) engines. You will integrate and operationalise NLP and Generative AI models in production environments, including speech processing pipelines built on automatic speech recognition (ASR) and text-to-speech (TTS) technologies. Applying techniques such as LLM fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG) will be crucial for enhancing AI system performance. Moreover, you will design and deploy scalable full-stack solutions supporting AI-driven applications, working with various data sources to enable contextual AI retrieval and responses. Utilising cloud platforms like AWS/Azure effectively for hosting, managing, and scaling AI-enabled services will also be part of your role. If you are passionate about combining full-stack development with AI and LLM technologies to create innovative text and voice applications, we look forward to hearing from you.

Qualifications:
- 3+ years of hands-on experience in full-stack application development with a strong understanding of frontend and backend technologies.
- 12 years of proven experience in designing and implementing AI-driven conversational systems.
- Deep knowledge of integrating Speech-to-Text (STT) and Natural Language Processing (NLP) components into production-ready systems.

Nice-to-Have Skills:
- Exposure to MLOps practices, including model deployment, monitoring, lifecycle management, and performance optimization in production environments.

What You'll Get:
- Opportunity to work on one of the most advanced AI systems.
- A high-performing, fast-paced startup culture with a deep tech focus.
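To show how retrieved context typically feeds generation in the RAG approach mentioned above, here is a minimal sketch assuming the `transformers` library; the model, helper function, and toy data are illustrative only:

```python
# Hypothetical sketch: the generation half of a RAG flow — stuff retrieved snippets
# into a prompt template and produce a grounded answer.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")  # placeholder

def answer(question: str, retrieved_chunks: list[str]) -> str:
    context = "\n".join(f"- {chunk}" for chunk in retrieved_chunks)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generator(prompt, max_new_tokens=80)[0]["generated_text"]

chunks = ["Customers can reset their password from the account settings page."]
print(answer("How do I reset my password?", chunks))
```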
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Lead Backend Engineer at Team Geek Solutions, you will be a key member of our innovative team dedicated to leveraging cutting-edge technology to solve real-world problems. With a focus on scalable backend systems and next-gen AI, you will lead a group of developers and data scientists in creating impactful products that push the boundaries of AI capabilities. If you are passionate about leading with purpose and driving innovation, we invite you to join us on this exciting journey.

Your primary responsibilities will include leading and mentoring a team of backend and AI engineers, designing and implementing robust backend solutions using Java and Python, and utilizing state-of-the-art techniques such as Large Language Models (LLMs) and Transformer models. Using tools like PyTorch, LangChain, and Flask, you will develop end-to-end ML pipelines, from training to deployment, while collaborating with cross-functional teams to deliver AI-powered solutions that address specific use cases.

To excel in this role, you must possess strong proficiency in Java and Python, along with a solid understanding of Data Structures & Algorithms. Experience with Transformer-based models, LLM fine-tuning, and deployment is essential, as is familiarity with SQL and database management. Additionally, leadership skills and the ability to manage engineering teams effectively are key requirements for this position.

Preferred skills include knowledge of LLMOps tools and cloud platforms (AWS/GCP/Azure) for model deployment, as well as excellent written and verbal communication abilities. A passion for innovation and a commitment to building AI-first products will set you apart as a valuable contributor to our team. Stay informed about the latest advancements in AI and integrate best practices into your work to drive continuous improvement and growth in our organization.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for designing, building, and deploying scalable NLP/ML models for real-world applications. Your role will involve fine-tuning and optimizing Large Language Models (LLMs) using techniques like LoRA, PEFT, or QLoRA. You will work with transformer-based architectures such as BERT, GPT, LLaMA, and T5, and develop GenAI applications using frameworks like LangChain, Hugging Face, the OpenAI API, or RAG (Retrieval-Augmented Generation). Writing clean, efficient, and testable Python code will be a crucial part of your tasks. Collaboration with data scientists, software engineers, and stakeholders to define AI-driven solutions will also be an essential aspect of your work. Additionally, you will evaluate model performance and iterate rapidly based on user feedback and metrics.

The ideal candidate should have a minimum of 3 years of experience in Python programming with a strong understanding of ML pipelines. A solid background in NLP, including text preprocessing, embeddings, NER, and sentiment analysis, is required. Proficiency in ML libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and spaCy is essential. Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases like FAISS and ChromaDB, will be beneficial. Strong problem-solving and communication skills are highly valued, along with the ability to learn new tools and work both independently and collaboratively in a fast-paced environment. Attention to detail and accuracy is crucial for this role.

Preferred skills include theoretical knowledge or experience in Data Engineering, Data Science, AI, ML, RPA, or related domains. Certification in Business Analysis or Project Management from a recognized institution is a plus. Experience working with agile methodologies such as Scrum or Kanban is desirable. Additional experience in deep learning and transformer architectures, prompt engineering, training LLMs, and GenAI pipeline preparation will be advantageous. Practical experience in integrating LLM models such as ChatGPT, Gemini, or Claude with context-aware capabilities using RAG or fine-tuned models is a plus. Knowledge of model evaluation and alignment, as well as metrics to measure model accuracy, is beneficial. Data curation from sources for RAG preprocessing and the development of LLM pipelines is an added advantage. Proficiency in scalable deployment and logging tooling, including Flask, Django, FastAPI, APIs, Docker containerization, and Kubeflow, is preferred. Familiarity with LangChain, LlamaIndex, vLLM, and HuggingFace Transformers, along with LoRA and a basic understanding of cost-to-performance tradeoffs, will be beneficial for this role.
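As a small illustration of the embeddings and FAISS-based vector search mentioned above, here is a minimal sketch assuming the `sentence-transformers` and `faiss` packages; the corpus and model choice are illustrative:

```python
# Hypothetical sketch: dense embedding retrieval with sentence-transformers and FAISS.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
corpus = [
    "The invoice was paid on the 3rd of March.",
    "Sentiment analysis labels text as positive, negative, or neutral.",
    "FAISS performs fast nearest-neighbour search over dense vectors.",
]

embeddings = model.encode(corpus, convert_to_numpy=True).astype("float32")
index = faiss.IndexFlatL2(embeddings.shape[1])  # exact L2 index over the corpus
index.add(embeddings)

query = model.encode(["How does vector search work?"], convert_to_numpy=True).astype("float32")
distances, ids = index.search(query, 2)          # top-2 nearest neighbours
print([corpus[i] for i in ids[0]])
```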
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
indore, madhya pradesh
On-site
You will be working as a Lead Backend Engineer at Team Geek Solutions, a company based in Noida/Indore, with a mission to solve real-world problems using scalable backend systems and next-gen AI technologies. As part of our collaborative and forward-thinking culture, you will play a crucial role in building impactful products driven by cutting-edge technology. Your primary responsibility will be to lead a team of backend and AI engineers, guiding them in developing robust backend solutions using Java and Python. You will leverage your expertise in GenAI and Large Language Models (LLMs) to architect scalable backend architectures and AI-driven solutions, pushing the boundaries of AI capabilities.

Key Responsibilities:
- Lead and mentor a team of backend and AI engineers to deliver innovative solutions.
- Architect and develop robust backend solutions using Java and Python.
- Utilize state-of-the-art LLMs such as OpenAI and HuggingFace models to build solutions using LangChain, Transformer models, and PyTorch.
- Implement advanced techniques such as Retrieval-Augmented Generation (RAG) to enhance LLM-based systems.
- Drive the development of end-to-end ML pipelines, including training, fine-tuning, evaluation, deployment, and monitoring (MLOps).
- Collaborate with business and product teams to identify use cases and deliver AI-powered solutions.
- Stay abreast of the latest advancements in AI and integrate best practices into development workflows.

Required Skills:
- Proficiency in Java and Python, with hands-on experience in both languages.
- Strong understanding of Data Structures & Algorithms.
- Deep expertise in Transformer-based models, LLM fine-tuning, and deployment.
- Hands-on experience with PyTorch, LangChain, and Python web frameworks (e.g., Flask, FastAPI).
- Solid database skills, particularly with SQL.
- Experience in deploying and managing ML models in production environments.
- Leadership experience in managing small to mid-sized engineering teams.

Preferred / Good-to-Have Skills:
- Familiarity with LLMOps tools and techniques.
- Exposure to cloud platforms like AWS/GCP/Azure for model deployment.
- Excellent written and verbal communication skills suitable for technical and non-technical audiences.
- A strong passion for innovation and building AI-first products.

If you are a tech enthusiast with a knack for problem-solving and a drive to innovate, we welcome you to join our team at Team Geek Solutions and contribute to shaping the future of AI-driven solutions.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Lead Backend Engineer at Team Geek Solutions, you will be responsible for leading a team of developers and data scientists to build scalable backend architectures and AI-driven solutions. You will need a strong command of Java and a deep understanding of GenAI and Large Language Models (LLMs) to push the boundaries of what is possible with today's AI capabilities.

You will be tasked with mentoring the team, architecting and developing robust backend solutions using Java and Python, and solving complex problems using structured and unstructured data. Your role will also involve applying state-of-the-art LLMs such as OpenAI and HuggingFace models, utilizing techniques like Retrieval-Augmented Generation (RAG) to enhance performance, and owning the development of end-to-end ML pipelines including training, fine-tuning, evaluation, deployment, and monitoring (MLOps). Collaboration with business and product teams to identify use cases and deliver AI-powered solutions, staying updated with the latest advancements in AI, and integrating best practices into development workflows are key aspects of this role.

Additionally, you should have proficiency in Java and Python, a solid understanding of Data Structures & Algorithms, and hands-on experience with Transformer-based models, LLM fine-tuning, PyTorch, LangChain, Python web frameworks (e.g., Flask, FastAPI), and SQL. Experience in deploying and managing ML models in production environments, leadership skills in managing small to mid-sized engineering teams, and a passion for innovation and building AI-first products are also essential requirements.

Preferred skills include familiarity with LLMOps tools and techniques, exposure to cloud platforms like AWS/GCP/Azure for model deployment, and strong written and verbal communication skills for technical and non-technical audiences.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
navi mumbai, maharashtra
On-site
Position: Junior Software Developer
Location: Vashi, Navi Mumbai

Responsibilities:
- Assist in designing, developing, and testing machine learning models.
- Work with data engineering teams to collect, clean, and format data for analysis.
- Implement current machine learning algorithms and experiment with new techniques.
- Document and present model development processes and results to key stakeholders.
- Contribute to improving existing AI functionalities within our products.
- Stay updated with AI/ML advancements and suggest potential integrations.

Who We're Looking For:
- Individuals with a strong background in Python and SQL, and a passion for Machine Learning, NLP, and Generative AI.
- Proficiency in interacting with databases (MySQL) using Python.
- Experience with sentiment analysis, topic modeling, Hugging Face models, RAG, LLM fine-tuning, and Stability AI models.
- Familiarity with creating and managing APIs, with experience in tools like Swagger for documentation.
- Fully committed individuals not currently pursuing any academic programs.
(ref:hirist.tech)
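As a sketch of combining the MySQL and sentiment-analysis skills listed above, here is a minimal, hypothetical example assuming the `mysql-connector-python` and `transformers` packages; the connection details, table, and column names are invented placeholders:

```python
# Hypothetical sketch: pull review text from MySQL and score sentiment with a
# pretrained Hugging Face model.
import mysql.connector
from transformers import pipeline

conn = mysql.connector.connect(
    host="localhost", user="app_user", password="secret", database="feedback_db"
)
cursor = conn.cursor()
cursor.execute("SELECT id, review_text FROM reviews LIMIT 100")
rows = cursor.fetchall()

sentiment = pipeline("sentiment-analysis")  # default DistilBERT SST-2 checkpoint
for row_id, text in rows:
    result = sentiment(text[:512])[0]       # crude truncation guard for long reviews
    print(row_id, result["label"], round(result["score"], 3))

cursor.close()
conn.close()
```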
Posted 3 weeks ago
3.0 - 8.0 years
15 - 18 Lacs
Bengaluru
Work from Office
Greetings,

We are hiring for a Senior Data Scientist in Bengaluru. As a Senior Data Scientist, you will focus on developing and implementing advanced data science and machine learning models to support our clients' business needs. This role emphasizes hands-on technical work and collaboration with other data scientists and engineers, while working independently toward defined goals. You will play a key role in creating data solutions while contributing to our collective knowledge and innovation efforts.

Key Responsibilities:
- Data Science & Model Development
- Data Preparation & Feature Engineering
- Model Evaluation and Tuning
- NLP & Advanced Frameworks
- Collaboration & Communication
- API Development & Integration
- Continuous Learning & Improvement

Required Candidate Profile:
- Qualification: Bachelor's or Master's degree in a relevant field (Computer Science, Data Science, Statistics, etc.).
- Experience: 3+ years of experience in data science and machine learning.
- Programming & Tools: Proficient in Python, Git, and coding IDEs like VS Code or PyCharm.
- Machine Learning & Deep Learning: Strong knowledge of traditional machine learning algorithms, deep learning, and advanced hyper-parameter tuning.
- NLP: Proficiency with NLP techniques and experience with NLP projects in production.
- LLM fine-tuning and quantization.
- Experience with edge deployment.
- MLOps & Deployment: Experience with MLOps basics, containerization tools (Docker, Kubernetes), and cloud platforms (AWS, Azure, Google Cloud).

To discuss the role further, kindly reach out to our expert HRs:
Ayushi - 8602279217

Warm Regards,
Prajit Grover
HR TEAM
KVC CONSULTANTS LTD.
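As an illustration of the LLM quantization skill listed above, here is a minimal, hypothetical sketch of loading a model in 4-bit precision with bitsandbytes, assuming the `transformers`, `accelerate`, and `bitsandbytes` packages and a CUDA GPU; the checkpoint name is a placeholder:

```python
# Hypothetical sketch: load a causal LM in 4-bit NF4 precision to cut memory use,
# a common first step for QLoRA fine-tuning or resource-constrained deployment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_cfg, device_map="auto"
)

inputs = tokenizer("Quantization reduces memory usage by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```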
Posted 3 weeks ago
5.0 - 12.0 years
3 - 6 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Key Responsibilities:
- Research & Innovation: Conduct applied research in generative AI, foundation models, NLP, computer vision, and multimodal AI. Stay abreast of the latest publications and open-source advancements.
- Model Development: Fine-tune, evaluate, and optimize large language models (LLMs), transformers, and other generative models for specific business and product use cases.
- Prototyping & Experimentation: Build proof-of-concepts and experimental systems that demonstrate the potential of GenAI across domains such as content generation, summarization, synthetic data, and agent systems.
- Data & Evaluation Pipelines: Design robust data pipelines, evaluation metrics, and benchmarking systems to validate model performance, safety, and bias.
- Collaboration: Work with cross-functional teams including product managers, ML engineers, and data scientists to translate research into production-grade systems.
- Open Source & IP Contribution: Publish findings in peer-reviewed venues, contribute to open-source projects, or generate intellectual property relevant to the business.

Required Qualifications:
- 5-7 years of experience in machine learning or applied AI roles, with at least 2-3 years working on generative models or related research.
- Strong foundation in deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- Experience with LLMs (e.g., GPT, LLaMA, Claude), diffusion models, or vision-language models.
- Proficient in Python and ML tools/libraries such as Hugging Face Transformers, LangChain, or similar.
- Understanding of responsible AI practices, bias mitigation, and model explainability.
- Master's or PhD in Computer Science, Machine Learning, Mathematics, or related fields.

Preferred Qualifications:
- Experience with open-source LLMs or fine-tuning techniques like LoRA, PEFT, RLHF, etc.
- Knowledge of MLOps practices and deployment of models in production (e.g., via Kubernetes, Ray, Triton).
Posted 1 month ago
4.0 - 5.0 years
10 - 14 Lacs
Chennai, Delhi / NCR, Bengaluru
Work from Office
Python Programming: Proficiency in Python and experience working with libraries such as TensorFlow, PyTorch, Transformers, NLTK, Pandas, scikit-learn, and other related libraries used in NLP tasks and fine-tuning language models. Experience building comprehensive Python modules for NLP tasks such as tokenization, word embeddings, and classification. Evaluating and selecting open-source and/or commercial LLMs suitable for the financial lending domain.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
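As a small illustration of the kind of NLP classification module described above, here is a minimal sketch using scikit-learn; the toy lending-domain examples are invented for illustration:

```python
# Hypothetical sketch: TF-IDF features plus a linear classifier for short-text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

train_texts = [
    "loan approved quickly, great service",
    "hidden charges and very slow processing",
    "smooth disbursal and helpful support",
    "rejected without any clear reason",
]
train_labels = ["positive", "negative", "positive", "negative"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(train_texts, train_labels)

print(clf.predict(["processing was slow and support unhelpful"]))
```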
Posted 2 months ago