
895 Summarization Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles & Responsibilities
Design & Develop: Architect and implement Generative AI applications using Azure OpenAI Service, Azure Machine Learning, and other Azure AI tools. Develop and fine-tune LLMs for domain-specific tasks using frameworks such as LangChain, LlamaIndex, Hugging Face Transformers, etc. Build agentic AI systems capable of autonomous reasoning and task execution.
NLP and Advanced Language Models: Work with NLP pipelines (text classification, summarization, entity extraction, question answering).
MLOps & Azure Integration: Utilize CI/CD pipelines, model versioning, deployment, and monitoring for AI models using Azure Machine Learning MLOps capabilities. Collaborate with DevOps to integrate AI models into production systems.
Innovation & Scalability: Stay updated on AI advancements, especially LLMs and Generative AI, to improve AI solutions. Promote best practices for scalable and secure AI system development on Azure.
Mentorship & Collaboration: Provide technical mentorship to junior engineers and share knowledge within the team. Work closely with cross-functional teams (Product, Engineering) to translate requirements into technical solutions.
Requirements
Experience: Minimum 4 years in AI engineering, focused on Generative AI, LLMs, and NLP. Proven experience building and deploying AI models with Azure AI tools: Azure OpenAI, Azure ML, Azure Cognitive Services. Strong use of LLM frameworks: LangChain, LlamaIndex, Hugging Face, etc. Hands-on experience with MLOps pipelines and CI/CD.
Technical Skills: Experience with backend development, especially building APIs or integrating AI models into web apps.
Soft Skills: Excellent problem-solving with a focus on production-ready solutions. Strong communication and collaboration; able to explain complex AI topics to non-technical stakeholders.
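As a hedged illustration of the Azure OpenAI integration this listing calls for, the short Python sketch below sends a document to an Azure OpenAI chat deployment for summarization; the environment variables, API version, and deployment name are placeholder assumptions, not details from the posting.

import os
from openai import AzureOpenAI  # pip install openai

# Hypothetical endpoint and deployment names; substitute your own Azure resources.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

def summarize(text: str) -> str:
    """Return a short summary of `text` using a chat deployment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini-deployment",  # name of your Azure deployment (assumption)
        messages=[
            {"role": "system", "content": "Summarize the user's text in 3 bullet points."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

print(summarize("Azure OpenAI lets you host OpenAI models inside your Azure subscription ..."))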

Posted 1 day ago

Apply

0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

Remote

Job Title: Freelance Technical Content Writer (Java, Python, MATLAB, R + UML Diagrams)
Job Type: Freelance / Project-based
Location: Remote
Duration: On-demand (project-based with ongoing opportunities)
Compensation: Per project or per word (negotiable based on complexity & quality)
About the Role
We are seeking a skilled and reliable freelance technical content writer who can work with us on a project or contract basis to deliver clean, well-structured, professional-level documentation. You will create high-quality written content involving programming concepts, software project reports, user manuals, academic articles, and visual diagrams (e.g., UML, class, and activity diagrams).
Key Responsibilities
* Write technical documentation, tutorials, and project reports involving Java, Python, MATLAB, and R.
* Prepare clean code snippets with proper explanations.
* Design and include professional diagrams such as UML diagrams (use case, class, activity, sequence, etc.), system architecture, flowcharts, and entity-relationship diagrams (ERD).
* Interpret project requirements and turn them into detailed, structured write-ups.
* Maintain originality, accuracy, and clarity in all deliverables.
* Optional: Support academic-style writing (IEEE/APA/MLA citations, etc.).
Required Skills
* Strong command of technical writing in English.
* Basic programming understanding of Java, Python, MATLAB, and R.
* Experience creating UML and software engineering diagrams.
Nice-to-Have
* Past experience with academic writing or project report creation.
* Graphic design ability (for clean visual illustrations or PDF formatting).
* Research and summarization skills for tech domains such as AI, ML, cybersecurity, etc.
How to Apply
If you're interested, send us:
* A brief introduction about yourself
* Your portfolio or sample technical work
* The tools you use for diagramming and formatting
Email: elegantresearchsolution@gmail.com

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Agent Developer
Location: All EXL Locations
Department: Artificial Intelligence & Data Science
Reports To: Director of AI Engineering / Head of Intelligent Automation
Position Summary: We are seeking an experienced and innovative AI/ML Agent Developer to design, develop, and deploy intelligent agents within a multi-agent orchestration framework. This role involves building autonomous agents that leverage LLMs, reinforcement learning, prompt engineering, and decision-making strategies to perform complex data and workflow tasks. You'll work closely with cross-functional teams to operationalize AI across diverse use cases such as annotation, data quality, knowledge graph construction, and enterprise automation.
Key Responsibilities:
Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel.
Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection).
Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops.
Contribute to multi-agent orchestration logic, including task delegation, tool selection, message passing, and memory/state management.
Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments.
Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability.
Monitor and refine agent performance using logging, observability tools, and feedback signals.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field.
3+ years of experience developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation.
Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn).
Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers).
Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems.
Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems.
Preferred Skills:
Familiarity with multi-agent frameworks and agent orchestration design patterns.
Experience building autonomous AI applications for data governance, annotation, or knowledge extraction.
Background in human-in-the-loop systems, active learning, or interactive AI workflows.
Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search.
Why Join Us:
Work at the forefront of AI orchestration and intelligent agents.
Collaborate with a high-performing team driving innovation in enterprise AI platforms.
Opportunity to shape the future of AI-based automation in real-world domains like healthcare, finance, and unstructured data.
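To make the agent concept above concrete, here is a minimal, hedged sketch of a single-agent reason-act loop with a stubbed model and a tiny tool registry; every name in it (fake_llm, TOOLS, run_agent) is hypothetical, and a production agent would call a real LLM through a framework such as LangChain or AutoGen.

from typing import Callable, Dict

def word_count(text: str) -> str:
    return f"{len(text.split())} words"

def summarize_stub(text: str) -> str:
    return text[:60] + "..."

# Tool registry the "agent" can choose from.
TOOLS: Dict[str, Callable[[str], str]] = {
    "word_count": word_count,
    "summarize": summarize_stub,
}

def fake_llm(task: str) -> str:
    """Stand-in for an LLM that picks a tool name for the task."""
    return "summarize" if "summar" in task.lower() else "word_count"

def run_agent(task: str, payload: str) -> str:
    tool_name = fake_llm(task)          # 1. "reason": decide which tool to use
    result = TOOLS[tool_name](payload)  # 2. act: execute the selected tool
    return f"[{tool_name}] {result}"    # 3. respond with the observation

print(run_agent("Summarize this claim note", "The insured reported water damage ..."))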

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:
We are looking for a Lead Generative AI Engineer with 3-5 years of experience to spearhead development of cutting-edge AI systems involving Large Language Models (LLMs), Vision-Language Models (VLMs), and Computer Vision (CV). You will lead model development, fine-tuning, and optimization for text, image, and multi-modal use cases. This is a hands-on leadership role that requires a deep understanding of transformer architectures, generative model fine-tuning, prompt engineering, and deployment in production environments.
Roles and Responsibilities:
Lead the design, development, and fine-tuning of LLMs for tasks such as text generation, summarization, classification, Q&A, and dialogue systems.
Develop and apply Vision-Language Models (VLMs) for tasks like image captioning, VQA, multi-modal retrieval, and grounding.
Work on Computer Vision tasks including image generation, detection, segmentation, and manipulation using SOTA deep learning techniques.
Leverage frameworks like Transformers, Diffusion Models, and CLIP to build and fine-tune multi-modal models.
Fine-tune open-source LLMs and VLMs (e.g., LLaMA, Mistral, Gemma, Qwen, MiniGPT, Kosmos, etc.) using task-specific or domain-specific datasets.
Design data pipelines, model training loops, and evaluation metrics for generative and multi-modal AI tasks.
Optimize model performance for inference using techniques like quantization, LoRA, and efficient transformer variants.
Collaborate cross-functionally with product, backend, and MLOps teams to ship models into production.
Stay current with the latest research and incorporate emerging techniques into product pipelines.
Requirements:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
3-5 years of hands-on experience in building, training, and deploying deep learning models, especially in the LLM, VLM, and/or CV domains.
Strong proficiency with Python, PyTorch (or TensorFlow), and libraries like Hugging Face Transformers, OpenCV, Datasets, LangChain, etc.
Deep understanding of transformer architecture, self-attention mechanisms, tokenization, embeddings, and diffusion models.
Experience with LoRA, PEFT, RLHF, prompt tuning, and transfer learning techniques.
Experience with multi-modal datasets and fine-tuning vision-language models (e.g., BLIP, Flamingo, MiniGPT, Kosmos, etc.).
Familiarity with MLOps tools, containerization (Docker), and model deployment workflows (e.g., Triton Inference Server, TorchServe).
Strong problem-solving, architectural thinking, and team mentorship skills.
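Because the posting emphasizes LoRA-based fine-tuning of open-source LLMs, here is a minimal, hedged sketch using Hugging Face PEFT; the base checkpoint and hyperparameters are illustrative assumptions rather than requirements of the role, and the training data and loop are omitted for brevity.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small base model chosen for the example
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable

Because only the adapter weights train, the same base model can serve several domain-specific adapters, which is one reason LoRA pairs well with the quantization and efficient-inference work mentioned above.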

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:
We are hiring a Generative AI Engineer to join our team working on AI applications involving Large Language Models (LLMs), Vision-Language Models (VLMs), and Computer Vision (CV).
Roles and Responsibilities:
Fine-tune and optimize LLMs for tasks such as summarization, classification, Q&A, and creative generation.
Work with Vision-Language Models (e.g., BLIP, MiniGPT, Kosmos) for tasks like image captioning, visual question answering, and retrieval.
Contribute to training and evaluating Computer Vision models for tasks like image generation, detection, or editing.
Implement and test various prompt engineering and LoRA-based fine-tuning approaches for different tasks.
Preprocess and manage multi-modal datasets (images + text), and create training/evaluation pipelines.
Write clean, modular, and well-documented code for training, inference, and deployment.
Collaborate with backend, product, and design teams to integrate models into real-world applications.
Stay updated with the latest papers and open-source projects in the Generative AI space.
Requirements:
Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
2-3 years of experience in machine learning or deep learning, preferably in the NLP or computer vision domains.
Proficiency in Python, PyTorch, and libraries such as Transformers (Hugging Face), OpenCV, or TorchVision.
Hands-on experience with LLMs (e.g., GPT, BERT, LLaMA) or CV models (e.g., CLIP, Stable Diffusion).
Familiarity with transformers, attention mechanisms, tokenization, and embedding techniques.
Comfortable working with datasets, training loops, and evaluation metrics.
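As a small, hedged example of the summarization work described above, the following uses the Hugging Face pipeline API; the checkpoint choice is an assumption for illustration.

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "Generative AI teams fine-tune large language models for summarization, "
    "classification, question answering, and creative generation, and pair them "
    "with vision-language models for multi-modal tasks such as image captioning."
)
# Generate a short, deterministic summary of the input text.
print(summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])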

Posted 1 day ago

Apply

0 years

0 Lacs

Surat, Gujarat, India

On-site

About the Role:
We are seeking a highly motivated and skilled AI/ML Engineer to join our innovation team focused on developing intelligent systems to improve internal processes, reduce manual workload, and accelerate project delivery timelines. The ideal candidate will have experience in designing, developing, and deploying AI agents, automation tools, and machine learning models that can streamline operations and enhance overall productivity across the organization.
Key Responsibilities:
Design, build, and deploy intelligent AI agents and automation tools that optimize internal workflows and reduce repetitive tasks.
Develop machine learning models and algorithms to support decision-making and improve operational efficiency.
Implement AI-driven solutions for use cases such as task automation, knowledge retrieval, data summarization, and intelligent reporting.
Research and experiment with various AI/ML approaches to identify the best-fit technologies for business problems.
Collaborate with cross-functional teams to identify pain points and deliver AI-based tools that support their functions.
Apply prompt engineering and fine-tuning techniques to maximize the value of large language models (LLMs).
Develop systems that enable automated document generation, test case creation, ticketing, and other IT operations.
Ensure ethical, efficient, and scalable implementation of AI across internal tools and processes.
Maintain clear documentation of AI workflows, architecture, and operational procedures.
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
Proficiency in Python and strong experience with machine learning and deep learning frameworks.
Proven experience in developing AI-powered agents, automation systems, or productivity-enhancing tools.
Strong understanding of NLP, ML algorithms, LLMs, and AI-driven workflow automation.
Ability to work independently on research and prototyping of new AI capabilities.
Excellent problem-solving and communication skills.
Comfortable working in fast-paced and agile environments.
Preferred Qualifications:
Experience in designing multi-agent systems or autonomous workflow assistants.
Understanding of retrieval-based systems, semantic search, or document intelligence.
Familiarity with MLOps practices and scalable deployment of ML solutions.
Experience using AI to automate software development lifecycle tasks such as code review, documentation, and QA.
Background in building internal tooling for IT or engineering teams.
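One hedged way to picture the task-automation use cases above is zero-shot routing of internal tickets; the model and candidate labels below are illustrative assumptions, not part of the posting.

from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
ticket = "VPN keeps disconnecting when I switch networks on my laptop."
labels = ["IT support", "HR request", "Finance query", "Facilities"]

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # top predicted queue and score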

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures.
What We're Looking For
Join our Generative AI team as a Senior Data Scientist, reporting directly to the Lead Data Scientist in India. You will play a crucial role in building, optimizing, and maintaining AI-ready data infrastructure for advanced Generative AI applications. Your focus will be on hands-on implementation of cutting-edge data extraction, curation, and metadata enhancement techniques for both text and numerical data. You will be a key contributor to the development of innovative solutions, ensuring rapid iteration and deployment, and supporting the Lead in achieving the team's strategic goals.
What Will You Be Doing
AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced generative AI models.
Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems.
Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications.
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications.
Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance.
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks.
Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI.
Technical Mentorship: Act as a technical mentor and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies.
Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives.
Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness.
Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering.
Skills And Experience
Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field.
Programming and Frameworks: 2+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face.
GenAI Tools: 1+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications.
Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques.
LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment.
RAG and Knowledge Graphs: Practical understanding and experience using vector databases. Familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning is a strong plus.
Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization.
Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation.
Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams.
What's In It For You
Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success.
Competitive salary
Hybrid working policy (3 days in the Mumbai office / 2 days WFH once fully inducted)
Group healthcare scheme
18 days annual leave
8 days of casual leave
Extensive internal and external training
Hours
This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus' key business processes every day, so you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2-3 times a week.
Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world's principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
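To illustrate the RAG contribution mentioned above, here is a compact, hedged sketch of the retrieval step: embed a few documents, embed a query, and pick the closest match by cosine similarity. The embedding model is an assumed example, and a production system would use a vector database such as FAISS or Pinecone rather than an in-memory array.

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

docs = [
    "Argus publishes daily crude oil price assessments.",
    "Metadata enrichment makes datasets easier for LLMs to retrieve.",
    "Docker images package the data pipeline for deployment.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit-length document vectors

def retrieve(query: str) -> str:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q          # cosine similarity, since vectors are normalized
    return docs[int(np.argmax(scores))]

print(retrieve("How do we prepare AI-ready data for retrieval?"))

Swapping the in-memory list for a vector-database index changes only the storage and search call, not the overall flow.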

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures.
What We're Looking For
Join our Generative AI team to lead a new group in India, focused on creating and maintaining AI-ready data. As the point of contact in Mumbai, you will guide the local team and ensure seamless collaboration with our global counterparts. Your contributions will directly impact the development of innovative solutions used by industry leaders worldwide, supporting text and numerical data extraction, curation, and metadata enhancements to accelerate development and ensure rapid response times. You will play a pivotal role in transforming how our data are seamlessly integrated with AI systems, paving the way for the next generation of customer interactions.
What Will You Be Doing
Lead and Develop the Team: Oversee a team of data scientists in Mumbai, mentoring and guiding junior team members and fostering their professional growth and development.
Strategic Planning: Develop and implement strategic plans for data science projects, ensuring alignment with the company's goals and objectives.
AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced Generative AI models.
Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems.
Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications.
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications.
Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks.
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks.
Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI.
Technical Leadership: Act as a technical leader and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies.
Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives.
Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness.
Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering.
Skills And Experience
Leadership Experience: Proven track record in leading and mentoring data science teams, with a focus on strategic planning and operational excellence.
Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field.
Programming and Frameworks: 5+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face.
GenAI Tools: 2+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications.
Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques.
LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment.
RAG and Knowledge Graphs: Practical understanding and experience using vector databases. Familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning is a strong plus.
Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization.
Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation.
Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams.
What's In It For You
Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and, as an Investor in People, we promote professional development and retain a high-performing team committed to building our success.
Competitive salary
Hybrid working policy (3 days in the Mumbai office / 2 days WFH once fully inducted)
Group healthcare scheme
18 days annual leave
8 days of casual leave
Extensive internal and external training
Hours
This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. The team supports Argus' key business processes every day, so you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Support hours typically run from 11am to 8pm, with each member of the team participating 2-3 times a week.
Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world's principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
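As a hedged sketch of the LLM-as-a-judge evaluation mentioned in this posting, the snippet below builds a rubric prompt and parses a JSON verdict; call_llm is a placeholder for whichever model endpoint the team actually uses, and the rubric itself is an invented example.

import json

JUDGE_PROMPT = """You are an impartial evaluator. Score the ANSWER for the QUESTION
on relevance and accuracy from 1 to 5, and reply as JSON: {{"score": <int>, "reason": "<short>"}}.

QUESTION: {question}
ANSWER: {answer}"""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion call here.
    return '{"score": 4, "reason": "Mostly accurate, minor omissions."}'

def judge(question: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)  # structured verdict: {"score": ..., "reason": ...}

print(judge("What does Argus publish?", "Argus publishes commodity price assessments."))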

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Junglee Games
With over 140 million users, Junglee Games is a leader in the online skill gaming space. Founded in San Francisco in 2012 and part of the Flutter Entertainment Group, we are revolutionizing how people play games. Our notable games include Howzat, Junglee Rummy, and Junglee Poker. Our team comprises over 1,000 talented individuals who have worked on internationally acclaimed AAA titles like Transformers and Star Wars: The Old Republic and contributed to Hollywood hits such as Avatar. Junglee's mission is to build entertainment for millions of people around the world and connect them through games. Junglee Games is not just a gaming company but a blend of innovation, data science, cutting-edge tech, and, most importantly, a values-driven culture that is creating the next set of conscious leaders.
Job Overview
As Chief of Staff to our CMO, you will be responsible for interpreting data, formulating reports, and making recommendations based on research findings and business insights. You must understand the responsibilities, needs and priorities of the leader to create the time and space needed for him to focus on the most strategically critical demands of the role.
Job Location: Gurgaon
Key Responsibilities
Respond to routine questions and requests and refer higher-level managerial requests to the E-Team, as appropriate.
Keep the CMO on schedule by providing prompts.
Perform special projects as directed by the CMO, including the review, research, summarization or analysis of information; gather data for market competition analysis and gain strategic market inputs.
Attend key meetings with the CMO with the purpose of ensuring follow-up and execution on identified next steps.
Coordinate with different teams to ensure timely accomplishment of project deliverables.
Prepare and review profitability reports and variance analysis with estimates and projections.
Work with the leaders to strategize and manage a portfolio of relationships, including scheduling of meetings and relationship-management tactics such as thank-you notes.
Manage and ensure execution of specific assignments/projects initiated by the leader.
Analyze consumer data to drive insights into how to target effectively, retain customers at low cost, and optimize engagement metrics.
Strategically manage the leader's time and calendar by exercising discretion and decision-making while sorting and filtering requests for the leader's time, ensuring strategic priorities are met in a timely manner.
Prepare various reports on key business parameters after data collection and analysis to facilitate decision-making.
Qualifications & Skills Required
Must be a postgraduate from a premium B-school.
3+ years of work experience, preferably 2 years in a top-tier consulting firm.
Exceptional organizational skills and the ability to manage multiple priorities.
Great analytical skills, strong number skills, and an aptitude for problem-solving.
Ability to exercise sound, independent judgment.
Fierce determination to successfully meet complex challenges.
Advanced communication skills.
Be a part of Junglee Games to:
Value Customers & Data - Prioritize customers, use data-driven decisions, master KPIs, and leverage ideation and A/B testing to drive impactful outcomes.
Inspire Extreme Ownership - We embrace ownership, collaborate effectively, and take pride in every detail to ensure every game becomes a smashing success.
Lead with Love - We reject micromanagement and fear, fostering open dialogue, mutual growth, and a fearless yet responsible work ethic.
Embrace Change - Change drives progress, and our strength lies in adapting swiftly and recognizing when to evolve to stay ahead.
Play the Big Game - We think big, challenge norms, and innovate boldly, driving impactful results through fresh ideas and inventive problem-solving.
Avail a comprehensive benefits package that includes paid gift coupons, fitness plans, gadget allowances, fuel costs, family healthcare, and much more.
Know More About Us
Explore the world of Junglee Games through our website, www.jungleegames.com. Get a glimpse of what life at Junglee Games looks like on LinkedIn. Here is a quick snippet of the Junglee Games Offsite'24.
Liked what you saw so far? Be A Junglee

Posted 1 day ago

Apply

1.0 - 3.0 years

0 Lacs

India

On-site

We're looking for a hands-on, product-minded full-stack developer with a strong interest in AI and automation. This role is ideal for someone who loves to build, experiment, and bring ideas to life, fast. You'll work closely with the founding team to prototype AI-powered tools and products from scratch. This is a highly AI-focused role where you will build tools powered by LLMs, workflow automation, and real-time data intelligence: not just web apps, but AI-first products.
Location: Kochi, Bangalore | Years of experience: 1-3 years
Hire22.ai connects top talent with executive roles anonymously and confidentially, transforming hiring through an AI-first, instant CoNCT model. Companies get interview-ready candidates in just 22 hours. No telecalling, no spam, no manual filtering.
Responsibilities
Build and experiment with AI-first features powered by LLMs, embeddings, vector databases, and prompt-based workflows.
Fine-tune or adapt AI/ML models for specific use cases such as job matching, summarization, scoring, and classification.
Integrate and orchestrate AI capabilities using tools like Vertex AI, LangChain, Cursor, n8n, Flowise, etc.
Work with vector databases and implement retrieval-augmented generation (RAG) patterns to build intelligent, context-aware AI applications.
Design, build, and maintain full-stack web applications using Next.js and Python as supporting layers around core AI functionality.
Rapidly prototype ideas, test hypotheses, and iterate fast based on feedback.
Collaborate with product, design, and founders to transform internal ideas into deployable, AI-powered tools.
Build internal AI agents, assistants, or copilots.
Build tools for automated decision-making, resume/job matching, or workflow automation.
Skills
Full-Stack Proficiency: Strong command of JavaScript/TypeScript with experience in modern frameworks like React or Next.js. Back-end experience with Python (FastAPI) or Go.
Database Fluent: Comfortable working with both SQL (MySQL) and NoSQL databases (MongoDB, Redis), with good data modeling instincts.
AI/ML-First Mindset: Hands-on with integrating and optimizing AI models using frameworks like OpenAI, Hugging Face, LangChain, or TensorFlow. You understand LLM architecture, prompt engineering, embeddings, and AI orchestration tools. You've ideally built or experimented with AI-driven applications beyond just using APIs.
Builder Mentality: Passionate about product thinking and going from zero to one. You take ownership, work independently, and execute quickly without waiting for perfect clarity.
Problem Solver: You break down complex problems, learn fast, and deliver clean, efficient solutions. You value both speed and quality.
Communicator & Collaborator: You express your ideas clearly, ask good questions, and keep teams in sync by sharing progress and blockers openly.
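To give a hedged sense of the "supporting layers around core AI functionality" this role describes, here is a minimal FastAPI endpoint wrapping a stubbed resume-to-job scoring function; all names are illustrative, and a real implementation would swap the stub for embeddings or an LLM call.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class MatchRequest(BaseModel):
    resume_text: str
    job_description: str

def score_match(resume: str, jd: str) -> float:
    """Stub: fraction of job-description words found in the resume."""
    r, j = set(resume.lower().split()), set(jd.lower().split())
    return round(len(r & j) / max(len(j), 1), 2)

@app.post("/match")
def match(req: MatchRequest) -> dict:
    return {"score": score_match(req.resume_text, req.job_description)}

# Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)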

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.
Job Details
Position: Data Analyst - AI & Bedrock
Experience Required: 6-10 yrs
Notice: Immediate
Work Location: Pune
Mode of Work: Hybrid
Type of Hiring: Contract to Hire
Job Description: FAS - Data Analyst - AI & Bedrock Specialization
About Us: We are seeking a highly experienced and visionary Data Analyst with a deep understanding of artificial intelligence (AI) principles and hands-on expertise with cutting-edge tools like Amazon Bedrock. This role is pivotal in transforming complex datasets into actionable insights, enabling data-driven innovation across our organization.
Role Summary: The Lead Data Analyst, AI & Bedrock Specialization, will be responsible for spearheading advanced data analytics initiatives, leveraging AI and generative AI capabilities, particularly with Amazon Bedrock. With 5+ years of experience, you will lead the design, development, and implementation of sophisticated analytical models, provide strategic insights to stakeholders, and mentor a team of data professionals. This role requires a blend of strong technical skills, business acumen, and a passion for pushing the boundaries of data analysis with AI.
Key Responsibilities:
• Strategic Data Analysis & Insight Generation:
o Lead end-to-end data analysis projects, from defining business problems to delivering actionable insights that influence strategic decisions.
o Utilize advanced statistical methods, machine learning techniques, and AI-driven approaches to uncover complex patterns and trends in large, diverse datasets.
o Develop and maintain comprehensive dashboards and reports, translating complex data into clear, compelling visualizations and narratives for executive and functional teams.
• AI/ML & Generative AI Implementation (Bedrock Focus):
o Implement data analytical solutions leveraging Amazon Bedrock, including selecting appropriate foundation models (e.g., Amazon Titan, Anthropic Claude) for specific use cases (text generation, summarization, complex data analysis).
o Design and optimize prompts for Large Language Models (LLMs) to extract meaningful insights from unstructured and semi-structured data within Bedrock.
o Explore and integrate other AI/ML services (e.g., Amazon SageMaker, Amazon Q) to enhance data processing, analysis, and automation workflows.
o Contribute to the development of AI-powered agents and intelligent systems for automated data analysis and anomaly detection.
• Data Governance & Quality Assurance:
o Ensure the accuracy, integrity, and reliability of data used for analysis.
o Develop and implement robust data cleaning, validation, and transformation processes.
o Establish best practices for data management, security, and governance in collaboration with data engineering teams.
• Technical Leadership & Mentorship:
o Evaluate and recommend new data tools, technologies, and methodologies to enhance analytical capabilities.
o Collaborate with cross-functional teams, including product, engineering, and business units, to understand requirements and deliver data-driven solutions.
• Research & Innovation:
o Stay abreast of the latest advancements in AI, machine learning, and data analytics trends, particularly concerning generative AI and cloud-based AI services.
o Proactively identify opportunities to apply emerging technologies to solve complex business challenges.
Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field.
• 5+ years of progressive experience as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong portfolio of successful data-driven projects.
• Proven hands-on experience with AI/ML concepts and tools, with a specific focus on Generative AI and Large Language Models (LLMs).
• Demonstrable experience with Amazon Bedrock is essential, including knowledge of its foundation models, prompt engineering, and the ability to build AI-powered applications.
• Expert-level proficiency in SQL for data extraction and manipulation from various databases (relational, NoSQL).
• Advanced proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) or R for data analysis, statistical modeling, and scripting.
• Strong experience with data visualization tools such as Tableau, Power BI, Qlik Sense, or similar, with a focus on creating insightful and interactive dashboards.
• Experience with cloud platforms (AWS preferred) and related data services (e.g., S3, Redshift, Glue, Athena).
• Excellent analytical, problem-solving, and critical thinking skills.
• Strong communication and presentation skills, with the ability to convey complex technical findings to non-technical stakeholders.
• Ability to work independently and collaboratively in a fast-paced, evolving environment.
Preferred Qualifications:
• Experience with other generative AI frameworks or platforms (e.g., OpenAI, Google Cloud AI).
• Familiarity with data warehousing concepts and ETL/ELT processes.
• Knowledge of big data technologies (e.g., Spark, Hadoop).
• Experience with MLOps practices for deploying and managing AI/ML models.
Learn about building AI agents with Bedrock and Knowledge Bases to understand how these tools revolutionize data analysis and customer service.
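As a hedged illustration of the Amazon Bedrock work described above, the sketch below summarizes text through the Bedrock Converse API via boto3; the AWS region and model ID are example values, not project specifics, and credentials are assumed to be configured in the environment.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize(text: str) -> str:
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example foundation model ID
        messages=[{"role": "user", "content": [{"text": f"Summarize in two sentences:\n{text}"}]}],
        inferenceConfig={"maxTokens": 200, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

print(summarize("Quarterly sales rose 12% on strong demand in the APAC region ..."))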

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please Read Carefully Before Applying
Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below.
Do NOT apply if you are not in Delhi or the NCR or are unwilling to relocate.
This is NOT a WFH opportunity. We work 5 days from the office, so please do NOT apply if you are looking for hybrid or WFH.
About Gigaforce
Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience.
Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California.
At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js.
We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards, and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.
We're seeking NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.
Key Responsibilities
Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval.
Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering.
Work on scalable, cloud-based pipelines for training, serving, and monitoring models.
Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers.
Utilize and contribute to open-source tools and frameworks in the ML ecosystem.
Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc.
Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows.
Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, Hugging Face).
Champion best practices in model validation, reproducibility, and responsible AI.
Required Skills & Qualifications
2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer.
Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification).
Proven expertise in modern NLP and GenAI models, including transformer architectures (e.g., BERT, GPT, T5), generative tasks (summarization, QA, chatbots, etc.), and fine-tuning and prompt engineering for LLMs.
Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML).
Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, Scikit-learn.
Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred.
Experience with document processing pipelines and understanding structured/unstructured insurance documents is a big plus.
Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, KubeFlow, Airflow.
Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).
Preferred Qualifications
Experience deploying GenAI models in production environments.
Contributions to open-source projects in the ML/NLP/LLM space.
Background in the insurance, legal, or financial domain involving text-heavy workflows.
Strong understanding of data privacy, ethical AI, and responsible model usage.
(ref:hirist.tech)
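For the "traditional ML" side of this role, here is a brief, hedged sketch of TF-IDF plus logistic regression for classifying short insurance-style texts; the tiny inline dataset and labels are invented purely for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Rear-end collision, other driver at fault, repair estimate attached",
    "Water damage from burst pipe in kitchen, plumber invoice included",
    "Vehicle struck while parked, witness statement available",
    "Basement flooding after storm, restoration quote enclosed",
]
labels = ["auto", "property", "auto", "property"]

# TF-IDF features feeding a linear classifier, all in one pipeline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Fender bender in parking lot, photos of bumper damage"]))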

Posted 1 day ago

Apply

3.0 - 4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Overview
We are looking for a Senior Data Scientist with a strong foundation in machine learning and data analysis and growing expertise in LLMs and Gen AI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value.
Responsibilities:
Analyze structured and unstructured data to identify trends, patterns, and opportunities.
Propose and validate AI/ML use cases based on business data and stakeholder needs.
Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc.
Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation).
Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions.
Ensure data quality, model interpretability, and ethical AI practices.
Document experiments, share findings, and contribute to knowledge sharing within the team.
Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
3-4 years of hands-on experience in data science and machine learning.
Proficient in Python and ML libraries.
Experience with data wrangling, feature engineering, and model evaluation.
Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs).
Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git).
Strong communication and storytelling skills with a data-driven mindset.
(ref:hirist.tech)
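As a short, hedged example of the build-and-evaluate loop this role describes, the following trains and scores a classifier on scikit-learn's bundled iris dataset, used here purely as a stand-in for real business data.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                       # train on the held-in split
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")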

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Telangana, India

On-site

As an AIML - LLM Gen AI Engineer, your primary responsibility will be to design, develop, and implement solutions using transformer-based models for various Natural Language Processing (NLP) tasks, including text generation, summarization, question answering, classification, and translation. You will work extensively with transformer models like GPT, BERT, T5, RoBERTa, and similar architectures. Your deep understanding of model architectures, attention mechanisms, and self-attention layers will be crucial in effectively utilizing Large Language Models (LLMs) to generate human-like text. You will lead efforts in fine-tuning pre-trained LLMs and other transformer models on domain-specific datasets to optimize performance for specialized NLP tasks.
In your role, you will apply your knowledge of attention mechanisms, context windows, tokenization, and embedding layers in model development and optimization. It will be essential to address and mitigate issues related to biases, hallucinations, and knowledge cutoffs that can impact LLM performance and output quality. You will craft clear, concise, and contextually relevant prompts to guide LLMs towards generating desired outputs, including the use of instruction-based prompting. Additionally, you will implement and experiment with zero-shot, few-shot, and many-shot learning techniques to maximize model performance without extensive retraining. Your iterative approach to prompt engineering strategies will involve refining outputs, rigorously testing model performance, and ensuring consistent and high-quality results. You will also be responsible for creating prompt templates for repetitive tasks that are adaptable to different contexts and inputs. Expertise in chain-of-thought (CoT) prompting will enable you to guide LLMs through complex reasoning tasks by encouraging step-by-step breakdowns. Your contribution will span the entire lifecycle of machine learning models in an NLP context, including training, fine-tuning, and deployment.
Required Skills & Qualifications:
- A minimum of 8 years of experience working with transformer-based models and NLP tasks, focusing on text generation, summarization, question answering, classification, and similar applications.
- Proficiency in transformer models such as GPT, BERT, T5, RoBERTa, and foundational models.
- Strong familiarity with model architectures, attention mechanisms, and self-attention layers for generating human-like text.
- Proven experience in fine-tuning pre-trained models on domain-specific datasets for various NLP tasks.
- Knowledge of attention mechanisms, context windows, tokenization, and embedding layers.
- Awareness of biases, hallucinations, and knowledge cutoffs affecting LLM performance.
- Ability to craft clear, concise, and contextually relevant prompts for LLMs.
- Experience in zero-shot, few-shot, and many-shot learning techniques.
- Proficiency in Python and NLP libraries like Hugging Face Transformers, SpaCy, NLTK.
- Solid experience in training, fine-tuning, and deploying ML models in an NLP context.
- Strong problem-solving skills and an analytical mindset.
- Excellent communication and collaboration abilities for remote or hybrid work environments.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
This role requires a combination of technical expertise in AI/ML, NLP, and transformer models, along with strong problem-solving and communication skills to achieve effective results in the field.
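To make the few-shot and chain-of-thought prompting described above concrete, here is a hedged template sketch; the ticket examples and labels are invented, and the rendered prompt would be sent to whichever LLM the team uses.

# Few-shot, chain-of-thought style template: worked examples show the reasoning
# pattern before the new input, and the final label is read from the last line.
FEW_SHOT_COT_TEMPLATE = """Classify the support ticket as BILLING or TECHNICAL.
Think step by step, then give the final label on the last line.

Ticket: "I was charged twice for my subscription this month."
Reasoning: The issue concerns a payment problem, not product functionality.
Label: BILLING

Ticket: "The app crashes whenever I open the reports page."
Reasoning: The issue concerns software behaviour, not payments.
Label: TECHNICAL

Ticket: "{ticket}"
Reasoning:"""

prompt = FEW_SHOT_COT_TEMPLATE.format(ticket="My invoice shows the wrong tax amount.")
print(prompt)  # send `prompt` to an LLM and parse the final "Label:" line of its reply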

Posted 1 day ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Risk
Management Level: Associate
Job Description & Summary
At PwC, our people in forensic services focus on identifying and preventing fraudulent activities, conducting investigations, and maintaining compliance with regulatory requirements. Individuals in this field play a crucial role in safeguarding organisations against financial crimes and maintaining ethical business practices. In fraud, investigations and regulatory enforcement at PwC, you will focus on identifying and preventing fraudulent activities, conducting investigations, and confirming compliance with regulatory requirements. You will play a crucial role in safeguarding organisations against financial crimes and maintaining ethical business practices.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
PwC's Corporate Intelligence services in India assist clients in identifying information and intelligence that enables them to make informed decisions before entering new or unknown markets. Corporate intelligence is conducted to identify risks associated with third-party business agents, proposed M&A targets, new employees and other potential targets. It evaluates the background, integrity, reputation and performance track record of an individual, a management group or corporate entity by collecting and analysing information that is available in the public domain, subscribed databases and market sources.
Responsibilities:
• Carrying out secondary research in order to identify any red flags associated with the targets that could be potentially damaging for an organization.
• Carrying out checks to identify information pertaining to background, shareholding/ownership structure, key personnel, litigation, regulatory non-compliance, material adverse findings, credit defaults, among others.
• Preparing high-quality due diligence reports with summarization of information obtained from various sources, including databases, the internet and the public domain.
• Experience in primary or L2 research, discreet calling, and conducting thorough investigations with confidentiality while gathering essential information (specific to certain roles).
• Ability to interpret a complex issue and bring structure to ambiguous issues.
• Continuously work with the intelligence gathering team to identify information gaps and identify relevant sources.
Mandatory skill sets
• Ability to work on multiple projects and manage workload to deliver high-quality work.
• Support project partners/directors and managers to provide project updates to internal and external stakeholders as per role level and designation.
• Possess strong rigour and dedication to meet client deadlines.
• Along with project work, understand and rigorously complete all administrative aspects, including risk management.
• Strong communication skills are essential for engaging with both internal and external stakeholders. The ability to articulate messages clearly, concisely, and in a structured manner is paramount.
• Ability to review the work done (deliverables) by team members and guide/train new joiners, as well as delegate work with clearly defined timelines, as per role level and designation.
• Proficient analytical skills, enabling the identification of potential problem solutions.
• Diligent attention to detail and adept management of sensitive information.
Preferred skill sets
The role requires the selected candidate to support the project team in carrying out integrity and investigative due diligence by performing research in the public domain and analysing the information gathered.
Years of experience required
1 to 8 years of relevant experience (role to be decided based on relevant experience).
Education Qualification
BCom, BBA, any graduate.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Customer Due Diligence (CDD)
Optional Skills: Accepting Feedback, Active Listening, Communication, Compliance Oversight, Compliance Risk Assessment, Corporate Governance, Cybersecurity, Data Analytics, Debt Restructuring, Emotional Regulation, Empathy, Evidence Gathering, Financial Crime Compliance, Financial Crime Investigation, Financial Crime Prevention, Financial Record Keeping, Financial Transactions, Forensic Accounting, Forensic Investigation, Fraud Detection, Fraud Investigation, Fraud Prevention, Inclusion, Intellectual Curiosity {+ 7 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking highly skilled and motivated AI Engineers with strong Python experience and familiarity with prompt engineering and LLM integrations to join our Innovations Team. The team is responsible for exploring emerging technologies, building proof-of-concept (PoC) applications, and delivering cutting-edge AI/ML solutions that drive strategic transformation and operational efficiency.

About the Role: As a core member of the Innovations Team, you will work on AI-powered products, rapid prototyping, and intelligent automation initiatives across domains such as mortgage tech, document intelligence, and generative AI.

Responsibilities:
• Design, develop, and deploy scalable AI/ML solutions and prototypes.
• Build data pipelines, clean datasets, and engineer features for training models.
• Apply deep learning, NLP, and classical ML techniques.
• Integrate AI models into backend services using Python (e.g., FastAPI, Flask).
• Collaborate with cross-functional teams (e.g., UI/UX, DevOps, product managers).
• Evaluate and experiment with emerging open-source models (e.g., LLaMA, Mistral, GPT).
• Stay current with advancements in AI/ML and suggest opportunities for innovation.

Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field. Certifications in AI/ML or cloud platforms (Azure ML, TensorFlow Developer, etc.) are a plus.

Required Technical Skills:
• Programming Languages: Python (strong proficiency), experience with NumPy, Pandas, Scikit-learn.
• AI/ML Frameworks: TensorFlow, PyTorch, HuggingFace Transformers, OpenCV (nice to have).
• NLP & LLMs: Experience with language models, embeddings, fine-tuning, and vector search.
• Prompt Engineering: Experience designing and optimizing prompts for LLMs (e.g., GPT, Claude, LLaMA) for various tasks such as summarization, Q&A, document extraction, and multi-agent orchestration.
• Backend Development: FastAPI or Flask for model deployment and REST APIs.
• Data Handling: Experience in data preprocessing, feature engineering, and handling large datasets.
• Version Control: Git and GitHub.
• Database Experience: SQL and NoSQL databases; vector DBs like FAISS, ChromaDB, or Qdrant preferred.

Nice to Have (Optional): Experience with Docker, Kubernetes, or cloud environments (Azure, AWS). Familiarity with LangChain, LlamaIndex, or multi-agent frameworks (CrewAI, AutoGen).

Soft Skills:
• Strong problem-solving and analytical thinking.
• Eagerness to experiment and explore new technologies.
• Excellent communication and teamwork skills.
• Ability to work independently in a fast-paced, dynamic environment.
• Innovation mindset with a focus on rapid prototyping and proof-of-concepts.

Experience Level: 3–7 years; work from office only (Chennai location).
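As an illustration of the FastAPI-based model integration this role describes, here is a minimal sketch that serves an open-source summarization model behind a REST endpoint; the checkpoint, route name, and request schema are illustrative assumptions rather than anything specified by the employer.

```python
# Minimal sketch: expose an open-source summarization model via FastAPI.
# `facebook/bart-large-cnn` is used purely as a stand-in checkpoint; a production
# service would add batching, auth, timeouts, and error handling.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")  # loaded once at startup

class SummarizeRequest(BaseModel):
    text: str
    max_length: int = 130
    min_length: int = 30

@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # The pipeline returns a list like [{"summary_text": "..."}]
    result = summarizer(req.text, max_length=req.max_length, min_length=req.min_length, do_sample=False)
    return {"summary": result[0]["summary_text"]}
```

Run with, for example, `uvicorn app:app --reload` and POST JSON to `/summarize`.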

Posted 2 days ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Role Overview: The AI Scientist is a technical role responsible for the development of advanced AI components, models and systems. With 4+ years of experience in artificial intelligence and machine learning, this role is focused on hands-on research, solution prototyping, and development of enterprise-grade implementations across ML, NLP, Deep Learning, Generative AI, and multi-modal model applications. The individual is expected to contribute to the execution of AI solution initiatives.

Responsibilities:
1. Generative & Agentic AI: Develop GenAI models/solutions for text generation and content automation. Experience working with any of the latest AI stacks such as NVIDIA NeMo, NIM Microservices, PyTorch, TensorFlow, etc.
2. Large Language Models (LLMs): Fine-tune and operationalize LLMs (e.g., GPT, Llama, BERT) for NLP tasks using frameworks such as NeMo, NIM, Unsloth, etc. Develop solutions based on the latest coding standards, such as PEP 8.
3. Deep Learning and NLP components: Experience in developing NLP components such as Q&A, chatbots, summarization, etc. Experience in developing deep learning components such as image/video/audio processing and classification, and OCR-based components such as extraction.

Skills:
1. Experience in any one of Python/Java/R/.NET.
2. Experience in any three of: PyTorch, TensorFlow, NeMo/NIM, Unsloth, Agentic AI (LangGraph), MS AI Foundry, Hugging Face, Chroma DB/FAISS, etc.
3. Good to have: experience in MLOps and AI deployment infrastructure.
4. Good to have: experience in AWS/Azure/Google Cloud services (any one).

Qualifications & Educational Requirements:
1. Bachelor's or Master's in Computer Science, Artificial Intelligence, or a related field.
2. 4+ years of experience in AI/ML, NLP, Deep Learning, GenAI, model fine-tuning, reinforcement learning, etc.
3. Experience in developing AI-driven solutions, with an understanding of the entire AI model lifecycle, from design to deployment and maintenance.
4. Good communication skills and a desire to learn and innovate new solutions in the AI domain.

Qualifications: B.Tech. + M.B.A.
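To make the Q&A component mentioned above concrete, a minimal extractive question-answering sketch with Hugging Face Transformers follows; the checkpoint is a public stand-in and an assumption, since the posting's own stack (NeMo/NIM) is not shown here.

```python
# Minimal extractive question-answering sketch using a public SQuAD-tuned checkpoint.
# The model choice is illustrative; a NeMo/NIM-based stack would expose similar functionality differently.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "The AI Scientist role focuses on hands-on research, solution prototyping, "
    "and enterprise-grade implementations across ML, NLP, and Generative AI."
)
answer = qa(question="What does the AI Scientist role focus on?", context=context)
print(answer["answer"], answer["score"])  # extracted answer span plus a confidence score
```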

Posted 2 days ago

Apply

10.0 years

2 - 7 Lacs

Chennai

On-site

Job Description:

About Us: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services: Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*: Developer Experience is a growing department within the Global Technology division of Bank of America. We drive modernization of technology tools and processes and Operational Excellence work across Global Technology. The organization operates in a very dynamic and fast-paced global business environment. As such, we value versatility, creativity, and innovation provided through individual contributors and teams that come from diverse backgrounds and experiences. We believe in an Agile SDLC environment with a strong focus on technical excellence and continuous process improvement.

Job Description*: The Developer Experience Crowdsourcing team needs a Technical Program Manager to help us design and implement new processes to enable and drive forward a new way of working. The Technical Program Manager will take overall ownership of project success, including planning, coordinating, and delivering a defined project that requires engagement from teams across multiple value streams/organizations. Key responsibilities include understanding and at times drafting the technical requirements, communicating the desired program outcomes, coordinating delivery, managing risks, ensuring compliance with standards, and providing visibility into the health of the program. This role ensures that execution and delivery meet program goals, timelines, and costs, and involves facilitating sync points between business and technology leaders across multiple organizations, as well as with Risk and Compliance partners. The candidate is expected to have a deep understanding of the software development life cycle along with hands-on experience using CI/CD and other DevOps tools.

Responsibilities*:
• Documents detailed requirements in Confluence and maintains the RTM (Requirements Traceability Matrix) for changes.
• Creates and enriches Jira work items at epic and story level; joins refinement calls and provides guidance to the team.
• Leads and maintains the downstream users, sets expectations, and refines the RTM and changes in Confluence.
• Creates and maintains help guides/docs for users.
• Collaborates across teams to ensure the changes/support expected in each iteration are clear.
• Connects with POs of upstream teams and ensures that impact is discussed and documented.
• Watches and resolves product-related questions/tickets as part of the Support Model (Jira service requests).
• Works closely with the Product Manager to understand high-level product strategy and architectural direction; meets regularly with the PA team to make sure everyone is aligned.
• Publishes monthly process control metrics and supports inquiries related to the supporting data.
• Supports process inquiries through data analysis and the summarization of findings.
• Executes procedural tasks in support of GT-wide standards and process controls.
• Coordinates and facilitates the program routines – e.g., kick-off, program reviews, status reviews, stakeholder meetings, change controls, tollgates, etc.
• Facilitates technical discussions to understand user requirements around the SPI process, risk and governance.
• Documents and understands the solution to drive routines and engagement updates to customers/stakeholders; must have excellent documentation skills.
• Plans and coordinates program delivery and dependencies across multiple value streams.
• Facilitates dependency management/risk management/impediment removal for the program.
• Facilitates the collation of information across workstreams and facilitates weekly sync meetings.
• Provides status updates for the program to stakeholders and leadership pertaining to the desired outcomes, delivery, risks/issues, and schedule.
• Ensures that execution is aligned with program outcomes by working with the sponsor/stakeholders.
• Should be a continuous learner with a problem-solving mindset.
• With some guidance, creates the vision and roadmap for the product to align with the strategic direction for the business or technology domain; communicates the product vision and roadmap to stakeholders and the team.
• Collaborates with stakeholders to understand their needs and problems.
• Creates and prioritizes work for a team, learning to collaborate with cross-functional teams; with some guidance, creates and prioritizes stories in the product backlog.
• Refines stories with the team to ensure there are enough “ready” stories to load the next 1-2 sprints; reviews and accepts stories and makes on-the-spot decisions regarding scope and requirements.
• Works in partnership with the team to ensure that optimum value is obtained through technology and through a good understanding of the business.

Requirements*

Education*: Graduation / Post Graduation
Certifications If Any: NA
Experience Range*: 10+ Years

Foundational Skills*:
• 2+ years of technical project management experience
• 3-5 years of technical analyst or business analyst experience in process, risk, and governance
• Process orientation – very structured and rigorous when it comes to process execution
• Analytical skills – natural curiosity with data and natural problem-solving ability
• Strong communication skills – proactively provides visibility into plans and status of work, including raising blockers
• Experience with policy, standard, and process governance
• Familiarity with bank systems and processes for governance – i.e. RISE, Trident, ORCIT, POP, Horizon
• Technical skills – basic knowledge of CI/CD tooling: Jira, Jenkins, Artifactory, Tower, Quartz, Endeavor
• Proficient in digital collaboration with Agile tools like JIRA, Confluence, SharePoint
• Proficient in the Microsoft Office suite of products, with emphasis on advanced Excel and PowerPoint
• Experience with enterprise project management controls
• Works with workstream leads/development teams to set up and maintain project information; manages the project work breakdown structure (WBS)
• Familiar with the various digital media/communications channels internal to Bank of America
• Must be a creative and flexible thought leader who can be successful in a fast-changing environment
• Has a proven track record of preparing materials for all levels within the organization (practitioner through to senior leadership)
• Experience partnering with senior leadership to provide program- and project-level updates
• Must have the ability to work independently with minimal supervision
• Must possess analytical and problem-solving skills
• Excellent oral and written communication skills
• Excellent time management and prioritization skills

Desired Skills*:
• Experience with process mapping from design to implementation to maintenance
• Excellent organizational and prioritization skills
• A proactive approach to problem solving and thinking innovatively
• Experience with Continuous Integration and Continuous Deployment tools
• Must possess basic knowledge of programming languages (Java/Microsoft), operating systems, databases and version control systems
• A proven track record of project delivery in an agile environment would be an added advantage

Work Timings*: 11:30 AM to 8:30 PM
Job Location*: Chennai

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end-users.

Key Prerequisites:
• Experience in AI and Generative AI development
• Experience in designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP)
• Experience in building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis
• Experience in implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT)
• Experience in ingestion and cleaning of data
• Feature engineering and data engineering
• Experience in designing and implementing data pipelines for ingesting, processing, and storing large datasets
• Experience in model training and optimization
• Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face
• Exposure to optimization of models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI)
• Hands-on experience in monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall

AI Tools and Platform Expertise:
• OpenAI, Hugging Face
• MLOps tools
• Generative AI-specific tools and libraries for innovative applications

Technical Skills:
1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, and Hugging Face.
3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
5. Solid understanding of databases (SQL, NoSQL) and big data processing tools (Spark, Hadoop).
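For a concrete sense of the data ingestion, cleaning, and feature-engineering prerequisites above, a minimal pandas sketch follows; the column names and values are hypothetical.

```python
# Minimal data-cleaning and feature-engineering sketch with pandas.
# Column names (signup_date, plan, monthly_spend) are hypothetical examples.
import pandas as pd

df = pd.DataFrame({
    "signup_date": ["2024-01-05", "2024-02-10", None, "2024-03-01"],
    "plan": ["basic", "pro", "pro", None],
    "monthly_spend": [20.0, None, 55.0, 30.0],
})

# Ingest/clean: parse dates, fill missing numeric values, drop rows missing key fields.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())
df = df.dropna(subset=["signup_date", "plan"])

# Feature engineering: tenure in days and a one-hot encoded plan column.
df["tenure_days"] = (pd.Timestamp("2024-06-01") - df["signup_date"]).dt.days
features = pd.get_dummies(df[["tenure_days", "monthly_spend", "plan"]], columns=["plan"])
print(features.head())
```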

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Hi, we have a 6-month contract requirement (with the possibility of further extension) for an AI Developer. OFFSHORE REMOTE, Europe time zone – Central European Time (CET). Working hours will be 5 am/5:30 am/6 am to 2 pm/2:30 pm/3 pm IST.

Position: AI Developer
Duration: 6 months (with the possibility of further extension)
Annual Salary: DOE
Work Location: REMOTE
Required Experience: 5 yrs

Primary Skills: Knowledge of Python, AWS Lambda, AWS Bedrock

Detailed JD as follows:
1. Prompt Engineering Techniques – Advanced: Design effective prompts for large language models to optimize accuracy, reduce hallucinations, and guide reasoning using few-shot, zero-shot, and chain-of-thought approaches. Work with foundation models via APIs (e.g., OpenAI, AWS Bedrock) to build real-world applications across domains.
2. Generative Modeling Techniques – Intermediate: Implement and experiment with generative models like GPT, BERT, or other Transformer-based architectures. Apply pre-trained models in NLP, content generation, or summarization tasks using platforms like Hugging Face or AWS Bedrock.
3. Agentic AI and Reinforcement Learning – Intermediate: Understand and integrate basic agentic frameworks that combine reasoning, memory, and planning. Work with simulation environments (e.g., OpenAI Gym) and apply standard reinforcement learning algorithms for prototype development.
4. Building Responsible and Ethical AI – Intermediate: Implement fairness-aware model evaluation and explainability practices using tools like SHAP or LIME. Follow best practices for privacy, bias mitigation, and governance in AI systems.
5. Cloud Deployment and MLOps – Intermediate: Deploy AI models using AWS Lambda, AWS Bedrock, and SageMaker for serving and scalability. Participate in building MLOps pipelines and model versioning using MLflow or similar tools.
6. Machine Learning Frameworks (PyTorch/TensorFlow) – Beginner: Train and fine-tune small-scale models using PyTorch or TensorFlow under guidance or for prototyping. Leverage existing models via transfer learning for quick deployment.
7. Data Preprocessing and Feature Engineering – Beginner: Clean and transform raw data for modeling tasks. Perform exploratory data analysis (EDA) and develop basic feature extraction pipelines.
8. Containerization and CI/CD – Beginner: Learn and assist in containerizing ML applications using Docker. Support implementation of CI/CD pipelines and Git-based workflow integrations.
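A minimal sketch of the Python + AWS Lambda + AWS Bedrock combination in the primary skills above, using a few-shot prompt with the Bedrock Converse API; the model ID, event shape, and IAM permissions are assumptions that would need to be verified against the target account and SDK version.

```python
# Sketch of an AWS Lambda handler that sends a few-shot prompt to a Bedrock model.
# Assumptions: the runtime's boto3 supports the Bedrock Converse API, the Lambda role
# has bedrock:InvokeModel permission, and the placeholder model ID below is enabled
# in the target region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder model ID

FEW_SHOT_PREFIX = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: 'Great battery life.' -> positive\n"
    "Review: 'Stopped working in a week.' -> negative\n"
)

def lambda_handler(event, context):
    review = event.get("review", "")
    prompt = f"{FEW_SHOT_PREFIX}Review: '{review}' ->"

    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 50, "temperature": 0.0},
    )
    completion = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"label": completion.strip()})}
```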

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Dear All, we are seeking a highly capable Machine Learning Engineer with deep expertise in fine-tuning Large Language Models (LLMs) and Vision-Language Models (VLMs) for intelligent document processing. This role requires strong knowledge of PEFT techniques (LoRA, QLoRA), quantization, transformer architectures, prompt engineering, and orchestration frameworks like LangChain. You’ll work on building and scaling end-to-end document processing workflows using both open-source and commercial models (OpenAI, Google, etc.), with an emphasis on performance, reliability, and observability.

Key Responsibilities:
• Fine-tune and optimize open-source and commercial LLMs/VLMs (e.g., LLaMA, Cohere, Gemini, GPT-4) for structured and unstructured document processing tasks.
• Apply advanced PEFT techniques (LoRA, QLoRA) and model quantization to enable efficient deployment and experimentation.
• Design LLM-based document intelligence pipelines for tasks like OCR extraction, entity recognition, key-value pairing, summarization, and layout understanding.
• Develop and manage prompting techniques (zero-shot, few-shot, chain-of-thought, self-consistency) tailored to document use cases.
• Implement LangChain-based workflows integrating tools, agents, and vector stores for RAG-style processing.
• Monitor experiments and production models using Weights & Biases (W&B) or similar ML observability tools.
• Work with OpenAI (GPT series), Google PaLM/Gemini, and other LLM/VLM APIs for hybrid system design.
• Collaborate with cross-functional teams to deliver scalable, production-ready ML systems and continuously improve model performance.
• Build reusable, well-documented code and maintain a high standard of reproducibility and traceability.

Required Skills & Experience:
• Hands-on experience with transformer architectures and libraries like Hugging Face Transformers.
• Deep knowledge of fine-tuning strategies for large models, including LoRA, QLoRA, and other PEFT approaches.
• Experience in prompt engineering and developing advanced prompting strategies.
• Familiarity with LangChain, vector databases (e.g., FAISS, Pinecone), and tool/agent orchestration.
• Strong applied knowledge of OpenAI, Google (Gemini/PaLM), and other foundational LLM/VLM APIs.
• Proficiency in model training, tracking, and monitoring using tools like Weights & Biases (W&B).
• Solid understanding of deep learning, machine learning, natural language processing, and computer vision concepts.
• Experience working with document AI models (e.g., LayoutLM, Donut, Pix2Struct) and OCR tools (Tesseract, EasyOCR, etc.).
• Proficient in Python, PyTorch, and related ML tooling.

Nice-to-Have:
• Experience with multi-modal architectures for document + image/text processing.
• Knowledge of RAG systems, embedding models, and custom vector store integrations.
• Experience in deploying ML models via FastAPI, Triton, or similar frameworks.
• Contributions to open-source AI tools or model repositories.
• Exposure to MLOps, CI/CD pipelines, and data versioning.

Qualifications: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Why Join Us?
• Work on cutting-edge GenAI and Document AI use cases.
• Collaborate in a fast-paced, research-driven environment.
• Flexible work arrangements and a growth-focused culture.
• Opportunity to shape real-world applications of LLMs and VLMs.
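To illustrate the PEFT/QLoRA fine-tuning workflow described above, a minimal setup sketch with the Hugging Face transformers and peft libraries follows; the base checkpoint and target modules are assumptions that vary by model family.

```python
# Sketch: load a base causal LM in 4-bit (QLoRA-style) and attach LoRA adapters with peft.
# The checkpoint name and target_modules are placeholders; real choices depend on the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-3.2-1B"  # placeholder; any causal LM checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; adjust per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# From here, a standard transformers Trainer (or trl's SFTTrainer) runs the fine-tune.
```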

Posted 2 days ago

Apply

2.0 years

0 Lacs

India

On-site

The Role: We are hiring an AI/ML Developer (India) to join our India team, in support of a large global client! You will be responsible for developing, deploying, and maintaining AI and machine learning models. Your expertise in Python, cloud services, databases, and big data technologies will be instrumental in creating scalable and efficient AI applications.

What You Will Be Doing:
• Develop, train, and deploy machine learning models for predictive analytics, classification, and clustering.
• Implement AI-based solutions using frameworks such as TensorFlow, PyTorch, and Scikit-learn.
• Work with cloud platforms including AWS (SageMaker, Lambda, S3), Azure, and Google Cloud (Vertex AI).
• Integrate and fine-tune Hugging Face transformer models (e.g., BERT, GPT) for NLP tasks such as text classification, summarization, and sentiment analysis.
• Develop AI automation solutions, including chatbot implementations using Microsoft Teams and Azure AI.
• Work with big data technologies such as Apache Spark and Snowflake for large-scale data processing and analytics.
• Design and optimize ETL pipelines for data quality management, transformation, and validation.
• Utilize SQL, MySQL, PostgreSQL, and MongoDB for database management and query optimization.
• Create interactive data visualizations using Tableau and Power BI to drive business insights.
• Work with Large Language Models (LLMs) for AI-driven applications, including fine-tuning, training, and deploying models for conversational AI, text generation, and summarization.
• Develop and implement Agentic AI systems, enabling autonomous decision-making AI agents that can adapt, learn, and optimize tasks in real time.

What You Bring Along:
• 2+ years of experience applying AI to practical uses.
• Strong programming skills in Python and SQL, and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
• Knowledge of basic algorithms and object-oriented and functional design principles.
• Proficiency in using data analytics libraries like Pandas, NumPy, Matplotlib, and Seaborn.
• Hands-on experience with cloud platforms such as AWS, Azure, and Google Cloud.
• Experience with big data processing using Apache Spark and Snowflake.
• Knowledge of NLP and AI model implementations using Hugging Face and cloud-based AI services.
• Strong understanding of database management, query optimization, and data warehousing.
• Experience with data visualization tools such as Tableau and Power BI.
• Ability to work in a collaborative environment and adapt to new AI technologies.
• Strong analytical and problem-solving skills.

Education: Bachelor’s degree in Computer Science, Data Science, AI/ML, or a related field.
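As a small illustration of the Hugging Face transformer work listed above, here is a sentiment-classification sketch using a public fine-tuned checkpoint as a stand-in; the model choice is an assumption, not a requirement of the role.

```python
# Sketch: sentiment classification with a fine-tuned BERT-family checkpoint.
# distilbert-base-uncased-finetuned-sst-2-english is a public SST-2 model used as a stand-in.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

texts = ["The onboarding flow was painless.", "Support never replied to my ticket."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
probs = torch.softmax(logits, dim=-1)

for text, p in zip(texts, probs):
    label = model.config.id2label[int(p.argmax())]  # e.g., POSITIVE / NEGATIVE
    print(f"{label:<8} {float(p.max()):.2f}  {text}")
```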

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

On-site

Key Responsibilities: Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel. Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection). Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops. Contribute to multi-agent orchestration logic including task delegation, tool selection, message passing, and memory/state management. Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments. Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability. Monitor and refine agent performance using logging, observability tools, and feedback signals. Required Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field. 3+ years of experience in developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation. Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn). Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers). Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems. Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems. Preferred Skills: Familiarity with multi-agent frameworks and agent orchestration design patterns. Experience in building autonomous AI applications for data governance, annotation, or knowledge extraction. Background in human-in-the-loop systems, active learning, or interactive AI workflows. Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search.
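To make the agent-orchestration ideas above concrete without tying them to any one framework, here is a framework-agnostic sketch of the basic agent loop (tool selection, execution, and state feedback); the tools and the hard-coded decision stub are hypothetical placeholders for an LLM-driven policy, and frameworks such as LangChain or AutoGen wrap this same pattern.

```python
# Minimal agent-loop sketch: a policy picks a tool, the loop executes it, and the
# observation is fed back as state until the policy decides to finish.
# `decide` is a hard-coded stub standing in for a real LLM call.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "classify_document": lambda text: "invoice" if "total due" in text.lower() else "other",
    "summarize": lambda text: text[:60] + "...",
}

def decide(goal: str, history: list) -> dict:
    # Stub policy: classify first, then summarize, then stop.
    if not history:
        return {"action": "classify_document", "input": goal}
    if len(history) == 1:
        return {"action": "summarize", "input": goal}
    return {"action": "finish", "input": ""}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        step = decide(goal, history)
        if step["action"] == "finish":
            break
        observation = TOOLS[step["action"]](step["input"])
        history.append((step["action"], observation))  # memory/state for the next decision
    return history

print(run_agent("Invoice #42: total due USD 1,200 by 30 June for consulting services rendered."))
```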

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Senior NLP Engineer Location: [Remote] Department: AI / Data Science Engineering Reports to: Director of Machine Learning or VP of Engineering 🔍 Role Summary We are hiring a Senior NLP Engineer to design and deploy advanced natural language processing solutions that unlock clinical and operational insights from healthcare data. Your work will directly support AI-driven personalized care programs for chronic conditions such as CKD, supporting care teams, patient outreach, and decision-making workflows. This position demands current, hands-on experience in production-grade healthcare NLP—candidates must demonstrate real-world implementations, not just theoretical exposure. 🚀 Your Day-to-Day Collaborate with NLP team to design, build, and maintain efficient, reusable, and reliable code Architect scalable NLP pipelines using transformer models for tasks such as summarization, medical NER, classification, and retrieval Fine-tune models (e.g., BioGPT, ClinicalBERT, LoRA, RAG) on clinical and conversational datasets Implement classical NLP algorithms and data structures for specialized use cases Own architectural decisions and contribute to design standards, governance, and cross-team practices Monitor and improve model performance for robustness, privacy, and regulatory alignment Identify and resolve bottlenecks in the NLP stack Support code quality, organization, CI/CD pipelines, and AWS deployment automation Mentor and guide other NLP engineers, and share knowledge through tech talks and internal forums 🧰 You’re a Perfect Match If You Have 5+ years of commercial experience in Python development and NLP 2+ years of current real-world experience applying NLP in healthcare environments Deep theoretical and practical understanding of classical NLP and neural network–based NLP methods Strong foundation in classical ML algorithms and modern deep learning architectures Bachelor’s or higher degree in Computer Science, AI, or related technical field Upper-Intermediate+ level of English with excellent written communication Proficiency in Python NLP libraries: spaCy, NLTK, Scikit-learn, Keras, Hugging Face Transformers Solid grasp of algorithmic fundamentals and design patterns Git-savvy, experienced with DevOps tools and AWS infrastructure Knowledge of concurrency, high availability, scalability, and secure system design Ability to communicate architectural decisions, trade-offs, and emerging technology considerations Comfortable writing maintainable, modular Python code with OOP and functional programming principles Committed to lifelong learning and experimentation with new technologies Strong mentorship mindset and experience presenting at internal or external tech talks
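As an illustration of the entity-extraction piece of such an NLP pipeline, a minimal spaCy sketch follows; the general-purpose English model is a stand-in, and the assumption is that a clinical deployment would swap in a domain-specific NER model instead.

```python
# Sketch: named-entity extraction with spaCy's general-purpose English model.
# Requires `pip install spacy` and `python -m spacy download en_core_web_sm`.
# A healthcare pipeline would use a clinical/biomedical NER model; this is only a stand-in.
import spacy

nlp = spacy.load("en_core_web_sm")

note = "Patient seen at Mercy General Hospital on 12 March 2024; follow-up scheduled in two weeks."
doc = nlp(note)

for ent in doc.ents:
    print(f"{ent.label_:<10} {ent.text}")
```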

Posted 2 days ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Overview: We are looking for a Senior Data Scientist with a strong foundation in machine learning, data analysis, and a growing expertise in LLMs and Gen AI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value. Key Responsibilities: Analyze structured and unstructured data to identify trends, patterns, and opportunities. Propose and validate AI/ML use cases based on business data and stakeholder needs. Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc. Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation). Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions. Ensure data quality, model interpretability, and ethical AI practices. Document experiments, share findings, and contribute to knowledge sharing within the team Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 3–4 years of hands-on experience in data science and machine learning. Proficient in Python and ML libraries Experience with data wrangling, feature engineering, and model evaluation. Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs). Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git). Strong communication and storytelling skills with a data-driven mindset.
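To ground the model-building and evaluation responsibilities above, a minimal scikit-learn sketch follows; the bundled toy dataset and pipeline choices are illustrative assumptions.

```python
# Sketch: build, evaluate, and inspect a classification model with scikit-learn.
# The breast-cancer toy dataset is used only to keep the example self-contained.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated estimate on the training split, then a held-out report.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
model.fit(X_train, y_train)
print("CV F1:", cv_scores.mean().round(3))
print(classification_report(y_test, model.predict(X_test)))
```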

Posted 2 days ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies