0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
GAQ326R190

Mission
As the Staff People Business Partner for India, you will have the unique opportunity to drive meaningful impact across our largest and most dynamic region in APJ. In this pivotal role, you will navigate complex organizational challenges by partnering closely with leaders across India-based teams, acting as a strategic thought partner, consultant, and champion for talent strategy and people initiatives. You will serve as a trusted advisor on all aspects of organizational effectiveness—including organizational planning and design, performance management, career development, leadership coaching, employee relations, and compensation. Your expertise will help build scalable, progressive, and high-performing organizations. In close collaboration with your People Partner leader, you will embody and advocate for our company's principles, values, and policies, fostering a global, inclusive, and high-performance work environment that empowers every employee to thrive.

Outcomes
- Serve as a trusted advisor to India senior leadership and global leaders with India-based teams, delivering impactful solutions that benefit both the business and employees while enabling scalable growth.
- Facilitate and manage core people programs, policies, and procedures for the India team—including, but not limited to, performance management, culture surveys, talent management, career development, compensation, benefits and rewards, development programs, and change management.
- Design and implement effective change management strategies and learning programs to promote organizational health.
- Leverage data and insights to develop and align talent strategies that directly support business objectives and drive organizational success.
- Lead the execution of key organizational initiatives and goals by applying effective planning and project management methodologies, ensuring alignment with overall business objectives.
- Deliver on initiatives and goals through thoughtful organizational planning and project management.
- Act as the primary point of contact between business units and central People Operations, Benefits, Payroll, and other cross-functional teams; clearly communicate business-specific people priorities and advocate for integrating these needs into centralized programs and policies.
- Provide expert support and consultation across the People team, fostering collaboration and driving cross-functional initiatives aimed at organizational improvement.
- Partner with the Employee Relations team to address and resolve employee relations matters, including participating in investigations, managing disciplinary actions, and facilitating performance management discussions.
- Contribute to or support APJ initiatives as needed.

Competencies
- 5+ years of HR experience with proven success as a strategic partner working with managers up through the VP+ level.
- Proactive, resilient, and able to thrive in a fast-paced, evolving environment.
- In-depth knowledge of Human Resources practices and legal requirements in India.
- Strong organizational skills and detail orientation.
- Highly adaptable; drives change and influences leaders during rapid growth, especially those new to local norms.
- Strong verbal and written communicator; effectively interprets and conveys ideas, information, instructions, policies and procedures.
- Strong judgment in decision-making and problem-solving in ambiguous situations.
- Skilled in data analysis to generate actionable insights.
- Strong sense of urgency with the ability to handle multiple competing priorities.
- Excellent computer skills, including proficiency in Google Workspace and Microsoft Office Suite.

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Principal Engineer

- This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective
- With your software development background, you'll be working with architects to help define major components of the business-wide target architecture and roadmap
- You'll gain valuable senior stakeholder exposure as well as the opportunity to hone your technical talents and leadership skills
- We're offering this role at director level

What you'll do
As a Principal Engineer, you'll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You'll be leading the more significant, complex and technically challenging assignments, coordinating multiple feature teams, making sure that their technical journeys support realisation of the targets, and delivering the values of the relevant metrics published to our investors.

You'll also be:
- Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability
- Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations
- Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains
- Influencing the development of strategies and architecture at domain and enterprise levels, identifying transformational opportunities for the businesses and technology areas

The skills you'll need
You'll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you'll bring significant and demonstrable experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance.

You'll also need:
- Proven proficiency in Java, .NET, Python and Angular
- A strong background in cloud platforms such as AWS
- A background of working with AI/ML frameworks such as TensorFlow, PyTorch, Scikit-learn and MLflow
- Significant and demonstrable experience of test-driven development and using automated test frameworks, mocking and stubbing and unit testing tools
- The ability to rapidly and effectively understand and translate product and business requirements into technical solutions
Posted 3 days ago
6.0 years
0 Lacs
India
On-site
Job Title: Python Fullstack Developer
Experience Level: 6+ years
Location: Bangalore, Hyderabad, Chennai, Pune, Noida, Trivandrum, Kochi
Employment Type: Full-time
Job Mode: Hybrid

About the Role:
We are looking for a passionate and versatile Software Engineer to join our Innovation Team. This role is ideal for someone who thrives in a fast-paced, exploratory environment and is excited about building next-generation solutions using emerging technologies like Generative AI and advanced web frameworks.

Key Responsibilities:
- Design, develop, and maintain scalable front-end applications using React.
- Build and expose RESTful APIs using Python with Flask or FastAPI (illustrated in the sketch below).
- Integrate back-end logic with SQL databases and ensure data flow efficiency.
- Collaborate on cutting-edge projects involving Generative AI technologies.
- Deploy and manage applications in a Microsoft Azure cloud environment.
- Work closely with cross-functional teams including data scientists, product owners, and UX designers to drive innovation from concept to delivery.

Must-Have Skills:
- Strong proficiency in React.js and modern JavaScript frameworks.
- Hands-on experience with Python, especially using Flask or FastAPI for web development.
- Good understanding of SQL and relational database concepts.
- Exposure to Generative AI frameworks and tools.
- Basic understanding of Microsoft Azure services and deployment processes.

Good-to-Have Skills:
- Knowledge of Machine Learning & AI workflows.
- Experience working with NoSQL databases like MongoDB or Cosmos DB.
- Familiarity with MLOps practices and tools (e.g., MLflow, Kubeflow).
- Understanding of CI/CD pipelines using tools like GitHub Actions, Azure DevOps, or Jenkins.

Skills: Python, React, SQL basics, Gen AI
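A minimal sketch of the FastAPI-plus-SQL stack this role describes is shown below. The table, fields, and endpoint are hypothetical, and SQLite stands in for whatever managed SQL database a real service would use.

```python
# Illustrative FastAPI service with a SQL-backed read endpoint (names are placeholders).
import sqlite3
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Product(BaseModel):
    id: int
    name: str
    price: float

def get_conn() -> sqlite3.Connection:
    # Stand-in for a production database such as Azure SQL or PostgreSQL.
    conn = sqlite3.connect("demo.db")
    conn.row_factory = sqlite3.Row
    return conn

@app.get("/products/{product_id}", response_model=Product)
def read_product(product_id: int) -> Product:
    row = get_conn().execute(
        "SELECT id, name, price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    if row is None:
        raise HTTPException(status_code=404, detail="Product not found")
    return Product(**dict(row))
```

Run with `uvicorn app:app --reload` during development; the response model gives typed, validated JSON out of the box.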
Posted 3 days ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Chennai / Bangalore / Hyderabad

Who We Are
Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow. Our team of 4000+ technologists and consultants are based in the US, Canada, the UK, India, Singapore and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are Great Place to Work-Certified™ (2022-24), recognized by analyst firms such as Forrester, Gartner, HFS, Everest, ISG and others. We have been ranked among the 'Best' and 'Fastest Growing' analytics firms by Inc., Financial Times, Economic Times and Analytics India Magazine.

Curious about the role? What would your typical day look like?
We are looking for a Senior Analyst or Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Translate business problems and technical constraints into technical requirements for the desired analytics solution.
- Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
- 3+ years of experience with at least 1+ years of relevant data science experience.
- Proficiency in structured Python, PySpark, and machine learning, with experience productionizing models.
- Proficiency in AWS cloud technologies is mandatory.
- Experience with, and a good understanding of, SageMaker or Databricks.
- Experience with MLOps frameworks (e.g., MLflow or Kubeflow).
- Follows good software engineering practices and has an interest in building reliable and robust software.
- Good understanding of data science concepts and the data science model lifecycle.
- Working knowledge of Linux or Unix environments, ideally in a cloud environment.
- Model deployment / model monitoring experience (preferably in the banking domain).
- CI/CD pipeline creation is good to have.
- Excellent written and verbal communication skills.
- B.Tech from a Tier-1 college / M.S. or M.Tech is preferred.

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.

Additional Benefits: Health insurance (self & family), virtual wellness platform, and knowledge communities.
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Keywords / Skillset
AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI

Responsibilities
- Model deployment, model monitoring, model retraining
- Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline
- Drift detection: data drift and model drift
- Experiment tracking (see the sketch below)
- MLOps architecture
- REST API publishing

Job Responsibilities
- Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required Experience And Qualifications
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python used both for ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience in CI/CD/CT pipeline implementation.
- Experience with cloud platforms - preferably AWS - would be an advantage.
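As a rough illustration of the experiment-tracking and drift-detection duties above, the sketch below logs a run to MLflow and flags feature drift with a two-sample KS test. The experiment name, metric values, and significance threshold are assumptions, not part of this posting.

```python
# Hedged sketch: log a training run to MLflow and flag distribution drift.
import mlflow
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_col: np.ndarray, live_col: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution differs significantly from training."""
    _, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

mlflow.set_experiment("churn-model")              # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("model_type", "xgboost")     # placeholder parameter
    mlflow.log_metric("val_auc", 0.87)            # placeholder metric
    drift = check_drift(np.random.normal(0, 1, 1000),
                        np.random.normal(0.3, 1, 1000))
    mlflow.log_metric("feature_drift_detected", int(drift))
    # In a full pipeline, a drift flag like this would trigger the retraining pipeline.
```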
Posted 3 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services, deep expertise in cloud architecture, and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company's service offerings.

Key Responsibilities

Pre-Sales and Solutioning:
- Engage with enterprise customers to understand their technical requirements and business objectives.
- Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security.
- Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs.
- Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences.
- Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations.
- Conduct technical workshops, proof of concepts (PoCs), and technical validations.

Technical Expertise
- Deep hands-on knowledge and architecture experience with AWS services:
  IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc.
  PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions.
  SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty.
- Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services, including hybrid architectures, is a plus.
- Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
- Proficiency in architecting solutions that meet scalability, availability, and security requirements.

Data Platform & AI/ML
- Experience in designing data lakes, data pipelines, and analytics platforms on AWS.
- Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures.
- Familiarity with AI/ML solutions using SageMaker, AWS Comprehend, or other ML frameworks.
- Understanding of data governance, data cataloging, and security best practices for analytics workloads.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or related field.
- 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles.
- AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred.
- Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams.
- Ability to translate technical concepts into business value propositions.
- Excellent communication, proposal writing, and stakeholder management skills.

Nice To Have
- Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI).
- Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations).
- Exposure to AI/ML MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.
Posted 3 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 
Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency 3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. Previous Work Experience Required Proven experience of 3+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. Technical Skills Required Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LammaIndex, OLamma, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as mlflow, langsmith, langfuse, weight and bias, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create future with more cheer Show more Show less
Posted 3 days ago
10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Summary
We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services, deep expertise in cloud architecture, and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company's service offerings.

Key Responsibilities

Pre-Sales and Solutioning:
- Engage with enterprise customers to understand their technical requirements and business objectives.
- Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security.
- Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs.
- Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences.
- Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations.
- Conduct technical workshops, proof of concepts (PoCs), and technical validations.

Technical Expertise
- Deep hands-on knowledge and architecture experience with AWS services:
  IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc.
  PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions.
  SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty.
- Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services, including hybrid architectures, is a plus.
- Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
- Proficiency in architecting solutions that meet scalability, availability, and security requirements.

Data Platform & AI/ML
- Experience in designing data lakes, data pipelines, and analytics platforms on AWS.
- Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures.
- Familiarity with AI/ML solutions using SageMaker, AWS Comprehend, or other ML frameworks.
- Understanding of data governance, data cataloging, and security best practices for analytics workloads.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or related field.
- 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles.
- AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred.
- Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams.
- Ability to translate technical concepts into business value propositions.
- Excellent communication, proposal writing, and stakeholder management skills.

Nice To Have
- Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI).
- Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations).
- Exposure to AI/ML MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.
Posted 4 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Beco
Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper—sparking our commitment to "Be Eco" every day. Our mission: "To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time." Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India's top marketplaces, retail chains and D2C channels.

Why we're hiring
Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco—owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales.

Responsibilities
- Partner with functional leaders to translate business pain-points into AI/ML solutions and automation opportunities.
- Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring.
- Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks.
- Stand up CI/CD and MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases.
- Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy).
- Mentor analysts and engineers; evangelise a culture of experimentation and "fail-fast" learning—core to Beco's GSD ("Get Sh#!t Done") values.

Must-have Qualifications
- 3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production.
- Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering.
- Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure).
- Familiarity with modern GenAI and NLP stacks (OpenAI, Hugging Face, RAG, vector databases).
- Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment.

Nice-to-haves
- Exposure to e-commerce or FMCG supply-chain data.
- Knowledge of finance workflows (reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce).
- Experience with vision models (Detectron2, YOLO) and edge deployment.
- Contributions to open-source ML projects or published papers/blogs.

What Success Looks Like After 1 Year
- 70% reduction in manual reporting hours across finance and ops.
- Forecast accuracy > 85% at SKU level, slashing stock-outs by 30%.
- AI chatbot resolves 60% of tickets end-to-end, with CSAT > 4.7/5.
- At least two new data products launched that directly boost topline or margin.

Life at Beco
- Purpose-driven team obsessed with measurable climate impact.
- An entrepreneurial, accountable, bold culture—where winning minds precede outside victories.
Posted 4 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
EXL (NASDAQ: EXLS) is a $7 billion public-listed NASDAQ company and a rapidly expanding global digital, data-led AI transformation solutions company with double-digit growth. The EXL Digital division spearheads the development and implementation of Generative AI (GenAI) business solutions for our clients in Banking & Finance, Insurance, and Healthcare. As a global leader in analytics, digital transformation, and AI innovation, EXL is committed to helping clients unlock the potential of generative AI to drive growth, efficiency, and innovation.

Job Summary
We are seeking a highly skilled AI/ML Engineer - Generative AI to design, develop, and deploy production-grade AI systems and agentic applications. The ideal candidate will have a strong background in Python 3.11+, deep learning, large language models, and distributed systems, with experience building performant, clean, and scalable services.

Key Responsibilities
- Build and maintain high-performance REST/WebSocket APIs using FastAPI (Pydantic v2).
- Implement and optimize agentic AI systems using LangGraph, AutoGen, and LangChain.
- Architect real-time event-driven microservices using Apache Kafka 4.0 and KRaft (see the sketch below).
- Design clean, testable services using SOLID principles, Python async, and type hints.
- Integrate vector databases like Pinecone and Weaviate for embedding storage and retrieval.
- Implement graph databases like Neo4j for knowledge graph-based use cases.
- Manage experiment tracking and model lifecycle using MLflow 3.0 or Weights & Biases.
- Build and deploy containers using Docker, GitHub Actions, and Kubernetes (nice-to-have).
- Maintain CI/CD pipelines and infrastructure as code with Git and optionally Terraform.
- Stay current with trends in GenAI, deep learning, and orchestration frameworks.

Minimum Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in AI/ML engineering with a focus on LLMs and NLP.
- 2-3 years of hands-on experience with GenAI and LLMs (e.g., GPT, Claude, LLaMA 3).
- Proficiency in Python 3.11+ (async, typing, OOP, SOLID principles).
- Experience with FastAPI, Pydantic v2, PyTorch 2.x, Hugging Face Transformers.
- Working knowledge of agentic frameworks like LangChain, LangGraph, or AutoGen.
- Experience building REST/WebSocket APIs and microservices with Kafka streams.
- Proficient in SQL, Pandas, and NumPy for data manipulation.

Preferred Qualifications
- Master's or PhD in Computer Science, Data Science, or a related field.
- Familiarity with graph databases such as Neo4j for knowledge graphs.
- Experience with vector databases like Pinecone or Weaviate.
- Proficiency in MLflow 3.0 or Weights & Biases for experiment tracking.
- Experience with CI/CD pipelines, containerization (Docker), orchestration (K8s), and automated deployment workflows.
- Exposure to Infrastructure as Code (IaC) using Terraform.
- Knowledge of advanced optimization, quantization, and fine-tuning techniques.

Skills and Competencies
- Proven ability to architect GenAI solutions and multi-agent systems.
- Strong testing skills (unit, integration, performance).
- Excellent communication and cross-functional collaboration.
- Strong analytical and problem-solving skills.
- Leadership and mentoring capability for engineering teams.
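One responsibility above concerns event-driven microservices on Kafka; a minimal producer sketch using the confluent-kafka Python client is shown below. The broker address, topic name, and payload shape are assumptions for illustration only.

```python
# Hedged sketch: publish an inference result as a Kafka event.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def delivery_report(err, msg) -> None:
    # Invoked once per message to confirm delivery or surface errors.
    if err is not None:
        print(f"Delivery failed: {err}")

def publish_inference_event(request_id: str, result: dict) -> None:
    payload = {"request_id": request_id, "result": result}
    producer.produce(
        "inference-results",                      # hypothetical topic
        key=request_id,
        value=json.dumps(payload).encode("utf-8"),
        callback=delivery_report,
    )
    producer.poll(0)   # serve delivery callbacks
    producer.flush()   # block until the broker acknowledges the message

publish_inference_event("req-123", {"label": "positive", "score": 0.91})
```

A consumer service subscribed to the same topic would pick these events up asynchronously, which is what keeps the microservices decoupled.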
Posted 4 days ago
3.0 years
0 Lacs
Delhi, India
On-site
Job Title : GenAI / ML Engineer Function : Research & Development Location : Delhi/Bangalore (3 days in office) About the Company: Elucidata is a TechBio Company headquartered in San Francisco. Our mission is to make life sciences data AI-ready. Elucidata's Elucidata’s LLM-powered platform Polly, helps research teams wrangle, store, manage and analyze large volumes of biomedical data. We are at the forefront of driving GenAI in life sciences R&D across leading BioPharma companies like Pfizer, Janssen, NextGen Jane and many more. We were recognised as the 'Most Innovative Biotech Company, 2024', by Fast Company. We are a 120+ multi-disciplinary team of experts based across the US and India. In September 2022, we raised $16 million in our Series A round led by Eight Roads, F-Prime, and our existing investors Hyperplane and IvyCap. About the Role: We are looking for a GenAI / ML Engineer to join our R&D team and work on cutting-edge applications of LLMs in biomedical data processing . In this role, you'll help build and scale intelligent systems that can extract, summarize, and reason over biomedical knowledge from large bodies of unstructured text, including scientific publications, EHR/EMR reports, and more. You’ll work closely with data scientists, biomedical domain experts, and product managers to design and implement reliable GenAI-powered workflows — from rapid prototypes to production-ready solutions. This is a highly strategic role as we continue to invest in agentic AI systems and LLM-native infrastructure to power the next generation of biomedical applications. Key Responsibilities: Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI. Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for biomedical applications. Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback. Collaborate cross-functionally with data scientists, bioinformaticians, product teams, and curators to build impactful AI solutions. Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods. Qualifications : 2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction. Strong Python programming skills and experience building production-ready codebases. Hands-on experience with LLM frameworks and tooling (e.g., LangChain, HuggingFace, OpenAI APIs, Transformers). Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices. Strong grasp of ML/DL fundamentals and experience with tools like PyTorch, or TensorFlow. Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment. Good to Have (Preferred but Not Mandatory) Experience working with biomedical or clinical text (e.g., PubMed, EHRs, trial data). Exposure to building autonomous agents using CrewAI or LangGraph. Understanding of knowledge graph construction and integration with LLMs. Experience with evaluation challenges unique to GenAI workflows (e.g., hallucination detection, grounding, traceability). Experience with fine-tuning, LoRA, PEFT, or using embeddings and vector stores for retrieval. Working knowledge of cloud platforms (AWS/GCP) and MLOps tools (MLflow, Airflow etc.). 
Contributions to open-source LLM or NLP tooling.

We are proud to be an equal-opportunity workplace and are an affirmative action employer. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
Posted 4 days ago
2.5 - 5.0 years
5 - 11 Lacs
India
On-site
We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments. Key Responsibilities AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives., Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks. AI Model Deployment and Frameworks: Deploy AI models using frameworks like VLLM, Docker, and MLFlow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift. Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates etc.) Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs. Experience: 2.5 - 5 years of experience Salary : 5-11 LPA Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,100,000.00 per year Schedule: Day shift Work Location: In person
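The PEFT/SFT work described above can be pictured with a short sketch using the Hugging Face peft library. The base model, adapter rank, and other hyperparameters below are placeholders, not a recommended recipe for this role.

```python
# Hedged sketch of parameter-efficient fine-tuning (LoRA) with peft + transformers.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # small stand-in base model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer in GPT-2
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable

# The wrapped model then drops into a standard Trainer / SFT loop, which is what
# keeps fine-tuning affordable compared with updating all base-model weights.
```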
Posted 4 days ago
4.0 - 5.0 years
12 - 20 Lacs
Gāndhīnagar
On-site
Key Responsibilities: Design, develop, and deploy AI models and algorithms to solve business problems. Work with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn. Train, test, and validate models using large datasets. Integrate AI solutions into existing products and applications. Collaborate with data scientists, software engineers, and product teams to build scalable AI solutions. Monitor model performance and continuously improve accuracy and efficiency. Stay updated with the latest AI trends, tools, and best practices. Ensure AI models are ethical, unbiased, and secure. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. 4 to 5 Years of Proven experience in developing and deploying AI/ML solutions. Proficiency in Python and libraries like NumPy, Pandas, OpenCV, etc. Solid understanding of machine learning, deep learning, and NLP techniques. Experience with cloud platforms (AWS, Azure, or Google Cloud) is a plus. Strong problem-solving skills and ability to translate business needs into technical solutions. Excellent communication and collaboration skills. Preferred Qualifications: Experience with data preprocessing, feature engineering, and model tuning. Familiarity with reinforcement learning or generative AI models. Knowledge of MLOps tools and pipelines (e.g., MLflow, Kubeflow). Hands-on experience in deploying AI applications to production environments. Job Types: Full-time, Permanent Pay: ₹1,200,000.00 - ₹2,000,000.00 per year Benefits: Flexible schedule Paid sick time Paid time off Provident Fund Location Type: In-person Schedule: Day shift Fixed shift Monday to Friday Ability to commute/relocate: Gandhinagar, Gujarat: Reliably commute or planning to relocate before starting work (Required) Experience: Android Development: 3 years (Required) Work Location: In person
Posted 4 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 
Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency 3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. Previous Work Experience Required Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. Technical Skills Required Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LammaIndex, OLamma, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as mlflow, langsmith, langfuse, weight and bias, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create future with more cheer Show more Show less
Posted 4 days ago
12.0 years
2 - 4 Lacs
Noida
On-site
Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Product at Innovaccer
Our product team is a collaborative group of talented professionals who transform ideas into real-life solutions. They guide the creation, development, and launch of new products, ensuring alignment with our overall business strategy. Additionally, we are leveraging AI across all our solutions, revolutionizing healthcare and shaping the future to make a meaningful impact on the world. You'll have the opportunity to build the foundation for intelligence in healthcare, shaping how LLMs and AI agents power clinical decision-making, operations, and patient engagement — safely, responsibly, and at scale. Join a team that values speed, openness, and ownership — and be part of the company that's changing healthcare through data and AI.

About the Role
We are looking for a visionary Director of Product Management – AI Platform to define, shape, and drive the strategy for Innovaccer's AI Platform: a foundational layer that will power intelligent applications, LLM-based agents, and precision workflows across the healthcare ecosystem. You will lead the development of modular infrastructure and frameworks that enable safe, scalable, and context-aware AI, including SLMs, LLMs, and autonomous agents. Your leadership will guide how we abstract complexity, orchestrate models, and ensure responsible deployment across real-world clinical and operational settings. You'll sit at the intersection of data, ML, engineering, and healthcare delivery, translating cutting-edge AI into actionable, productized value.

A Day in the Life

Own the Vision
- Define and communicate the long-term product vision and roadmap for the AI Platform
- Align product strategy with Innovaccer's AI-first approach and broader business goals
- Drive clarity around the role of LLMs, agent frameworks, and contextual AI within Innovaccer's platform ecosystem

Build the Foundation
- Architect and deliver core platform capabilities such as: Model Context Protocol layers for injecting structured healthcare data into AI workflows; agent frameworks with planning, memory, and tools integration; evaluation pipelines for safety, fairness, and domain alignment; and model orchestration, fine-tuning, and inference APIs
- Lead cross-functional delivery with engineering, MLOps, and design teams to bring products from concept to scale

Lead in the AI Community
- Represent Innovaccer in industry conversations on LLMs, RAG, agent safety, and applied healthcare AI
- Stay ahead of advancements in GenAI, healthcare-specific LLMs, multimodal models, and regulatory AI frameworks (e.g., HIPAA, FDA, EU AI Act)
- Champion responsible AI and standardization efforts through internal policy and external partnership

Enable Internal & External Builders
- Deliver reusable toolkits, SDKs, and APIs to enable internal teams and customers to build healthcare-native AI products
- Partner with clinical informatics, data science, and platform engineering to ensure contextual accuracy and safety, and collaborate with customer-facing teams to drive adoption and value realization

What You Need
- 12+ years of product management experience, with 5+ years in AI/ML platform or data infrastructure products
- Proven experience building and scaling platforms for LLMs, SLMs, AI agents, or multi-modal AI applications
- Deep understanding of agent architectures and orchestration, retrieval-augmented generation (RAG), vector stores, embeddings, prompt engineering, and model governance and evaluation
- Hands-on familiarity with platforms like LangChain, Hugging Face, Pinecone, Weaviate, MLflow, and cloud AI stacks (AWS, GCP, Azure)
- Ability to communicate complex concepts to both technical and business audiences
- Experience working with healthcare data (FHIR, HL7, CCD, EHRs, claims, etc.)
- Knowledge of AI safety, explainability, and compliance in regulated environments
- Published thought leadership or contributions to AI communities, open-source, or technical working groups

We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
- Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices

Where and how we work
Our Noida office is situated in a posh tech space, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team. Innovaccer is an equal opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.

About Innovaccer
Innovaccer activates the flow of healthcare data, empowering providers, payers, and government organizations to deliver intelligent and connected experiences that advance health outcomes. The Healthcare Intelligence Cloud equips every stakeholder in the patient journey to turn fragmented data into proactive, coordinated actions that elevate the quality of care and drive operational performance. Leading healthcare organizations like CommonSpirit Health, Atlantic Health, and Banner Health trust Innovaccer to integrate a system of intelligence into their existing infrastructure — extending the human touch in healthcare. For more information, visit www.innovaccer.com. Check us out on YouTube, Glassdoor, LinkedIn, Instagram, and the Web.
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Software Engineer – Backend (SOL00054)
Job Type: Full Time
Location: Hyderabad, Telangana
Experience Required: 5-7 Years
CTC: 13-17 LPA

Job Description:
Our client, headquartered in the USA with offices globally, is looking for a Backend Software Engineer to join our team responsible for building the core backend infrastructure for our MLOps platform on AWS. The systems you help build will enable feature engineering, model deployment, and model inference at scale – in both batch and online modes. You will collaborate with a distributed cross-functional team to design and build scalable, reliable systems for machine learning workflows.

Key Responsibilities:
- Design, develop, and maintain backend components of the MLOps platform hosted on AWS.
- Build and enhance RESTful APIs and microservices using Python frameworks like Flask, Django, or FastAPI.
- Work with WSGI/ASGI web servers such as Gunicorn and Uvicorn.
- Implement scalable and performant solutions using concurrent programming (AsyncIO); see the sketch below.
- Develop automated unit and functional tests to ensure code reliability.
- Collaborate with DevOps engineers to integrate CI/CD pipelines and ensure smooth deployments.
- Participate in on-call rotation to support production issues and ensure high system availability.

Mandatory Skills:
- Strong backend development experience using Python with Flask, Django, or FastAPI.
- Experience working with WSGI/ASGI web servers (e.g., Gunicorn, Uvicorn).
- Hands-on experience with AsyncIO or other asynchronous programming models in Python.
- Proficiency with unit and functional testing frameworks.
- Experience working with AWS (or at least one public cloud platform).
- Familiarity with CI/CD practices and tooling.

Nice-to-Have Skills:
- Experience developing Kafka client applications in Python.
- Familiarity with MLOps platforms like AWS SageMaker, Kubeflow, or MLflow.
- Exposure to Apache Spark or similar big data processing frameworks.
- Experience with Docker and container platforms such as AWS ECS or EKS.
- Familiarity with Terraform, Jenkins, or other DevOps/IaC tools.
- Knowledge of Python packaging (Wheel, PEX, Conda).
- Experience with metaprogramming in Python.

Education:
Bachelor's degree in Computer Science, Engineering, or a related field.
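As a small illustration of the AsyncIO requirement above, the sketch below fans out independent model-inference calls concurrently and awaits them together. The service URL and payload shape are hypothetical.

```python
# Hedged sketch: concurrent inference calls with asyncio + httpx.
import asyncio
import httpx

INFERENCE_URL = "http://model-service.internal/predict"  # placeholder endpoint

async def score(client: httpx.AsyncClient, record: dict) -> dict:
    resp = await client.post(INFERENCE_URL, json=record, timeout=5.0)
    resp.raise_for_status()
    return resp.json()

async def score_batch(records: list[dict]) -> list[dict]:
    # All requests are in flight at once; total latency is roughly the slowest call.
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(score(client, r) for r in records))

if __name__ == "__main__":
    results = asyncio.run(score_batch([{"feature": 1.0}, {"feature": 2.5}]))
    print(results)
```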
Posted 4 days ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job description
Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time
Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You’ll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.
Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.
Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficient in REST API design using FastAPI and securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.
Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.
What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
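By way of illustration only: a rough sketch of the retrieval step in a RAG setup using FAISS, assuming an embed() helper that in a real system would be an actual embedding model (for example a Hugging Face sentence encoder). The documents, vector dimension, and prompt template below are invented for the example.

# Illustrative RAG retrieval step with FAISS; embed() is a hypothetical stand-in.
import numpy as np
import faiss

DIM = 384
docs = ["Refund policy ...", "KYC checklist ...", "Claims workflow ..."]

def embed(texts):
    # Assumption: replace with a real embedding model in practice.
    rng = np.random.default_rng(0)
    return rng.random((len(texts), DIM), dtype=np.float32)

index = faiss.IndexFlatL2(DIM)   # exact L2 index over document embeddings
index.add(embed(docs))           # one vector per document chunk

query_vec = embed(["How do refunds work?"])
distances, ids = index.search(query_vec, 2)     # top-2 nearest chunks
context = "\n".join(docs[i] for i in ids[0])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do refunds work?"
# `prompt` would then be sent to whichever LLM the team uses (GPT, LLaMA, Claude, Gemini, ...).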
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities:
Design and develop a modular, scalable AI platform to serve foundation model and RAG-based applications.
Build pipelines for embedding generation, document chunking, and indexing.
Develop integrations with vector databases like Pinecone, Weaviate, Chroma, or FAISS.
Orchestrate LLM flows using tools like LangChain, LlamaIndex, and OpenAI APIs.
Implement RAG architectures to combine generative models with structured and unstructured knowledge sources.
Create robust APIs and developer tools for easy adoption of AI models across teams.
Build observability and monitoring into AI workflows for performance, cost, and output quality.
Collaborate with DevOps, Data Engineering, and Product to align platform capabilities with business use cases.
Core Skill Set:
Strong experience in Python, with deep familiarity in ML/AI frameworks (PyTorch, Hugging Face, TensorFlow).
Experience building LLM applications, particularly using LangChain, LlamaIndex, and OpenAI or Anthropic APIs.
Practical understanding of vector search, semantic retrieval, and embedding models.
Familiarity with AI platform tools (e.g., MLflow, Kubernetes, Airflow, Prefect, Ray Serve).
Hands-on with cloud infrastructure (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Solid grasp of RAG architecture design, prompt engineering, and model evaluation.
Understanding of MLOps, CI/CD, and data pipelines in production environments.
Preferred Qualifications:
Experience designing and scaling internal ML/AI platforms or LLMOps tools.
Experience with fine-tuning LLMs or customizing embeddings for domain-specific applications.
Contributions to open-source AI platform components.
Knowledge of data privacy, governance, and responsible AI practices.
What You’ll Get:
A high-impact role building the core AI infrastructure of our company.
Flexible work environment and competitive compensation.
Access to cutting-edge foundation models and tooling.
Opportunity to shape the future of applied AI within a fast-moving team.
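As a rough illustration of the chunking step mentioned above: a small Python chunker that produces overlapping windows ready for embedding. The chunk size and overlap values are arbitrary examples, not figures from the posting.

# Minimal sketch of document chunking for an embedding/indexing pipeline.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "Long unstructured source document ... " * 100
chunks = chunk_text(document)
# Each chunk would then be embedded and upserted into a vector store
# (Pinecone, Weaviate, Chroma, or FAISS) keyed by document and chunk id.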
Posted 4 days ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Description
Technocratic Solutions is a trusted and renowned provider of technical resources on a contract basis, serving businesses globally. With a dedicated team of developers, we deliver top-notch software solutions in cutting-edge technologies such as PHP, Java, JavaScript, Drupal, QA, Blockchain, AI, and more. Our mission is to empower businesses worldwide by offering high-quality technical resources that meet project requirements and objectives. We prioritize exceptional customer service and satisfaction, delivering our services quickly, efficiently, and cost-effectively. Join us and experience the difference of working with a reliable partner driven by excellence and focused on your success.
Job Title: AI/ML Engineer – Generative AI, Databricks, R Programming
Location: Delhi NCR / Pune
Experience Level: 5 years
Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with hands-on experience in Generative AI, Databricks, and R programming to join our advanced analytics team. The ideal candidate will be responsible for designing, building, and deploying intelligent solutions that drive innovation, automation, and insight generation using modern AI/ML technologies.
Key Responsibilities:
Develop and deploy scalable ML and Generative AI models using Databricks (Spark-based architecture).
Build pipelines for data ingestion, transformation, and model training/inference on Databricks.
Implement and fine-tune Generative AI models (e.g., LLMs, diffusion models) for various use cases like content generation, summarization, and simulation.
Leverage R for advanced statistical modeling, data visualization, and integration with ML pipelines.
Collaborate with data scientists, data engineers, and product teams to translate business needs into technical solutions.
Ensure reproducibility, performance, and governance of AI/ML models.
Stay updated with the latest trends and technologies in AI/ML and GenAI and apply them where applicable.
Required Skills & Qualifications:
Bachelor's/Master’s degree in Computer Science, Data Science, Statistics, or a related field.
5 years of hands-on experience in Machine Learning/AI, with at least 2 years in Generative AI.
Proficiency in Databricks, including Spark MLlib, Delta Lake, and MLflow.
Strong command of R programming, especially for statistical modeling and data visualization (ggplot2, dplyr, caret, etc.).
Experience with LLMs, transformers (HuggingFace, LangChain, etc.), and other GenAI frameworks.
Familiarity with Python, SQL, and cloud platforms (AWS/Azure/GCP) is a plus.
Excellent problem-solving, communication, and collaboration skills.
Preferred:
Certifications in Databricks, ML/AI (e.g., Azure/AWS ML), or R.
Experience in regulated industries (finance, healthcare, etc.).
Exposure to MLOps, CI/CD for ML, and version control (Git).
What We Offer:
Competitive salary and benefits
Flexible work environment
Opportunities for growth and learning in cutting-edge AI/ML
Collaborative and innovative team culture
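A hedged sketch of the MLflow experiment tracking this role references, of the kind commonly run on Databricks. The synthetic dataset and logistic-regression baseline below are placeholders; only the tracking calls are the point.

# Sketch: log parameters, metrics, and a model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(C=1.0, max_iter=200).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")   # model artifact tracked alongside the run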
Posted 4 days ago
5.0 years
0 Lacs
India
On-site
Senior AI/ML Engineer
Experience: 5+ years
Mode of Engagement: Full-time / Part-time
No of Positions: 2
Educational Qualification: B.E./B.Tech/M.E./M.Tech in Computer Science, AI/ML, or related field
Industry: IT – AI/ML Services
Notice Period: Immediate or 15 days preferred
What We Are Looking For:
Experience with chatbot agents (agents that can read data and detect issues or risks, or modify data based on input) and document data extraction, accurately extracting required data from PDFs even when the input PDF is inconsistent.
Strong experience (5+ years) in backend development using Python (preferred), Java, or Node.js.
2+ years of hands-on experience building LLM-powered applications and LLM agents.
Expertise in AI/ML system architecture, including Model Context Protocol (MCP) and agent-based reasoning.
Proven track record with Docker, Kubernetes, and cloud-based deployment of ML models.
Strong collaboration, communication, and problem-solving abilities.
Responsibilities:
Design, develop, and deploy LLM-based applications and intelligent agent systems.
Architect MCP systems to orchestrate interactions between LLMs, APIs, and databases.
Containerize AI/ML models and manage deployments using Docker and Kubernetes.
Develop and maintain scalable backend systems and data integration.
Collaborate with cross-functional teams on technical specifications and product goals.
Stay up to date with advancements in LLMs, planning frameworks, and agent tools.
Troubleshoot AI model or deployment issues and ensure high availability of systems.
Mentor junior engineers and contribute to team-wide knowledge-sharing.
Qualifications:
Bachelor’s or Master’s in Computer Science or a relevant technical discipline.
5+ years in backend software development; 2+ years in LLM technologies.
Experience with agent-based frameworks like LangChain, LlamaIndex, or AutoGen.
Solid foundation in container orchestration and CI/CD using Docker & Kubernetes.
Familiarity with MLOps tools like MLflow, Kubeflow, or Seldon Core is a plus.
Cloud experience (AWS/GCP/Azure) preferred; Git proficiency required.
Added advantages: publications, open-source contributions, or fine-tuning LLMs.
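Purely illustrative, assuming pypdf as one possible extraction library (the posting does not name one): a small helper that pulls raw text out of a PDF before an LLM or rules layer extracts structured fields and flags risks. The input file name is hypothetical.

# Sketch: raw text extraction from a PDF with pypdf, ahead of LLM-based field extraction.
from pypdf import PdfReader

def extract_pdf_text(path: str) -> str:
    """Concatenate text from every page; badly formed PDFs may need an OCR fallback."""
    reader = PdfReader(path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n".join(pages)

raw_text = extract_pdf_text("invoice.pdf")   # hypothetical input file
# raw_text would then be passed to an LLM prompt or parser that detects
# missing or suspect fields and flags risk before any data is modified.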
Posted 4 days ago
10.0 years
0 Lacs
India
Remote
Full Stack AI Developer – LLM & Workflow Automation
📍 Remote | 🕒 8–10 Years | 🧠 Python, React, LLMs, n8n
🧩 About the Role: We’re hiring a Full Stack AI Developer to build next-gen applications that combine intelligent chatbots, LLM workflows, and seamless UI/UX interfaces. You’ll own features end-to-end, from backend APIs and AI integrations to frontend web experiences.
🎯 Key Responsibilities:
Design, develop, and deploy full-stack AI-powered applications
Build responsive web interfaces using React (or Angular/Vue)
Integrate LLMs (OpenAI, Claude, etc.) for smart assistants, summarization, etc.
Create and orchestrate automation flows with n8n
Build and maintain APIs using Python (FastAPI/Django)
Deploy solutions in cloud-native environments (AWS, Azure)
Work with cross-functional teams on feature delivery, testing, and scaling
✅ Must-Have Skills:
8–10 years of full-stack development experience
Strong in Python for backend and AI integrations (FastAPI, Django)
Proficient in React.js (or Angular/Vue) for building modern UIs
Hands-on experience with n8n automation workflows
Experience integrating LLMs (OpenAI, LangChain, GPT, Claude)
REST APIs, webhooks, and third-party integrations
Cloud platforms (AWS, Azure, or GCP)
CI/CD pipelines, Docker, and SQL/NoSQL databases
🌟 Nice to Have:
Experience with MLOps (MLflow, SageMaker, Kubeflow)
Familiarity with RAG pipelines, vector DBs (FAISS, Pinecone)
Semantic Kernel or multi-agent LLM frameworks
Azure certifications
This is a remote offshore position, with exciting long-term projects and the chance to work with a dynamic, global tech team.
To apply: Send your resume to info@ribbitzllc.com
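An illustrative sketch, not the employer's actual stack: a FastAPI endpoint that an n8n webhook node could call, using the OpenAI Python client for summarization. The route path and model name are assumptions made for the example.

# Sketch: LLM-backed summarization endpoint callable from an n8n workflow.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class SummarizeRequest(BaseModel):
    text: str

@app.post("/webhook/summarize")
def summarize(req: SummarizeRequest):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; swap in whichever LLM the team uses
        messages=[
            {"role": "system", "content": "Summarize the user's text in 3 bullet points."},
            {"role": "user", "content": req.text},
        ],
    )
    return {"summary": resp.choices[0].message.content}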
Posted 4 days ago
0.0 - 3.0 years
0 Lacs
Gandhinagar, Gujarat
On-site
Key Responsibilities:
Design, develop, and deploy AI models and algorithms to solve business problems.
Work with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn.
Train, test, and validate models using large datasets.
Integrate AI solutions into existing products and applications.
Collaborate with data scientists, software engineers, and product teams to build scalable AI solutions.
Monitor model performance and continuously improve accuracy and efficiency.
Stay updated with the latest AI trends, tools, and best practices.
Ensure AI models are ethical, unbiased, and secure.
Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
4 to 5 years of proven experience in developing and deploying AI/ML solutions.
Proficiency in Python and libraries like NumPy, Pandas, OpenCV, etc.
Solid understanding of machine learning, deep learning, and NLP techniques.
Experience with cloud platforms (AWS, Azure, or Google Cloud) is a plus.
Strong problem-solving skills and ability to translate business needs into technical solutions.
Excellent communication and collaboration skills.
Preferred Qualifications:
Experience with data preprocessing, feature engineering, and model tuning.
Familiarity with reinforcement learning or generative AI models.
Knowledge of MLOps tools and pipelines (e.g., MLflow, Kubeflow).
Hands-on experience in deploying AI applications to production environments.
Job Types: Full-time, Permanent
Pay: ₹1,200,000.00 - ₹2,000,000.00 per year
Benefits: Flexible schedule, Paid sick time, Paid time off, Provident Fund
Location Type: In-person
Schedule: Day shift, Fixed shift, Monday to Friday
Ability to commute/relocate: Gandhinagar, Gujarat: Reliably commute or planning to relocate before starting work (Required)
Experience: Android Development: 3 years (Required)
Work Location: In person
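For illustration of the train/test/validate responsibility above, a minimal scikit-learn example on synthetic data; the estimator, split, and dataset are arbitrary choices rather than anything specified by the role.

# Sketch: train a model on a training split and validate it on a held-out split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_val, clf.predict(X_val)))  # precision/recall/F1 per class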
Posted 4 days ago
0.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Designation: Senior Analyst – Data Science
Level: L2
Experience: 4 to 6 years
Location: Chennai
Job Description: We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments.
Responsibilities:
Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production
Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment
Develop CI/CD pipelines for machine learning models ensuring reliable and automated deployment processes
Monitor model performance in production and implement automated retraining and rollback mechanisms
Collaborate with data scientists to productionize models and optimize them for performance and scalability
Implement model versioning, experiment tracking, and metadata management systems
Build monitoring and alerting systems for model drift, data quality, and system performance
Manage containerization and orchestration of ML workloads using Docker and Kubernetes
Optimize model serving infrastructure for low-latency predictions and high throughput
Ensure compliance with financial regulations and implement proper model governance frameworks
Skills:
4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services
Strong expertise in deploying and scaling machine learning models in production environments
Extensive experience with Metaflow for ML pipeline orchestration and workflow management
Advanced proficiency with Git and version control systems, including branching strategies and collaborative workflows
Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes)
Strong programming skills in Python with experience in ML libraries (pandas, numpy, scikit-learn)
Experience with CI/CD tools and practices for ML workflows
Knowledge of distributed computing and cloud-based ML infrastructure
Understanding of model monitoring, A/B testing, and feature store management.
Additional Skillsets:
Experience with Hex or similar data analytics platforms
Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems
Experience with real-time model serving and streaming data processing
Familiarity with MLFlow, Kubeflow, or other ML lifecycle management tools
Understanding of financial regulations and model governance requirements
Job Snapshot
Updated Date: 13-06-2025
Job ID: J_3745
Location: Chennai, Tamil Nadu, India
Experience: 4 - 6 Years
Employee Type: Permanent
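A minimal sketch of a Metaflow pipeline skeleton of the kind described above; the step bodies are placeholders rather than real credit-risk or fraud-model logic, and the flow and file names are invented.

# Sketch: Metaflow flow with linear steps, run from the command line.
from metaflow import FlowSpec, step

class TrainingFlow(FlowSpec):

    @step
    def start(self):
        # Load and validate training data (placeholder).
        self.rows = list(range(10))
        self.next(self.train)

    @step
    def train(self):
        # Fit the model and record a metric (placeholder).
        self.accuracy = 0.9
        self.next(self.end)

    @step
    def end(self):
        print(f"run finished, accuracy={self.accuracy}")

if __name__ == "__main__":
    TrainingFlow()

# Run with: python training_flow.py run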
Posted 4 days ago
1.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Role: Data Scientist
Experience: 1 to 4 years
Work Mode: WFO / Hybrid / Remote (if applicable)
Immediate joiners preferred.
Required Skills & Qualifications:
The ideal candidate will have relevant experience, as we are building an AI-powered workforce intelligence platform that helps businesses optimize talent strategies, enhance decision making, and drive operational efficiency. Our software leverages cutting-edge AI, NLP, and data science to extract meaningful insights from vast amounts of structured and unstructured workforce data. As part of our new AI team, you will have the opportunity to work on real-world AI applications, contribute to innovative NLP solutions, and gain hands-on experience in building AI-driven products from the ground up.
Strong experience in Python programming
1-3 years of experience in Data Science/NLP (freshers with strong NLP projects are welcome)
Proficiency in Python, PyTorch, Scikit-learn, and NLP libraries (NLTK, SpaCy, Hugging Face)
Basic knowledge of cloud platforms (AWS, GCP, or Azure)
Experience with SQL for data manipulation and analysis
Assist in designing, training, and optimizing ML/NLP models using PyTorch, NLTK, Scikit-learn, and Transformer models (BERT, GPT, etc.)
Familiarity with MLOps tools like Airflow, MLflow, or similar
Experience with Big Data processing (Spark, Pandas, or Dask)
Help deploy AI/ML solutions on AWS, GCP, or Azure
Collaborate with engineers to integrate AI models into production systems
Expertise in using SQL and Python to clean, preprocess, and analyze large datasets
Learn & Innovate: Stay updated with the latest advancements in NLP, AI, and ML frameworks
Strong analytical and problem-solving skills
Willingness to learn, experiment, and take ownership in a fast-paced startup environment
Nice-to-Have Requirements for the Candidate:
Desire to grow within the company
Team player and quick learner
Performance-driven
Strong networking and outreach skills
Exploring aptitude and a go-getter attitude
Ability to communicate and collaborate with the team with ease
Drive to get results and not let anything get in your way
Critical and analytical thinking skills, with keen attention to detail
Demonstrate ownership and strive for excellence in everything you do
Demonstrate a high level of curiosity and keep abreast of the latest technologies and tools
Ability to pick up new software easily, represent yourself among peers, and coordinate during meetings with customers
What We Offer:
We offer a market-leading salary along with a comprehensive benefits package to support your well-being. Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal well-being. We invest in your career through continuous learning and internal growth opportunities. Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives. (ref:hirist.tech)
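To illustrate the NLP tooling listed above, a tiny Hugging Face pipeline example using the library's default sentiment model (downloaded on first use); the input texts are invented and stand in for real workforce data.

# Sketch: off-the-shelf transformer inference with the transformers pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
texts = [
    "The onboarding process was smooth and well documented.",
    "Attrition risk is rising in the support team.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")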
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Data Scientist (AI for Computer Vision)
Job Description: We are seeking an experienced Data Scientist specializing in AI for Computer Vision to join our dynamic team. Your primary responsibilities will include developing, fine-tuning, and optimizing AI models for computer vision applications, driving innovation in various healthcare applications. You will work closely with cross-functional teams, including machine learning engineers, software developers, and product managers, to deliver state-of-the-art AI solutions.
Your role:
Explore and develop innovative Artificial Intelligence (AI) algorithms for healthcare applications
Create and refine AI algorithms for pre- and post-processing of images and videos, focusing on data from various imaging modalities
Develop and implement machine learning and deep learning techniques for segmentation, classification, and statistical modeling
Demonstrate expertise in image processing, object detection, segmentation, and classification
Proficient in Python programming
Possess a strong understanding of algorithms and frameworks such as TensorFlow, PyTorch, and Keras
Experienced with version control systems (e.g., Git) and software development practices
Develop and Optimize Computer Vision Models: Design, train, and fine-tune DL models for real-world applications
Data Preparation & Engineering: Gather, clean, and preprocess large-scale image and video datasets for training and evaluation of computer vision models
Experimentation & Model Evaluation: Conduct A/B testing and assess model performance using quantitative metrics (e.g., IoU, mAP, precision, recall)
Research & Innovation: Stay updated with the latest advancements in computer vision, deep learning, and related technologies
Deployment & Scaling: Work with ML engineers to deploy models into production environments using cloud platforms (AWS, Azure) and frameworks like TensorFlow, PyTorch, and OpenCV
Collaboration & Communication: Work closely with cross-functional teams to integrate computer vision solutions into business processes and applications
You're the right fit if:
Bachelor’s or Master’s Degree: In Computer Science, AI, Data Science, Machine Learning, or a related field
Experience: 3+ years in machine learning, deep learning, or AI research, with at least 1 year of hands-on experience in developing computer vision-based AI applications
Programming Proficiency: Strong proficiency in Python and ML frameworks like TensorFlow and PyTorch
Domain Knowledge: Knowledge of computer vision, natural language processing (NLP), or multimodal AI applications
Technical Skills: Familiarity with computer vision techniques and fine-tuning of models
Problem-Solving Skills: Strong problem-solving skills and the ability to work in a fast-paced, research-driven environment
MLOps Tools: Hands-on experience with MLOps tools (e.g., MLflow, Kubeflow, Docker, Kubernetes)
Ethical AI: Understanding of ethical AI and bias mitigation in computer vision models
Publications and Contributions: Strong publication record or contributions to open-source AI projects
How We Work Together
We believe that we are better together than apart. For our office-based teams, this means working in person at least 3 days per week. This role is an office-based role.
About Philips
We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others.
Learn more about our business. Discover our rich and exciting history. Learn more about our purpose.
If you’re interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here.
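As a small worked example of the IoU metric named in this posting, a plain-Python implementation for axis-aligned boxes given as (x1, y1, x2, y2); the sample boxes are arbitrary. Thresholding IoU per detection is the basis of mAP-style evaluation.

# Sketch: intersection-over-union for two axis-aligned bounding boxes.
def iou(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143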
Posted 4 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.