Jobs
Interviews

2068 Inference Jobs - Page 27

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on each job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description: Chatbot / Conversational AI Technical Leader

Responsibilities
- Provide technical leadership in the chatbot/voicebot space.
- Implement chatbots for various business functions.
- Adapt quickly to changing requirements and be willing to work with different technologies as needed.
- Lead the effort to build, implement, and support the data infrastructure.
- Manage the Intelligent Automation engineers and vendor partners, prioritizing projects according to customer and internal needs, and develop top-quality data pipelines using industry best practices.
- Own most deliverables for the ITSM chatbot team from a delivery perspective.
- Lead cross-functional team members and stakeholders through projects of varying scope and complexity.
- Gather requirements from business and IT users and provide estimates.
- Design, develop, and implement chatbot and voicebot agents using Azure cloud services and Genesys/Amazon Connect/SmartAssist/Avaya.
- Interface and liaise with business partners and (potentially) external vendors.

Required Skills and Qualifications
- At least 5 years of experience implementing chatbot technologies.
- Knowledge of basic NLP and NLU concepts: intent classification, keyword/entity extraction, text similarity, text pre-processing, dialog flows, speech-to-text, text-to-speech, and telephony systems.
- Basic knowledge of machine learning concepts: training and accuracy evaluation.
- Development experience in Node.js and REST services.
- Experience working in cloud environments such as Azure, AWS, Google Cloud Platform, or IBM Cloud.
- Ability to embed chatbots in multiple channels such as website, SMS, email, Skype, Facebook Messenger, MS Teams, and WhatsApp.
- Knowledge of all phases of software development, including UI design and development, microservices design and development, relational and non-relational databases, APIs and external integration, quality assurance, validation documentation, security, and infrastructure.
- Ability to take business functions and user stories, decompose them into technical specifications, and develop working application code for a cloud environment.
- Hands-on experience building applications with JavaScript frameworks (Node.js), AngularJS/ReactJS, and SQL and NoSQL databases.
- Experience with JSON.
- Knowledge of analytics/visualization via dashboards and reporting tools.

Education Requirements
- Bachelor's degree in Computer Science, Engineering, Statistics, or a technical science, or 3+ years of IT/programming experience.
- Minimum 2 years of experience solutioning for artificial intelligence use cases, plus web application development and systems integration experience (e.g., REST/SOAP).
- Prior solutioning experience with NoSQL databases and integrating unstructured data.

Preferred Skills
- Hands-on experience in one or more of the following AI technologies:
  - Language: natural language processing, natural language understanding, speech-to-text, text-to-speech, sentiment analysis, language detection, classification, telephony channel experience.
  - AI solutions: virtual agents, intelligent case processing, video analytics, inference engines, stream monitoring, intelligent search, ontologies/knowledge representations, voice technologies (speech-to-text and text-to-speech), custom language model creation.
- Knowledge and experience with key AI platforms, e.g., Kore.ai, ServiceNow Virtual Agent, IBM Watson, Microsoft Azure Cognitive Services, Google Dialogflow.
- Web UI and dashboard design experience.
- Experience working in a DevOps environment using industry-standard tools (Git, JIRA).
- Able to explain technical concepts in non-technical language.

Professional Skill Requirements
- Proven success contributing to a team-oriented environment.
- Proven ability to work creatively and analytically in a problem-solving environment.
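The NLU basics named in the listing above (intent classification via text similarity) reduce to comparing an utterance against labeled examples. The sketch below is an illustrative toy, not any vendor's implementation; the intents and example utterances are hypothetical.

```python
from collections import Counter
import math

# Hypothetical training utterances per intent (illustrative only).
INTENTS = {
    "reset_password": ["reset my password", "forgot password help"],
    "order_status": ["where is my order", "track my order status"],
}

def vectorize(text):
    # Bag-of-words term counts after lowercasing.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(utterance):
    # Score the utterance against every example; return the best-matching intent.
    scores = {
        intent: max(cosine(vectorize(utterance), vectorize(ex)) for ex in examples)
        for intent, examples in INTENTS.items()
    }
    return max(scores, key=scores.get)

print(classify("I forgot my password"))  # -> reset_password
```

Production systems replace the bag-of-words vectors with trained intent classifiers or embeddings, but the match-against-examples structure is the same.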

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Uber Eats is seeking a highly skilled and motivated Staff Data Scientist to join our Search Team. In this role, you will be a key driver in enhancing the search experience for millions of users across the globe. You'll bring deep statistical expertise to guide decision-making, improve product performance, and ensure our evaluations and insights are rooted in methodological rigor.

What You'll Do
- Conduct robust statistical analyses on complex datasets to identify product opportunities, shape roadmap priorities, and optimize user experiences in search.
- Design and evaluate A/B tests and quasi-experiments, applying best practices in experimental methodology to ensure high-quality, unbiased insights.
- Build and maintain statistical frameworks for defining and measuring ground truth, identifying reliable signals to evaluate search relevance, personalization, and user satisfaction.
- Apply advanced sampling strategies to construct representative datasets for both offline and online evaluation pipelines, ensuring scalability and statistical power.
- Develop rigorous evaluation metrics that reflect real-world product performance and align closely with user and business goals.
- Lead initiatives to strengthen causal inference practices across the team, applying methods like matching, regression discontinuity, and difference-in-differences where appropriate.
- Partner with product managers, engineers, and other scientists to translate open-ended product questions into structured analytical approaches.
- Provide mentorship and technical leadership to other scientists, promoting a culture of statistical excellence and continuous learning.

What You'll Need
- M.S. or Bachelor's degree in Statistics, Economics, Mathematics, Operations Research, Computer Science, or a related quantitative field.
- 10+ years of industry experience in data science or applied analytics, ideally with experience in consumer products, search, or recommendation systems.
- Deep expertise in ground-truth design and evaluation methodologies for complex user-facing systems.
- Proven experience with statistical sampling techniques and with offline evaluation pipelines for assessing model and product performance.
- Strong fluency in experimentation design, causal inference, and observational data analysis.
- Proficiency with tools like SQL, Python, and R for data manipulation, modeling, and visualization.
- Excellent communication skills: able to present statistical findings clearly and influence product and engineering decisions through data.
- A strong product sense and the ability to balance analytical rigor with practical business impact.
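One of the causal inference methods the listing names, difference-in-differences, reduces to a simple computation once group means are available: subtract the control group's change from the treated group's change, using the control trend as an estimate of what would have happened anyway. The numbers below are hypothetical, purely to illustrate the estimator.

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    # DiD estimate = (treated change) - (control change).
    # The control group's trend is subtracted out as the counterfactual.
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean conversion rates before/after a search-ranking launch.
effect = diff_in_diff(treated_pre=0.40, treated_post=0.48,
                      control_pre=0.41, control_post=0.44)
print(round(effect, 2))  # -> 0.05
```

The estimator is only credible under a parallel-trends assumption, which is why practitioners pair it with the matching and regression-discontinuity checks the listing also mentions.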

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Company Description
Symbiosis AI is a pioneering company transforming industries with advanced AI solutions. Our offerings include LLM Fusion for superior model orchestration, VectorStore for efficient vector embedding storage, and scalable AI inference with InferGen and InferRAG. We also provide customized solutions for businesses, all designed to deliver unparalleled performance, scalability, and cost-effectiveness.

Internship Details
- Duration: 3 months (unpaid internship)
- Mode: Hybrid (2-3 days per week in-office at Horamavu, Bangalore)
- Schedule: Flexible timings, Monday-Saturday, full-time role

Role Description
This is a full-time hybrid role for a Social Media Marketing Intern at Symbiosis AI; your presence is required 2-3 days per week at our office in Horamavu, Bangalore. The intern will be responsible for social media marketing, social media content creation, digital marketing, and communication tasks to support Symbiosis AI's online presence and brand promotion.

Qualifications
- Social media marketing and social media content creation skills
- Digital marketing and marketing skills
- Strong communication skills
- Experience with social media platforms and content creation tools
- Creativity and ability to think outside the box
- Ability to work independently and remotely
- Pursuing or completed undergraduate studies in Marketing, Communication, or a related field

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Role
We are looking for an exceptional data scientist to be part of the CRM platform data science team. As a Data Scientist in this role, you will be responsible for leveraging data-driven insights and advanced analytics techniques to drive marketing strategies and optimize our product offerings. You will collaborate with cross-functional teams, including applied and data science, marketing, product management, and engineering, to develop data-driven solutions that enhance customer experiences and maximize business outcomes.

What the Candidate Will Do
- Collaborate with measurement and optimization teams to design experiments and share readouts.
- Develop metrics and dashboards to monitor product performance, customer engagement, and conversion rates, and provide insights for continuous improvement.
- Collaborate with product management teams to prioritize product roadmap initiatives based on data-driven insights and market demand.
- Partner with internal stakeholders, including Operations, Product, and the Marketing Technology team, to develop marketing strategies and budget decisions based on data insights.
- Collaborate with marketing applied scientists on ML-centered initiatives for the comms and growth platform.

Basic Qualifications
- M.S. or Bachelor's degree in Statistics, Economics, Machine Learning, Operations Research, or another quantitative field.
- Knowledge of the underlying mathematical foundations of statistics, optimization, economics, and analytics.
- Knowledge of experimental design and analysis.
- Strong experience in data analysis, statistical modeling, and machine learning techniques, with a proven track record of solving complex business problems.
- Meticulous attention to detail and rigorous data quality assessment to drive accurate and reliable insights.
- Excellent communication skills and stakeholder management.
- Advanced SQL expertise with a strong focus on time and space optimization.
- Proficiency in Python and experience with data manipulation and analysis libraries (e.g., Pandas, NumPy, scikit-learn).
- Demonstrated experience with big data frameworks (e.g., Hadoop, Spark) and prior experience building long-running, stable data pipelines.
- Solid understanding of data visualization principles and experience with visualization tools (e.g., Tableau, Looker) to effectively communicate insights.
- Excellent judgment, critical thinking, and decision-making skills.

Preferred Qualifications
- Prior experience analyzing product usage patterns, determining root causes, and reviewing customer feedback to identify opportunities for product enhancements and new feature development.
- Work experience applying advanced analytics, applied science, or causal inference to marketing problems.
- Demonstrated capacity to clearly and concisely communicate complex business activities, technical requirements, and recommendations.
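As a sketch of the experiment readouts mentioned above, a basic A/B comparison of conversion rates can be summarized with a pooled two-proportion z-test. The counts here are made up for illustration, not drawn from the posting.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    # Pooled two-proportion z-statistic for comparing conversion rates
    # between control (A) and treatment (B).
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical readout: 4.8% vs 5.4% conversion on 10k users per arm.
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level, two-sided
```

In practice such a readout would also report confidence intervals and check sample ratio mismatch, but the statistic above is the core of the comparison.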

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Title: Senior Product Manager
Location: Bangalore
Reporting to: Senior Manager – Product & Data Science

Purpose of the Role
The Global TestOps team at Anheuser-Busch InBev (AB InBev) is responsible for driving a culture of experimentation and A/B testing, enabling teams to make data-driven decisions with confidence. By implementing the right experimentation frameworks, TestOps ensures that hypotheses are measured in an unbiased way, isolating the impact of decisions that can fuel business growth. The TestOps product aims to bring this centralized framework to teams across different business domains so they can seamlessly design and analyze experiments. By embedding experimentation into regular decision-making, the product empowers AB InBev to drive continuous improvement, reduce risk, and accelerate growth through relevant insights. We are seeking a strategic and analytical Senior Product Manager to lead the evolution of the TestOps experimentation platform, ensuring that the product’s capabilities are expanded to meet user needs and drive adoption. The ideal candidate will collaborate with cross-functional teams to enhance platform functionality, streamline experimentation workflows, and embed a culture of test-and-learn across the organization.

Key Responsibilities
- User personas & user journeys: Detail user personas and map user journeys to enable change management and user adoption.
- Voice of the customer: Deeply understand the users and act as their advocate within the organization.
- Product research: Conduct research on similar products to identify opportunities for improvement.
- Feedback analysis: Build and implement a process to incorporate iterative user feedback.
- Opportunity sizing: Conduct opportunity sizing for all domains to determine their potential value impact.
- Product roadmap development: Build use-case-level product roadmaps and align them with leadership to ensure strategic alignment and goal setting.
- Product marketing: Create personalized product marketing and collateral plans in conjunction with product releases to drive user adoption and engagement.
- Product documentation: Create easy-to-understand guides and product tutorials for every major feature and functionality.

Qualifications
- Education: Bachelor's degree in Business, Economics, Engineering, or a related field; an MBA from a top college in India is a plus.
- Experience: 6+ years as a Product Manager, with a track record of successfully bringing innovative products to market.

Preferred Skills
- Proven ability to manage cross-functional teams and lead projects to completion.
- Understanding of change management principles and practices.
- Proficiency in creating and executing user adoption strategies.
- Strong strategic thinking and analytical skills, with the ability to make data-driven decisions.
- Strong stakeholder management skills; comfortable working with both technical and business teams.
- Strong problem-solving, communication, and storytelling abilities.
- Experience with and understanding of A/B testing, causal inference, and statistical experimentation.

And above all of this, an undying love for beer! We dream big to create a future with more cheers.

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Cloud AI Engineer

We're looking for a highly skilled and experienced Cloud AI Engineer to join our dynamic team. In this role, you'll be instrumental in designing, developing, and deploying cutting-edge artificial intelligence and machine learning solutions leveraging the full suite of Google Cloud Platform (GCP) services.

## Objectives of this role
- Lead the end-to-end development cycle of AI applications, from conceptualization and prototyping to deployment and optimization, with a core focus on LLM-driven solutions.
- Architect and implement highly performant and scalable AI services, effectively integrating with GCP's comprehensive AI/ML ecosystem.
- Collaborate closely with product managers, data scientists, and MLOps engineers to translate complex business requirements into tangible, AI-powered features.
- Continuously research and apply the latest advancements in LLM technology, prompt engineering, and AI frameworks to enhance application capabilities and performance.

## Responsibilities
- Develop and deploy production-grade AI applications and microservices, primarily using Python and FastAPI, ensuring robust API design, security, and scalability.
- Design and implement end-to-end LLM pipelines encompassing data ingestion, processing, model inference, and output generation.
- Utilize GCP services extensively, including Vertex AI (Generative AI, Model Garden, Workbench), Cloud Functions, Cloud Run, Cloud Storage, and BigQuery, to build, train, and deploy LLMs and AI models.
- Expertly apply prompt engineering techniques and strategies to optimize LLM responses, manage context windows, and reduce hallucinations.
- Implement and manage embeddings and vector stores for efficient information retrieval and Retrieval-Augmented Generation (RAG) patterns.
- Work with advanced LLM orchestration frameworks such as LangChain, LangGraph, Google ADK, and CrewAI to build sophisticated multi-agent systems and complex AI workflows.
- Integrate AI solutions with other enterprise systems and databases, ensuring seamless data flow and interoperability.
- Participate in code reviews, establish best practices for AI application development, and contribute to a culture of technical excellence.
- Keep abreast of the latest advancements in GCP AI/ML services and broader AI/ML technologies, evaluating and recommending new tools and approaches.

## Required skills and qualifications
- Two or more years of hands-on experience as an AI engineer focused on building and deploying AI applications, particularly those involving Large Language Models (LLMs).
- Strong programming proficiency in Python, with significant experience developing web APIs using FastAPI.
- Demonstrable expertise with GCP, specifically services like Vertex AI (Generative AI, AI Platform), Cloud Run/Functions, and Cloud Storage.
- Proven experience in prompt engineering, including advanced techniques like few-shot learning, chain-of-thought prompting, and instruction tuning.
- Practical knowledge and application of embeddings and vector stores for semantic search and RAG architectures.
- Hands-on experience with at least one major LLM orchestration framework (e.g., LangChain, LangGraph, CrewAI).
- Solid understanding of software engineering principles, including API design, data structures, algorithms, and testing methodologies.
- Experience with version control systems (Git) and CI/CD pipelines.

## Preferred skills and qualifications
- Bachelor's or Master's degree in Computer Science.

## Good to have
- Experience with MLOps practices for deploying, monitoring, and maintaining AI models in production.
- Understanding of distributed computing and data processing technologies.
- Contributions to open-source AI projects or a strong portfolio showcasing relevant AI/LLM applications.
- Excellent analytical and problem-solving skills with keen attention to detail.
- Strong communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders.
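The embeddings-plus-vector-store retrieval step behind the RAG patterns listed above can be sketched with a toy in-memory store. The bag-of-words "embedding" here stands in for a real embedding model call (e.g., a Vertex AI text-embedding endpoint), and the documents and query are hypothetical.

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words term counts.
# A production RAG pipeline would call a real embedding API and store
# the vectors in a vector database instead of a Python list.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "Invoice processing takes five business days.",
    "Password resets are handled by the identity team.",
]
STORE = [(d, embed(d)) for d in DOCS]  # minimal in-memory "vector store"

def retrieve(query, k=1):
    # Rank stored documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(STORE, key=lambda dv: -cosine(q, dv[1]))
    return [doc for doc, _ in ranked[:k]]

context = retrieve("how long does invoice processing take")
print(context[0])
# The retrieved context would then be prepended to the LLM prompt.
```

The retrieval-then-generate split is the essence of RAG: grounding the model's answer in retrieved text rather than relying on its parametric memory alone.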

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Position
Chevron invites applications for the role of AI/ML Engineer within our Enterprise AI team in India. This position is integral to designing and developing AI/ML models that significantly accelerate the delivery of business value. We are looking for a machine learning engineer who brings expertise, an innovative attitude, and excitement for solving complex problems with modern technologies and approaches. We are looking for those few individuals with a passion for exploring, innovating, and delivering data science solutions that provide immense value to our business. The expectation for this role is 5-10 years of relevant experience.

Key Responsibilities
- Transform data science prototypes into appropriately scaled solutions in a production environment.
- Orchestrate and configure infrastructure that helps data scientists and analysts build low-latency, scalable, and resilient machine learning and optimization workloads into an enterprise software product.
- Combine expertise in mathematics, statistics, computer science, and domain knowledge to create advanced AI/ML models.
- Collaborate closely with the AI Technical Manager, GCC petro-technical professionals, and data engineers to integrate and scale models into the business framework.
- Identify data, appropriate technology, and architectural design patterns to solve business challenges using Chevron-approved standard analytical tools and AI design patterns and architectures.
- Partner with data scientists and Chevron IT foundational services to implement complex algorithms and models into enterprise-scale machine learning pipelines.
- Run machine learning experiments and fine-tune algorithms to ensure optimal performance.
- Consistently deliver complex, innovative, and complete solutions, driving them through design, planning, development, and deployment to simplify business processes and workflows and drive business value.
- Work collaboratively with a wide variety of teams, including data scientists, data engineers, and solution architects from various organizations within business units and IT.

Required Qualifications
- Minimum 5 years' experience in object-oriented design and/or functional programming in Python; 5-10 years of overall experience.
- Mature software engineering skills, such as source control versioning, requirement specification, architecture and design review, testing methodologies, and CI/CD.
- A disciplined, methodical, minimalist approach to designing and constructing layered software components that can be embedded within larger frameworks or applications.
- Experience implementing machine learning frameworks and libraries such as MLflow.
- Experience with containers and container management (Docker, Kubernetes).
- Experience developing cloud-first solutions using Microsoft Azure services, including building machine learning pipelines in Azure Machine Learning and/or Fabric, with hands-on experience deploying machine learning pipelines via the Azure Machine Learning SDK.
- Working knowledge of mathematics (primarily linear algebra, probability, and statistics) and algorithms.
- Proficiency orchestrating large-scale ML/DL jobs, leveraging big data tooling and modern container orchestration infrastructure, to tackle distributed training and massively parallel model executions on cloud infrastructure.
- Experience designing custom APIs for machine learning model training and inference, and designing, implementing, and delivering frameworks for MLOps.
- Experience with model lifecycle management and automation to support retraining and model monitoring.
- Experience implementing and incorporating ML models on unstructured data using cognitive services and/or computer vision as part of AI solutions and workflows.
- History of working with large-scale model optimization and hyperparameter tuning applied to ML/DL models.
- Knowledge of enterprise SaaS complexities, including security/access control, scalability, high availability, concurrency, online diagnosis, deployment, upgrade/migration, internationalization, and production support.
- Knowledge of data engineering and transformation tools and patterns such as Databricks, Spark, and Azure Data Factory.
- Ability to engage technical experts at all organizational levels and assess opportunities to apply machine learning and analytics to improve business workflows and deliver information and insight that support business decisions.
- Ability to communicate clearly, concisely, and understandably, both orally and in writing.

Chevron ENGINE supports global operations and business requirements across the world. Accordingly, employee work hours are aligned to support business requirements. The standard work week is Monday to Friday, with working hours of 8:00 am to 5:00 pm or 1:30 pm to 10:30 pm. Chevron participates in E-Verify in certain locations as required by law.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Greater Hyderabad Area

On-site

Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products provides solutions to our customers that help them better manage their business, boost their productivity and efficiency, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. We are headquartered in Denver, Colorado, with offices across the U.S., Canada, and India.

Job Description
We are seeking a highly skilled Sr. Tech Lead - Full Stack (Java & React) for Certificate product development. This role requires in-depth knowledge of Java for back-end development and React for front-end development, with the ability to work across the full stack. As a Sr. Tech Lead, you will drive technical direction, oversee development, mentor a team of developers, and ensure that solutions are scalable, performant, and meet high-quality standards.

Key Responsibilities
Essential job functions include but are not limited to the following:
- Technical leadership: Lead and oversee the design, development, and implementation of full-stack solutions for the Certificate project using Java and React.
- Architecture & design: Define and guide the architectural patterns, including microservices architecture for the back end and component-based architecture for the front end.
- Full-stack development: Participate in hands-on coding across the front-end (React) and back-end (Java/Spring Boot) layers, ensuring code quality, scalability, and maintainability.
- Collaboration: Work closely with cross-functional teams, including product managers, UI/UX designers, QA engineers, and DevOps teams, to deliver high-quality software.
- Code reviews & best practices: Conduct code reviews, enforce coding standards, and ensure adherence to best practices, design patterns, and performance optimizations.
- Mentorship: Mentor and guide junior and mid-level developers, providing technical oversight and problem-solving assistance.
- DevOps & deployment: Collaborate with DevOps teams to define deployment pipelines, CI/CD practices, and automation to improve the efficiency of development and delivery.
- Innovation & continuous improvement: Stay current with the latest full-stack technologies and frameworks and integrate new practices into the project.

Required Technical Skills

Back-End Development (Java)
- Java 8+: Expertise in Java 8+ and its modern features (e.g., streams, lambda expressions, Optional, functional interfaces); hands-on experience building enterprise-grade applications in Java.
- Spring Framework: Proficiency in Spring Boot for building microservices and RESTful APIs; experience with Spring Core, Spring MVC, Spring Data, and Spring Security; understanding of dependency injection and aspect-oriented programming (AOP).
- Microservices architecture: Expertise in designing and developing microservices with Spring Boot, adhering to best practices like service discovery, load balancing, and fault tolerance; familiarity with tools like Spring Cloud, Netflix OSS, and Kubernetes for microservices orchestration.
- Kafka (event streaming & messaging): Experience with Apache Kafka for building distributed, event-driven systems; proficiency in using Kafka for real-time data streaming, event sourcing, and message-based communication between microservices.
- Database management: Strong knowledge of SQL/NoSQL databases like MySQL, PostgreSQL, MongoDB, or Cassandra; experience with JPA/Hibernate for ORM and an understanding of database optimization techniques, query performance tuning, and efficient data model design.
- APIs & integrations: Proficiency in designing RESTful APIs and working with API specification and documentation tools like Swagger/OpenAPI; experience with OAuth 2.0 and JWT for authentication and authorization.

Front-End Development (React/Angular)
- React/Angular: Strong knowledge of React, including newer features like concurrent rendering and automatic batching; expertise in building and optimizing applications with React functional components, using React Hooks for state and side-effect management (e.g., useState, useEffect, useCallback, useMemo).
- Context API: Experience setting up Context Providers and using the React Context API to manage and pass data across component trees without prop drilling.
- React Query: Proficiency in using React Query for efficient data fetching, caching, and synchronizing server state in React applications.
- Formik & Yup: Strong experience building and managing forms in React using Formik, including handling complex form state and validation; expertise in using Yup for schema-based, comprehensive client-side form validation.
- TypeScript: Strong hands-on experience with TypeScript for building type-safe React applications; deep understanding of TypeScript features like interfaces, generics, type inference, and modules for code reliability and scalability.
- HTML/CSS & UI frameworks: Strong understanding of semantic HTML and modern CSS for responsive web design; experience with Material UI and Tailwind CSS for building modern, user-friendly, and scalable UI components.

Full-Stack Development
- Node.js & Express (optional but preferred): Familiarity with Node.js and Express for building lightweight back-end services or integrating with Java-based microservices.
- Version control & CI/CD: Proficiency with Git and branching strategies such as GitFlow; experience setting up CI/CD pipelines using tools like Jenkins or GitLab CI, ensuring automated testing and seamless deployments.
- Testing: Familiarity with Playwright or Selenium for end-to-end testing.

Performance & Security
- Performance optimization: Experience optimizing application performance, including JVM tuning, caching strategies, and improving database query performance.
- Security best practices: Strong understanding of security best practices for both front end and back end, including secure coding, protecting APIs, and using libraries like Spring Security for authentication and authorization.

Soft Skills
- Leadership & communication: Strong leadership and decision-making abilities, with a proven track record of leading teams to deliver complex, high-quality software solutions; excellent communication skills for working with global teams and non-technical stakeholders.
- Problem-solving & innovation: Ability to approach problems creatively and analytically, ensuring continuous improvement across the software development lifecycle.
- Collaboration tools: Proficiency with tools like JIRA and Teams to manage Agile workflows and communicate effectively across global teams.
- Agile methodologies: Experience working in Agile/Scrum environments, actively participating in sprint planning, retrospectives, and iterative development cycles.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- An intense passion for building software and mentoring teams.
- Very strong in both design and implementation at an enterprise level.
- 10-15 years of professional experience with the above-mentioned tech stack.
- In tune with high-performance and high-availability design and programming.
- Experience with security best practices for software and data platforms.
- Designs 'perfect-fit' solutions with engineering teams and business units to drive business growth and customer delight.
- Enjoys solving problems through the entire application stack.
- Interested in and capable of learning other programming languages as needed.

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Scope: Lead the research, design, and development of advanced AI and ML models powering a cutting-edge AI-driven no-code development platform and a scalable AI inference and training orchestration system. Responsible for building scalable ML pipelines, optimizing models for production, mentoring team members, and translating research innovations into impactful product features aligned with business goals.
Job Responsibilities:
• Design and implement state-of-the-art machine learning and deep learning models for NLP, computer vision, and generative AI relevant to no-code AI coding and AI orchestration platforms.
• Develop, optimize, and fine-tune large-scale models, including transformer-based architectures and generative models.
• Architect and manage end-to-end machine learning pipelines: data processing, training, evaluation, deployment, and continuous monitoring.
• Collaborate closely with software engineering teams to productionize models, ensuring reliability, scalability, and performance.
• Research and integrate cutting-edge AI techniques and algorithms to maintain product competitiveness.
• Lead AI research efforts contributing to intellectual property generation, patents, and academic publications.
• Provide technical leadership and mentorship to junior AI/ML team members.
• Collaborate cross-functionally with product managers, UX designers, and engineers to deliver AI-powered product features.
• Maintain up-to-date knowledge of AI research trends and technologies, assessing their applicability.
• Ensure compliance with data privacy and security standards in AI model development.
Good-to-have skills:
• Experience with AI-driven no-code platforms or automated code generation.
• Familiarity with AI workflow orchestration frameworks like LangChain, CrewAI, or similar.
• Knowledge of probabilistic modeling and uncertainty quantification.
• Hands-on experience with MLOps tools and practices, including CI/CD, model versioning, and monitoring.
• Familiarity with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
• Contributions to open-source AI projects or patent filings.
• Understanding of AI ethics and data privacy (GDPR, SOC 2) compliance.
• Strong academic research background with publications in top-tier AI/ML conferences.
Qualification and Experience:
• PhD in Computer Science, Electrical Engineering, Statistics, Mathematics, or related fields with a specialization in Artificial Intelligence, Machine Learning, or Deep Learning.
• Strong research publication record in reputed AI/ML conferences (NeurIPS, ICML, ICLR, CVPR, ACL).
• Demonstrated experience in developing and deploying deep learning models including transformers, CNNs, RNNs, GNNs, and generative AI models.
• Proven skills in NLP and/or computer vision.
• Hands-on experience with Python and ML frameworks such as PyTorch, TensorFlow, and JAX.
• Experience building scalable ML pipelines and applying MLOps best practices.
• Knowledge of distributed training, GPU acceleration, and cloud infrastructure is highly desirable.
• Excellent problem-solving, analytical, and communication skills.
• Experience mentoring or leading junior AI researchers/engineers is a plus.
• Prior exposure to AI-driven no-code platforms, AI orchestration frameworks, or automated code generation technologies is beneficial.
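The transformer-based architectures named in the qualifications above rest on scaled dot-product attention. As an illustrative, dependency-free sketch (toy two-dimensional vectors, not any production model), single-query attention looks like:

```python
import math

def softmax(xs):
    m = max(xs)                                 # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)                   # attention distribution over keys
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return out, weights

# Query aligned with the first key attends mostly to the first value.
out, w = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0, 0.0], [0.0, 10.0]])
```

In a real transformer the same computation runs batched over matrices of learned projections; the sketch keeps only the core weighting-and-mixing step.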

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

We run a recruitment agency and help our clients hire top 1% talent. We are looking for a Statistician to write, review, and validate prompt-based questions designed to train AI.
Overview
💲 Pay: $50–$70 per hour
🕒 Flexible workload: 10–20 hours per week, with potential to increase to 40 hours
🌍 Fully remote and asynchronous—work on your own schedule
📅 Minimum duration: 1–2 months, with potential for extension
Your expertise in statistical modeling, inference, probability, experimental design, and data interpretation will ensure each prompt and response is analytically rigorous and educationally sound. We welcome statisticians from academic, governmental, healthcare, or industry backgrounds to help create training materials that reflect real-world data challenges and methodology. You'll have the opportunity to apply your statistical judgment to shape how advanced AI models interpret and generate quantitative insights.
You are a good fit if you:
Have 3+ years of industry or academic experience applying statistics in real-world scenarios.
Have a bachelor's degree in statistics, applied mathematics, or a related field.
Have a strong background in one or more of the following areas: descriptive and inferential statistics; hypothesis testing and confidence intervals; linear and generalized linear models; survey sampling and experimental design; or multivariate analysis, Bayesian inference, and survival analysis.
Demonstrate excellent verbal and written communication skills and strong analytical reasoning.
Have strong attention to detail.
Preferred Qualifications
Proficiency in R, Python, or SAS is highly encouraged.
Experience with real-world datasets is a plus.
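As a flavor of the confidence-interval material such prompts cover, here is a minimal sketch using only the Python standard library (the sample data is made up, and the normal approximation assumes a reasonably large sample):

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Normal-approximation confidence interval for the sample mean.
    z=1.96 gives roughly 95% coverage for large-enough samples."""
    n = len(sample)
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)  # standard error of the mean
    return m - z * se, m + z * se

# Hypothetical measurements; the interval brackets the sample mean of 12.15.
lo, hi = mean_ci([12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.2, 12.4])
```

For small samples one would swap the z critical value for a t critical value; the structure of the calculation is unchanged.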

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Cochin

Remote

https://pmspace.ai/
Company Profile: At pmspace.ai, we’re building next-generation AI tools for project management intelligence. Our platform leverages graph databases, NLP, and large language models (LLMs) to transform complex project data into actionable insights. Join us to pioneer cutting-edge solutions in a fast-paced, collaborative environment.
Role Overview
We seek a Python Developer with expertise in graph databases (Neo4j), RAG pipelines, and vLLM optimization. You’ll design scalable AI systems, enhance retrieval-augmented workflows, and deploy high-performance language models to power our project analytics engine.
Key Responsibilities
Architect and optimize graph database systems (Neo4j) to model project knowledge networks and relationships.
Build end-to-end RAG (Retrieval-Augmented Generation) pipelines for context-aware AI responses.
Implement and fine-tune vLLM for efficient inference of large language models (LLMs).
Develop Python-based microservices for data ingestion, processing, and API integrations (FastAPI, Flask).
Collaborate with ML engineers to deploy transformer models (e.g., BERT, GPT variants) and vector databases.
Monitor system performance, conduct A/B tests, and ensure low-latency responses in production.
Required Skills
Proficiency in Python and AI/ML libraries (PyTorch, TensorFlow, Hugging Face Transformers).
Hands-on experience with graph databases, especially Neo4j (Cypher queries, graph algorithms).
Demonstrated work on RAG pipelines (retrieval, reranking, generation) using frameworks like LangChain or LlamaIndex.
Experience with vLLM or similar LLM optimization tools (quantization, distributed inference).
Knowledge of vector databases (e.g., FAISS, Pinecone) and embedding techniques.
Familiarity with cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
Job Type: Full-time
Pay: ₹5,000.00 - ₹7,000.00 per month
Schedule: Day shift
Work Location: Remote
Expected Start Date: 01/08/2025
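The retrieval step of a RAG pipeline reduces to nearest-neighbor search over embeddings. A dependency-free sketch, with toy three-dimensional vectors standing in for model-generated embeddings that would normally live in FAISS or Pinecone:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector store": (chunk, embedding) pairs. The chunks and vectors
# here are invented purely for illustration.
STORE = [
    ("Task A blocks Task B", [0.9, 0.1, 0.0]),
    ("Budget approved in May", [0.1, 0.9, 0.2]),
    ("Release slipped two weeks", [0.8, 0.2, 0.1]),
]

def retrieve(query_emb, k=2):
    """Return the top-k chunks most similar to the query embedding."""
    ranked = sorted(STORE, key=lambda item: cosine(query_emb, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# A schedule-related query embedding retrieves the schedule-related chunks.
context = retrieve([0.85, 0.15, 0.05])
```

In a full pipeline the retrieved chunks would then be reranked and stuffed into the LLM prompt for generation; the sketch covers only the similarity-search core.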

Posted 1 month ago

Apply

4.0 years

40 - 50 Lacs

Gurugram, Haryana, India

Remote

Experience: 4.00+ years
Salary: INR 4000000-5000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)
What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch
Crop.Photo is looking for:
Our engineers don’t just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges, this role is for you. You’ll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale.
What You’ll Do
Build and own AI-backed features end to end, from ideation to production, including layout logic, smart cropping, visual enhancement, out-painting, and GenAI workflows for background fills.
Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines.
Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch.
Handle pixel-level transformations, from custom masks and color-space conversions to geometric warps and contour ops, with speed and precision.
Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput.
Frame problems when specs are vague: you’ll help define what “good” looks like, and then build it.
Collaborate with product, UX, and other engineers without relying on formal handoffs: you own your domain.
What You’ll Need
2-3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN, including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions).
Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques.
1-2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and composing multi-modal pipelines.
Solid grasp of production model integration: model loading, GPU/CPU optimization, async inference, caching, and batch processing.
Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement.
Ability to debug and diagnose visual output errors, e.g., weird segmentation artifacts, off-center crops, broken masks.
Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda).
A customer-centric approach: you think about how your work affects end users and product experience, not just model performance.
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they’re truly fixed.
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.
Who You Are
You’ve built systems, not just prototypes.
You care about both ML results and the system’s behavior in production.
You’re comfortable taking a rough business goal and shaping the technical path to get there.
You’re energized by product-focused AI work: things that users feel and rely on.
You’ve worked in, or want to work in, a startup-grade environment: messy, fast, and impactful.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

8.0 years

0 - 0 Lacs

Bengaluru

On-site

Senior Data Scientist
About the Role:
We are seeking a highly skilled Senior Data Scientist with expertise in Python, Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI), and Azure Cloud Services. The ideal candidate will be responsible for designing, developing, and deploying advanced AI/ML models to drive data-driven decision-making. This role requires strong analytical skills, proficiency in AI/ML technologies, and experience with cloud-based solutions.
Key Responsibilities:
· Design and develop ML, NLP, and GenAI models to solve complex business problems.
· Build, train, and optimize AI models using Python and relevant ML frameworks.
· Implement Azure AI/ML services for scalable deployment of models.
· Develop and integrate APIs for real-time model inference and decision-making.
· Work with large-scale data to extract insights and drive strategic initiatives.
· Collaborate with cross-functional teams, including Data Engineers, Software Engineers, and Product Teams, to integrate AI/ML solutions into applications.
· Implement CI/CD pipelines to automate model training, deployment, and monitoring.
· Ensure adherence to software engineering best practices and Agile methodologies in AI/ML projects.
· Stay updated on cutting-edge AI/ML advancements and continuously enhance models and algorithms.
· Conduct research on emerging AI/ML trends and contribute to the development of innovative solutions.
· Provide technical mentorship and guidance to junior data scientists.
· Optimize model performance and scalability in a production environment.
Required Skills & Qualifications:
· Proficiency in Python and ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
· Hands-on experience in NLP techniques, including transformers, embeddings, and text processing.
· Expertise in Generative AI models (GPT, BERT, LLMs, etc.).
· Strong knowledge of Azure AI/ML services, including Azure Machine Learning, Azure Cognitive Services, and Azure Databricks.
· Experience in developing APIs for model deployment and integration.
· Familiarity with CI/CD pipelines for AI/ML models.
· Strong understanding of software engineering principles and best practices.
· Experience working in an Agile development environment.
· Excellent problem-solving skills and ability to work in a fast-paced, dynamic environment.
· Strong background in statistical analysis, data mining, and data visualization.
Preferred Qualifications:
· Experience in MLOps and automation of model lifecycle management.
· Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
· Exposure to big data processing frameworks (Spark, Hadoop).
· Strong ability to communicate complex ideas to technical and non-technical stakeholders.
· Experience with Graph Neural Networks (GNNs) and recommendation systems.
· Familiarity with AutoML frameworks and hyperparameter tuning strategies.
Job Types: Full-time, Part-time, Permanent, Contractual / Temporary
Pay: ₹400.00 - ₹450.00 per hour
Schedule: Day shift
Experience:
Senior Data Scientist: 8 years (Required)
ML, NLP, and GenAI models: 8 years (Required)
Python: 8 years (Required)
Azure AI/ML services: 8 years (Required)
Work Location: In person

Posted 1 month ago

Apply

3.0 - 5.0 years

2 - 4 Lacs

Bengaluru

Remote

Way of working - Remote: Employees will have the freedom to work remotely all through the year. These employees, who form a large majority, will come together in their base location for a week, once every quarter.
Job Profile: Data Scientist II
Location: Bangalore | Karnataka
Years of Experience: 3-5
ABOUT THE TEAM & ROLE:
Data Science at Swiggy
Data science and applied ML are ingrained deeply in decision-making and product development at Swiggy. Our data scientists work closely with cross-functional teams to ship end-to-end data products, from formulating the business problem in mathematical/ML terms to iterating on ML/DL methods to taking them to production. We own or co-own several initiatives with a direct line of sight to impact on customer experience as well as business metrics. We also encourage open sharing of ideas and publishing in internal and external avenues.
What will you get to do here?
You will leverage your strong ML/DL/statistics background to build the next generation of ML-based solutions to improve the quality of ads recommendation, and leverage various optimization techniques to improve campaign performance.
You will mine and extract relevant information from Swiggy's massive historical data to help ideate and identify solutions to business and CX problems.
You will work closely with engineers/PMs/analysts on detailed requirements, technical designs, and implementation of end-to-end inference solutions at Swiggy scale.
You will stay abreast of the latest ML research in ads bidding algorithms and recommendation systems, and help adapt it to Swiggy's problem statements.
You will publish and talk about your work in internal and external forums to both technical and lay audiences.
Opportunity to work on challenging and impactful projects in the logistics domain.
Collaborative and supportive work environment that fosters learning and growth.
Conduct data analysis and modeling to identify opportunities for optimization and automation.
What qualities are we looking for?
Bachelor's or master's degree in a quantitative field with 3-5 years of industry/research lab experience
Experience in generative AI, applied mathematics, machine learning, and statistics
Required: Excellent problem-solving skills; ability to deconstruct and formulate solutions from first principles
Required: Depth and hands-on experience in applying ML/DL and statistical techniques to business problems
Preferred: Experience working with big data and shipping ML/DL models to production
Required: Strong proficiency in Python, SQL, Spark, and TensorFlow
Required: Strong spoken and written communication skills
Big plus: Experience in the space of ecommerce and logistics
Experience in agentic AI, LLMs, and NLP; previous experience in deep learning, operations research, and working in startup or product-based consumer/internet companies is preferred
Excellent communication and collaboration skills, with the ability to work effectively in a team environment
Visit our tech blogs to learn more about some of the challenges we deal with:
https://bytes.swiggy.com/the-swiggy-delivery-challenge-part-one-6a2abb4f82f6
https://bytes.swiggy.com/how-ai-at-swiggy-is-transforming-convenience-eae0a32055ae
https://bytes.swiggy.com/decoding-food-intelligence-at-swiggy-5011e21dbc86
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, disability status, or any other characteristic protected by the law.
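For illustration only (not Swiggy's actual bidding logic), the explore/exploit trade-off behind ads campaign optimization can be sketched as an epsilon-greedy policy over hypothetical click-through-rate estimates:

```python
import random

def epsilon_greedy(estimates, epsilon, rng):
    """Pick an ad arm: explore uniformly with probability epsilon,
    otherwise exploit the arm with the best current estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda i: estimates[i])

rng = random.Random(0)                     # fixed seed for reproducibility
ctr_estimates = [0.02, 0.05, 0.03]         # hypothetical per-ad CTR estimates
picks = [epsilon_greedy(ctr_estimates, 0.1, rng) for _ in range(1000)]
```

With epsilon = 0.1, roughly 90% of selections exploit the best-estimated arm (index 1 here), while the remainder keep exploring so weaker estimates can be revised as new click data arrives.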

Posted 1 month ago

Apply

2.0 years

3 - 5 Lacs

Bengaluru

On-site

Company: AHIPL Agilon Health India Private Limited
Job Posting Location: India_Bangalore
Job Title: Prospective Chart Reviewer-6
Job Description:
Essential Job Functions:
Performs pre-visit medical record reviews to identify chronic conditions reported in prior years
Identifies diagnoses that lack supporting documentation
Prioritizes clinical alerts and presents those that are strongly suggestive of an underlying condition
Presents information to providers in a concise, complete manner
All other duties as assigned
Other Job Functions:
Understand, adhere to, and implement the Company’s policies and procedures.
Provide excellent customer service, including consistently displaying awareness and sensitivity to the needs of internal and/or external clients, and proactively ensuring that these needs are met or exceeded.
Take personal responsibility for personal growth, including acquiring new skills, knowledge, and information.
Engage in excellent communication, which includes listening attentively and speaking professionally.
Set and complete challenging goals.
Demonstrate attention to detail and accuracy in work product by meeting productivity standards and maintaining a company standard of accuracy.
Qualifications:
Minimum Experience
2+ years of clinical experience required
Advanced level of clinical knowledge associated with chronic disease states required
Relevant chart review experience required
Education/Licensure:
Medical Doctor or Nurse required
Coding certification through AHIMA or AAPC preferred
Skills and Abilities:
Language Skills: Strong written and verbal communication skills to work with multiple internal and external clients in a fast-paced environment
Mathematical Skills: Ability to work with mathematical concepts such as probability and statistical inference. Ability to apply concepts such as fractions, percentages, ratios, and proportions to practical situations.
Reasoning Ability: Ability to apply principles of logical or scientific thinking to a wide range of intellectual and practical problems.
Computer Skills: Ability to create and maintain documents using Microsoft Office (Word, Excel, Outlook, PowerPoint)
Location: India_Bangalore

Posted 1 month ago

Apply

4.0 years

0 Lacs

Madurai

On-site

Job Location: Madurai
Job Experience: 4-15 Years
Model of Work: Work From Office
Technologies: Artificial Intelligence, Machine Learning
Functional Area: Software Development
Job Summary:
Job Title: ML Engineer - TechMango
Location: TechMango, Madurai
Experience: 4+ Years
Employment Type: Full-Time
Role Overview
We are seeking an experienced Machine Learning Engineer with strong proficiency in Python, time series forecasting, MLOps, and deployment using AWS services. This role involves building scalable machine learning pipelines, optimizing models, and deploying them in production environments.
Key Responsibilities and Core Technical Skills:
Languages & Databases
Programming language: Python
Databases: SQL
Core Libraries & Tools
Time series & forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
Machine learning models: State-of-the-art ML models, including boosting and ensemble methods
Model explainability: SHAP, LIME
Deep Learning & Data Processing
Frameworks: PyTorch, PyTorch Forecasting
Libraries: Pandas, NumPy, PySpark, Polars (optional)
Hyperparameter tuning tools: Optuna, Amazon SageMaker Automatic Model Tuning
Deployment & MLOps
Model deployment: Batch and real-time with API endpoints
Experiment tracking: MLflow
Model serving: TorchServe, SageMaker Endpoints / Batch
Containerization & Pipelines
Containerization: Docker
Orchestration: AWS Step Functions, SageMaker Pipelines
AWS Cloud Stack
SageMaker (training, inference, tuning)
S3 (data storage)
CloudWatch (monitoring)
Lambda (trigger-based inference)
ECR / ECS / Fargate (container hosting)
Candidate Requirements
Strong problem-solving and analytical mindset
Hands-on experience with the end-to-end ML project lifecycle
Familiarity with MLOps workflows in production environments
Excellent communication and documentation skills
Comfortable working in agile, cross-functional teams
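As a baseline flavor of the forecasting stack listed above, the seasonal-naive forecast (the standard benchmark that libraries like statsmodels and GluonTS compare against) fits in a few lines of plain Python; the demand numbers below are invented:

```python
def seasonal_naive_forecast(history, season_length, horizon):
    """Forecast by repeating the last full season of observations.
    A deliberately simple baseline that fitted models must beat."""
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

# Two hypothetical "weeks" of length-4 demand; the forecast replays the last week.
demand = [10, 12, 14, 11, 10, 13, 15, 11]
forecast = seasonal_naive_forecast(demand, season_length=4, horizon=6)
# forecast == [10, 13, 15, 11, 10, 13]
```

Evaluating an ARIMA or Prophet model against this baseline (e.g., comparing MAPE) is a common first sanity check before investing in heavier models.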

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

**Who you are**

You’ve stepped beyond traditional QA—you test AI agents, not just UI clicks. You build automated tests that check for **hallucinations, bias, adversarial inputs**, prompt chain integrity, model outputs, and multi-agent orchestration failures. You script Python tests and use Postman/Selenium/Playwright for UI/API, and JMeter or k6 for load. You understand vector databases and can test embedding correctness and data flows. You can ask, “What happens when two agents clash?” or “If one agent hijacks context, does the system fail?” and then write tests for these edge cases. You’re cloud-savvy—Azure or AWS—and integrate tests into CI/CD. You debug failures in agent-manager systems and help triage model logic vs. infra issues. You take ownership of AI test quality end to end.

---

**What you’ll actually do**

You’ll design **component & end-to-end tests** for multi-agent GenAI workflows (e.g., planner + execution + reporting agents). You’ll script pytest + Postman + Playwright suites that test API functionality, failover logic, agent coordination, and prompt chaining. You’ll simulate coordination failures, misalignment, and hallucinations in agent dialogues. You’ll run load tests on LLM endpoints and track latency and cost. You’ll validate that vector DB pipelines (Milvus/FAISS/Pinecone) return accurate embeddings and retrieval results. You’ll build CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins) that gate merges based on model quality thresholds. You’ll implement drift, bias, and hallucination metrics, and create dashboards for QA monitoring. You’ll run occasional human-in-the-loop sanity checks for critical agent behavior. You’ll write guides so others understand how to test GenAI pipelines.
---

**Skills and knowledge**

• Python automation—pytest/unittest for component & agent testing
• Postman/Newman, Selenium/Playwright/Cypress for UI/API test flows
• Load/performance tools—JMeter, k6 for inference endpoints
• SQL/NoSQL and data validation for vector DB pipelines
• Vector DB testing—Milvus, FAISS, Pinecone embeddings/retrieval accuracy
• GenAI evaluation—hallucinations, bias/fairness, embedding similarity (BLEU, ROUGE), adversarial/prompt injection testing
• Multi-agent testing—component/unit tests per agent, inter-agent communications, coordination failure tests, message passing or blackboard patterns, emergent behavior monitoring
• CI/CD integration—Azure DevOps/GitHub Actions/Jenkins pipelines, gating on quality metrics
• Cloud awareness—testing in Azure/AWS/GCP, GenAI endpoint orchestration and failure-mode testing
• Monitoring & observability—drift, latency, hallucination rate dashboards
• Soft traits—detail-oriented, QA mindset, self-driven, cross-functional communicator, ethical awareness around AI failures
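A minimal sketch of the kind of hallucination gate described above, using a crude lexical groundedness check. A real suite would combine this with embedding similarity and human review; the example strings and the function name are invented for illustration:

```python
import re

def ungrounded_terms(answer, context, min_len=4):
    """Return content words in the answer that never appear in the
    retrieved context -- a cheap lexical proxy for hallucination that
    can gate a CI pipeline before costlier semantic checks run."""
    ctx_words = set(re.findall(r"[a-z]+", context.lower()))
    ans_words = re.findall(r"[a-z]+", answer.lower())
    return [w for w in ans_words if len(w) >= min_len and w not in ctx_words]

# pytest-style assertions over a toy retrieved context.
context = "The planner agent assigns tasks; the executor agent runs them."
assert ungrounded_terms("The planner assigns tasks", context) == []
assert "refunds" in ungrounded_terms("The planner issues refunds", context)
```

Wired into CI, a merge could be blocked whenever the ungrounded-term rate across a regression set of prompts exceeds an agreed threshold.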

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Ready to be pushed beyond what you think you’re capable of?

At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system.

To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.

While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.

As a Staff Machine Learning Platform Engineer at Coinbase, you will play a pivotal role in building an open financial system. The team builds the foundational components for training and serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and analyze blockchains. We are a lean team, so you will get the opportunity to apply your software engineering skills across all aspects of building ML at scale, including stream processing, distributed training, and highly available online services.

What you’ll be doing (i.e., job duties):
Form a deep understanding of our Machine Learning Engineers’ needs and our current capabilities and gaps.
Mentor our talented junior engineers on how to build high-quality software, and take their skills to the next level.
Continually raise our engineering standards to maintain high availability and low latency for our ML inference infrastructure that runs both predictive ML models and LLMs.
Optimize low-latency streaming pipelines to give our ML models the freshest and highest-quality data.
Evangelize state-of-the-art practices for building high-performance distributed training jobs that process large volumes of data.
Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance.
What we look for in you (i.e., job requirements):
5+ years of industry experience as a Software Engineer.
You have a strong understanding of distributed systems.
You lead by example through high-quality code and excellent communication skills.
You have a great sense of design, and can bring clarity to complex technical requirements.
You treat other engineers as customers, and have an obsessive focus on delivering them a seamless experience.
You have a mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level.
Nice to Have:
Experience building ML models and working with ML systems.
Experience working on a platform team, and building developer tooling.
Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB).
Job #: GPBE06IN
*Answers to crypto-related questions may be used to evaluate your onchain experience
#LI-Remote
Commitment to Equal Opportunity
Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here . Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here). Global Data Privacy Notice for Job Candidates and Applicants Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Electronix.AI, we’re building tools for the engineers of tomorrow: tools that understand, automate, and accelerate the hardware development cycle.

Website: Electronix AI | Accelerate Hardware Design Decisions

We are now looking for an AI Full-Stack Engineering Intern to join our team! This is your opportunity to be part of a fast-paced startup building AI-powered tools for the hardware world. You'll work with people who value autonomy, rapid prototyping, and deep problem-solving, and you'll help shape systems used in the field, not just in the cloud.

⚡ What You’ll Be Doing (Responsibilities):
- Develop and optimize backend services using Python (FastAPI, HuggingFace Transformers, OpenCV).
- Design and deploy APIs for AI-powered automation, video analytics, and search applications.
- Develop and deploy robust on-premises solutions that integrate seamlessly with existing infrastructure.
- Implement and manage DevOps pipelines, ensuring continuous integration and deployment workflows across cloud and on-prem environments.
- Ensure scalability, security, and reliability across infrastructure and applications.
- Work with databases and caching layers such as MySQL, Elasticsearch, Redis, and vector DBs (Qdrant, ChromaDB, etc.).
- Develop frontend components using TypeScript, React, and GraphQL to create intuitive user experiences.
- Troubleshoot and resolve complex system performance issues, particularly for on-prem deployments and high-performance computing environments.
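At its core, the vector-DB work in this role (Qdrant, ChromaDB) means ranking stored embeddings by similarity to a query vector. A minimal pure-Python sketch of brute-force cosine-similarity search, purely illustrative (the document names and vectors are made up, and this is not any particular database's API):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    # index: mapping of doc id -> embedding. This is a brute-force scan,
    # which real vector DBs accelerate with ANN indexes (e.g. HNSW).
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in index.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]

index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.0, 0.0], index, k=2))  # doc_a first, then doc_b
```

A production system would swap the dict for a persistent index, but the ranking semantics are the same.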
(Nice to Have) Experience with Kubernetes for container orchestration

⚡ What we need to see:
- Backend: Python (FastAPI, HuggingFace Transformers, OpenCV), MySQL, RabbitMQ, GraphQL
- Front-End: React, TypeScript
- AI & Search: Elasticsearch, Redis, Vector DBs (Qdrant, ChromaDB)
- Cloud & DevOps: Docker, AWS, Azure, CI/CD, On-Prem Deployments
- Bonus: Kubernetes, GPU-based workloads (NVIDIA GPUs preferred)

⚡ Internship Details:
- Duration: 4-6 months
- Location: Onsite/Hybrid (min 3x a week) - Bengaluru (Jayanagar/Richmond Town)
- Stipend: Monetary compensation included.
- Additional Perks: Early access to product decisions, real-world deployment experience, high ownership.

⚡ Good to Have:
These are extras that help us spot naturally curious, hands-on builders. None are hard requirements, especially the hardware items, which are purely optional bonuses.

Hands-On AI Exploration:
- Personal or academic projects showing end-to-end use of modern AI stacks, e.g. fine-tuning or serving models with Hugging Face Transformers, building RAG pipelines, or experimenting with OpenAI, Gemini, or Ollama APIs.
- Evidence you can move beyond tutorials: custom data pipelines, evaluation scripts, or deployment artefacts that solve real problems.

Tooling & Framework Depth:
- Comfort with open-source LLM toolchains such as LangChain, LlamaIndex, FastEmbed, or Haystack.
- Experience running or optimising models on GPU/CPU edge devices; bonus points for on-device inference tricks (quantisation, pruning, TensorRT, ONNX, GGUF).
- Familiarity with vector databases (Qdrant, Chroma, Weaviate) and search frameworks (Elastic, OpenSearch).

(Optional Bonus) Hardware-Aware Mindset:
Purely a plus: great if you have it, absolutely fine if you don't.
- Basic exposure to the semiconductor/EDA landscape, PCB design, or HDLs (Verilog/VHDL).
- Past tinkering that bridges software with sensors, FPGAs, Raspberry Pi, Jetson, or lab equipment.
- Appreciation of constraints unique to on-prem or embedded deployments (latency, memory, thermals) and how they influence architecture.

MLOps & Dev-Infra Curiosity:
- Initial exposure to MLOps concepts: experiment tracking (Weights & Biases, MLflow), model registry, automated evaluation.
- Comfort scripting IaC with Terraform/Ansible or writing GitHub Actions to ship prototypes rapidly.

Show-Don't-Tell Proof:
- An active GitHub with readable READMEs, issues, and commit history reflecting iterative learning.
- Blog posts, lightning talks, or demo videos explaining challenges, trade-offs, and learnings. Clear communication counts.
- Contributions, big or small, to open-source projects in AI, DevOps, or hardware realms.

⚡ Personal Traits We Value:
- Relentless curiosity: you ask why and keep digging.
- Bias for rapid prototyping: build, test, iterate.
- System thinking: see the whole stack and optimize the right layer.
- Collaborative clarity: explain complex ideas simply and receive feedback constructively.
- Bring tangible evidence: code, designs, write-ups, documentation showcasing how you learn and build. We're excited to see what you've been tinkering with!

⚡ Hiring Process:
1. GitHub Review
2. Profile Shortlisting
3. Technical Task/Assignment
4. Deep Dive Interview
5. Join the Team!

Posted 1 month ago

Apply

13.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking an experienced Cloud AIOps Architect to lead the design and implementation of advanced AI-driven operational systems across multi-cloud and hybrid cloud environments. This role demands a blend of technical expertise, innovation, and leadership to develop scalable solutions for complex IT systems, with a focus on automation, machine learning, and operational efficiency.

Responsibilities
- Architect and design the AIOps solution leveraging AWS, Azure, and cloud-agnostic services, ensuring portability and scalability
- Develop an end-to-end automated machine learning (ML) pipeline, from data ingestion and DataOps through model training to inference pipelines, across multi-cloud environments
- Design hybrid architectures leveraging cloud-native services such as Amazon SageMaker, Azure Machine Learning, and Kubernetes for development, model deployment, and orchestration
- Design and implement ChatOps integration, allowing users to interface with the platform through Slack, Microsoft Teams, or similar communication platforms
- Leverage Jupyter Notebooks in AWS SageMaker, Azure Machine Learning Studio, or cloud-agnostic environments to create model prototypes and experiment with datasets
- Lead the design of classification models and other ML models using AWS SageMaker training jobs, Azure ML training jobs, or open-source tools in a Kubernetes container
- Implement automated rule management systems using Python in containers deployed to AWS ECS/EKS, Azure AKS, or Kubernetes for cloud-agnostic solutions
- Architect the integration of ChatOps backend services using Python containers running in AWS ECS/EKS, Azure AKS, or Kubernetes for real-time interactions and updates
- Oversee the continuous deployment and retraining of models based on updated data and feedback loops, ensuring models remain efficient and adaptive
- Design platform-agnostic solutions to ensure the system can be ported across different cloud environments or run in hybrid clouds (on-premises and cloud)

Requirements
- 13+ years of overall experience, including 7+ years in AIOps, Cloud Architecture, or DevOps roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, and EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, and Azure AKS
- Hands-on experience with the design, development, and deployment of contact centre solutions at scale
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services
- Experience with Kafka, Azure DevOps, and AWS DevOps for CI/CD pipelines
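The "automated rule management" responsibility above often amounts to a table of predicates over monitoring events mapped to remediation actions. A minimal pure-Python sketch of that idea; the rule names, event fields, and thresholds are illustrative assumptions, not from the posting:

```python
# Minimal AIOps-style rule engine: each rule pairs a remediation action
# name with a predicate over a metrics/alert event (a plain dict).
RULES = [
    ("restart_pod", lambda e: e.get("state") == "CrashLoopBackOff"),
    ("scale_out",   lambda e: e.get("cpu_pct", 0) > 90),
    ("page_oncall", lambda e: e.get("error_rate", 0) > 0.05),
]

def evaluate(event):
    """Return the actions triggered by a single event, in rule order."""
    return [action for action, predicate in RULES if predicate(event)]

print(evaluate({"cpu_pct": 95, "error_rate": 0.10}))  # ['scale_out', 'page_oncall']
```

In a real deployment the rules would live in a store so they can be updated without redeploying the container, and the actions would dispatch to ChatOps or orchestration APIs rather than return strings.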

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Bengaluru, Karnataka, India

On-site

As an AI Research Apprentice, you'll push the frontiers of generative and multimodal learning that power our autonomous robots. You will prototype diffusion-based vision models, vision-language architectures (VLAs/VLMs), and automated data-annotation pipelines that turn raw site footage into training gold.

Key Responsibilities
- Design and train diffusion-based generative models for realistic, high-resolution synthetic data
- Build compact Vision-Language Models (VLMs) to caption, query, and retrieve job-site scenes for downstream perception tasks
- Develop Vision-Language Alignment (VLA) objectives that link textual work orders with pixel-level segmentation masks
- Architect large-scale auto-annotation pipelines that transform unlabeled images/point clouds into high-quality labels with minimal human input
- Benchmark model performance on accuracy, latency, and memory for deployment on Jetson-class hardware; compress with distillation or LoRA
- Collaborate with perception and robotics teams to integrate research prototypes into live ROS 2 stacks

Qualifications & Skills
- Strong foundation in deep learning, probabilistic modeling, and computer vision (coursework or research projects)
- Hands-on experience with diffusion models (e.g., DDPM, Latent Diffusion) in PyTorch or JAX
- Familiarity with multimodal transformers/VLMs (CLIP, BLIP, Flamingo, LLaVA, etc.) and contrastive pre-training objectives
- Working knowledge of data-centric AI: active learning, self-training, pseudo-labeling, and large-scale annotation pipelines
- Solid coding skills in Python and PyTorch/Lightning, plus git-driven workflows; bonus for C++ and CUDA kernels
- Bonus: experience with on-device inference (TensorRT, ONNX Runtime) and synthetic-data tools (Isaac Sim)

Why Join Us
- Research bleeding-edge generative and multimodal tech and watch it land on real construction robots
- Publish, patent, and open-source: we encourage conference submissions and community engagement
- Help build a company from the ground up; your experiments can become flagship product features

Requirements
- PyTorch or JAX
- C++
- CUDA kernels
- ONNX Runtime
- TensorRT
- Isaac Sim
- Latent Diffusion
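The DDPM experience asked for above centers on the closed-form forward (noising) process, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with eps ~ N(0, 1). A minimal pure-Python sketch under the common linear beta schedule; the schedule constants are standard defaults but illustrative here:

```python
import math
import random

def linear_betas(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule beta_1..beta_T (a common DDPM default)
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def alpha_bar(betas, t):
    # Cumulative product of (1 - beta) up to step t
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def q_sample(x0, t, betas, rng=random):
    # Sample x_t ~ q(x_t | x_0) in closed form, per scalar element
    ab = alpha_bar(betas, t)
    return [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0) for x in x0]

betas = linear_betas()
# alpha_bar decays from ~1 to ~0: early steps keep the clean signal,
# late steps are almost pure Gaussian noise.
print(alpha_bar(betas, 0), alpha_bar(betas, 999))
```

Training a DDPM then amounts to regressing a network's prediction of eps against the noise actually used in `q_sample`; the reverse (denoising) process inverts this chain step by step.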

Posted 1 month ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Overview
We are seeking an innovative and passionate Machine Learning Engineer specializing in Computer Vision to design, develop, and deploy cutting-edge computer vision models and algorithms for real-world robotic applications. In this role, you will work alongside a talented team, leveraging state-of-the-art deep learning techniques to solve complex vision problems, from object detection to image segmentation and 3D perception. Your contributions will have a direct impact on shaping the future of intelligent systems in robotics.

This position is on-site in Pune and requires immediate availability. Please apply only if you are available to join within one month.

Essential Qualifications

Educational Background
- B.Tech., B.S., or B.E. in CSE, EE, ECE, ME, Data Science, AI, or related fields with ≥ 2 years of hands-on experience in Computer Vision and Machine Learning.
- M.S. or M.Tech. in the same disciplines with ≥ 2 years of practical experience in Computer Vision and Python programming.

Technical Expertise
- Strong experience developing deep learning models and algorithms for computer vision tasks (e.g., object detection, image classification, segmentation, keypoint detection).
- Proficiency with Python and ML frameworks such as PyTorch, TensorFlow, or Keras.
- Experience with OpenCV for image processing and computer vision pipelines.
- Solid understanding of convolutional neural networks (CNNs) and other vision-specific architectures (e.g., YOLO, Mask R-CNN, EfficientNet, etc.).
- Ability to build, test, and deploy robust models with PyTest or PyUnit testing frameworks.
- Hands-on experience with data augmentation, transformation, and preprocessing techniques for visual data.
- Familiarity with version control using Git.

Desirable Skills
- Experience with 3D vision, stereo vision, or depth-sensing technologies.
- Familiarity with ROS2 for integrating vision systems into robotic platforms.
- Understanding of sensor fusion techniques, including LiDAR, depth cameras, and IMUs.
- Exposure to MLOps for deploying and maintaining computer vision models in production environments.
- Knowledge of CMake for building and integrating machine learning and vision-based solutions.
- Experience with cloud-based solutions for computer vision, including cloud inference services.
- Ability to work with CUDA, GPU-accelerated libraries, and distributed computing environments.

Location
On-site in Pune, India. Immediate availability required.

Why Join Us?
- Shape the future of robotics and AI, focusing on state-of-the-art computer vision applications.
- Work in a dynamic and collaborative environment with a team of highly motivated engineers.
- Competitive salary and benefits package, along with performance-based incentives.
- Opportunities for growth and professional development, including mentorship from industry experts.

If you're passionate about combining deep learning and computer vision to build intelligent systems that perceive and interact with the world, we want to hear from you!
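A staple of the object-detection work this role describes is intersection-over-union (IoU), the overlap score used to match predicted boxes against ground truth and to suppress duplicates in NMS. A minimal pure-Python sketch, with boxes as (x1, y1, x2, y2) corner coordinates (the example boxes are made up):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero: non-overlapping boxes give a negative width/height
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x2 strip: inter=2, union=6
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # → 0.3333...
```

Detection benchmarks typically count a prediction as correct when IoU with a ground-truth box exceeds a threshold such as 0.5.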

Posted 1 month ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

• Develop strategies and solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and Gen AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding and hands-on experience building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoders, etc.).
- Proficiency in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, StatsModels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience with deployment of models in production.
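The applied-statistics skills listed (distributions, statistical inference and testing) can be illustrated with Welch's two-sample t statistic, which compares group means without assuming equal variances. A from-scratch sketch with made-up samples; in practice one would use scipy.stats.ttest_ind instead:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    n1, n2 = len(sample_a), len(sample_b)
    m1 = sum(sample_a) / n1
    m2 = sum(sample_b) / n2
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)  # unbiased variances
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

control = [10.1, 9.8, 10.3, 10.0, 9.9]
treated = [10.9, 11.2, 10.8, 11.0, 11.1]
print(welch_t(control, treated))  # large negative t: the treated mean is higher
```

The statistic is then compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to get a p-value; with a |t| near 9 here, the difference would be clearly significant.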

Posted 1 month ago

Apply

1.0 years

0 Lacs

Greater Kolkata Area

Remote

Company Overview:
At Growth Loops Technology, we are at the forefront of AI innovation, leveraging cutting-edge machine learning and natural language processing (NLP) techniques to build transformative products. We are looking for an experienced and passionate LLM Engineer to join our team and help us develop and optimize state-of-the-art language models that push the boundaries of what's possible with AI.

Job Description:
As an LLM Engineer, you will be responsible for designing, building, and fine-tuning large-scale language models (LLMs) to solve complex real-world problems. You will work alongside data scientists, machine learning engineers, and product teams to ensure our models are not only accurate but also efficient, scalable, and capable of handling diverse use cases. The ideal candidate will have a strong background in natural language processing, deep learning, and large-scale distributed systems. You should be passionate about advancing the field of AI and have hands-on experience with LLMs, such as GPT, BERT, or similar architectures.

Key Responsibilities:
- Model Development: Design, develop, and fine-tune large language models (LLMs) for various applications, including text generation, translation, summarization, and question answering.
- Research & Innovation: Stay up to date with the latest advancements in NLP and LLM architectures, and propose new approaches to improve model performance and efficiency.
- Optimization: Implement optimization techniques to reduce computational resource requirements and improve model inference speed without sacrificing accuracy or performance.
- Scalability: Develop strategies for training and deploying models at scale, ensuring robustness and reliability in production environments.
- Collaboration: Work closely with cross-functional teams (data science, software engineering, product) to integrate LLM capabilities into our products and solutions.
- Evaluation & Benchmarking: Establish and maintain rigorous testing, validation, and benchmarking procedures to assess model quality, performance, and generalization.
- Model Explainability: Develop methods to improve the interpretability and explainability of language models, ensuring that outputs can be understood and trusted by end users.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Experience: Proven experience (1+ years) working with NLP, deep learning, and LLM architectures (e.g., GPT, BERT, T5, etc.).
- Expertise in programming languages such as Python and experience with machine learning frameworks like TensorFlow, PyTorch, or JAX.
- Solid understanding of transformer models, attention mechanisms, and the architecture of large-scale neural networks.
- Experience with distributed computing, GPU acceleration, and cloud-based machine learning platforms (e.g., AWS, GCP, Azure).
- Familiarity with model deployment tools and practices (e.g., TensorFlow Serving or Hugging Face).

Skills:
- Strong problem-solving skills and the ability to work on complex, ambiguous tasks.
- Solid understanding of model evaluation metrics for NLP tasks.
- Experience with large datasets and parallel computing for training and fine-tuning LLMs.
- Familiarity with optimization techniques such as pruning, quantization, or knowledge distillation.
- Excellent communication skills, both written and verbal, with the ability to explain complex technical concepts to non-technical stakeholders.

Nice to Have:
- Experience with reinforcement learning or few-shot learning in the context of language models.
- Contributions to open-source projects or publications in top-tier AI/ML conferences (e.g., NeurIPS, ACL, ICML).
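Of the optimization techniques named in the skills list, knowledge distillation has a particularly compact core: the student is trained to match the teacher's temperature-softened output distribution. A minimal pure-Python sketch of the distillation loss; the toy logits and temperature are illustrative assumptions:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing "dark knowledge"
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2.

    The T^2 factor (from Hinton et al.'s formulation) keeps gradient
    magnitudes comparable as the temperature changes.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

teacher = [3.0, 1.0, 0.2]
student = [2.5, 1.2, 0.3]
print(distillation_loss(teacher, student))  # positive; 0 only if outputs match
```

In training, this term is usually combined with the ordinary cross-entropy against the hard labels, weighted by a mixing coefficient.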
What We Offer:
- Competitive salary and benefits package
- Flexible work schedule with remote work options
- Opportunity to work on cutting-edge AI technology with a passionate team
- A collaborative and inclusive work culture focused on innovation

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients
Min Experience: 6 years
Location: Bangalore
JobType: full-time

Requirements

About the Role:
We are looking for an experienced Market Mix Modelling (MMM) Specialist with deep expertise in marketing analytics and advanced statistical modelling. This role is critical in supporting strategic business decisions through the development and interpretation of MMM models, with a strong focus on optimizing media spend and maximizing ROI. The ideal candidate will bring strong Python proficiency, domain knowledge in the retail and FMCG/CPG sectors, and the ability to communicate complex insights to senior stakeholders.

Key Responsibilities:

🔹 Market Mix Modelling Development
- Design, build, and maintain robust Market Mix Models to assess the impact of various marketing channels on sales and business KPIs.
- Apply both short-term and long-term modelling techniques to evaluate marketing efficiency and brand-equity effects.
- Implement Adstock and other transformation methods to accurately reflect delayed media effects and diminishing returns.

🔹 Optimization & Strategy
- Perform budget allocation and media optimization simulations using model outputs to identify the most efficient marketing mix.
- Develop scenarios and ROI simulations to guide media planning and resource allocation.
- Partner with media and strategy teams to deliver actionable recommendations for campaign optimization.

🔹 Analytics & Interpretation
- Analyze model outputs and translate them into clear, concise business insights.
- Use Bayesian methods and other advanced statistical techniques for model enhancement and credibility.
- Ensure model diagnostics, validation, and updates are well documented and regularly performed.

🔹 Stakeholder Management
- Present analytical findings to business and marketing stakeholders in an easy-to-understand manner.
- Collaborate with cross-functional teams, including marketing, finance, and sales, to gather inputs and validate assumptions.
- Provide consultative support on marketing performance and strategy development.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Statistics, Economics, Data Science, Marketing Analytics, or a related quantitative field.
- 6+ years of hands-on experience building and implementing Market Mix Models, preferably in the retail or FMCG/CPG industry.
- Proficiency in Python for data processing, modelling, and data visualization.
- In-depth knowledge of statistical modelling techniques, marketing response functions, and causal inference.
- Practical experience with Adstock, decay curves, and the various transformation techniques used in MMM.
- Exposure to Bayesian modelling frameworks and/or MCMC techniques is highly desirable.
- Strong knowledge of campaign planning, digital and offline media metrics, and marketing attribution.

Preferred Experience:
- Previous experience working with syndicated retail data (e.g., Nielsen, IRI).
- Familiarity with marketing effectiveness platforms or MMM software/tools.
- Experience with dashboarding tools for MMM output visualization (e.g., Tableau, Power BI) is a plus.
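The Adstock transformation this role centers on has a simple recursive form: media effect carries over between periods with a decay rate, adstock_t = spend_t + decay * adstock_{t-1}, and is typically followed by a saturating curve to capture diminishing returns. A minimal sketch; the decay and half-saturation parameters are illustrative assumptions that an MMM would actually estimate from data:

```python
def geometric_adstock(spend, decay=0.5):
    """Carry-over effect: adstock_t = spend_t + decay * adstock_{t-1}."""
    out, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        out.append(carry)
    return out

def saturate(x, half_saturation=100.0):
    """Simple diminishing-returns curve mapping effect into [0, 1)."""
    return x / (x + half_saturation)

weekly_spend = [100.0, 0.0, 0.0, 0.0]     # one burst of media spend
effect = geometric_adstock(weekly_spend)   # [100.0, 50.0, 25.0, 12.5]
response = [saturate(e) for e in effect]   # decays toward 0 after the burst
print(effect, response)
```

In a full model, the saturated adstocked series for each channel enters a regression (often Bayesian) against sales, which is what makes the decay and saturation parameters identifiable.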

Posted 1 month ago

Apply