
2002 Inference Jobs

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

7.0 - 10.0 years

0 Lacs

Chandigarh

On-site

bebo Technologies is a leading complete software solution provider. bebo stands for 'be extension be offshore'. We are a business partner of QASource, Inc., USA [www.QASource.com]. We offer outstanding services in the areas of software development, sustenance engineering, quality assurance and product support. bebo is dedicated to providing high-caliber offshore software services and solutions. Our goal is to 'Deliver in time, every time'. For more details, visit our website: www.bebotechnologies.com. Take a 360° tour of our bebo premises by clicking on the link below: https://www.youtube.com/watch?v=S1Bgm07dPmM
Key Required Skills: Bachelor's or Master's degree in Computer Science, Data Science, or related field. 7–10 years of industry experience, with at least 5 years in machine learning roles. Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn. Experience with distributed training, model optimization (quantization, pruning), and inference at scale. Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML. Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam. Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay). Excellent problem-solving, communication, and documentation skills.

Posted 19 hours ago

Apply

6.0 years

3 - 3 Lacs

Hyderābād

On-site

Overview: Do you want to work in a fun and supportive environment? At erwin by Quest, we know that companies with a strong positive culture perform so much better. That is why every day we strive to create a collaborative and inclusive working environment where our people can feel empowered to succeed. erwin by Quest is an award-winning Data Modelling software provider offering a broad selection of solutions that solve some of the most common and most challenging Data Governance problems. We are currently looking for a Software Dev Senior Engineer to join us.
Responsibilities: You will have the freedom to think and work together in a self-organizing agile team. You will be committed to contributing to collaborative design, development, and bug-fixing efforts. You will develop clean code, practice pair programming, and participate in code reviews. You will collaborate with international customers and colleagues.
Qualifications: A minimum of 6+ years of Full Stack Java Development experience. Strong knowledge of Data Structures and Algorithms, System Design. Expertise in Java 8+ and its modern features (e.g., Streams, Lambda Expressions, Optional, Functional Interfaces). Hands-on experience building enterprise-grade applications using Java and the Spring Framework (Spring Boot, Spring JDBC, Spring Security). Proficiency in Spring Boot for building microservices and RESTful APIs is a plus. Experience with Spring Core, Spring MVC, Spring Data, and Spring Security. Strong knowledge of SQL databases like Postgres, SQL Server. Experience with JPA/Hibernate for ORM and understanding of database optimization techniques, query performance tuning, and designing efficient models. Proficiency in designing RESTful APIs and working with API specifications and documentation tools like Swagger/OpenAPI. Experience with OAuth 2.0 and JWT for authentication and authorization mechanisms. Strong knowledge of React and Redux Toolkit. Expertise in building and optimizing applications with React functional components and leveraging React Hooks for state and side-effects management. Experience with Provider and the Context API in React. Strong hands-on experience with TypeScript for building type-safe React applications. Deep understanding of TypeScript features like interfaces, generics, type inference, etc. Strong understanding of semantic HTML and modern CSS for responsive design. Familiarity with Material UI and Tailwind CSS for building modern, user-friendly, and scalable UI components. Proficiency with Git and working with branching strategies. Experience with optimizing application performance, including JVM tuning, caching strategies, and improving query performance in databases. Strong understanding of security best practices for both frontend and backend, including secure coding and protecting APIs. Familiarity with cloud services (Azure, AWS, GCP) is a plus.
Company Description: At Quest, we create and manage the software that makes the benefits of new technology real. Companies turn to us to manage, modernize, and secure their business, from on-prem to in-cloud, from the heart of the network to the vulnerable endpoints. From complex challenges like Active Directory management and Office 365 migration to database and systems management to redefining security, and hundreds of needs in between, we help you conquer your next challenge now. We’re not the company that makes big promises. We’re the company that fulfills them. We’re Quest: Where Next Meets Now. Why work with us!
Life at Quest means collaborating with dedicated professionals with a passion for technology. When we see something that could be improved, we get to work inventing the solution. Our people demonstrate our winning culture through positive and meaningful relationships. We invest in our people and offer a series of programs that enable them to pursue a career that fulfills their potential. Our team members’ health and wellness are our priority, and we reward them for their hard work. Quest is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. Come join us. For more information, visit us on the web at http://www.quest.com/careers.

Posted 19 hours ago

Apply

3.0 years

7 - 15 Lacs

Hyderābād

On-site

We are seeking an experienced Generative AI Developer with strong Python skills and proven hands-on experience building and deploying AI/ML models. In this role, you will work on designing, developing, and deploying innovative Generative AI solutions that create real business impact. You’ll contribute at every stage — from research and prototyping to production deployment. Key Responsibilities: Design, build, and deploy AI/ML models, with a strong focus on Generative AI (e.g., LLMs, diffusion models). Develop robust Python backend services to support AI applications in production. Integrate AI/ML models with scalable APIs and pipelines for real-world use cases. Optimize model performance, accuracy, and inference speed for production workloads. Experiment with model fine-tuning, prompt engineering, and dataset curation. Monitor deployed models and ensure smooth operation and updates. Collaborate with product managers, data scientists, and engineers to translate business needs into AI solutions. Document processes, maintain code quality, and follow best practices for reproducibility and scalability. Required Skills & Qualifications: Bachelor’s/Master’s degree in Computer Science, Data Science, AI/ML, or related field. Strong proficiency in Python and relevant frameworks (FastAPI, Flask, Django). Hands-on experience building, training, and deploying AI/ML models into production. Familiarity with Generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs, etc.). Good understanding of NLP, LLMs, and modern AI techniques. Experience with RESTful APIs, cloud services (AWS/GCP/Azure), and CI/CD workflows. Excellent problem-solving, debugging, and troubleshooting skills. Ability to work independently and deliver reliable results on time. Excellent communication and teamwork skills. Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,500,000.00 per year Benefits: Cell phone reimbursement Health insurance Internet reimbursement Leave encashment Life insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you an immediate joiner? Experience: Python: 3 years (Required) Gen AI: 1 year (Required) Work Location: In person

Posted 19 hours ago

Apply

4.0 years

4 - 8 Lacs

Noida

On-site

Position Overview: We are looking for an experienced AI Engineer to design, build, and optimize AI-powered applications, leveraging both traditional machine learning and large language models (LLMs). The ideal candidate will have a strong foundation in LLM fine-tuning, inference optimization, backend development, and MLOps, with the ability to deploy scalable AI systems in production environments. ShyftLabs is a leading data and AI company, helping enterprises unlock value through AI-driven products and solutions. We specialize in data platforms, machine learning models, and AI-powered automation, offering consulting, prototyping, solution delivery, and platform scaling. Our Fortune 500 clients rely on us to transform their data into actionable insights.
Key Responsibilities: Design and implement traditional ML and LLM-based systems and applications. Optimize model inference for performance and cost-efficiency. Fine-tune foundation models using methods like LoRA, QLoRA, and adapter layers. Develop and apply prompt engineering strategies including few-shot learning, chain-of-thought, and RAG. Build robust backend infrastructure to support AI-driven applications. Implement and manage MLOps pipelines for full AI lifecycle management. Design systems for continuous monitoring and evaluation of ML and LLM models. Create automated testing frameworks to ensure model quality and performance.
Basic Qualifications: Bachelor’s degree in Computer Science, AI, Data Science, or a related field. 4+ years of experience in AI/ML engineering, software development, or data-driven solutions.
LLM Expertise: Experience with parameter-efficient fine-tuning (LoRA, QLoRA, adapter layers). Understanding of inference optimization techniques: quantization, pruning, caching, and serving. Skilled in prompt engineering and design, including RAG techniques. Familiarity with AI evaluation frameworks and metrics. Experience designing automated evaluation and continuous monitoring systems.
Backend Engineering: Strong proficiency in Python and frameworks like FastAPI or Flask. Experience building RESTful APIs and real-time systems. Knowledge of vector databases and traditional databases. Hands-on experience with cloud platforms (AWS, GCP, Azure) focusing on ML services.
MLOps & Infrastructure: Familiarity with model serving tools (vLLM, SGLang, TensorRT). Experience with Docker and Kubernetes for deploying ML workloads. Ability to build monitoring systems for performance tracking and alerting. Experience building evaluation systems using custom metrics and benchmarks. Proficient in CI/CD and automated deployment pipelines. Experience with orchestration tools like Airflow. Hands-on experience with LLM frameworks (Transformers, LangChain, LlamaIndex). Familiarity with LLM-specific monitoring tools and general ML monitoring systems. Experience with distributed training and inference on multi-GPU environments. Knowledge of model compression techniques like distillation and quantization. Experience deploying models for high-throughput, low-latency production use. Research background or strong awareness of the latest developments in LLMs.
Tools & Technologies We Use: Frameworks: PyTorch, TensorFlow, Hugging Face Transformers. Serving: vLLM, TensorRT-LLM, SGLang, OpenAI API. Infrastructure: Docker, Kubernetes, AWS, GCP. Databases: PostgreSQL, Redis, Vector Databases.
We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package.
We pride ourselves on the growth of our employees, offering extensive learning and development resources.

Posted 19 hours ago

Apply

5.0 years

4 Lacs

Ahmedabad

On-site

We are hiring a Senior Software Development Engineer for our platform. We are helping enterprises and service providers build their AI inference platforms for end users. As a Senior Software Engineer, you will take ownership of backend-heavy, full-stack feature development—building robust services, scalable APIs, and intuitive frontends that power the user experience. You’ll contribute to the core of our enterprise-grade AI platform, collaborating across teams to ensure our systems are performant, secure, and built to last. This is a high-impact, high-visibility role working at the intersection of AI infrastructure, enterprise software, and developer experience.
Responsibilities: Design, develop and maintain databases, system APIs, system integrations, machine learning pipelines and web user interfaces. Scale algorithms designed by data scientists for deployment in high-performance environments. Develop and maintain continuous integration pipelines to deploy the systems. Design and implement scalable backend systems using Golang, C++, and Python. Model and manage data using relational databases (e.g., PostgreSQL, MySQL). Build frontend components and interfaces using TypeScript and JavaScript when needed. Participate in system architecture discussions and contribute to design decisions. Write clean, idiomatic, and well-documented Go code following best practices and design patterns. Ensure high code quality through unit testing, automation, code reviews, and documentation. Communicate technical concepts clearly to both technical and non-technical stakeholders.
Qualifications and Criteria: 5–10 years of professional software engineering experience building enterprise-grade platforms. Deep proficiency in Golang, with real-world experience building production-grade systems. Solid knowledge of software architecture, design patterns, and clean code principles. Experience in high-level system design and building distributed systems. Expertise in Python and backend development, with experience in PostgreSQL or similar databases. Hands-on experience with unit testing, integration testing, and TDD in Go. Strong debugging, profiling, and performance optimization skills. Excellent communication and collaboration skills. Hands-on experience with frontend development using JavaScript, TypeScript, and HTML/CSS. Bachelor's degree or equivalent experience in a quantitative field (Computer Science, Statistics, Applied Mathematics, Engineering, etc.).
Skills: Understanding of optimisation, predictive modelling, machine learning, clustering and classification techniques, and algorithms. Fluency in a programming language (e.g. C++, Go, Python, JavaScript, TypeScript, SQL). Docker, Kubernetes, and Linux knowledge are an advantage. Experience using Git. Knowledge of continuous integration (e.g. Gitlab/Github). Basic familiarity with relational databases, preferably PostgreSQL. Strong grounding in applied mathematics. A firm understanding of and experience with the engineering approach. Ability to interact with other team members via code and design documents. Ability to work on multiple tasks simultaneously. Ability to work in high-pressure environments and meet deadlines.
Compensation: Commensurate with experience. Position Type: Full-time (In House). Location: Ahmedabad / Jamnagar, Gujarat, India.
Submission Requirements: CV and all academic transcripts. Submit to chintanit22@gmail.com, dipakberait@gmail.com with the name of the position you wish to apply for in the subject line.
Job Type: Full-time Pay: From ₹40,000.00 per month Benefits: Paid sick time Location Type: In-person Schedule: Day shift Monday to Friday Experience: Full-stack development: 5 years (Preferred) Work Location: In person

Posted 19 hours ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We're Hiring: AI/ML Intern @ Atomo Location: Gandhinagar, Gujarat Duration: 6 months At Atomo, we’re building AI-powered edge devices for automation, smart infrastructure, and industrial IoT — made in India, designed for the world. We’re looking for a passionate AI/ML Intern to join our team and work on cutting-edge applications like: On-device inference (NPU optimization), predictive maintenance models, energy efficiency & anomaly detection, real-time data processing on the edge, and video and image analysis.
What You’ll Do: Build, train, and optimize ML models for embedded deployment. Work closely with our firmware and hardware teams. Evaluate model performance across edge devices (Electron, Proton, Neutron). Contribute to real-world, production-grade features.
You Should Have: Background in Computer Science, AI/ML, or a related field. Experience with Python, TensorFlow / PyTorch. Familiarity with edge ML (TensorFlow Lite, ONNX, etc.). Interest in embedded systems, IoT, or edge computing.
Why Atomo? Hands-on work with real devices. Learn how AI meets hardware. Be part of India’s deep-tech innovation story. Strong chance of PPO (pre-placement offer) for top performers. #AIIntern #MLIntern #EdgeAI #IoT #Internship #Hiring #MadeInIndia #DeepTech #CareersAtAtomo

Posted 20 hours ago

Apply

2.0 - 5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

About the Role We are seeking a highly motivated and creative Platform Engineer with a true research mindset. This is a unique opportunity to move beyond traditional development and step into a role where you will ideate, prototype, and build production-grade applications powered by Generative AI. You will be a core member of a platform team, responsible for developing both internal and customer-facing solutions that are not just functional but intelligent. If you are passionate about the MERN stack, Python, and the limitless possibilities of Large Language Models (LLMs), and you thrive on building things from the ground up, this role is for you. Core Responsibilities Innovate and Build: Design, develop, and deploy full-stack platform applications integrated with Generative AI, from concept to production. AI-Powered Product Development: Create and enhance key products such as: Intelligent chatbots for customer service and internal support. Automated quality analysis and call auditing systems using LLMs for transcription and sentiment analysis. AI-driven internal portals and dashboards to surface insights and streamline workflows. Full-Stack Engineering: Write clean, scalable, and robust code across the MERN stack (MongoDB, Express.js, React, Node.js) and Python. Gen AI Integration & Optimization: Work hands-on with foundation LLMs, fine-tuning custom models, and implementing advanced prompting techniques (zero-shot, few-shot) to solve specific business problems. Research & Prototyping: Explore and implement cutting-edge AI techniques, including setting up systems for offline LLM inference to ensure privacy and performance. Collaboration: Partner closely with product managers, designers, and business stakeholders to transform ideas into tangible, high-impact technical solutions. Required Skills & Experience Experience: 2-5 years of professional experience in a software engineering role. Full-Stack Proficiency: Strong command of the MERN stack (MongoDB, Express.js, React, Node.js) for building modern web applications. Python Expertise: Solid programming skills in Python, especially for backend services and AI/ML workloads. Generative AI & LLM Experience (Must-Have): Demonstrable experience integrating with foundation LLMs (e.g., OpenAI API, Llama, Mistral, etc.). Hands-on experience building complex AI systems and implementing architectures such as Retrieval-Augmented Generation (RAG) to ground models with external knowledge. Practical experience with AI application frameworks like LangChain and LangGraph to create agentic, multi-step workflows. Deep understanding of prompt engineering techniques (zero-shot, few-shot prompting). Experience or strong theoretical understanding of fine-tuning custom models for specific domains. Familiarity with concepts or practical experience in deploying LLMs for offline inference . R&D Mindset: A natural curiosity and passion for learning, experimenting with new technologies, and solving problems in novel ways. Bonus Points (Nice-to-Haves) Cloud Knowledge: Hands-on experience with AWS services (e.g., EC2, S3, Lambda, SageMaker).

Posted 20 hours ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Firstsource Solutions is a leading provider of customized Business Process Management (BPM) services. Firstsource specializes in helping customers stay ahead of the curve through transformational solutions to reimagine business processes and deliver increased efficiency, deeper insights, and superior outcomes. We are trusted brand custodians and long-term partners to 100+ leading brands with presence in the US, UK, Philippines, and India. Our ‘rightshore’ delivery model offers solutions covering the complete customer lifecycle across the Healthcare, Telecommunications & Media and Banking, Financial Services & Insurance verticals. Our clientele includes Fortune 500 and FTSE 100 companies.
Job location: Bangalore/Hyderabad/Gurugram Position: Sr. Manager / Manager - Business Transformation Division: BPO Experience: 8-10+ years of relevant work experience in process transformation, implementing Intelligent Automation, and large-scale Quality projects.
Responsibilities: We are currently seeking a highly skilled and certified Lean Six Sigma Black Belt/PMP professional for an open position in our Banking Financial Services (BFS) division. If you have hands-on experience in Banking Back Office operations and the Contact Center domain, this could be a golden opportunity for you! Working with management to determine strategy for new initiatives or projects. Reviewing current processes and recommending changes based on industry best practices. Perform gap identification exercises in process mapping using AS-IS and TO-BE process maps. Assessing and prioritizing improvement opportunities and impacts (risk, customer satisfaction, error reduction, system capabilities/constraints). Ability to perform data analysis and identify the key inferences for driving the impacted metrics. Map customer journeys and identify issues/opportunities for process re-engineering and digital enhancement. Engage with stakeholders to understand their requirements, conduct RCA to identify opportunities, deploy solutions and provide regular updates on progress. Ability to identify digital interventions for improving the efficiency and effectiveness of processes. Developing an implementation plan for each project, including identifying stakeholders, creating timelines, and developing budgets. Implementing projects to improve processes within the organization, such as Lean Six Sigma initiatives or process mapping exercises. Managing the execution of transformation initiatives, including tracking progress and ensuring that milestones are met. Communicating with stakeholders about transformation initiatives and their impact on the business. Monitor & audit the deployed processes for effectiveness & efficiency. Responsible for generating business impact for clients using CI methodologies and frameworks. Responsible for identifying Gen AI opportunities - capabilities, applicability and business case. Understanding of scenarios for implementing AI and ML tools.
Key Performance Indicators: Value delivered through projects in different client businesses across Operations, Digital and Technology. Automation identification and deployment with support from Digital and Technology teams. Project Management and Process Improvement. Facilitating change, including facilitated idea generation and idea management. Working collaboratively with Digital, Tech, Cx and automation teams to deliver the key objectives.
Qualification & Experience Requirements: Bachelor’s degree in a related field, such as business administration, management or engineering.
Experience in business transformation and change management in a previous organisation is desired. Good/strong understanding of Generative AI, predictive ML, and data analytics. Excellent communication and interpersonal skills. Ability to work independently and as part of a team in a hybrid setup. Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses. Contact details: Padmapriya Shekar, Email: padmapriya.shekar1@firstsource.com

Posted 21 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Looking for an AI Engineer | Hyderabad to join a team of rockstar developers. The candidate should have a minimum of 6+ years of experience.
About CodeVyasa: CodeVyasa is a mid-sized product engineering company that works with top-tier product/solutions companies such as McKinsey, Walmart, RazorPay, Swiggy, and others. We are about 550+ people strong and we cater to Product & Data Engineering use-cases around Agentic AI, RPA, Full-stack and various other GenAI areas.
KEY RESPONSIBILITIES
● Design and deploy intelligent, agent-driven systems that autonomously solve complex, real-world problems using cutting-edge algorithms and AI libraries.
● Engineer collaborative agentic frameworks that coordinate multiple specialized agents to deliver advanced AI capabilities for enterprise-scale applications.
● Build and extend MCP-based infrastructure that enables secure, context-rich interaction between agents and external tools, empowering AI systems to act, reason, and adapt in real-time.
● Build human-in-the-loop agent workflows where humans benefit from AI assistance in decision-making and automation, while agents learn and improve through continuous human feedback.
● Stay abreast of the latest trends in AI research and integrate them into our products and services.
● Communicate technical challenges and solutions to non-technical stakeholders and team members.
Skills Required: Fluency in Python and experience with core machine learning and deep learning frameworks (e.g., PyTorch, TensorFlow), with a proven ability to build, train, and evaluate models in real-world environments. Solid grasp of modern ML/AI fundamentals, including representation learning, optimization, generalization, and evaluation metrics across supervised, unsupervised, and generative settings. Experience working with multimodal data, including text, image, and structured formats, and building pipelines for feature extraction, embedding generation, and downstream model consumption. Hands-on experience integrating AI models into production workflows, including model inference, API deployment, and system monitoring. Proficiency in using version control, testing frameworks, and collaborative development workflows, including Git and basic CI/CD practices. Ability to communicate clearly about system behavior, trade-offs, and architectural decisions, especially when working across interdisciplinary teams. Understanding of LLMOps/MLOps principles, including model/version tracking, pipeline reproducibility, observability, and governance in production environments.
Why Join CodeVyasa? Work on innovative, high-impact projects with a team of top-tier professionals. Continuous learning opportunities and professional growth. Flexible work environment with a supportive company culture. Competitive salary and comprehensive benefits package. Free healthcare coverage.

Posted 21 hours ago

Apply

0.0 - 1.0 years

7 - 15 Lacs

Hyderabad, Telangana

On-site

We are seeking an experienced Generative AI Developer with strong Python skills and proven hands-on experience building and deploying AI/ML models. In this role, you will work on designing, developing, and deploying innovative Generative AI solutions that create real business impact. You’ll contribute at every stage — from research and prototyping to production deployment. Key Responsibilities: Design, build, and deploy AI/ML models, with a strong focus on Generative AI (e.g., LLMs, diffusion models). Develop robust Python backend services to support AI applications in production. Integrate AI/ML models with scalable APIs and pipelines for real-world use cases. Optimize model performance, accuracy, and inference speed for production workloads. Experiment with model fine-tuning, prompt engineering, and dataset curation. Monitor deployed models and ensure smooth operation and updates. Collaborate with product managers, data scientists, and engineers to translate business needs into AI solutions. Document processes, maintain code quality, and follow best practices for reproducibility and scalability. Required Skills & Qualifications: Bachelor’s/Master’s degree in Computer Science, Data Science, AI/ML, or related field. Strong proficiency in Python and relevant frameworks (FastAPI, Flask, Django). Hands-on experience building, training, and deploying AI/ML models into production. Familiarity with Generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs, etc.). Good understanding of NLP, LLMs, and modern AI techniques. Experience with RESTful APIs, cloud services (AWS/GCP/Azure), and CI/CD workflows. Excellent problem-solving, debugging, and troubleshooting skills. Ability to work independently and deliver reliable results on time. Excellent communication and teamwork skills. Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,500,000.00 per year Benefits: Cell phone reimbursement Health insurance Internet reimbursement Leave encashment Life insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Monday to Friday Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you an immediate joiner? Experience: Python: 3 years (Required) Gen AI: 1 year (Required) Work Location: In person

Posted 22 hours ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Python Developer Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 4–8 years
🧠 About Darwix AI
Darwix AI is one of India’s fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real-time. We are backed by marquee VCs, 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world.
🎯 Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.
🔧 Key Responsibilities
🔹 System Architecture & API Development: Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask. Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems. Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference.
🔹 Data Pipelines & Integrations: Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources. Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared. Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy.
🔹 AI/ML Enablement: Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning. Implement backend support for prompt orchestration, Langchain flows, and function-calling interfaces. Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines.
🔹 Database & Storage Design: Optimize database design and queries using MySQL, PostgreSQL, and MongoDB. Architect and manage Redis and Kafka for caching, queueing, and real-time communication.
🔹 DevOps & Quality: Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments. Identify and resolve bottlenecks related to performance, memory, or data throughput. Adhere to best practices in code quality, testing, security, and documentation.
🔹 Leadership & Collaboration: Mentor junior developers and participate in code reviews. Collaborate cross-functionally with product, AI, design, and sales engineering teams. Contribute to architectural decisions, roadmap planning, and scaling strategies.
✅ Qualifications
4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming. Hands-on experience with FastAPI, Django, or Flask in production environments. Proven experience building scalable microservices, data pipelines, and backend systems that support live applications. Strong command over REST API architecture, database optimization, and data modeling. Solid experience working with web scraping tools, automation frameworks, and external API integrations. Knowledge of AI tools like Langchain, HuggingFace, Vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus. Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments. Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving.
🌟 Bonus
Prior experience in an early-stage SaaS startup or AI-first product environment. Contributions to open-source Python projects or developer communities. Experience working with real-time streaming systems (Kafka, Redis Streams, WebSockets).
💰 What We Offer
Competitive fixed salary + performance-linked incentives. Equity options for high-impact performers. Opportunity to work on cutting-edge GenAI and SaaS products used by global enterprises. Autonomy, rapid decision-making, and direct interaction with founders and senior leadership. High-growth environment with clear progression toward Tech Lead or Engineering Manager roles. Access to tools, compute, and learning resources to accelerate your technical and leadership growth.
📩 To Apply
Send your resume and GitHub/portfolio (if applicable) to people@darwix.ai. Subject Line: Senior Python Developer – [Your Name]
Darwix AI | Built from India for the World | GenAI for Revenue Teams

Posted 22 hours ago

Apply

0.0 - 1.0 years

4 - 6 Lacs

Mohali, Punjab

On-site

ABOUT XENONSTACK
XenonStack is the fastest-growing data and AI foundry for agentic systems, which enables people and organizations to gain real-time and intelligent business insights.
Building Agentic Systems for AI Agents with https://www.akira.ai
Vision AI Platform with https://www.xenonstack.ai
Inference AI Infrastructure for Agentic Systems - https://www.nexastack.ai
THE OPPORTUNITY
We are seeking an enthusiastic Sales Executive to assist in generating leads, managing client relationships, and supporting sales efforts. If you are goal-driven, have strong communication skills, and are passionate about sales, we’d love to have you on our team.
JOB ROLES & RESPONSIBILITIES
Qualifying and managing inbound leads, ensuring timely follow-ups, and guiding prospects through the sales funnel. Conducting market research, analysing competitor landscapes, and identifying new sales opportunities. Enhancing lead databases by updating contact details, identifying decision-makers, and ensuring accurate data for targeted outreach. Sourcing potential clients through digital platforms, industry networking, and strategic research. Proactively reaching out to prospects, engaging with potential clients, introducing XenonStack’s offerings, and setting up discovery meetings. Managing personalized email sequences to generate interest and nurture leads. Utilizing LinkedIn for networking, outreach, and lead generation. Working closely with internal teams to align sales strategies and enhance lead conversion rates.
SKILLS REQUIREMENTS
MBA in Sales/Marketing preferred or a strong educational background in Business/Marketing. Fresh postgraduates with exceptional communication skills and a minimum of six months of internship experience in sales, business development, or a related field are encouraged to apply. 0-2 years in Sales, Business Development, or related roles. Strong self-motivation and ability to work independently. Experience in B2B sales or lead generation is a plus. Ability to take ownership and drive measurable results. Excellent communication skills and attention to detail. Proactive and a growth-oriented mindset. Analytical thinking with problem-solving abilities. Strong organizational skills to manage multiple tasks efficiently.
Job Types: Full-time, Permanent Pay: ₹400,000.00 - ₹600,000.00 per year Benefits: Paid sick time Paid time off Provident Fund Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required) Experience: Sales: 1 year (Required) Language: English (Required) Location: Mohali, Punjab (Required) Work Location: In person Speak with the employer +91 9815744707 Expected Start Date: 11/08/2025

Posted 22 hours ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Data Science @Dream Sports: Data Science at Dream Sports comprises seasoned data scientists striving to drive value with data across all our initiatives. The team has developed state-of-the-art solutions for forecasting and optimization, data-driven risk prevention systems, Causal Inference and Recommender Systems to enhance product and user experience. We are a team of Machine Learning Scientists and Research Scientists with a portfolio of projects ranging from production ML systems that we conceptualize, build, support and innovate upon, to longer-term research projects with potential game-changing impact for Dream Sports. This is a unique opportunity for highly motivated candidates to work on real-world applications of machine learning in the sports industry, with access to state-of-the-art resources, infrastructure, and data from multiple sources streaming from 250 million users, and to contribute to our collaboration with the Columbia Dream Sports AI Innovation Center.
Your Role: Executing clean experiments rigorously against pertinent performance guardrails and analysing performance metrics to infer actionable findings. Developing and maintaining services with proactive monitoring, incorporating best industry practices for optimal service quality and risk mitigation. Breaking down complex projects into actionable tasks that adhere to set management practices and ensure stakeholder visibility. Managing the end-to-end lifecycle of large-scale ML projects, from data preparation, model training, deployment, and monitoring to upgradation of experiments. Leveraging a strong foundation in ML, statistics, and deep learning to adeptly implement research-backed techniques for model development. Staying abreast of the best ML practices and developments in the industry to mentor and guide team members.
Qualifiers: 4-6 years of experience in building, deploying and maintaining ML solutions. Extensive experience with Python, SQL, TensorFlow/PyTorch and at least one distributed data framework (Spark/Ray/Dask). Working knowledge of Machine Learning, probability & statistics and Deep Learning fundamentals. Experience in designing end-to-end machine learning systems that work at scale.
About Dream Sports: Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform, FanCode, a premier sports content & commerce platform and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/ Dream11 is the world’s largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India’s leading Sports Technology company, and has partnerships with several national & international sports bodies and cricketers.

Posted 22 hours ago

Apply

3.0 years

0 Lacs

India

On-site

Key Responsibilities: Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel. Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection). Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops. Contribute to multi-agent orchestration logic including task delegation, tool selection, message passing, and memory/state management. Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments. Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability. Monitor and refine agent performance using logging, observability tools, and feedback signals. Required Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field. 3+ years of experience in developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation. Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn). Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers). Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems. Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems. Preferred Skills: Familiarity with multi-agent frameworks and agent orchestration design patterns. Experience in building autonomous AI applications for data governance, annotation, or knowledge extraction. Background in human-in-the-loop systems, active learning, or interactive AI workflows. Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search.

Posted 22 hours ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company: Qualcomm India Private Limited Job Area: Engineering Group, Engineering Group > Systems Test Engineering General Summary: You will join the System Test team that is responsible for defining and implementing the overall testing strategy for WiFi & Networking AI/ML and Apps running on access points. This involves the development of test plans, tools, and automation frameworks for validating and qualifying the AI SDK, AI/ML inference, and applications running on the routers. You will work closely with the systems, development, and architecture teams to understand the features and define the test plans and solutions needed to deliver production-grade software/firmware to the end customer. You will be collaborating with a variety of internal teams in Qualcomm covering multiple engineering disciplines including software, systems, and hardware. The successful applicant should have a diverse skill set and a strong background in developing test applications, API test frameworks, and testing applications on embedded platforms. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Systems Test Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Systems Test Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Systems Test Engineering or related work experience. Responsibilities: Must be responsible for analyzing new features and developing/creating new test plans and adding new test cases. Develop test applications, build API test frameworks and AI/ML models. Manage infrastructure, develop test topologies and prepare use cases for validation. Work with cross-functional teams to support end-to-end releases. Directs a team of engineers on gathering, integrating, and interpreting information from a variety of sources in order to troubleshoot issues and find solutions. Develop the right skills and train the team. Serves as a mentor to Engineers and Senior Engineers and teaches them about complex features, systems, testing strategies, and automation. Conduct log analysis with team members to identify where an issue has occurred and make recommendations for how to address the issue. Networks with colleagues within own domain to gain insight, ideas, and connections. Shares information with peers and junior engineers. Collaborates with functional and lab teams, IO teams, network operators, field teams, and product management teams to ensure that the testing plan is accurate for addressing feature issues. Minimum Qualifications: Bachelor's or Master’s degree in Engineering, Information Systems, Computer Science, Electronics & Communications, or a related field.
10+ years in WiFi/Networking, AI, Embedded Application testing, automation, or Software Engineering. Experience with developing and testing applications for embedded systems. Experience in developing and testing AI/ML models and applications. Required Skills And Aptitudes: Should possess strong knowledge in WLAN/networking and manual testing of networking products. Strong knowledge in developing applications on embedded platforms. Strong knowledge in developing and testing AI/ML models and AI/ML applications. Must have good experience in testing of layer-2 to layer-7 protocols. Possess high debugging capability. Strong problem-solving skills. Ability to prioritize and execute tasks across multiple projects with tight deadlines and aggressive goals. Experience in scripting languages like Python. Excellent English communication (written and verbal) and interpersonal skills. Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 23 hours ago

Apply

0.0 - 3.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Humanli.AI is a startup founded by alumni of IIM Bangalore, ISB Hyderabad, and IIM Calcutta. We are democratizing and extending technologies that were accessible to and consumed only by MNCs or Fortune companies, bringing them to SMEs and mid-size firms. We are pioneers in bringing Knowledge Management algorithms & Large Language Models into a Conversational BOT framework.
Job Title: AI/ML Engineer Location: Jaipur Job Type: Full-time Experience: 0-3 years
Job Description: We are looking for an experienced AI/ML & Data Engineer to join our team and contribute to the development and deployment of our AI-based solutions. As an AI/ML & Data Engineer, you will be responsible for designing and implementing data models, algorithms, and pipelines for training and deploying machine learning models.
Responsibilities: Design, develop, and fine-tune Generative AI models (e.g., LLMs, GANs, VAEs, Diffusion Models). Implement Retrieval-Augmented Generation (RAG) pipelines using vector databases (FAISS, Pinecone, ChromaDB, Weaviate). Develop and integrate AI Agents for task automation, reasoning, and decision-making. Work on fine-tuning open-source LLMs (e.g., LLaMA, Mistral, Falcon) for specific applications. Optimize and deploy transformer-based architectures for NLP and vision-based tasks. Train models using TensorFlow, PyTorch, Hugging Face Transformers. Work on prompt engineering, instruction tuning, and reinforcement learning (RLHF). Collaborate with data scientists and engineers to integrate models into production systems. Stay updated with the latest advancements in Generative AI, ML, and DL. Optimize models for performance improvements, including quantization, pruning, and low-latency inference techniques.
Qualification: B.Tech in Computer Science. Freshers may apply. 0-3 years of experience in data engineering and machine learning. Immediate joiners: Preferred
Requirements: Experience with data preprocessing, feature engineering, and model evaluation. Understanding of transformers, attention mechanisms, and large-scale training. Hands-on experience with RAG, LangChain/LangGraph, LlamaIndex, and other agent frameworks. Understanding of prompt tuning, LoRA/QLoRA, and efficient parameter fine-tuning (PEFT) techniques. Strong knowledge of data modeling, data preprocessing, and feature engineering techniques. Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform. Excellent problem-solving skills and ability to work independently and collaboratively in a team environment. Strong communication skills and ability to explain technical concepts to non-technical stakeholders.

Posted 23 hours ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior CV Engineer Location: Gurugram Experience: 6–10 Years Industry: AI Product Overview: We are hiring for our esteemed client, a Series-A funded deep-tech company building a first-of-its-kind app-based operating system for Computer Vision. The team specializes in real-time video/image inference, distributed processing, and high-throughput data handling using advanced technologies and frameworks. Key Responsibilities: Lead design and implementation of complex CV pipelines (object detection, instance segmentation, industrial anomaly detection). Own major modules from concept to deployment ensuring low latency and high reliability. Transition algorithms from Python/PyTorch to optimized C++ edge GPU implementations using TensorRT, ONNX, and GStreamer. Collaborate with cross-functional teams to refine technical strategies and roadmaps. Drive long-term data and model strategies (synthetic data generation, validation frameworks). Mentor engineers and maintain high engineering standards. Required Skills & Qualifications: 6–10 years of experience in architecting and deploying CV systems. Expertise in multi-object tracking, object detection, semi/unsupervised learning. Proficiency in Python, PyTorch/TensorFlow, Modern C++, CUDA. Experience with real-time, low-latency model deployment on edge devices. Strong systems-level design thinking across ML lifecycles. Familiarity with MLOps (CI/CD for models, versioning, experiment tracking). Bachelor’s/Master’s degree in CS, EE, or related fields with strong ML and algorithmic foundations. (Preferred) Experience with NVIDIA DeepStream, GStreamer, LLMs/VLMs, open-source contributions.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Location: Vadodara Company: Sharedpro Technology Pvt Ltd About the Role Sharedpro is seeking a skilled Backend Developer with strong expertise in Java or Python , and hands-on experience working with AI tools and frameworks. The ideal candidate will be responsible for building scalable backend systems, integrating AI-driven modules, and collaborating with cross-functional teams to deliver intelligent applications. Eligibility : Only candidates based in Vadodara or nearby locations will be considered. Key Responsibilities Design, develop, and maintain robust backend systems using Java or Python and related frameworks such as Spring Boot, Flask, or Django. Integrate with AI models, APIs, and tools to enable intelligent features within products. Build RESTful APIs and data pipelines to support ML/AI applications. Optimize performance, scalability, and security of backend infrastructure. Work closely with AI/ML teams to implement model inference, deployment, and monitoring workflows. Write clean, modular, and well-tested code following best practices. Collaborate with frontend developers, product managers, and data engineers. Required Skills & Experience Minimum 2 years of backend development experience with Java or Python. Hands-on experience with AI/ML tools, model integration, or inference engines (e.g., Hugging Face, OpenAI, TensorFlow, LangChain). Strong understanding of REST APIs, JSON, and asynchronous programming. Experience with relational or NoSQL databases (e.g., MySQL, MongoDB). Familiarity with Docker, Git, and CI/CD pipelines. Basic knowledge of cloud platforms such as AWS, GCP, or Azure. Preferred Qualifications Exposure to large language model APIs (e.g., OpenAI, Cohere, Claude). Knowledge of vector databases (e.g., Pinecone, Weaviate, FAISS). Understanding of microservices architecture and API gateway implementations. To apply, submit your resume or reach out to the Sharedpro hiring team for more details.

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

On-site

OUR STORY: At ContractPodAi, we're pioneering the future of legal with Leah—the operating system for legal. Leah Agentic AI coordinates specialized AI agents across Leah’s suite of solutions, including industry-leading Contract Lifecycle Management (CLM), to transform how legal teams work and create value. Leah doesn't just automate tasks—it uncovers hidden opportunities and transforms legal knowledge into business advantage. Our platform breaks down silos between legal, business, and executive teams, helping organizations discover revenue opportunities, minimize risks, and turn legal insights into strategic decisions. We know innovation happens when great people come together to solve business problems. ContractPodAi is a fast-growing team of innovators spread across London, New York, Glasgow, San Francisco, Toronto, Dubai, Sydney, Mumbai, Pune, and beyond. Here, you'll:
• Pioneer the future of legal AI and business transformation
• Make real impact by helping organizations unlock hidden value
• Collaborate with talented colleagues across continents.
If you're excited by cutting-edge technology, thrive in a fast-paced environment, and want to help build something revolutionary, we want to hear from you.
THE OPPORTUNITY We are seeking an experienced AI Engineer to join our growing team at ContractPodAi. In this role, you will design, develop, and deploy intelligent systems that power next-generation features in our contract lifecycle management (CLM) platform. You will work at the intersection of machine learning, software engineering, and agentic AI to create autonomous, goal-driven agents capable of reasoning, learning, and acting in dynamic environments. This is your opportunity to play a pivotal role in advancing the capabilities of legal tech with powerful agent-based systems built on the latest advancements in large language models, reinforcement learning, and autonomous AI frameworks.
WHAT YOU WILL DO: Architect and implement scalable agentic AI systems that autonomously execute complex workflows and reason over legal data. Research, prototype, and productionize ML/DL models, especially in natural language processing and understanding (NLP/NLU). Build and deploy intelligent legal agents that can interpret documents, make decisions, and collaborate with users or other agents to complete multi-step tasks. Utilize modern frameworks and platforms (e.g., LangChain, LangGraph, AutoGen, OpenAI Function Calling, Semantic Kernel) to build multi-agent workflows. Fine-tune and integrate large language models (LLMs) using PEFT, LoRA, and RAG techniques tailored to legal domain challenges. Design and implement robust infrastructure for managing the AI lifecycle, including training, inference, monitoring, and continuous learning. Collaborate with legal experts, product managers, and engineering teams to create explainable and trustworthy AI systems. Contribute to the development of our AI strategy for agent-based automation within legal operations and contract management.
WHAT YOU WILL NEED: 4+ years of experience and a strong background in computer science, software engineering, or data science with a deep focus on machine learning and NLP.
Demonstrated experience building or integrating agentic AI systems (e.g., AutoGPT-style agents, goal-oriented LLM pipelines, multi-agent frameworks). Proficiency in Python and ML/NLP libraries such as HuggingFace Transformers, LangChain, PyTorch, TensorFlow, and Spacy. Experience developing and scaling ML models (including LSTMs, BERT, Transformers) for real-world applications. Understanding of LLM training (e.g., OpenAI, LLAMA, Falcon), embeddings, and prompt engineering. Hands-on experience with Reinforcement Learning (e.g., PPO, RLHF, RLAIF). Experience extracting text and semantic information from structured and unstructured documents (PDFs, Images, etc.). Comfort working in Agile/Scrum environments and collaborating across cross-functional teams. Passion for innovation in AI and a strong desire to build autonomous systems that solve complex, real-world problems. BENEFITS: Competitive salary Opportunity to work in a fast-moving, high growth SaaS company Paid Time off Generous Employee Referral program At ContractPodAi we believe in creating a diverse and inclusive workplace where everyone feels heard and valued. We are proud to be an Equal Opportunity Employer. We do not discriminate in employment on the basis of race, color, religion, sex, national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, or other non-merit factor.

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

At Airtel, we're not just scaling connectivity—we're redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. But behind every experience is an opportunity to make it smarter. We're looking for a Product Manager – AI to drive next-gen intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will work on leveraging AI to power everything from our digital platforms to customer experience. You'll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact—fast.

What You'll Do
Lead and contribute to AI-Powered Product Strategy: Define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms.
Translate Business Problems into AI Opportunities: Partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation.
Build & Scale ML-Driven Products: Define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt.
Own Product Execution End-to-End: Drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI.

What You Need to be Successful
Influential Communication - Craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike—translating AI into business value.
Strategic Prioritisation - Balance business urgency with technical feasibility. You can decide what not to build, and defend those decisions with data and a narrative.
Systems Thinking - You see the big picture — how decisions in one area ripple across the business, tech stack, and user experience.
High Ownership & Accountability - Operate with a founder mindset. You don't wait for direction — you rally teams, remove blockers, deal with tough stakeholders, and drive outcomes.
Adaptability - You thrive in ambiguity and pivot quickly without losing sight of long-term vision—key in fast-moving digital organizations.

Skills You'll Need

AI / ML Fundamentals
Understanding of ML model types: supervised, unsupervised, reinforcement learning
Common algorithms: linear/logistic regression, decision trees, clustering, neural networks
Model lifecycle: training, validation, testing, tuning, deployment, monitoring
Understanding of LLMs, transformers, diffusion models, vector search, etc.
Familiarity with GenAI product architecture: Retrieval-Augmented Generation (RAG), prompt tuning, fine-tuning
Awareness of real-time personalization, recommendation systems, ranking algorithms, etc.

Data Fluency
Understanding of data pipelines
Working knowledge of SQL and Python for analysis
Understanding of data annotation, labeling, and versioning
Ability to define data requirements and assess data readiness

AI Product Development
Defining ML problem scope: classification vs. regression vs. ranking vs. generation
Model evaluation metrics: precision, recall, etc.
A/B testing & online experimentation for ML-driven experiences

ML Infrastructure Awareness
Know what it takes to make things work and happen.
Model deployment techniques: batch vs. real-time inference, APIs, model serving
Monitoring & drift detection: how to ensure models continue performing over time
Familiarity with ML platforms/tools: TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc. (at a product level)
Understanding latency, cost, and resource implications of ML choices

AI Ethics & Safety
We care deeply about our customers, their privacy, and compliance with regulation.
Understanding of bias and fairness in models: how to detect and mitigate them
Explainability & transparency: importance for user trust and regulation
Privacy & security: understanding implications of sensitive or PII data in AI
Alignment and guardrails in generative AI systems

Preferred Qualifications
Experienced Machine Learning/Artificial Intelligence PMs
Experience building 0-1 products, scaled platforms/ecosystem products, or ecommerce
Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics
Master's degree in Business

Why Airtel Digital?
Massive Scale: Your products will impact 400M+ users across sectors
Real-World Relevance: Solve meaningful problems for our customers — protecting our customers, spam & fraud prevention, personalised experiences, connecting homes.
Agility Meets Ambition: Work like a startup with the resources of a telecom giant
AI That Ships: We don't just run experiments. We deploy models and measure real-world outcomes
Leadership Access: Collaborate closely with CXOs and gain mentorship from India's top product and tech leaders

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us
Welcome to FieldAssist, where innovation meets excellence! We are a top-tier SaaS platform that specializes in optimizing Route-to-Market strategies and enhancing brand relationships within the CPG partner ecosystem. With over 1,00,000 sales users representing 600+ CPG brands across 10+ countries in South East Asia, the Middle East, and Africa, we reach 10,000 distributors and 7.5 million retail outlets every day. FieldAssist is a 'Proud Partner to Great Brands' like Godrej Consumers, Saro Africa, Danone, Tolaram, Haldiram's, Eureka Forbes, Bisleri, Nilon's, Borosil, Adani Wilmar, Henkel, Jockey, Emami, Philips, Ching's and Mamaearth, among others.
Do you crave a dynamic work environment where you can excel and enjoy the journey? We have the perfect opportunity for you!

Responsibilities
Build and maintain robust backend services and REST APIs using Python (Django, Flask, or FastAPI).
Develop end-to-end ML pipelines including data preprocessing, model inference, and result delivery.
Integrate and scale AI/LLM models, including RAG (Retrieval-Augmented Generation) and intelligent agents.
Design and optimize ETL pipelines and data workflows using tools like Apache Airflow or Prefect.
Work with Azure SQL and Cosmos DB for transactional and NoSQL workloads.
Implement and query vector databases for similarity search and embedding-based retrieval (e.g., Azure Cognitive Search, FAISS, or Pinecone).
Deploy services on Azure Cloud, using Docker and CI/CD practices.
Collaborate with cross-functional teams to bring AI features into product experiences.
Write unit/integration tests and participate in code reviews to ensure high code quality.
Develop and maintain applications using the .NET platform and environment.

Who we're looking for:
Strong command of Python 3.x, with experience in Django, Flask, or FastAPI.
Experience building and consuming RESTful APIs in production systems.
Solid grasp of ML workflows, including model integration, inferencing, and LLM APIs (e.g., OpenAI).
Familiarity with RAG, vector embeddings, and prompt-based workflows.
Proficient with Azure SQL and Cosmos DB (NoSQL).
Experience with vector databases (e.g., FAISS, Pinecone, Azure Cognitive Search).
Proficiency in containerization using Docker, and deployment on Azure Cloud.
Experience with data orchestration tools like Apache Airflow.
Comfortable working with Git, CI/CD pipelines, and observability tools.
Strong debugging, testing (pytest/unittest), and optimization skills.

Good to Have:
Experience with LangChain, transformers, or LLM fine-tuning.
Exposure to MLOps practices and Azure ML.
Hands-on experience with PySpark for data processing at scale.
Contributions to open-source projects or AI toolkits.
Background working in startup-like environments or cross-functional product teams.

FieldAssist on the Web:
Website: https://www.fieldassist.com/people-philosophy-culture/
Culture Book: https://www.fieldassist.com/fa-culture-book
CEO's Message: https://www.youtube.com/watch?v=bl_tM5E5hcw
LinkedIn: https://www.linkedin.com/company/fieldassist/

Posted 1 day ago

Apply

6.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Description
We are seeking a highly skilled and forward-thinking Senior Data Scientist with deep expertise in Generative AI, Agentic AI, and Deep Learning, ideally with experience in the Healthcare, Life Sciences, and Fintech domains. The ideal candidate will bring a strong blend of research acumen and enterprise solution delivery, capable of leading the design and deployment of AI-powered platforms for complex, high-impact business challenges. This is a strategic and hands-on role where you will work on next-gen AI products including intelligent assistants, enterprise automation tools, and AI-integrated decision systems designed to improve operational efficiency and user engagement in regulated industries.

Key Responsibilities
Lead the end-to-end development of LLM-powered solutions, including fine-tuning, prompt engineering, and dynamic context handling.
Design and implement Agentic AI workflows to automate and streamline business operations across healthcare, life sciences, and fintech use cases.
Translate business problems into scalable AI/ML pipelines, ensuring alignment with domain-specific compliance and data sensitivity requirements.
Build and optimize Reinforcement Learning-based digital twin models and autonomous decision systems.
Develop intelligent enterprise search and customer service automation tools using GenAI, NLP, and multi-intent query understanding.
Design robust, production-grade ML services using Python, Flask, TensorFlow, PyTorch, and containerized deployment via Docker.
Collaborate cross-functionally to build data-centric products and drive the research-to-deployment lifecycle.
Provide technical leadership in system architecture, MLOps integration, and solution optimization for real-time performance.
Contribute to internal documentation, training, and patentable innovation initiatives.

Required Skills & Qualifications
6+ years of professional experience in AI/ML development and deployment, with demonstrated success in real-world production environments.
Strong programming skills in Python and deep learning frameworks like TensorFlow, PyTorch, and Keras.
Hands-on experience with Generative AI, LLM fine-tuning, and agent-based AI systems.
Expertise in NLP, voice AI, and Reinforcement Learning.
Proven ability to develop and deploy solutions for Healthcare and Life Sciences, with an understanding of the domain's challenges and regulations. Experience in the fintech domain will be a strong plus.
Experience in designing APIs and building scalable microservices for AI inference and data processing.
Familiarity with DevOps practices, version control, CI/CD pipelines, and container technologies (e.g., Docker, Git).
Working knowledge of workflow automation frameworks.
Master's degree in Signal Processing, Machine Learning, Computer Science, or a related field (PhD or research background is a strong advantage).

Nice To Have
Experience building AI applications that support clinical workflows, financial analytics, or biomedical data platforms.
Background in digital therapeutics, financial risk models, or AI-driven operations tools.
Contribution to AI research publications, open-source projects, or patents.

Immediate joiners are preferred. (ref:hirist.tech)

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for contributing to the development and deployment of machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms as part of a larger team. Contributes to translating application requirements into machine learning problem statements. Analyzes and evaluates solutions, both internally generated and third-party supplied. Contributes to developing ways to use machine learning to solve problems and discover new products, working on a portion of the problem and collaborating with more senior researchers as needed. Works with moderate guidance in own area of knowledge.

Job Description
Core Responsibilities

About the Role:
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You'll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.

What You'll Do
Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference
Engineer features to enrich alerts using service relationships, business context, change history, and topological data
Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
Conduct A/B testing, offline validation, and live performance monitoring of ML models
Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers, and leadership
Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
Collaborate on the design of hybrid ML and rule-based systems to support dynamic correlation and intelligent alert grouping
Lead and support innovation efforts including POCs, POVs, and exploration of emerging AI/ML tools and strategies
Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
Participate in on-call rotations and provide operational support as needed

Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics, or a related field
3+ years of experience building and deploying ML solutions in production environments
2+ years working with AIOps, observability, or real-time operations data
Strong coding skills in Python (including pandas, NumPy, scikit-learn, PyTorch, or TensorFlow)
Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, and calibration
Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
Ability to collaborate effectively with SREs and platform teams, and participate in Agile/DevOps workflows
Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice To Have
Experience building or contributing to AIOps platforms (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC)
Experience working in streaming media, OTT platforms, or large-scale consumer services
Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
Experience with containerized model deployment using Docker or Kubernetes
Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
Experience designing APIs in Python or Go to expose models as services
Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
Certifications in cloud architecture, ML engineering, or data science specialization

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
2-5 Years

Posted 1 day ago

Apply

10.0 - 12.0 years

0 Lacs

Delhi, India

On-site

Sr/Lead ML Engineer
Placement type (FTE/C/CTH): C/CTH
Duration: 6 months with extension
Location: Phoenix, AZ; must be onsite 5 days a week
Start Date: 2 weeks from the offer
Interview Process: One and done
Reason for position: Integration of ML into the Observability Grafana platform
Team Overview: Onshore and offshore

Project Description
AI/ML for Observability (AIOps)
Developed machine learning and deep learning solutions for observability data to enhance IT operations.
Implemented time series forecasting, anomaly detection, and event correlation models.
Integrated LLMs using prompt engineering, fine-tuning, and RAG for incident summarization.
Built MCP client-server architecture for seamless integration with the Grafana ecosystem.

Duties/Day to Day Overview

Machine Learning & Model Development
Design and develop ML/DL models for:
Time series forecasting (e.g., system load, CPU/memory usage)
Anomaly detection in logs, metrics, or traces
Event classification and correlation to reduce alert noise
Select, train, and tune models using frameworks like TensorFlow, PyTorch, or scikit-learn
Evaluate model performance using metrics like precision, recall, F1-score, and AUC

ML Pipeline Engineering
Build scalable data pipelines for training and inference (batch or streaming)
Preprocess large observability datasets from tools like Prometheus, Kafka, or BigQuery
Deploy models using cloud-native services (e.g., GCP Vertex AI, Azure ML, Docker/Kubernetes)
Maintain retraining pipelines and monitor for model drift

LLM Integration for Observability Intelligence
Implement LLM-based workflows for summarizing incidents or logs
Develop and refine prompts for GPT, LLaMA, or other large language models
Integrate Retrieval-Augmented Generation (RAG) with vector databases (e.g., FAISS, Pinecone)
Control latency, hallucinations, and cost in production LLM pipelines

Grafana & MCP Ecosystem Integration
Build or extend MCP client/server components for Grafana
Surface ML model outputs (e.g., anomaly scores, predictions) in observability dashboards
Collaborate with observability engineers to integrate ML insights into existing monitoring tools

Collaboration & Agile Delivery
Participate in daily stand-ups, sprint planning, and retrospectives
Collaborate with data engineers on pipeline performance and data ingestion, frontend developers for real-time data visualizations, and SRE and DevOps teams for alert tuning and feedback loop integration
Translate model outputs into actionable insights for platform teams

Testing, Documentation & Version Control
Write unit, integration, and regression tests for ML code and pipelines
Maintain documentation on models, data sources, assumptions, and APIs
Use Git, CI/CD pipelines, and model versioning tools (e.g., MLflow, DVC)

Top Requirements (Must Haves)
AI/ML Engineer Skills
Design and develop machine learning algorithms and deep learning applications and systems for observability data (AIOps)
Hands-on experience with time series forecasting/prediction and anomaly detection ML algorithms
Hands-on experience with event classification and correlation ML algorithms
Hands-on experience integrating with LLMs via prompting, fine-tuning, and RAG for effective summarization
Working knowledge of implementing MCP client and server for the Grafana ecosystem, or similar exposure

Key Skills:
Programming languages: Python, R
ML Frameworks: TensorFlow, PyTorch, scikit-learn
Cloud platforms: Google Cloud, Azure
Front-End Frameworks/Libraries: Experience with frameworks like React, Angular, or Vue.js, and libraries like jQuery.
Design Tools: Proficiency in design software like Figma, Adobe XD, or Sketch.
Databases: Knowledge of database technologies like MySQL, MongoDB, or PostgreSQL.
Server-Side Languages: Familiarity with server-side languages like Python, Node.js, or Java.
Version Control: Experience with Git and other version control systems.
Testing: Knowledge of testing frameworks and methodologies.
Agile Development: Experience with agile development methodologies.
Communication and Collaboration: Strong communication and collaboration skills.

Experience: Lead – 10 to 12 years (onshore and offshore); Developers/Engineers – 6 to 8 years

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As an Emerging Tech Specialist at our Applied Research center, you will focus on applying scientific and technical research to solve practical problems and develop new products or processes within a specific industry or field. Your responsibilities will include conducting research, analyzing data, and developing solutions that can be implemented in real-world settings. You will define the research agenda and collaborate with academia and partners to conduct applied research. Additionally, you will be responsible for building tech playbooks that can be utilized by product and implementation teams.

Your key responsibilities will involve researching emerging tech trends, the ecosystem of players, use cases, and their impact on client businesses. This will include scanning and curating startups, universities, and tech partnerships to create an innovation ecosystem. You will rapidly design and develop Proof of Concepts (PoCs) in emerging tech areas, sharing design specifications with team members, integrating, and testing components. Furthermore, you will contribute to thought leadership by developing showcases that demonstrate the application of emerging technologies in a business context.

As part of the Applied Research Center activities, you will contribute to the design, development, testing, and implementation of proof of concepts in emerging tech areas. You will also be involved in problem definition and requirements analysis, developing reusable components, and ensuring compliance with coding standards and secure coding guidelines. Your role will also include innovation consulting, where you will understand client requirements and implement solutions using emerging tech expertise. Additionally, you will be responsible for talent management, mentoring the team to acquire identified emerging tech skills, and participating in demo sessions and hackathons. You will work with startups to provide innovative solutions to client problems and enhance our offerings.

In terms of technical requirements, you should have expertise in various emerging areas such as Advanced AI, New Interaction Models, Platforms and Protocols, Cybersecurity, Quantum, Autonomous Machines, and Emerging Research areas like Brain AGI and Space Semicon. You will also need advanced theoretical knowledge, experimental design expertise, data analysis skills, prototype development capabilities, and research tool proficiency. Preferred skills include experience in User Experience Design, Artificial Intelligence, Cybersecurity Competency Management, Machine Learning, Robotics Algorithms, and X Reality (XR) technologies. Soft skills like a collaborative mindset, communication skills, problem-solving approach, intellectual curiosity, and commercial awareness will be beneficial for this role.

Posted 1 day ago

Apply

Exploring Inference Jobs in India

With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.
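
To make the day-to-day work concrete, here is a minimal sketch in Python (using NumPy and SciPy, with entirely made-up numbers) of the kind of basic statistical inference these roles involve: estimating a population mean with a confidence interval and running a simple hypothesis test.

  # Illustrative only: the sample below is synthetic, not real data.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(42)
  sample = rng.normal(loc=52.0, scale=8.0, size=200)   # hypothetical measurements

  # Point estimate and 95% confidence interval for the population mean
  mean = sample.mean()
  sem = stats.sem(sample)                               # standard error of the mean
  margin = stats.t.ppf(0.975, df=len(sample) - 1) * sem
  ci_low, ci_high = mean - margin, mean + margin

  # One-sample t-test: is the population mean different from a reference value of 50?
  t_stat, p_value = stats.ttest_1samp(sample, popmean=50.0)

  print(f"mean={mean:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
  print(f"t={t_stat:.2f}, p={p_value:.4f}")

A small p-value here would suggest the observed mean is unlikely if the true mean were 50, which is exactly the style of reasoning probed in the interview questions further down this page.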

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These major cities are known for their thriving tech industries and are actively hiring professionals with expertise in inference.

Average Salary Range

The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.

Related Skills

In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.

Interview Questions

  • What is the difference between inferential statistics and descriptive statistics? (basic)
  • How do you handle missing data in a dataset when performing inference? (medium)
  • Can you explain the bias-variance tradeoff in the context of inference? (medium)
  • What are the assumptions of linear regression and how do you test them? (advanced)
  • How would you determine the significance of a coefficient in a regression model? (medium)
  • Explain the concept of p-value and its significance in hypothesis testing. (basic)
  • Can you discuss the difference between frequentist and Bayesian inference methods? (advanced)
  • How do you handle multicollinearity in a regression model? (medium)
  • What is the Central Limit Theorem and why is it important in statistical inference? (medium)
  • How would you choose between different machine learning algorithms for a given inference task? (medium)
  • Explain the concept of overfitting and how it can affect inference results. (medium)
  • Can you discuss the difference between parametric and non-parametric inference methods? (advanced)
  • Describe a real-world project where you applied inference techniques to draw meaningful conclusions from data. (advanced)
  • How do you assess the goodness of fit of a regression model in inference? (medium)
  • What is the purpose of cross-validation in machine learning and how does it impact inference? (medium) (see the sketch after this list)
  • Can you explain the concept of Type I and Type II errors in hypothesis testing? (basic)
  • How would you handle outliers in a dataset when performing inference? (medium)
  • Discuss the importance of sample size in statistical inference and hypothesis testing. (basic)
  • How do you interpret confidence intervals in an inference context? (medium)
  • Can you explain the concept of statistical power and its relevance in inference? (medium)
  • What are some common pitfalls to avoid when performing inference on data? (basic)
  • How do you test the normality assumption in a dataset for conducting inference? (medium)
  • Explain the difference between correlation and causation in the context of inference. (medium)
  • How would you evaluate the performance of a classification model in an inference task? (medium)
  • Discuss the importance of feature selection in building an effective inference model. (medium)
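
Several of the questions above (notably those on overfitting and cross-validation) lend themselves to a short worked example. The sketch below, using scikit-learn on synthetic data, is one illustrative way to show how cross-validation exposes overfitting: an unconstrained decision tree fits its training data almost perfectly, yet its cross-validated accuracy can trail that of a shallower, better-regularized tree. Exact numbers will vary with the data and seed.

  # Illustrative sketch with synthetic data; not tied to any specific employer or dataset.
  from sklearn.datasets import make_classification
  from sklearn.model_selection import cross_val_score
  from sklearn.tree import DecisionTreeClassifier

  X, y = make_classification(n_samples=500, n_features=20, random_state=0)

  for depth in (None, 3):  # None lets the tree grow until leaves are pure (overfit-prone)
      model = DecisionTreeClassifier(max_depth=depth, random_state=0)
      train_acc = model.fit(X, y).score(X, y)             # accuracy on the training data itself
      cv_acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold cross-validated accuracy
      print(f"max_depth={depth}: train={train_acc:.2f}, cv={cv_acc:.2f}")

The gap between training accuracy and cross-validated accuracy is the practical signal of overfitting that these questions are getting at.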

Closing Remark

As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
