Home
Jobs

1020 Inference Jobs - Page 27

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Belthra Road, Uttar Pradesh, India

On-site


Job Purpose China and Japan play a critical role in the growth and success of the GSK organization. The creation of Asia Development organization provides an opportunity for the biostatistics team to utilize statistical methods to further enhance our disease understanding and to help determine the optimal analytical methods using appropriate data sources in Asia. This new job description reflects the accountabilities and skill sets required to meet the needs of leading innovation in Asia Development Biostatistics. This new role is created to help shape the future of the Asia Development Biostatistics function and transform the way in which GSK uses data and quantitative thinking to drive disease-aligned decision-making in R&D. This individual will be able to place statistical thinking at the heart of regional and global R&D decision-making; to ensure that the fit-for-purpose statistical methodology, such as predictive models and well-designed experiments and trials, deliver robust evidence as the input to those decisions – ultimately making the R&D process more efficient and increasing the probability of success. Key Responsibilities Enhance our disease understanding through the implementation of advanced statistical and/or machine learning methodologies across a range of areas from complex models through to market access activities involving real world data to directly support business critical projects. Identify opportunities to utilize statistical methods to further enhance our disease understanding. Collaboration with cross regional and global functional quantitative groups to help determine the optimal analytical methods using appropriate data sources. Support the upskilling of Statisticians and other quantitative disciplines in statistical methodologies for disease understanding in Asia including their direct application in clinical development planning. Lead the planning and implementation of strategic projects within Asia Development Biostatistics, ensuring they have clear objectives, achievable timelines, intermediate milestones, and well-defined criteria for success Present statistical principles, methods, study designs, and/or results transparently and precisely to GSK researchers and management across all levels both in Asia and the global organization, ensuring appropriate understanding and contributing to better quantitative decision-making across drug development Interact with external Asia scientific groups (industry, academia, regulators) and vendors through presentations, publications, and collaborations, in order to contribute to innovation and uptake of novel statistical methodology and tools within GSK Asia and across the pharmaceutical industry. Serve as a member of the Asia Development Biostatistics Leadership Team Basic Qualifications PhD in Statistics or closely related field 10+ years experience in the field of drug development 5+ years experience as a people leader Knowledge/understanding of drug development process Preferred Experience Or Qualifications Demonstrated ability to influence management, stakeholders, and staff to adapt to positive change. Demonstrated digital fluency and experience in applying creative thinking/business analysis skills to improve or solve business problems. 
Experience applying a broad array of foundational statistical methodologies, including but not limited to methods for study/experimental design, linear models, generalized linear models, linear and generalized mixed models, survival analysis, Bayesian methods, longitudinal data analysis, and causal inference. Knowledge of or interest in multiple aspects of drug development, including drug discovery, preclinical development, and clinical development. Experience working in high-performance computing environments, either for running intensive computational methods or for handling large simulation experiments. Evidence of ability to publish quantitative methodological research or scientific research findings using advanced quantitative methodologies. Demonstrable evidence of influencing and partnering with diverse stakeholders of varying levels of seniority. Managerial experience. Good communication and influencing skills, curiosity, and a passion for patients. Knowledge of and work experience in clinical statistics. Business acumen: understands the implications of decisions from a broad business perspective and uses this knowledge to influence decision-making in Asia Development. Demonstrated leadership in the application of innovative approaches with portfolio-level impact. Works to move the Asia Development statistics organization to a leadership position in the industry, both regionally and globally. A strong sense of initiative, urgency, drive, pragmatism and judgement, and overall an ability to make things happen. Proven ability to deliver to demanding deadlines whilst maintaining the highest quality.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


🚀 We’re hiring: Senior Full-Stack AI & LLM Engineer (Team Lead) – Pune | WFO | Immediate Joiners Only Own the full stack of our AI-powered web products and mentor a high-impact team, onsite in our Pune office. What you’ll tackle Craft slick UIs in React/TypeScript and robust APIs in Node/Express Model data in MongoDB & PostgreSQL / Supabase (pgvector) Ship production-grade LLM & agentic pipelines (LangChain, CrewAI, RAG, tool-calling) Drive LLMOps: prompt/version control, evals, cost/latency tuning Lead 4-8 engineers through code reviews, sprints & architecture sessions What you bring 8+ yrs dev experience, 2+ yrs leading teams Deep React, Node, Python & data-science expertise Proven record shipping LLM-backed features to production CI/CD, Docker, IaC (AWS/GCP/Terraform) mastery Excellent communication & stakeholder skills Bonus points: vector DBs, streaming inference, open-source contributions Stack glimpse: React 18 • Tailwind • Node 20 • MongoDB Atlas • Supabase/Postgres • Python 3.11 • LangChain • CrewAI • Hugging Face • Docker • AWS/GCP • Terraform 📍 Location: Magarpatta, Pune — Work From Office (WFO) only ⏱ Immediate joiners preferred — ready to start within 2 weeks 📧 Apply: Send résumé/GitHub + a brief note on a recent LLM project to adi@iqan.ai 🤝 Schedule: Connect with shyamlee@ampleint.com to book your initial discussion Let’s build the future of intelligent software together! 💡
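
As a rough, hypothetical sketch of the pgvector-backed retrieval step this stack implies (the table name, column names, connection string, and the embed() helper are assumptions, not details from the listing), a similarity query might look like:

```python
# Minimal sketch of a pgvector similarity search for a RAG pipeline.
# Assumes a table documents(id, content, embedding vector(384)) and an
# embed() helper returning list[float]; both are hypothetical.
import psycopg2

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in your embedding model here")

def top_k_chunks(query: str, k: int = 5) -> list[str]:
    vec = embed(query)
    vec_literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text literal
    with psycopg2.connect("dbname=app user=app") as conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT content
            FROM documents
            ORDER BY embedding <-> %s::vector  -- L2 distance operator from pgvector
            LIMIT %s
            """,
            (vec_literal, k),
        )
        return [row[0] for row in cur.fetchall()]
```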

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Minimum qualifications: Master's degree in Statistics or Economics, a related field, or equivalent practical experience. 8 years of work experience using analytics to solve product or business problems, coding (e.g., Python, R, SQL), querying databases or statistical analysis, or 6 years of work experience with a PhD degree. Experience with statistical data analysis such as linear models, multivariate analysis, causal inference, or sampling methods. Experience with statistical software (e.g., SQL, R, Python, MATLAB, pandas) and database languages along with Statistical Analysis, Modeling and Inference. Preferred qualifications: Experience translating analysis results into business recommendations. Experience understanding the potential outcomes framework and with causal inference methods (e.g., split-testing, instrumental variables, difference-in-differences methods, fixed effects regression, panel data models, regression discontinuity, matching estimators). Experience selecting tools to solve data analysis issues. Experience articulating business questions and using data to find a solution. Knowledge of structural econometric methods. About the job At Google, data drives all of our decision-making. Quantitative Analysts work all across the organization to help shape Google's business and technical strategies by processing, analyzing and interpreting huge data sets. Using analytical excellence and statistical methods, you mine through data to identify opportunities for Google and our clients to operate more efficiently, from enhancing advertising efficacy to network infrastructure optimization to studying user behavior. As an analyst, you do more than just crunch the numbers. You work with Engineers, Product Managers, Sales Associates and Marketing teams to adjust Google's practices according to your findings. Identifying the problem is only half the job; you also figure out the solution. Responsibilities Interact cross-functionally with a variety of leaders and teams, and work with Engineers and Product Managers to identify opportunities for design and to assess improvements for advertising measurement products. Collaborate with teams to define questions about advertising effectiveness, incrementality assessment, the impact of privacy, user behavior, brand building, bidding, etc., and develop and implement quantitative methods to answer those questions. Work with large, complex data sets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct analyses that include data gathering and requirements specification, exploratory data analysis (EDA), model development, and delivery of results to business partners and executives. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop knowledge of Google data structures and metrics, advocating for changes where needed for product development. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
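
For context on the causal-inference methods named above, here is a minimal difference-in-differences sketch using statsmodels; the toy DataFrame and column names are illustrative, not Google data.

```python
# Difference-in-differences via an interaction term: the coefficient on
# treated:post estimates the average treatment effect on the treated.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [10.0, 11.0, 10.5, 14.0, 9.0, 9.5, 9.2, 10.1],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # exposed vs. control group
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],   # before vs. after the change
})

model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # DiD estimate
```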

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Khairatabad, Telangana, India

On-site


Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Ashutosh Panda Sponsorship Available: No Relocation Assistance Available: No Job Responsibilities You apply standards for all ongoing development and configuration, and approve all technical solutions by business process. You maintain the standards for SAP procedures and documentation, including functional specifications and technical specifications. You evaluate cross-functional solutions with other business process principals, advise the project team on the use of standard procedures and tools, and implement new business strategies through configuration interaction and the creation of programming specifications. You are responsible for the completeness and correctness of specifications as per defined quality standards. You provide knowledge transfer on current technologies to stakeholders and architects, and conduct cost/benefit analysis by evaluating alternative designs. You support the definition and documentation of test strategies, test plans, and test cases by assessing functional requirements. You support the resolution of defects on behalf of the functional team in the Application Life Management tool. You assume a leadership role in medium-sized initiatives, serving as a key contributor, facilitator, or group lead. You are responsible for traceability of defined requirements through test cases. You support the internal/external audit processes as the one point of contact for cross-functional areas to ensure compliance. You troubleshoot, investigate, and persist in developing solutions to problems with unknown causes where precedents do not exist, applying logic, inference, creativity, and initiative. You provide cross-functional support and maintenance for responsible business/technical areas. Job Qualifications You have a Bachelor's degree in MIS, Computer Science, Engineering, Technology, Business Administration, or in lieu of a degree, 12 years of IT experience. You have at least 7 years of experience in IT and 3 years of SAP SD experience. Hands-on experience with configuring key SD elements such as pricing, billing, shipping, sales order processing, and credit management. Strong grasp of sales and distribution processes and how they integrate with other modules (MM, PP, FI, etc.) You have strong capability to perform configurations and/or developments for SAP-related applications. You possess strong cross-functional solution design capabilities across functions and applications. You have the ability to explore and implement new processes and technologies. You have strong written and verbal communication skills with a very good command of spoken and written English. You are able to work flexible hours as required on special occasions. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. 
For more information about Goodyear and its products, go to www.goodyear.com/corporate

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Senior Python Developer Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 3–8 years About Darwix AI Darwix AI is one of India’s fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real-time. We are backed by marquee VCs, 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world. Role Overview As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations. This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment. Key Responsibilities System Architecture & API Development Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference Data Pipelines & Integrations Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy AI/ML Enablement Work closely with AI engineers to build infrastructure for LLM/RAG pipelines , vector DBs , and real-time AI decisioning Implement backend support for prompt orchestration , Langchain flows , and function-calling interfaces Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines Database & Storage Design Optimize database design and queries using MySQL , PostgreSQL , and MongoDB Architect and manage Redis and Kafka for caching, queueing, and real-time communication DevOps & Quality Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments Identify and resolve bottlenecks related to performance, memory, or data throughput Adhere to best practices in code quality, testing, security, and documentation Leadership & Collaboration Mentor junior developers and participate in code reviews Collaborate cross-functionally with product, AI, design, and sales engineering teams Contribute to architectural decisions, roadmap planning, and scaling strategies Qualifications 4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming Hands-on experience with FastAPI , Django , or Flask in production environments Proven experience building scalable microservices, data pipelines, and backend systems that support live applications Strong command over REST API architecture , database optimization, and 
data modeling Solid experience working with web scraping tools, automation frameworks, and external API integrations Knowledge of AI tools like Langchain, HuggingFace, Vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving Bonus Prior experience in an early-stage SaaS startup or AI-first product environment Contributions to open-source Python projects or developer communities Experience working with real-time streaming systems (Kafka, Redis Streams, WebSockets) What We Offer Competitive fixed salary + performance-linked incentives Equity options for high-impact performers Opportunity to work on cutting-edge GenAI and SaaS products used by global enterprises Autonomy, rapid decision-making, and direct interaction with founders and senior leadership High-growth environment with clear progression toward Tech Lead or Engineering Manager roles Access to tools, compute, and learning resources to accelerate your technical and leadership growth
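
A minimal, hypothetical sketch of the kind of FastAPI microservice endpoint this posting describes (the route, request/response models, and the run_model() placeholder are assumptions, not the company's actual API):

```python
# Minimal FastAPI service wrapping a model call behind a typed REST endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    transcript: str

class ScoreResponse(BaseModel):
    score: float

def run_model(text: str) -> float:
    # Placeholder for the real inference call (LLM, classifier, etc.).
    return min(1.0, len(text) / 1000)

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    return ScoreResponse(score=run_model(req.transcript))

# Run locally with: uvicorn service:app --reload  (module name assumed)
```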

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Lead Python Engineer – Backend & AI Integrations Location: Gurgaon Working Days: Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM – 8:00 PM Experience : 3–8 years Function: Backend Engineering | AI Platform Integration | Scalable Systems About Darwix AI Darwix AI is one of India’s fastest-growing GenAI SaaS companies powering real-time decision intelligence for enterprise revenue teams. Our platform transforms frontline performance through: Transform+: Live agent nudges & call intelligence Sherpa.ai: GenAI-powered multilingual sales coach Store Intel: Computer vision for in-store sales analysis We serve market leaders across BFSI, real estate, and retail—including IndiaMart, Wakefit, GIVA, Sobha, and Bank Dofar. Our stack processes thousands of voice conversations daily, powers real-time dashboards, and delivers high-stakes nudges that directly impact multi-crore revenue pipelines. We are building at the intersection of voice AI, backend scale, and real-time analytics. You will play a key role in shaping the tech foundation that drives this mission. Role Overview We’re looking for a Lead Python Engineer to architect, own, and scale the core backend systems that power Darwix AI’s GenAI applications. You’ll work at the confluence of backend engineering, data pipelines, speech processing, and AI model integrations—supporting everything from real-time call ingestion to multi-tenant analytics dashboards. You will lead a high-performing engineering pod, collaborate with product, AI, and infra teams, and mentor junior engineers. This is a high-impact, ownership-first role with direct influence over product velocity, system performance, and enterprise reliability. Key Responsibilities 1. Backend Architecture & Infrastructure Design and maintain scalable APIs and backend systems using Python (FastAPI) Optimize data flow for speech-to-text transcription, diarization outputs, and call scoring workflows Build and maintain modular service components (STT, scoring engine, notification triggers) Manage asynchronous job queues (Celery, Redis) for large batch processing Ensure high availability, security, and scalability of backend systems across geographies 2. AI/ML Integration & Processing Pipelines Integrate with LLMs (OpenAI, Cohere, Hugging Face) and inference APIs for custom use cases Handle ingestion and parsing of STT outputs (WhisperX, Deepgram, etc.) Work closely with the AI team to productionize model outputs into usable product layers Manage embedding pipelines, RAG workflows, and retrieval caching across client tenants 3. Database & Data Engineering Design and maintain schemas across PostgreSQL, MongoDB, and TimescaleDB Optimize read/write operations for large call data, agent metrics, and dashboard queries Collaborate on real-time analytics systems used by enterprise sales teams Implement access controls and tenant isolation logic for sensitive sales data 4. Platform Reliability, Monitoring & Scaling Collaborate with DevOps team on infrastructure orchestration (Docker, Kubernetes, GitHub Actions) Set up alerting, logging, and auto-recovery protocols for uptime guarantees Drive version control and CI/CD automation for releases with minimal regression Support benchmarking, load testing, and latency reduction initiatives 5. 
Technical Leadership & Team Collaboration Mentor junior engineers, review pull requests, and enforce code quality standards Collaborate with product managers on scoping and technical feasibility Break down large tech initiatives into sprints and delegate effectively Take ownership of technical decisions and present trade-offs with clarity Required Skills & Experience 3–8 years of hands-on backend engineering experience, primarily in Python Strong grasp of FastAPI, REST APIs, job queues (Celery), and async workflows Solid experience with relational and NoSQL databases: PostgreSQL, MongoDB, Redis Familiarity with production systems involving large-scale API calls or streaming data Prior experience integrating 3rd-party APIs (e.g., OpenAI, CRM, VoIP, or transcription vendors) Working knowledge of Docker, CI/CD pipelines (GitHub Actions preferred), and basic infra scaling Experience working in high-growth SaaS or data-product companies Bonus Skills (Preferred, Not Mandatory) Experience with LLM applications, vector stores (FAISS, Pinecone), and RAG pipelines Familiarity with speech-to-text engines (WhisperX, Deepgram) and audio processing Prior exposure to multi-tenant SaaS systems with role-based access and usage metering Knowledge of OAuth2, webhooks, event-driven architectures Experience with frontend collaboration (Angular/React) and mobile APIs Contributions to open-source projects, technical blogs, or developer communities
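
As an illustration of the Celery-on-Redis asynchronous job pattern referenced under backend architecture, here is a minimal sketch; the broker URLs, task name, and the STT placeholder are assumptions rather than details of the actual system.

```python
# Celery worker that processes call recordings asynchronously from a Redis queue.
from celery import Celery

app = Celery("calls",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(bind=True, max_retries=3)
def transcribe_call(self, audio_url: str) -> dict:
    try:
        # Placeholder for the real STT call (WhisperX, Deepgram, etc.).
        text = f"transcript for {audio_url}"
        return {"audio_url": audio_url, "transcript": text}
    except Exception as exc:
        # Back off and retry transient failures up to max_retries times.
        raise self.retry(exc=exc, countdown=30)

# Producer side: transcribe_call.delay("s3://bucket/call-123.wav")
```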

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Senior Python Developer – Backend Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 4–8 Years About Darwix AI Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines , agent assist systems , AI-based real-time decisioning , and scalable SaaS delivery . Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational. Role Overview We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services—from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices. Key Responsibilities 1. Backend API Development Design and implement scalable, secure RESTful APIs using FastAPI , Flask , or Django REST Framework Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting Optimize API performance with proper indexing, pagination, caching, and load management strategies Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints 2. AI Integrations & Inference Orchestration Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows Build services to manage prompt templates, chaining logic, and LangChain flows Deploy and manage vector database integrations (e.g., FAISS , Pinecone , Weaviate ) for real-time search and recommendation pipelines 3. Database Design & Optimization Model and maintain relational databases using MySQL or PostgreSQL ; experience with MongoDB is a plus Optimize SQL queries, schema design, and indexes to support low-latency data access Set up background jobs for session archiving, transcript cleanup, and audio-data binding 4. System Architecture & Deployment Own backend deployments using GitHub Actions , Docker , and AWS EC2 Ensure high availability of services through containerization, horizontal scaling, and health monitoring Manage staging and production environments, including DB backups, server health checks, and rollback systems 5. Security, Auth & Access Control Implement robust authentication (JWT, OAuth), rate limiting , and input validation Build role-based access controls (RBAC) and audit logging into backend workflows Maintain compliance-ready architecture for enterprise clients (data encryption, PII masking) 6. 
Code Quality, Documentation & Collaboration Write clean, modular, extensible Python code with meaningful comments and documentation Build test coverage (unit, integration) using PyTest, unittest, or Postman/Newman Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team Required Skills & Qualifications Technical Expertise 3–8 years of experience in backend development with Python and PHP. Strong experience with FastAPI, Flask, or Django (at least one in production-scale systems) Deep understanding of RESTful APIs, microservice architecture, and asynchronous Python patterns Strong hands-on experience with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB, Redis, or Elasticsearch Experience with containerized deployment using Docker and cloud platforms like AWS or GCP Familiarity with Git, GitHub, CI/CD pipelines, and Linux-based server environments Plus Points Experience working on audio processing, speech-to-text (STT) pipelines, or RAG architectures Hands-on experience with vector databases or LangChain, LangGraph Exposure to real-time systems, WebSockets, and stream processing Basic understanding of frontend integration workflows (e.g., with HTML/CSS/JS interfaces)
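
A small, hedged sketch of the JWT authentication and role-based access control mentioned in the security section, using FastAPI and PyJWT; the secret key, claim names, and route are illustrative assumptions.

```python
# FastAPI dependency that validates a bearer token and enforces a role claim.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
SECRET_KEY = "change-me"  # assumption: symmetric HS256 signing

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    try:
        return jwt.decode(creds.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")

@app.get("/admin/metrics")
def admin_metrics(user: dict = Depends(current_user)):
    if user.get("role") != "admin":  # simple role-based access check
        raise HTTPException(status_code=403, detail="Forbidden")
    return {"status": "ok"}
```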

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Backend Development Engineer Primary Tech stack: Python Experience level: 2 years and above Salary: 5-8LPA Location: T-Hub, Hyderabad (In-office) Joining: Immediate or as per availability "Minimum two years of experience as a Backend Developer with leadership skills" We are looking for a skilled Backend Developer with expertise in AI-driven applications to join our dynamic tech team. In this role, you will be instrumental in building and maintaining the backend logic of AI-driven web applications, focusing on developing scalable infrastructure for large language models (LLMs), agents, and AI workflows. Your expertise in Python, LangChain, LangGraph, and server-side logic will be crucial in developing robust and intelligent backend systems. This position is ideal for someone who thrives in a collaborative environment, is passionate about AI, and is eager to contribute to innovative product development. What will you do? (but not limited to) Develop and manage APIs and microservices using Python, FastAPI, or Flask to support AI-driven functionalities. Design, implement, and optimize workflows leveraging LangChain, LangGraph, and other AI frameworks. Build and deploy scalable AI agents and workflows using LLMs (GPT, Claude, Gemini, etc.). Integrate various AI models, fine-tune them, and optimise their inference performance. Develop and maintain vector databases and retrieval-augmented generation (RAG) pipelines for enhanced AI responses. Collaborate with front-end developers and UI/UX designers to integrate AI-driven backend logic with the user interface. Implement robust security measures, authentication, and data protection for AI applications. Optimize application performance, ensuring minimal latency in AI-driven workflows. Work with cloud platforms like AWS, Azure, or GCP for model deployment and scalable infrastructure management. Debug, troubleshoot, and improve applications while ensuring robustness and scalability. Mentor junior developers and share knowledge about AI best practices and advancements. Stay updated with emerging trends in AI, LLMs, and backend development. Work closely with the product team to translate business requirements into robust technical solutions. Who can apply? Bachelor's degree or higher in Computer Science, Software Engineering, or related field. Minimum 2 years of experience as a Backend Developer, with a proven track record in developing and deploying web applications. Proficiency in Python and frameworks such as FastAPI, Flask, or Django. Familiarity with embeddings, fine-tuning LLMs and working with vector databases like Pinecone. Strong experience in server management, including setup, administration, and optimisation. Experience in integrating and working with LLM models will be considered a plus, alongside an understanding of the JavaScript (Node.js/Express.js/MongoDB) stack. In-depth knowledge of security practices, including data protection and vulnerability management. Expertise in database administration and performance tuning. Familiarity with code versioning tools, such as Git/GitHub. Strong problem-solving skills, with an ability to think algorithmically and a keen eye for debugging and optimising code. Experience in a startup environment and building products from scratch is highly preferred. Excellent communication skills and the ability to work effectively in a team. Strong project management skills, with experience in agile methodologies. What do we offer? 
A supportive and flexible workplace that promotes work-life balance, recognising and appreciating your contributions. The autonomy to embrace, explore, and experiment with your ideas. An inclusive environment where your individuality is highly valued, fostering open dialogue and diverse perspectives. Additional Benefits Cross-functional exposure to diverse teams, enabling a holistic understanding of all business functions. Engaging social events that foster camaraderie and networking opportunities with various startups. A fantastic problem-solving team that critiques ideas constructively and gels together, creating a better version of every idea. About Shoshin Tech We're more than just a tech startup — we're on a mission to build a platform that empowers professionals, educators, and researchers to work smarter, faster, and with greater impact. Our tools are designed not just for productivity, but for transformation. If you possess a creative and innovative mindset, an entrepreneurial spirit, and a can-do attitude, along with a genuine passion for cutting-edge technology, a drive to facilitate transformative learning experiences, or a commitment to promoting well-being for all, and wish to be part of a high-performance team enthusiastic about operational excellence, you’ll love it here. Shoshin Tech is an Equal Opportunity Employer: we celebrate diversity and are committed to creating an inclusive environment for all teams. We are committed to working with and providing reasonable accommodations to individuals with disabilities.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Ambattur, Tamil Nadu, India

On-site


About Beatly.AI Beatly.AI is a fast-scaling AI startup transforming healthcare through neural network-powered innovation. Our mission is to build clinically meaningful, data-driven solutions that empower medical professionals and enhance patient outcomes. We are seeking a Senior AI/ML Team Engineer to drive both hands-on model development and technical leadership across our deep learning initiatives. About the Role As a Senior AI/ML Team Engineer, you will play a dual role: writing production-grade code and developing and deploying neural network models, while also leading a team of engineers through key initiatives. You’ll be responsible for full-lifecycle development of AI systems—from research and experimentation to scalable cloud deployment—and for shaping our infrastructure and team practices. Key Responsibilities Design and build advanced neural network models (CNNs, LSTMs, GRUs, autoencoders, time-series networks) for real-world healthcare applications. Lead the development of modular, reusable ML pipelines for training, validation, and inference. Actively contribute code and own model architecture, optimization, and deployment workflows. Architect cloud-ready, scalable ML systems using AWS/GCP/Azure, with containers (Docker) and orchestration (Kubernetes). Implement robust MLOps processes including CI/CD pipelines, model versioning, monitoring, and rollback mechanisms. Collaborate with data, product, and engineering teams to define AI use cases and deliver end-to-end solutions. Guide and mentor a team of engineers—conduct code reviews, help troubleshoot, and support their development. Optimize compute resource usage (e.g., GPU scheduling, mixed precision, parallelism) for large-scale training jobs. Track and improve performance metrics (accuracy, latency, reliability), ensuring production-readiness of deployed models. Evaluate new techniques in neural networks (excluding LLMs) and assess their relevance for healthcare use cases. Skills & Qualifications 4+ years of experience in AI/ML engineering with a strong track record of shipping deep learning solutions. 1+ years of experience in leading or mentoring technical teams, while continuing to build and ship code. Strong proficiency in Python, with hands-on expertise in PyTorch, TensorFlow, or Keras. Deep understanding of neural network architectures for time-series, image, or signal-based tasks. Solid experience deploying models on cloud platforms (AWS/GCP/Azure), using tools like Docker, Kubernetes, and Terraform. Experience with ML pipeline tools (MLFlow, Airflow, DVC, Weights & Biases) and experiment tracking. Proficient in software engineering fundamentals, microservices, API integration, and scalable system design. Excellent written and verbal communication skills with a leadership mindset. Preferred Qualifications Experience with healthcare data: ECG, imaging, vitals, or clinical time-series. Familiarity with HIPAA, FDA, or healthcare compliance workflows. Background in distributed model training and GPU optimisation (e.g., Horovod, TorchElastic). Master’s or PhD in Computer Science or a related discipline. What We Offer Strategic leadership role with deep technical involvement. A mission-driven environment solving problems that matter. Real ownership over architecture, code, and team development. Excellent compensation with a top-tier salary. Dynamic, collaborative team culture with space to innovate and grow.
Apply Now If you're an experienced deep learning engineer ready to lead and build, and passionate about using neural networks to change the future of healthcare, we want to meet you.
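
As a hedged illustration of the time-series networks this posting refers to, here is a minimal PyTorch LSTM classifier; the input size, window length, and class count are placeholders, not Beatly.AI specifics.

```python
# Small LSTM classifier for fixed-length physiological time series (e.g., ECG windows).
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # classify from the last time step

model = SeqClassifier()
dummy = torch.randn(8, 250, 1)         # batch of 8 windows, 250 samples, 1 channel
print(model(dummy).shape)              # torch.Size([8, 2])
```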

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi, India

On-site


We're looking for a hands-on Computer Vision Engineer who thrives in fast-moving environments and loves building real-world, production-grade AI systems. If you enjoy working with video, visual data, cutting-edge ML models, and solving high-impact problems, we want to talk to you. This role sits at the intersection of deep learning, computer vision, and edge AI — building scalable models and intelligent systems that power our next-generation sports tech platform. Key Responsibilities Design, train, and optimize deep learning models for real-time object detection, tracking, and video understanding. Implement and deploy AI models using frameworks like PyTorch, TensorFlow/Keras, and Transformers. Work with video and image datasets using OpenCV, YOLO, NumPy, Pandas, and visualization tools like Matplotlib. Collaborate with data engineers and edge teams to deploy models on real-time streaming pipelines. Optimize inference performance for edge devices (Jetson, T4, etc.) and handle video ingestion workflows. Prototype new ideas rapidly, conduct A/B tests, and validate improvements in real-world scenarios. Document processes, communicate findings clearly, and contribute to our growing AI knowledge base. Requirements Strong command of Python and familiarity with C/C++. Experience with one or more deep learning frameworks: PyTorch, TensorFlow, Keras. Solid foundation in YOLO, Transformers, or OpenCV for real-time visual AI. Understanding of data preprocessing, feature engineering, and model evaluation using NumPy, Pandas, etc. Good grasp of computer vision, convolutional neural networks (CNNs), and object detection techniques. Exposure to video streaming workflows (e.g., GStreamer, FFmpeg, RTSP). Ability to write clean, modular, and efficient code. A self-starter and builder at heart — comfortable with ambiguity and fast execution. Passionate about solving real-world problems using AI. Comfortable working in a startup culture: lean teams, rapid iteration, high ownership. A collaborative team player who’s curious, coachable, and open to feedback. Undergraduate degree (Master’s or PhD preferable) in Computer Science, Artificial Intelligence, or a related discipline is preferred. A strong academic background is a plus. Nice To Have Experience deploying models in production, especially on GPU/edge devices. Interest in reinforcement learning, sports analytics, or real-time systems. About the Company: We are a young startup developing our own indigenous sports analytical platform called TECH AT PLAY. It is the ultimate analytics ecosystem for athletes: it automatically records, categorizes, and analyzes performance while live-streaming the game to sports fans across the globe, all from a single platform.
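
A minimal sketch of the kind of frame-by-frame video inference loop described above; the stream URL and the detect() helper are placeholders (a real system would plug in a YOLO or similar detector).

```python
# Read frames from a stream and run a per-frame detector, drawing boxes on each frame.
import cv2

def detect(frame):
    # Placeholder for a real detector (e.g., a YOLO model); returns [(x, y, w, h), ...].
    return []

cap = cv2.VideoCapture("rtsp://camera.local/stream")  # hypothetical source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```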

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Lowe’s Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 17 million customer transactions a week in the U.S. With total fiscal year 2022 sales of over $97 billion, approximately $92 billion of sales were generated in the U.S., where Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts. About Lowe’s India At Lowe's India, we are the enablers who help create an engaging customer experience for our $97 billion home improvement business at Lowe's. Our 4000+ associates work across technology, analytics, business operations, finance & accounting, product management, and shared services. We leverage new technologies and find innovative methods to ensure that Lowe's has a competitive edge in the market. About the Team The Pricing Analytics team supports pricing managers and merchants in defining and optimizing the pricing strategies for various product categories across channels. The team leverages advanced analytics to forecast and measure the impact of pricing actions, develop strategic price zones, recommend price changes, and identify sales/margin opportunities to achieve company targets. Job Summary: The primary purpose of this role is to develop and maintain descriptive and predictive analytics models and tools that support Lowe's pricing strategy. Collaborating closely with the Pricing team, the analyst will help translate pricing goals and objectives into data and analytics requirements. Utilizing both open source and commercial data science tools, the analyst will gather and wrangle data to deliver data-driven insights and trends, and identify anomalies. The analyst will apply the most suitable statistical and machine learning techniques to answer relevant questions and provide retail recommendations. The analyst will actively collaborate with product and business teams, incorporating feedback throughout the development to drive continuous improvement and ensure a best-in-class position in the pricing space. Roles & Responsibilities: Core Responsibilities: Translate pricing strategy and business objectives into analytics requirements. Develop and implement processes for collecting, exploring, structuring, enhancing, and cleaning large datasets from both internal and external sources. Conduct data validation, detect outliers, and perform root cause analysis to prepare data for statistical and machine learning models. Research, design, and implement relevant statistical and machine learning models to solve specific business problems. Ensure the accuracy of data science and machine learning model results and build trust in their reliability. Apply machine learning model outcomes to relevant business use cases. Assist in designing and executing A/B tests, multivariate experiments, and randomized controlled trials (RCTs) to evaluate the effects of price changes. Perform advanced statistical analyses (e.g., causal inference, Bayesian analysis, regression modeling) to extract actionable insights from experimentation data. Collaborate with teams such as Pricing Strategy & Execution, Analytics COE, Merchandising, IT, and others to define, prioritize, and develop innovative solutions. Keep up to date with the latest developments in data science, statistics, and experimentation techniques.
Automate routine manual processes to improve efficiency. Years of Experience: 3-6 years of relevant experience Education Qualification & Certifications (optional) Required Minimum Qualifications: Bachelor’s or Master’s in Engineering/Business Analytics/Data Science/Statistics/Economics/Math Skill Set Required Primary Skills (must have) 3+ years of experience in advanced quantitative analysis, statistical modeling, and machine learning. Ability to apply analytical concepts such as regression, sampling techniques, hypothesis testing, segmentation, time series analysis, multivariate statistical analysis, and predictive modelling. 3+ years’ experience in corporate Data Science, Analytics, Pricing & Promotions, Merchandising, or Revenue Management. 3+ years’ experience working with common analytics and data science software and technologies such as SQL, Python, R, or SAS. 3+ years’ experience working with enterprise-level databases (e.g., Hadoop, Teradata, Oracle, DB2). 3+ years’ experience using enterprise-grade data visualization tools (e.g., Power BI, Tableau). 3+ years’ experience working with cloud platforms (e.g., GCP, Azure, AWS). Secondary Skills (desired) Technical expertise in Alteryx, Knime.
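
As a small illustration of the regression modeling mentioned above, here is a log-log price-elasticity sketch with statsmodels; the toy data and column names are hypothetical, not Lowe's data.

```python
# Log-log regression: the coefficient on log(price) approximates price elasticity of demand.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "units": [120, 95, 150, 80, 60, 200],
    "price": [9.99, 12.49, 8.99, 13.99, 15.49, 7.49],
})
df["log_units"] = np.log(df["units"])
df["log_price"] = np.log(df["price"])

fit = smf.ols("log_units ~ log_price", data=df).fit()
print(fit.params["log_price"])  # elasticity estimate (expected to be negative)
```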

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job description Job Title: MLOps Engineer Company: Aaizel International Technologies Pvt. Ltd. Location: Gurugram Experience Required: 6+ Years Employment Type: Full-Time About Aaizeltech Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines. Role Overview This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines—from data ingestion to deployment and drift monitoring. Key Responsibilities MLOps Responsibilities: Collaborate with data scientists to operationalize ML workflows. Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow. Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving. Package models into Docker containers using Flask or FastAPI or Django for APIs. Automated dataset versioning & model tracking via DVC and MLflow. Setup model registries and ensure reproducibility and audit trails. Implement model monitoring for: (i) Data drift and schema validation (using tools like Evidently AI, Alibi Detect). (ii) Performance metrics (accuracy, precision, recall). (iii) Infrastructure metrics (latency, throughput, memory usage). Implement event-driven retraining workflows triggered by drift alerts or data freshness. Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs. Design and manage secure, scalable infrastructure using AWS, GCP, or Azure. Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps. Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation. Automated configuration management with Ansible, Chef, or SaltStack. Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets). Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki. Must-Have Skills Advanced expertise in Linux administration, networking, and shell scripting. Strong knowledge of Docker, Kubernetes, and container security. Hands-on experience with IaC tools like Terraform and configuration management like Ansible. Proficient in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, Load Balancing, Secrets Manager. Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions). Familiarity with SaaS architecture, distributed systems, and multi-env deployments. Proficiency in Python for scripting and ML-related deployments. Experience integrating monitoring, alerting, and incident management workflows. Strong understanding of DevSecOps, security scans (e.g., Trivy, SonarQube, Snyk) and secrets management tools (Vault, SOPS). Experience with GPU orchestration and hybrid on-prem + cloud environments. Nice-to-Have Skills Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD). Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server. Familiarity with Knative, Cloud Run, or serverless ML deployments. Exposure to cost estimation, rightsizing, and usage-based autoscaling. Understanding of ISO 27001, SOC2, or GDPR-compliant ML deployments. Knowledge of RBAC for Kubernetes and ML pipelines. 
Who You'll Work With AI/ML Engineers, Backend Developers, Frontend Developers, QA Team, Product Owners, Project Managers, and external Government or Enterprise Clients How to Apply If you are passionate about MLOps and scalable AI infrastructure and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com (Contact No.: 7302201247)
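
A minimal sketch of the data-drift check that could feed the event-driven retraining workflow described above; it uses a two-sample Kolmogorov-Smirnov test rather than any specific tool named in the posting, and the threshold and synthetic data are assumptions.

```python
# Drift check on a numeric feature: compare recent serving data to the training
# distribution with a two-sample KS test and flag retraining when they diverge.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(train_values: np.ndarray, live_values: np.ndarray,
                     alpha: float = 0.01) -> bool:
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # small p-value -> distributions differ -> drift

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)      # training-time feature distribution
current = rng.normal(0.4, 1.0, 5_000)        # shifted serving-time distribution
print(needs_retraining(reference, current))  # True in this synthetic example
```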

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a talented Computer Vision Engineer to join our innovative team. In this role, you will have the opportunity to work on challenging problems, leveraging both deep learning and classical computer vision techniques. By joining us, you will have a significant impact on the development of cutting-edge technologies, work alongside a highly skilled and collaborative team, and advance your career by tackling exciting real-world projects. Responsibilities Develop computer vision solutions for machine learning applications. Design deep learning models using frameworks like PyTorch, TensorFlow, and OpenCV. Implement algorithms for object detection, segmentation, classification, and tracking. Work with large datasets for image processing and develop scalable models. Deploy models on edge devices to optimize inference efficiency and resource utilization. Collaborate with data scientists to understand project requirements and deliver solutions. Implement and evaluate model performance to ensure accuracy and efficiency. Stay up to date with the latest techniques in deep learning and classical computer vision. Requirements Bachelor's degree in Computer Science, Electrical Engineering, or related field. 3-5 years of experience in computer vision. Strong skills in Python and experience with deep learning frameworks like PyTorch and TensorFlow. Experience with computer vision libraries such as OpenCV. Proficiency in classical computer vision techniques, including SIFT, SURF, feature extraction, image manipulation, and calibration. Experience in deploying models to edge devices for real-time computer vision applications. Strong understanding of Data Structures and Algorithms (DSA) with hands-on coding experience. Strong experience working with large-scale datasets. Excellent communication and collaboration skills. Preferred Qualifications Experience with cloud platforms (AWS, GCP, Azure). Familiarity with Docker, containerization, and REST APIs. Experience with model deployment to production environments. Understanding of software development methodologies and version control. This job was posted by Manikanta Reddy from Inito. Show more Show less
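
As a brief illustration of the classical feature-based techniques listed in the requirements, here is a SIFT matching sketch with OpenCV; the image paths and ratio-test threshold are placeholders.

```python
# Classical feature matching: SIFT keypoints + Lowe's ratio test, as used for
# registration or template localization. Image paths are placeholders.
import cv2

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# Keep only distinctive matches: best match clearly better than second best.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(matches)}")
```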

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Company Description O-Health is a digital healthcare company dedicated to AI-driven digital solutions. Our platform connects patients in remote areas to doctors and utilizes NLP and AI for diagnostics. Role Description This is a full-time on-site role for an NLP + ML Engineer at O-Health, located in Bengaluru. The NLP + ML Engineer will be responsible for pattern recognition in text, working with neural networks, implementing algorithms, and analyzing statistics on a daily basis in a healthtech ecosystem. Qualifications Experience in Neural Networks, Data Science and Pattern Recognition Strong background in Computer Science and Statistics Proficiency in machine learning frameworks and tools Excellent problem-solving and analytical skills Ability to work collaboratively in a team environment Master's/Bachelor's in Computer Science, Engineering, Mathematics, or a related field with at least 4 years of experience Experience in development of multilingual ASR systems Responsibilities: Design and develop robust backend systems to handle real-time patient data and ML outputs. Develop and integrate machine learning models with APIs to the main O-Health application. Optimize model serving pipelines (e.g. using TorchServe, FastAPI, or ONNX). Manage data pipelines for de-identified OPD datasets used in model training and inference. Implement data encryption, anonymization, and consent-based data access. Develop multilingual voice and text processing. Support versioning and A/B testing of health algorithms. Required Skills: Backend Engineering Strong in Python with frameworks like FastAPI, with experience in DBMS. Experience with RESTful APIs, WebSockets, and asynchronous data flows. Familiar with PostgreSQL databases. Working knowledge of Docker, Git, and CI/CD pipelines. Machine Learning Ops Hands-on with PyTorch, Scikit-learn, or TensorFlow for inference integration. Comfortable with model optimization, quantization, and edge deployment formats (e.g. ONNX, TFLite). Familiarity with language models (LLMs) and multilingual NLP. Knowledge of data preprocessing, tokenization, and feature engineering for clinical/NLP tasks. Other Required Skills Understanding of HIPAA/GDPR compliance. Experience working on healthcare, social impact, or AI-for-good projects. What You'll Impact: You’ll play a pivotal role in connecting machine learning research with field-ready healthcare tools. Your work will help scale diagnosis support systems to thousands of underserved patients and power multilingual health consultations in real-time.
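
A hedged sketch of exporting a small PyTorch model to ONNX for lightweight serving, one of the edge deployment formats the posting mentions; the model architecture, vocabulary size, and file name are illustrative assumptions.

```python
# Export a tiny PyTorch text classifier to ONNX; shapes and names are placeholders.
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    def __init__(self, vocab: int = 5000, dim: int = 64, classes: int = 3):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)  # mean-pools token embeddings
        self.fc = nn.Linear(dim, classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.fc(self.emb(token_ids))

model = TinyTextClassifier().eval()
dummy = torch.randint(0, 5000, (1, 32))          # one padded sequence of 32 token ids
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["token_ids"], output_names=["logits"],
                  dynamic_axes={"token_ids": {0: "batch"}})
```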

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


A career in our Advisory Acceleration Centre is the natural extension of PwC’s leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability in support of client engagements. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be an authentic and inclusive leader, at all grades/levels and in all lines of service. To help us achieve this we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. Responsibilities As a Senior Associate, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Use feedback and reflection to develop self-awareness, build on personal strengths and address development areas. Delegate to others to provide stretch opportunities and coach to help deliver results. Develop new ideas and propose innovative solutions to problems. Use a broad range of tools and techniques to extract insights from current trends in the business area. Review your work and that of others for quality, accuracy and relevance. Share relevant thought leadership. Use straightforward communication, in a structured way, when influencing others. Able to read situations and modify behavior to build quality, diverse relationships. Uphold the firm's code of ethics and business conduct. Educational qualifications: BE / B.Tech / MCA / M.Tech Experience range: 5 - 10 years Skill - GW Testing - Senior Associate Job Description - Reviewing requirements / specifications / technical design documents Designing detailed, comprehensive and well-structured Test Plans and Test Cases Setting up Test Environment & Test Data Executing tests as needed throughout the project. Analyzing and reporting test results. Identifying and tracking defects through their lifecycle. Understanding of Integration - Technical Design Document and Use Case Testing experience of any one of the Guidewire products: PolicyCenter Experience on policy transactions, workflow, audits, forms inference Performing thorough testing [Smoke / System / Integration / Regression / Stabilization] Possessing expertise in Test Management Tools like ALM / Jira

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

Remote


Experience: 4+ years
Salary: USD 80,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time contract for 12 months (40 hrs a week / 160 hrs a month)
(Note: This is a requirement for one of Uplers' clients, a US-based, Series A-funded technology startup.)

Must-have skills: Generative Models, JAX, Reinforcement Learning, Scikit-learn, PyTorch, TensorFlow, AWS, Docker, NLP, Python

A US-based, Series A-funded technology startup is looking for a Senior Deep Learning Engineer.

Job Summary
We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about applying state-of-the-art machine learning techniques to complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities
Model Development and Optimization: Design, train, and deploy advanced deep learning models for applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
Research and Innovation: Stay current with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques that push the boundaries of what is possible in AI applications.
System Design and Deployment: Architect and implement scalable, reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
Performance Evaluation: Define and monitor key performance indicators (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field; PhD preferred.
Experience: 5+ years of experience developing and deploying deep learning models, with a proven track record of delivering AI-driven products or research with measurable impact.
Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX; strong Python programming skills, with experience in libraries like NumPy, Pandas, and Scikit-learn; familiarity with distributed computing frameworks such as Spark or Dask; hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
Domain Expertise: Experience in at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
Soft Skills: Strong problem-solving skills and the ability to work independently; excellent communication and collaboration abilities; commitment to fostering a culture of innovation and excellence.

How to apply for this opportunity
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges faced during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today. We are waiting for you!
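To make the skills list above concrete, here is a minimal PyTorch sketch of the train-then-serve loop this kind of role centres on; the toy model, synthetic data, and checkpoint filename are illustrative assumptions rather than details of the client's stack.

```python
# Minimal train-and-serve sketch in PyTorch (illustrative only; the model,
# synthetic data, and checkpoint path are assumptions, not the client's stack).
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic binary-classification data: 512 samples, 16 features.
X = torch.randn(512, 16)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Training loop: the "design and train" half of the responsibilities above.
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Export for deployment: TorchScript is one common hand-off format to a serving layer.
torch.jit.script(model).save("classifier.pt")

# Inference path: load the saved artifact and run a batched forward pass.
serving_model = torch.jit.load("classifier.pt").eval()
with torch.no_grad():
    probs = torch.sigmoid(serving_model(torch.randn(4, 16)))
print(probs)
```

In production, the same forward pass would sit behind the pipelines, monitoring, and A/B tests described in the responsibilities, rather than in a standalone script.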

Posted 2 weeks ago

Apply

2.0 years

30 Lacs

Gurugram, Haryana, India

Remote


Experience: 2+ years
Salary: INR 3,000,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by Yugen AI)
(Note: This is a requirement for one of Uplers' clients, Yugen AI.)

Must-have skills: GRPO, high availability, TRL, LLM, Kubernetes, Python, Machine Learning, Generative AI

Yugen AI is looking for an LLMOps Engineer.

We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This work is critical to reducing fraud-investigation turnaround time (TAT) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of five engineers (backend, data, and platform engineers).

Responsibilities
Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
Build agentic tools and services for fraud investigations with complex reasoning capabilities.
Work with platform engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
Fine-tune open-source LLMs using TRL or similar libraries.
Use Terraform for infrastructure-as-code to support scalable ML deployments.
Contribute to tech blogs, especially technical deep dives into the latest research on reasoning.

Requirements
Strong programming skills (Python, etc.) and problem-solving abilities.
Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
Familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
Familiarity with high-availability systems.

How to apply for this opportunity
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges faced during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, apply today. We are waiting for you!
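Because hands-on vLLM experience is listed as a hard requirement, a minimal offline-inference sketch with vLLM's Python API is shown below; the model name and prompts are placeholders, and the production setup described above would expose the same engine behind a Kubernetes-served endpoint rather than a script.

```python
# Minimal vLLM offline-inference sketch (illustrative; the model name and the
# prompts are placeholders, not Yugen AI's actual configuration).
from vllm import LLM, SamplingParams

# Load an open-source model; vLLM handles continuous batching and KV-cache paging.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = [
    "Summarise the key risk signals in this transaction history: ...",
    "List the follow-up checks an investigator should run for this account.",
]

# generate() returns one RequestOutput per prompt, each carrying its completions.
for output in llm.generate(prompts, params):
    print(output.prompt)
    print(output.outputs[0].text)
```

For the uptime target above, a common pattern is to run vLLM's OpenAI-compatible server as a Kubernetes Deployment with multiple replicas and have Prometheus scrape its metrics endpoint, though the exact topology here is an assumption, not part of the posting.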

Posted 2 weeks ago

Apply

4.0 years

10 Lacs

Calicut

On-site


Qualifications
● 4+ years in data science or machine-learning roles (NLP or time-series a plus).
● Hands-on with clustering, anomaly detection, and causal-inference techniques.
● Strong Python (pandas, scikit-learn, PyTorch/TensorFlow) and solid SQL.
● Experience turning notebooks into production jobs via Airflow, Flyte, or similar.
● Comfortable with vector databases and similarity search (Faiss, PGVector, etc.).
● You think in experiments, communicate results clearly, and love shipping incrementally.

Key Responsibilities
● Pattern Discovery – Build unsupervised/semi-supervised models (HDBSCAN, metric learning, spectral methods, etc.) that group similar issues and surface them for human review (see the sketch after this listing).
● Trend Detection & Forecasting – Design change-point and anomaly detectors, then forecast issue volume with Prophet, NeuralProphet, or your tool of choice.
● Cross-Channel Correlation – Link signals across chat, email, voice, and social to reveal how pain points bounce between channels.
● Root-Cause & Prescriptive Modeling – Apply causal graphs or lightweight GNNs to suggest likely drivers and remediation actions.
● Active-Learning Loops – Create feedback workflows that let analysts merge/split clusters and continuously improve model accuracy.
● Experimentation & Metrics – Define success criteria, run controlled experiments, and publish clear, visual results for the team.
● Collaboration – Partner with the LLM, platform, and dashboard engineers to deliver end-to-end features, then measure the lift.

Job Type: Full-time
Pay: Up to ₹1,000,000.00 per year
Benefits: Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus
Experience: machine learning, 4 years (preferred); data science, 4 years (preferred); Python, 2 years (required)
Work Location: In person
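The pattern-discovery responsibility above maps onto a fairly standard pipeline: vectorise the issue text, reduce dimensionality, and let a density-based clusterer find groups without fixing the number of clusters up front. Below is a minimal sketch using scikit-learn plus the hdbscan package; the toy tickets and hyperparameters are illustrative assumptions.

```python
# Minimal issue-clustering sketch: TF-IDF -> truncated SVD -> HDBSCAN.
# The ticket texts and hyperparameters are illustrative assumptions.
import hdbscan
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

tickets = [
    "App crashes when uploading a profile photo",
    "Crash on photo upload from Android",
    "Refund not credited after 10 days",
    "Still waiting for my refund, order cancelled last week",
    "Unable to reset password, link expired",
]

# Dense, low-dimensional representation of each ticket.
embedder = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=3, random_state=0),
)
X = embedder.fit_transform(tickets)

# HDBSCAN groups similar tickets and marks sparse points as noise (label -1),
# so the number of clusters does not have to be chosen in advance.
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(X)

for label, text in zip(labels, tickets):
    print(label, text)
```

On a toy sample this small, most points may simply be labelled noise; applied to a full ticket corpus, the resulting clusters are what feed the human-review and active-learning loops described above.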

Posted 2 weeks ago

Apply

Exploring Inference Jobs in India

With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These major cities are known for their thriving tech industries and are actively hiring professionals with expertise in inference.

Average Salary Range

The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.

Related Skills

In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.

Interview Questions

  • What is the difference between inferential statistics and descriptive statistics? (basic)
  • How do you handle missing data in a dataset when performing inference? (medium)
  • Can you explain the bias-variance tradeoff in the context of inference? (medium)
  • What are the assumptions of linear regression and how do you test them? (advanced)
  • How would you determine the significance of a coefficient in a regression model? (medium)
  • Explain the concept of p-value and its significance in hypothesis testing. (basic; a worked example follows this list)
  • Can you discuss the difference between frequentist and Bayesian inference methods? (advanced)
  • How do you handle multicollinearity in a regression model? (medium)
  • What is the Central Limit Theorem and why is it important in statistical inference? (medium)
  • How would you choose between different machine learning algorithms for a given inference task? (medium)
  • Explain the concept of overfitting and how it can affect inference results. (medium)
  • Can you discuss the difference between parametric and non-parametric inference methods? (advanced)
  • Describe a real-world project where you applied inference techniques to draw meaningful conclusions from data. (advanced)
  • How do you assess the goodness of fit of a regression model in inference? (medium)
  • What is the purpose of cross-validation in machine learning and how does it impact inference? (medium)
  • Can you explain the concept of Type I and Type II errors in hypothesis testing? (basic)
  • How would you handle outliers in a dataset when performing inference? (medium)
  • Discuss the importance of sample size in statistical inference and hypothesis testing. (basic)
  • How do you interpret confidence intervals in an inference context? (medium)
  • Can you explain the concept of statistical power and its relevance in inference? (medium)
  • What are some common pitfalls to avoid when performing inference on data? (basic)
  • How do you test the normality assumption in a dataset for conducting inference? (medium)
  • Explain the difference between correlation and causation in the context of inference. (medium)
  • How would you evaluate the performance of a classification model in an inference task? (medium)
  • Discuss the importance of feature selection in building an effective inference model. (medium)
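A few of the basic items above (p-values, confidence intervals, Type I and Type II errors) can be rehearsed concretely in a few lines of Python; the simulated experiment below is purely illustrative.

```python
# Two-sample t-test and a confidence interval on simulated data (illustrative).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=200)
treated = rng.normal(loc=104.0, scale=15.0, size=200)

# p-value: probability of a difference at least this extreme if the true means were equal.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

# 95% confidence interval for the difference in means (normal approximation).
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean difference = {diff:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Rejecting the null hypothesis when it is actually true would be a Type I error; failing to reject it when the treatment effect is real would be a Type II error, and the probability of avoiding the latter is the test's power.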

Closing Remark

As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!
