Home
Jobs

1028 Inference Jobs - Page 32

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


As our Principal Machine Learning (ML) / Personalization Engineer, you will:

• Architect and deploy ML-based personalization systems for our suite of digital news products, including recommender systems for content ranking, homepage personalization, push notification targeting, and audience segmentation.
• Collaborate closely with editors, product managers, and analysts to integrate machine learning into the editorial workflow—making content creation, packaging, and distribution smarter and audience-aware.
• Analyze user behavior and content consumption patterns using large-scale datasets to build user understanding models and inform personalization strategies.
• Own the end-to-end ML pipeline: from data acquisition, feature engineering, model training & evaluation, to deployment and real-time inference.
• Drive an experimentation culture: lead A/B testing and iterative optimization of recommendation and ranking models.
• Stay on top of global trends in personalization, news AI, large language models (LLMs), and recommendation systems, and bring best-in-class solutions to our stack.

Who you need to be:

• Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
• 8–12 years of experience in machine learning, ideally in recommendation systems, personalization, or search relevance.
• Strong experience with Python and ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
• Hands-on with recommendation engines (collaborative filtering, content-based, hybrid models) and vector similarity models.
• Experience with real-time data processing frameworks and deploying models in production.
• Solid understanding of SQL and data platforms (e.g., Snowflake, BigQuery, or Redshift).
• Exposure to BI tools (Metabase, Looker, Tableau) is a plus.
• Comfortable navigating ambiguous, fast-paced environments and leading cross-functional initiatives.
• Excellent communication and collaboration skills—able to explain complex ML concepts to non-technical stakeholders.
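The recommendation techniques this listing names (collaborative filtering, content-based, hybrid) share one core idea: score items by similarity. As a hedged illustration only — a toy sketch, not this employer's actual system — here is item-based collaborative filtering over a small rating matrix:

```python
import numpy as np

# Toy user-item rating matrix (rows = users, cols = items); 0 = unrated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two vectors; 0 if either is all-zero."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

def item_scores(user_idx):
    """Score each unrated item for a user as a similarity-weighted
    average of the ratings that user gave to other items."""
    n_items = ratings.shape[1]
    sims = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                      for j in range(n_items)] for i in range(n_items)])
    user = ratings[user_idx]
    rated = user > 0
    scores = {}
    for i in range(n_items):
        if user[i] == 0:  # only score unrated items
            w = sims[i][rated]
            scores[i] = float(w @ user[rated] / (w.sum() + 1e-9))
    return scores

print(item_scores(0))  # item 2 is the only unrated item for user 0
```

A production recommender would add implicit-feedback signals, matrix factorization or learned embeddings, and freshness constraints for news, but the similarity-weighted scoring step above is the shared backbone.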

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Madhavaram, Tamil Nadu, India

On-site


Job Title: Delivery Excellence Operations Manager
Location: Chennai / Kolkata
Experience Required: 8–14+ years in BPO operations with a strong focus on process improvement and transformation

Job Description
We are seeking a dynamic and experienced Delivery Excellence Operations Manager to join our team in Chennai or Kolkata. This role is pivotal in driving operational excellence and continuous improvement initiatives across our global BPO engagements. The ideal candidate will have a proven track record of leading Lean Six Sigma projects, delivering impactful results through transformation strategies, and leveraging automation technologies.

Key Responsibilities
• Lead and implement Continuous Improvement (CI) initiatives across assigned engagements, fostering a culture of operational excellence.
• Deploy and mentor Lean Six Sigma (LSS) projects with a focus on digital transformation and Robotic Process Automation (RPA).
• Drive the adoption of Quality Management Systems (QMS) to standardize best-in-class processes.
• Conduct process assessments, identify improvement opportunities, and lead ideation-to-implementation cycles.
• Promote global collaboration by sharing innovations, new methodologies, and benchmarks across centers.
• Design and maintain Balanced Scorecards and leadership dashboards for performance reporting.
• Support training initiatives to strengthen the organization's DNA in Lean and Six Sigma practices.
• Collaborate with teams to adopt emerging technologies such as AI, chatbots, process mining, and cloud-based analytics solutions.
• Provide consulting support for Big Data Analytics and help shape cloud computing strategies.

Qualifications & Skills
• Lean Six Sigma certification is required; Black Belt (BB) preferred (internal or external certification).
• Must have led at least one high-impact BB project (e.g., FTE savings, revenue impact, or significant dollar savings via DMAIC), along with 4–5 other improvement projects.
• Strong data analysis skills, including statistical inference and use of tools such as Minitab, R, Python, or SAS.
• Hands-on experience in CSAT improvement, AHT reduction, and TAT optimization projects.
• Excellent understanding of RPA tools such as UiPath, Blue Prism, and Automation Anywhere, with basic exposure to AI technologies.
• Proficiency in dashboard and reporting tools like Power BI, Tableau, or QlikView.
• Understanding of Agile project management methodologies is a plus.
• Prior experience in conducting training sessions/workshops for Lean Six Sigma and transformation initiatives.

Preferred Background
• 8–14+ years of experience in the BPO industry, with strong exposure to delivery excellence functions.
• Demonstrated ability to lead transformation efforts with measurable business outcomes.
• Experience with cloud-based services, AI integration, and modern automation tools.
• Project leadership experience, rather than merely supporting roles, in LSS projects.

Join us in shaping the future of BPO delivery through innovation, transformation, and excellence. (ref:iimjobs.com)

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Designation: ML / MLOps Engineer
Location: Noida (Sector 132)

Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.

Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Experience: 4–6 Years

Key Responsibilities
● Fine-tune and train open-source LLMs (e.g., LLaMA or similar) for downstream applications.
● Build and orchestrate multi-agent workflows using LangGraph for production use cases.
● Implement and optimize RAG pipelines, including embedding stores and retrievers.
● Deploy and manage models via Hugging Face with robust inference capabilities.
● Develop modular backend components and APIs using Python.
● Ensure reproducibility, efficiency, and scalability in all LLM training and deployment tasks.
● Independently build and deliver project components from the ground up.

Must-Have Skills
● 4–6 years of hands-on experience in AI/ML engineering roles.
● Strong experience with LangGraph in real-world, multi-agent applications.
● Production-level experience in LLM fine-tuning and deployment (not POCs or academic work).
● Deep understanding of RAG pipeline design and implementation.
● Proficiency in Python for data pipelines and model orchestration.
● Familiarity with open-source LLMs like LLaMA, Mistral, Falcon, etc.
● Deployment experience with the Hugging Face ecosystem.
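A RAG pipeline, at its simplest, embeds documents into vectors, retrieves the nearest ones to the query embedding, and feeds them to the LLM as context. The sketch below is illustrative only (not this team's stack): a toy hash-based embedding stands in for a real sentence-embedding model, and `VectorStore` is a hypothetical minimal in-memory store:

```python
import numpy as np

def embed(text, dim=64):
    """Toy bag-of-words embedding: hash each token into a fixed-size vector.
    Stable only within one process; a real pipeline uses an embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorStore:
    """Minimal in-memory embedding store with cosine-similarity retrieval."""
    def __init__(self):
        self.docs, self.vecs = [], []

    def add(self, doc):
        self.docs.append(doc)
        self.vecs.append(embed(doc))

    def retrieve(self, query, k=2):
        q = embed(query)  # vectors are unit-norm, so dot product = cosine sim
        sims = [float(q @ v) for v in self.vecs]
        top = sorted(range(len(sims)), key=lambda i: -sims[i])[:k]
        return [self.docs[i] for i in top]

store = VectorStore()
store.add("LLaMA is an open-source large language model family")
store.add("Terraform manages cloud infrastructure as code")
store.add("Fine-tuning adapts a pretrained language model to a task")

context = store.retrieve("how do I fine-tune a language model", k=2)
# The retrieved docs would be concatenated into the LLM prompt as context.
```

Production systems swap in a learned embedding model and an approximate-nearest-neighbor index, but the add/embed/retrieve interface is the same shape.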

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Description
We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds—this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts, and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives and building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and are state-of-the-art in automated large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers, and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs, and create disproportionate impact through the tech they deliver.
We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation. As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solving customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale and highly available systems. If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!

As an engineer you will be responsible for:
• Ownership of product/feature end-to-end for all phases, from development to production.
• Ensuring the developed features are scalable and highly available with no quality concerns.
• Working closely with senior engineers to refine the design and implementation.
• Management and execution against project plans and delivery commitments.
• Assisting directly and indirectly in the continual hiring and development of technical talent.
• Creating and executing appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts.
• Contributing intellectual property through patents.

The candidate should be an engineer passionate about delivering experiences that delight customers and creating solutions that are robust, and should be able to commit to and own deliveries end-to-end.
About The Team
Team: IES NCRC Tech
Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment, and customer contact (customer service). We partner closely with the international machine learning team to build ML-based solutions for the above interventions.
Vision: Our goal is to automate detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable & efficient manner.
Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN), using frameworks like PyTorch, TensorFlow, and Keras for training & inference. Productionizing ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra), and graph DBs, and we are open to adopting new technologies as the use case demands.

Basic Qualifications
• 3+ years of non-internship professional software development experience
• 2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) with new and existing systems
• Experience programming with at least one software programming language

Preferred Qualifications
• 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
• Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI - Haryana
Job ID: A2992237

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

Remote


Senior Machine Learning Engineer (AI-Powered Software Platform for Hidden Physical-Threat Detection & Real-Time Intelligence)

About the Company:
Aerobotics7 (A7) is a mission-driven deep-tech startup focused on developing a UAV-based next-gen sensing and advanced AI platform to detect, identify, and mitigate hidden threats like landmines, UXOs, and IEDs in real-time. We are embarking on a rapid development phase, creating innovative solutions leveraging cutting-edge technologies. Our dynamic team is committed to building impactful products through continuous learning and close cross-collaboration.

Position Overview:
We are seeking a Senior Machine Learning Engineer with a strong research orientation to join our team. This role will focus on developing and refining proprietary machine learning models for drone-based landmine detection and mitigation. The ideal candidate will design, develop, and optimize advanced ML workflows with an emphasis on rigorous research, novel model development, and experimental validation in deep learning, multi-modal/sensor fusion, and computer vision applications.

Key Responsibilities:
• Lead the end-to-end AI model development process, including research, experimentation, design, and implementation.
• Architect, train, and deploy deep learning models on cloud (GCP) and edge devices, ensuring real-time performance.
• Develop and optimize multi-modal ML/DL models integrating multiple sensor inputs.
• Implement and fine-tune CNNs, Vision Transformers (ViTs), and other deep-learning architectures.
• Design and improve sensor fusion techniques for enhanced perception and decision-making.
• Optimize AI inference for low-latency, high-efficiency deployment in production.
• Cross-collaborate with software and hardware teams to integrate AI solutions into mission-critical applications.
• Develop scalable pipelines for model training, validation, and continuous improvement.
• Ensure robustness, interpretability, and security of AI models in deployment.
Required Skills:
• Strong expertise in deep learning frameworks (TensorFlow, PyTorch).
• Experience with CNNs, ViTs, and other DL architectures.
• Hands-on experience in multi-modal ML and sensor fusion techniques.
• Proficiency in cloud-based AI model deployment (GCP experience preferred).
• Experience with edge AI optimization (NVIDIA Jetson, TensorRT, OpenVINO).
• Strong knowledge of data preprocessing, augmentation, and synthetic data generation.
• Proficiency in model quantization, pruning, and optimization for real-time applications.
• Familiarity with computer vision, object detection, and real-time inference techniques.
• Ability to work with limited datasets, including generating synthetic data (VAEs or similar) and data annotation and augmentation strategies.
• Strong coding skills in Python and C++ with experience in high-performance computing.

Preferred Qualifications:
• Experience: 2-4+ years.
• Experience with MLOps, including CI/CD pipelines, model versioning, and monitoring.
• Knowledge of reinforcement learning techniques.
• Experience working in fast-paced startup environments.
• Prior experience working on AI-driven autonomous systems, robotics, or UAVs.
• Understanding of embedded systems and hardware acceleration for AI workloads.

Benefits:
NOTE: THIS ROLE IS UNDER AEROBOTICS7 INVENTIONS PVT. LTD., AN INDIAN ENTITY. IT IS A REMOTE INDIA-BASED ROLE WITH COMPENSATION ALIGNED TO INDIAN MARKET STANDARDS. WHILE OUR PARENT COMPANY IS US-BASED, THIS POSITION IS FOR CANDIDATES RESIDING AND WORKING IN INDIA.
• Competitive startup-level salary and comprehensive benefits package.
• Future opportunity for equity options in the company.
• Opportunity to work on impactful, cutting-edge technology in a collaborative startup environment.
• Professional growth with extensive learning and career development opportunities.
• Direct contribution to tangible, real-world impact.
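Of the edge-optimization techniques the listing names, post-training quantization is the most mechanical: map float weights onto low-bit integers plus a scale factor. A hedged NumPy sketch of symmetric per-tensor int8 quantization (illustrative only; real deployments would use TensorRT or framework tooling):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q, q in [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding places each weight within half a quantization step of its original.
max_err = float(np.abs(w - w_hat).max())
assert max_err <= scale / 2 + 1e-6
```

The 4x memory saving (int8 vs float32) and integer arithmetic are what buy the latency gains on edge hardware; per-channel scales and calibration data tighten the error bound further.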
How to Apply:
Interested candidates are encouraged to submit their resume along with an (optional) cover letter highlighting their relevant experience and passion for working in a dynamic startup environment. For any questions or further information, feel free to reach out to us directly by emailing us at careers@aerobotics7.com.

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

Remote


At Rethem, we're revolutionizing the sales landscape by putting buyer outcomes at the forefront. We understand that customers buy outcomes, and our AI-driven platform empowers your sales reps to deliver those outcomes, helping them crush their quotas.

What Sets Us Apart
• Deep AI Integration: Our platform leverages advanced AI that acts as a personal coach for your reps, adapting to your business processes to automate complex tasks and provide real-time guidance.
• Outcome-Driven Approach: By focusing on delivering measurable outcomes, we enable your sales team to build trust and foster long-term customer relationships.
• Market Leadership: Positioned at the cutting edge of buyer-centric sales transformation, we're leading the shift towards more meaningful and effective sales interactions.
• Proven Expertise: Our leadership and team consist of industry veterans with a track record of driving substantial growth and innovation in sales.

Our Mission
To redefine the sales process by aligning it with buyer needs, leveraging AI to empower sales teams to deliver outcomes that drive mutual success.

Transform Your Sales Strategy with AI
Rethem turns your sales playbook into an intelligent, always-on guide that adapts in real time. By harnessing the power of AI, we provide your team with:
• Real-Time Coaching: Enhance performance with actionable insights during every buyer interaction.
• Enhanced Efficiency: Automate key processes so your reps can focus on building relationships and delivering value.
• Outcome Alignment: Ensure your offerings are perfectly aligned with customer objectives, leading to higher satisfaction and loyalty.
• Accelerate Growth: Drive higher win rates and larger deals through a buyer-focused approach.

Vision for the Future
We envision a future where AI and human expertise collaborate seamlessly to create unparalleled sales experiences. By continuously innovating, we aim to stay at the forefront of buyer-centric sales transformation.
Join the Sales Revolution
Emerging from stealth mode, Rethem invites a select group of visionary organizations to pilot our groundbreaking platform. If you're ready to elevate your sales team, deliver exceptional customer outcomes, and empower your reps to crush their quotas, visit our website to learn more and apply.

Be Part of Our Journey
We're assembling a team of innovators passionate about reshaping the sales industry. Explore career opportunities with Re:them and help shape the future of outcome-driven, AI-powered sales. Experience the Power of AI-Driven Sales Transformation with Re:them.

The Role
We are seeking a hands-on Agentic AI Ops Engineer who thrives at the intersection of cloud infrastructure, AI agent systems, and DevOps automation. In this role, you will build and maintain the CI/CD infrastructure for Agentic AI solutions using Terraform on AWS, while also developing, deploying, and debugging intelligent agents and their associated tools. This position is critical to ensuring scalable, traceable, and cost-effective delivery of agentic systems in production environments.

The Responsibilities

CI/CD Infrastructure for Agentic AI
• Design, implement, and maintain CI/CD pipelines for Agentic AI applications using Terraform, AWS CodePipeline, CodeBuild, and related tools.
• Automate deployment of multi-agent systems and associated tooling, ensuring version control, rollback strategies, and consistent environment parity across dev/test/prod.

Agent Development & Debugging
• Collaborate with ML/NLP engineers to develop and deploy modular, tool-integrated AI agents in production.
• Lead the effort to create debuggable agent architectures, with structured logging, standardized agent behaviors, and feedback integration loops.
• Build agent lifecycle management tools that support quick iteration, rollback, and debugging of faulty behaviors.

Monitoring, Tracing & Reliability
• Implement end-to-end observability for agents and tools, including runtime performance metrics, tool invocation traces, and latency/accuracy tracking.
• Design dashboards and alerting mechanisms to capture agent failures, degraded performance, and tool bottlenecks in real time.
• Build lightweight tracing systems that help visualize agent workflows and simplify root cause analysis.

Cost Optimization & Usage Analysis
• Monitor and manage cost metrics associated with agentic operations, including API call usage, toolchain overhead, and model inference costs.
• Set up proactive alerts for usage anomalies, implement cost dashboards, and propose strategies for reducing operational expenses without compromising performance.

Collaboration & Continuous Improvement
• Work closely with product, backend, and AI teams to evolve the agentic infrastructure design and tool orchestration workflows.
• Drive the adoption of best practices for Agentic AI DevOps, including retraining automation, secure deployments, and compliance in cloud-hosted environments.
• Participate in design reviews, postmortems, and architectural roadmap planning to continuously improve reliability and scalability.

Requirements
• 2+ years of experience in DevOps, MLOps, or cloud infrastructure with exposure to AI/ML systems.
• Deep expertise in AWS serverless architecture, including hands-on experience with:
  - AWS Lambda: function design, performance tuning, cold-start optimization.
  - Amazon API Gateway: managing REST/HTTP APIs and integrating with Lambda securely.
  - Step Functions: orchestrating agentic workflows and managing execution states.
  - S3, DynamoDB, EventBridge, SQS: event-driven and storage patterns for scalable AI systems.
• Strong proficiency in Terraform to build and manage serverless AWS environments using reusable, modular templates.
• Experience deploying and managing CI/CD pipelines for serverless and agent-based applications using AWS CodePipeline, CodeBuild, CodeDeploy, or GitHub Actions.
• Hands-on experience with agent and tool development in Python, including debugging and performance tuning in production.
• Solid understanding of IAM roles and policies, VPC configuration, and least-privilege access control for securing AI systems.
• Deep understanding of monitoring, alerting, and distributed tracing systems (e.g., CloudWatch, Grafana, OpenTelemetry).
• Ability to manage environment parity across dev, staging, and production using automated infrastructure pipelines.
• Excellent debugging, documentation, and cross-team communication skills.

Benefits
• Health insurance, PTO, and leave time
• Ongoing paid professional training and certifications
• Fully remote work opportunity
• Strong onboarding & training programs

Ready to Join the Revolution?
If you're ready to take on this exciting challenge and believe you meet our requirements, we encourage you to apply. Let's shape the future of AI-driven sales together! See more about us at https://www.rethem.ai/

EEO Statement
All qualified applicants to Expedite Commerce are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected characteristic.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

India

On-site


PLEASE NOTE: THIS ROLE IS ONLY FOR CANDIDATES WITH 5 TO 7 YEARS OF EXPERIENCE

About PharmSight
PharmSight is a leading innovator in bio-pharma analytics, providing cutting-edge AI-powered solutions that transform product research, market intelligence, and healthcare decision-making. We are dedicated to improving patient outcomes and driving advancements in the pharmaceutical industry through the application of advanced artificial intelligence.

Why join PharmSight?
• Competitive Compensation: Best-in-class salary with structured career progression
• Flexible Work Environment: Option to work from anywhere, at any time
• Global Client Exposure: Collaborate with leading pharmaceutical companies on impactful projects
• Career Growth & Recognition: A flat hierarchy with ample opportunities for leadership and professional development

Role Overview
As an AI Developer/Engineer (LLM) at PharmSight, you will be at the forefront of designing, developing, and deploying generative AI applications using state-of-the-art large language models (LLMs). You will be instrumental in crafting innovative AI solutions that solve complex challenges in bio-pharma analytics, product research, and market intelligence, directly impacting our clients' ability to make data-driven decisions.
This role demands a unique combination of deep technical expertise, creative problem-solving, and a passion for advancing AI technologies within the healthcare and pharmaceutical domains.

Key Responsibilities
- Architect, implement, and optimize large language models (LLMs) such as GPT, LLaMA, and BERT, tailoring them to the specific needs of bio-pharma analytics, product research, and market intelligence
- Experiment with diverse model architectures, hyperparameters, and training methodologies to maximize performance for targeted healthcare and pharmaceutical applications
- Fine-tune pre-trained models to address domain-specific challenges, ensuring exceptional accuracy, relevance, and contextual understanding
- Design and refine prompts to optimize LLM performance in generating accurate, insightful, and actionable outputs
- Develop instruction-tuning pipelines that align model behavior with specific business objectives and user requirements
- Continuously iterate on prompt strategies to enhance model interpretability and mitigate the risk of hallucinations or irrelevant outputs
- Conduct rigorous evaluations of LLMs using industry-standard metrics such as perplexity, BLEU, ROUGE, and domain-specific accuracy scores
- Perform in-depth error analysis, bias detection, and fairness audits to ensure models meet the highest ethical and regulatory standards
- Benchmark model performance against industry best practices and competitor solutions to maintain a competitive edge and drive continuous improvement
- Deploy LLMs into production environments, ensuring scalability, reliability, and low-latency performance to meet the demands of real-world applications
- Optimize models for inference speed and resource efficiency through techniques like quantization, distillation, and pruning
- Implement robust monitoring systems to track model performance in real time and deploy timely updates to address drift or degradation in output quality
- Collaborate closely with data engineers and analysts to seamlessly integrate LLM outputs into PharmSight’s analytics platforms
- Leverage graph databases (e.g., vector graphs, hybrid graphs) to enhance structured knowledge extraction from unstructured text
- Develop APIs and intuitive interfaces that facilitate seamless interaction between LLMs and other critical system components
- Remain at the forefront of LLM research, actively exploring advancements in areas such as few-shot learning, reinforcement learning from human feedback (RLHF), and multimodal models
- Prototype and rigorously test emerging techniques to enhance model capabilities and address novel challenges in the bio-pharma domain
- Contribute findings to open-source projects, publish research insights, and represent PharmSight in AI research communities
- Work collaboratively with cross-functional teams including data scientists, product managers, and domain experts, ensuring that LLM development is aligned with critical business goals
- Mentor junior developers and analysts, providing guidance on LLM techniques, coding best practices, and emerging trends in AI

Requirements
- Educational Background: Bachelor’s or master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field
- AI & ML Experience: 5-7 years of hands-on experience in AI/ML development, with a strong focus on large language models (LLMs)
- Expertise in Python and deep learning frameworks (e.g., TensorFlow, PyTorch)
- Solid understanding of prompt engineering, model optimization, and NLP techniques
- Healthcare/Pharma Knowledge: A solid understanding of healthcare data, bio-pharma industry dynamics, and regulatory requirements
- Analytical Mindset: Exceptional problem-solving skills with the ability to translate business needs into innovative AI-driven solutions
- Communication Skills: Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams and explain complex AI concepts to non-technical stakeholders
- (Bonus Skill) Experience in MLOps (e.g., Docker, Kubernetes, CI/CD pipelines, model monitoring)
- (Bonus Skill) Proficiency in cloud platforms (AWS, Azure, or GCP) for scalable AI deployment
- (Bonus Skill) Experience with knowledge graph construction and multimodal data integration (e.g., Neo4j, entity extraction, node extraction)

Join Us
PharmSight offers a competitive salary, comprehensive benefits package, and the opportunity to work on cutting-edge AI projects that are transforming the pharmaceutical industry. We are committed to fostering a collaborative and innovative work environment where you can grow your skills and make a real impact.

Interested? Send your CV/Resume to Careers@pharmsight.com, and we’ll get back to you soon!
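The inference-optimization responsibility above mentions quantization; a minimal, framework-free sketch of the idea behind symmetric int8 post-training quantization follows. The helper names and sample weights are illustrative only; production work would rely on framework tooling such as PyTorch's quantization APIs or TensorRT rather than hand-rolled code like this.

```python
def quantize_int8(values):
    """Symmetric linear quantization: map floats onto the int8 range [-127, 127]."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # guard against an all-zero input
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Map int8 codes back to approximate float values."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.05, 0.9931, -0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The trade-off the posting alludes to is visible even at this scale: the quantized codes take a quarter of the memory of float32 weights, at the cost of a bounded rounding error.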

Posted 3 weeks ago


8.0 years

0 Lacs

India

On-site


About Us
Udacity is on a mission of forging futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges the talent shortage and skills gaps during their digital transformation journey.

At Udacity, the Analytics Team is deploying data to inform and empower the company with insight, to drive student success and business value. We are looking for a Principal Data Analyst to help advance that vision as part of our business analytics group. You will work with stakeholders to help inform their current initiatives and long-term roadmap with data. You will be a key part of a dynamic data team that works daily with strategic partners to deliver data, prioritize resources and scale our impact. This is a chance to affect thousands of students around the world who come to Udacity to improve their lives, and your success as part of a world-class analytics organization will be visible up to the highest levels of the company.

Your Responsibilities
- You will report to the Director of Data and lead high-impact analyses of Udacity’s curriculum and learner behavior to optimize content strategy, ensure skills alignment with industry needs, and drive measurable outcomes for learners and enterprise clients
- Lead the development of a strategic analytics roadmap for Udacity’s content organization, aligning insights with learning, product, and business goals
- Partner with senior stakeholders to define and monitor KPIs that measure the health, efficacy, and ROI of our curriculum across both B2C and enterprise portfolios
- Centralize and synthesize learner feedback, CX signals, and performance data to identify content pain points and inform roadmap prioritization
- Develop scalable methods to assess content effectiveness by integrating learner outcomes, usage behavior, and engagement metrics
- Contribute to building AI-powered systems that classify learner feedback, learning styles, and success predictors
- Act as a thought partner to leaders across Content and Product by communicating insights clearly and influencing strategic decisions
- Lead cross-functional analytics initiatives and mentor peers and junior analysts to elevate data maturity across the organization

Requirements
- 8+ years of experience in analytics or data science roles with a focus on product/content insights, ideally in edtech or SaaS
- Advanced SQL and experience with data warehouses (Athena, Presto, Redshift, etc.)
- Strong proficiency in Python for data analysis, machine learning, and automation
- Experience with dashboards and visualization tools (e.g., Tableau, Power BI, or similar)
- Strong knowledge of experimentation, A/B testing, and causal inference frameworks
- Proven ability to lead high-impact analytics projects independently and influence stakeholders
- Excellent communication skills, with the ability to translate technical insights into business recommendations

Preferred Experience
- Familiarity with Tableau, Amplitude, dbt, Airflow, or similar tools
- Experience working with large-scale sequential or clickstream data
- Exposure to NLP, embeddings, or GPT-based analysis for feedback classification
- Understanding of learning science or instructional design principles

Benefits
Experience a rewarding work environment with Udacity's perks and benefits! At Udacity, we offer you the flexibility of working from home. We also have in-person collaboration spaces in Mountain View, Cairo, Dubai and Noida, and continue to build opportunities for team members to connect in person.
- Flexible working hours
- Paid time off
- Comprehensive medical insurance coverage for you and your dependents
- Employee wellness resources and initiatives (access to wellness platforms like Headspace)
- Quarterly wellness day off
- Personalized career development
- Unlimited access to Udacity Nanodegrees

What We Do
Forging futures in tech is our vision. Udacity is where lifelong learners come to learn the skills they need, to land the jobs they want, and to build the lives they deserve.

Don’t stop there! Please keep reading...
You’ve probably heard the following statistic: most men apply for a job when they meet only 60% of the qualifications, while women and other marginalized candidates tend to apply only if they meet 100% of them. If you think you have what it takes but don’t meet every single point in the job description, please apply! We believe that historically, many processes disproportionately hurt the most marginalized communities in society, including people of color, working-class backgrounds, women and LGBTQ people. Centering these communities at our core is pivotal for any successful organization and a value we uphold steadfastly. Therefore, Udacity strongly encourages applications from all communities and backgrounds. Udacity is proud to be an Equal Employment Opportunity employer. Please read our blog post for “6 Reasons Why Diversity, Equity, and Inclusion in the Workplace Exists”.

Last, but certainly not least…
Udacity is committed to creating economic empowerment and a more diverse and equitable world. We believe that the unique contributions of all Udacians are the driver of our success. To ensure that our products and culture continue to incorporate everyone’s perspectives and experience, we never discriminate on the basis of race, color, religion, sex, gender, gender identity or expression, sexual orientation, marital status, national origin, ancestry, disability, medical condition (including genetic information), age, veteran status or military status, or any other basis protected by federal, state or local laws.

As part of our ongoing work to build more diverse teams at Udacity, when applying, you will be asked to complete a voluntary self-identification survey. This survey is anonymous; we are unable to connect your application with your survey responses. Please complete this voluntary survey as we utilize the data for diversity measures in terms of gender and ethnic background in both our candidates and our Udacians. We consider this data seriously and appreciate your willingness to complete this step in the process, if you choose to do so.

Udacity's Values
Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist

Udacity's Terms of Use and Privacy Policy
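The experimentation and A/B-testing requirement in the posting above often reduces to a comparison like the following: a minimal pooled two-proportion z-test in plain Python. The function name and sample counts are illustrative, not from the posting; a production analysis would typically use a statistics library rather than hand-rolled math.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, written with math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/1000 conversions on control vs. 150/1000 on variant.
z, p = two_proportion_ztest(120, 1000, 150, 1000)
```

With these illustrative numbers the lift is borderline significant at the conventional 5% level, which is exactly the kind of judgment call the role would be expected to communicate to stakeholders.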

Posted 3 weeks ago


6.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description
The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic, and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control, and motivate groups towards company objectives. Additionally, the candidate must be self-directed, proactive, and seize every opportunity to meet internal and external customer needs, achieving customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates, and supervisors.

Process Manager Role And Responsibilities
- Set up user journey dashboards across customer touchpoints (web and mobile app) in Adobe Analytics, GA4 and Amplitude and identify pain points; familiarity with auditing tags using Omnibug, GA debugger and other relevant tools
- Understanding and familiarity with cross-device analyses using combined reporting suites and virtual report suites, and familiarity with the People metric and Data Warehouse
- Build complex segments in analytics tools by walking through online user journeys and self-serving tag audits
- Analyze the customer journey and recommend personalization tests on digital properties using Adobe Analytics, GA4, Amplitude or any equivalent tool; walk through analysis outcomes and come up with ideas to optimize the digital user experience
- Website and mobile app optimization consulting for client accounts across industries (customer journey analyses and personalization)
- Familiarity with website measurement strategy: identifying key KPIs and defining goals, integrating online and offline data, and segmentation strategies
- Connect with clients for business requirements, walk through analysis outcomes and come up with ideas for optimization of the digital properties
- Build analytical reports and dashboards using visualization tools like Looker Studio or Power BI

Technical And Functional Skills
- Bachelor’s degree with overall experience of 6-10 years in digital analytics and optimization (Adobe Analytics, GA4, AppsFlyer and Amplitude)
- Specialism: Adobe Analytics or GA4, plus app analytics tools like Amplitude and AppsFlyer
- Expert in visualization tools: Looker Studio or Power BI
- Certification in Adobe Analytics Business Practitioner preferred
- Ability to draw business inferences from quantitative and qualitative datasets
- Ability to collaborate with stakeholders across the globe
- Strong communication, creative and innovation skills to help develop offerings to meet market needs

About Us
At eClerx, we serve some of the largest global companies, including 50 of the Fortune 500 clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value.

About The Team
eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.
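Customer-journey analysis of the kind described above typically starts with simple funnel arithmetic: how many visitors survive each step of a journey. A minimal sketch follows; the step names and counts are hypothetical, and real work would pull these numbers from Adobe Analytics, GA4 or Amplitude rather than a hard-coded list.

```python
def funnel_dropoff(step_counts):
    """Step-to-step conversion and drop-off rates for an ordered funnel.

    step_counts: list of (step_name, visitor_count), ordered from top of funnel.
    """
    report = []
    for (name_a, n_a), (name_b, n_b) in zip(step_counts, step_counts[1:]):
        rate = n_b / n_a if n_a else 0.0
        report.append({"from": name_a, "to": name_b,
                       "conversion": rate, "dropoff": 1 - rate})
    return report

# Hypothetical web journey: homepage -> product page -> checkout.
steps = [("home", 1000), ("product", 400), ("checkout", 100)]
report = funnel_dropoff(steps)
```

The step with the largest drop-off is the natural first candidate for the personalization tests the role recommends.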

Posted 3 weeks ago


10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Title: AI/ML Architect
Location: Onsite – Bangalore
Experience: 10+ years

Position Summary:
We are seeking an experienced AI/ML Architect to lead the design and deployment of scalable AI solutions. This role requires a strong blend of technical depth, systems thinking, and leadership in machine learning, computer vision, and real-time analytics. You will drive the architecture for edge, on-prem, and cloud-based AI systems, integrating third-party data sources, sensor and vision data to enable predictive, prescriptive, and autonomous operations across industrial environments.

Key Responsibilities:
Architecture & Strategy
 Define the end-to-end architecture for AI/ML systems including time series forecasting, computer vision, and real-time classification.
 Design scalable ML pipelines (training, validation, deployment, retraining) using MLOps best practices.
 Architect hybrid deployment models supporting both cloud and edge inference for low-latency processing.
Model Integration
 Guide the integration of ML models into the IIoT platform for real-time insights, alerting, and decision support.
 Support model fusion strategies combining disparate data sources and sensor streams with visual data (e.g., object detection + telemetry + third-party data ingestion).
MLOps & Engineering
 Define and implement ML lifecycle tooling, including version control, CI/CD, experiment tracking, and drift detection.
 Ensure compliance, security, and auditability of deployed ML models.
Collaboration & Leadership
 Collaborate with Data Scientists, ML Engineers, DevOps, Platform, and Product teams to align AI efforts with business goals.
 Mentor engineering and data teams in AI system design, optimization, and deployment strategies.
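The time-series-forecasting responsibility above can be illustrated with one of its simplest building blocks, simple exponential smoothing. The sketch below is a toy illustration (the sensor readings and smoothing factor are made up); an architect would evaluate it against richer models before standardizing on anything.

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing.

    Returns the smoothed level after each observation; the final value serves
    as the one-step-ahead forecast. alpha in (0, 1] weights recent observations.
    """
    level = series[0]
    levels = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        levels.append(level)
    return levels

# Hypothetical sensor telemetry: the smoothed level damps the final spike,
# which is useful both for forecasting and for flagging anomalous readings.
readings = [20.0, 22.0, 21.0, 23.0, 40.0]
smoothed = exponential_smoothing(readings, alpha=0.3)
```

The same recurrence underlies the low-latency, constant-memory processing the edge-inference bullet asks for: each update touches only the newest reading and the previous level.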

Posted 3 weeks ago


5.0 years

0 Lacs

India

On-site


About the Role
As a Senior Data Scientist, you will be instrumental in advancing our Media Measurement platform and data science capabilities. You will develop innovative solutions that provide precise, actionable insights to our clients through advanced modeling techniques, automation, and causal inference methodologies. This role combines deep technical expertise with business acumen to drive innovation in media measurement.

Key Responsibilities:
- Develop and enhance media measurement models, incorporating various media variables, market factors, and causal signals
- Implement advanced statistical techniques including Bayesian inference, hierarchical models, and experimental design
- Design and maintain automated systems for model training, deployment, and monitoring
- Collaborate with Product and Engineering teams to integrate data science solutions into our platform
- Translate complex analytical outputs into actionable business insights
- Lead R&D initiatives to explore and implement cutting-edge measurement methodologies
- Provide expert guidance on measurement results and business recommendations
- Drive innovation in media measurement science and methodology

Required Qualifications:
- 5+ years of experience in data science, with a focus on media analytics, econometrics, or causal inference
- B.S. or higher in Mathematics, Computer Science, Statistics, Data Science, or a related quantitative field
- Strong expertise in media measurement and attribution methodologies
- Proficiency in Python, SQL, and statistical modeling techniques
- Experience with large-scale data processing and probabilistic modeling
- Strong problem-solving skills and ability to work independently
- Excellent communication skills for both technical and non-technical audiences
- Experience with model deployment and monitoring at scale

Good to have:
- Experience in Geo Incrementality Testing and analysis
- Background in Media Mix Modeling (MMM)
- Experience with marketing effectiveness measurement
- Familiarity with A/B testing and experimental design
- Experience with marketing analytics platforms and tools
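The Bayesian-inference requirement above can be illustrated with the simplest conjugate case: a Beta-Binomial update for an observed conversion rate. The priors and counts below are illustrative; real measurement models (hierarchical, MMM-style) are far richer, but the update logic is the same in spirit.

```python
def beta_posterior(prior_a, prior_b, successes, trials):
    """Conjugate Beta-Binomial update: Beta(a, b) prior + binomial data -> Beta posterior."""
    return prior_a + successes, prior_b + (trials - successes)

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical campaign: uniform Beta(1, 1) prior, then 30 conversions in 200 exposures.
a, b = beta_posterior(1, 1, 30, 200)
estimated_rate = posterior_mean(a, b)  # shrinks slightly toward the prior mean of 0.5
```

The prior acts as regularization: with small samples the estimate is pulled toward prior belief, which is one reason Bayesian methods suit sparse geo-level or channel-level media data.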

Posted 3 weeks ago


5.0 - 10.0 years

0 Lacs

India

On-site


About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Company Overview
At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change.

Position Overview
We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building platforms that enable self-serve ML pipelines while supporting seamless deployments.

Responsibilities
Platform Engineering
- Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows.
- Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines.
Real-Time ML Deployment
- Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks.
- Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic.
Data Engineering
- Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas.
- Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment.
- Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB.
CI/CD and Automation
- Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing.
- Automate data validation and monitoring processes to ensure high-quality and consistent data workflows.
Documentation and Collaboration
- Create and maintain detailed technical documentation, including high-level and low-level architecture designs.
- Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals.
- Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira.

Required Qualifications
- 5-10 years of experience in IT
- 5-8 years of experience in platform backend engineering
- 1+ year of experience in DevOps and data engineering roles
- Hands-on experience with real-time ML model deployment and data engineering workflows

Technical Skills
- Strong expertise in Python and experience with Pandas, PySpark, and FastAPI.
- Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker.
- Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3.
- Proven experience building and optimizing distributed data pipelines using Databricks and PySpark.
- Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL.
- Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks.
- Hands-on experience with observability tools like New Relic for monitoring and troubleshooting.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
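Model versioning, one of the responsibilities listed above, can be sketched as a toy in-memory registry that derives a version from a content hash of a model's (JSON-serializable) parameters. Everything here, including the class and method names, is a hypothetical illustration rather than Oportun's implementation; real systems would use a managed registry such as SageMaker Model Registry or MLflow.

```python
import hashlib
import json
import time

class ModelRegistry:
    """Toy in-memory registry: each registration gets a deterministic version
    derived from a content hash of its parameters."""

    def __init__(self):
        self._models = {}

    def register(self, name, params):
        # Content-address the artifact: identical params always hash to the
        # same version, so re-registrations are easy to detect.
        blob = json.dumps(params, sort_keys=True).encode()
        version = hashlib.sha256(blob).hexdigest()[:12]
        self._models.setdefault(name, []).append(
            {"version": version, "params": params, "registered_at": time.time()}
        )
        return version

    def latest(self, name):
        """Most recently registered entry for a model name."""
        return self._models[name][-1]
```

Content-addressing is the design choice worth noting: it makes versions reproducible across environments, which simplifies the rollback and audit stories that lifecycle management requires.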

Posted 3 weeks ago


8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


AI/LLM Architect

Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption
With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications.
- Architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance.
- Design and implement data integration strategies to unify and streamline diverse data sources, creating a consistent and cohesive data landscape for AI model development.
- Develop efficient and programmatic methods for synthesizing large volumes of data, extracting relevant features, and preparing data for AI model training and validation.
- Leverage advanced feature engineering techniques and quantitative methods, including statistical modeling, machine learning, deep learning, and generative AI, to implement, validate, and optimize AI models for accuracy, reliability, and performance.
- Simplify data presentation to help stakeholders easily grasp insights and make informed decisions.
- Maintain a deep understanding of the latest advancements in AI and generative AI, including various model architectures, training methodologies, and evaluation metrics.
- Identify opportunities to leverage generative AI to securely and ethically address business needs, optimize existing processes, and drive innovation.
- Contribute to project management processes, providing regular status updates and ensuring the timely delivery of high-quality AI solutions.
- Primarily responsible for contributing to project delivery and maximizing business impact through effective AI solution architecture and implementation. Occasionally contribute technical expertise during pre-sales engagements and support internal operational improvements as needed.

What do you bring to the table?
- A bachelor's or master's degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Engineering) is required.
- A strong background in designing and implementing end-to-end AI/ML pipelines, including feature engineering, model training, and inference. Experience with generative AI pipelines is needed.
- 8+ years of experience in AI/ML development, with at least 3+ years in an AI architecture role.
- Fluency in Python, SQL, and NoSQL is essential.
- Experience with common data science libraries such as pandas and Scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required.
- Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud.

Our Core Values
- People first. We create collaborative and supportive environments by operating with respect and flexibility to promote mental, emotional and physical health. We practice empathy by treating others the way they want to be treated and assuming positive intent. We are proud of our inclusive diverse team and humble ourselves to learn about and build our connection with each other.
- Patient focused. We act with swift determination without sacrificing our expectations of quality. We are driven by providing exceptional solutions for our customers to positively impact patient lives. Considering what is at stake, we challenge ourselves to develop the best solution, not just the easy one.
- Integrity. We hold ourselves accountable and strive for transparent communication to build trust amongst ourselves and our customers. We take ownership of our results as we know what we do matters and collectively we will change the healthcare industry. We are thoughtful and intentional with every customer interaction, understanding the overall impact on human health.
- Curious. We ask questions and actively listen in order to learn and continuously improve. We embrace change and the opportunities it presents to make each other better. We strive to be on the cutting edge of science and technology innovation by encouraging creativity.
- Impactful. We take our social responsibility with the seriousness it deserves and hold ourselves to a high standard. We improve our sustainability by encouraging discussion and taking action as it relates to our natural, social and economic resource footprint. We are devoted to our humanitarian mission and look for new ways to make the world a better place.

Velsera is an Equal Opportunity Employer:
Velsera is proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, colour, gender, religion, marital status, domestic partner status, age, national origin or ancestry.

Posted 3 weeks ago


0 years

0 Lacs

India

On-site


About the Role:
We are seeking an experienced MLOps Engineer with a strong background in NVIDIA GPU-based containerization and scalable ML infrastructure (contractual, assignment basis). You will work closely with data scientists, ML engineers, and DevOps teams to build, deploy, and maintain robust, high-performance machine learning pipelines using NVIDIA NGC containers, Docker, Kubernetes, and modern MLOps practices.

Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for training, validation, deployment, and monitoring of ML models.
- Implement GPU-accelerated workflows using NVIDIA NGC containers, CUDA, and RAPIDS.
- Containerize ML workloads using Docker and deploy on Kubernetes (preferably with GPU support via the NVIDIA device plugin for K8s).
- Integrate model versioning, reproducibility, CI/CD, and automated model retraining using tools like MLflow, DVC, Kubeflow, or similar.
- Optimize model deployment for inference on NVIDIA hardware using TensorRT, Triton Inference Server, or ONNX Runtime-GPU.
- Manage cloud/on-prem GPU infrastructure and monitor resource utilization and model performance in production.
- Collaborate with data scientists to transition models from research to production-ready pipelines.

Required Skills:
- Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Strong experience with Docker, Kubernetes, and NVIDIA GPU containerization (NGC, nvidia-docker).
- Familiarity with NVIDIA Triton Inference Server, TensorRT, and CUDA.
- Experience with CI/CD for ML (GitHub Actions, GitLab CI, Jenkins, etc.).
- Deep understanding of ML lifecycle management, monitoring, and retraining.
- Experience working with cloud platforms (AWS/GCP/Azure) or on-prem GPU clusters.

Preferred Qualifications:
- Experience with Kubeflow, Seldon Core, or similar orchestration tools.
- Exposure to Airflow, MLflow, Weights & Biases, or DVC.
- Knowledge of NVIDIA RAPIDS and distributed GPU workloads.
- MLOps certifications or NVIDIA Deep Learning Institute training (preferred but not mandatory).
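Model monitoring of the kind described above usually includes a drift check on production inputs or scores. A minimal Population Stability Index (PSI) sketch in plain Python follows; the bin fractions in the example are illustrative, and production monitoring would typically come from an observability stack rather than hand-rolled code.

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions (lists of bin fractions summing to ~1).

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # avoid log(0) on empty bins
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical score distributions: training-time baseline vs. live traffic.
baseline = [0.25, 0.25, 0.25, 0.25]
live = [0.10, 0.20, 0.30, 0.40]
drift = population_stability_index(baseline, live)
```

A check like this, run on a schedule against each deployed model's input and output distributions, is one concrete way the "automated model retraining" bullet gets triggered.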

Posted 3 weeks ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Qualcomm India Private Limited Job Area Information Technology Group, Information Technology Group > IT Software Developer General Summary Qualcomm EDAAP (Engineering Solutions and AIML) team is seeking an experienced develop and support scalable Machine learning platform. The ideal candidate will have a strong background in building and operating distributed systems, with expertise in Rust, Python, Kubernetes, and Linux. You will play a critical role in developing, supporting and debugging our Generative AI platforms. Experience 3 to 7 years of experience strong knowledge of Python or Rust, NoSQL (Mongo/Redis), working experience of developing/supporting large scale end user facing applications. Responsibilities Develop, Debug and support end to end components of large-scale Generative AI platform. Set up and operate Kubernetes clusters for efficient deployment and management of containerized applications Implement distributed microservices architecture to enable scalable and fault-tolerant inference pipelines Ensure optimal performance, security, and reliability of inference platforms, leveraging expertise in Linux, networking, servers, and data centers Develop and maintain scripts and tools for automating deployment, monitoring, and maintenance tasks Troubleshoot issues and optimize system performance, using knowledge of data structures and algorithms Work closely with users to debug issues and address performance and scalability issues. 
Participate in code reviews, contributing to the improvement of the overall code quality and best practices Requirements/Skills 3 to 7 years of experience in software development, with a focus on building scalable and distributed systems Proficiency in Rust and Python programming languages, with experience in developing high-performance applications Experience setting up and operating Kubernetes clusters, including deployment, scaling, and management of containerized applications Strong understanding of distributed microservices architecture and its application in large-scale systems Excellent knowledge of Linux, including shell scripting, package management, and system administration Good understanding of networking fundamentals, including protocols, architectures, and network security Familiarity with data structures and algorithms, including trade-offs and optimization techniques Experience debugging complex production issues in large scale application platforms. Experience working with cloud-native technologies, such as containers, orchestration, and service meshes Strong problem-solving skills, with the ability to debug complex issues and optimize system performance Excellent communication and collaboration skills, with experience working with cross-functional teams and customers Minimum Qualifications 3+ years of IT-relevant work experience with a Bachelor's degree in a technical field (e.g., Computer Engineering, Computer Science, Information Systems). OR 5+ years of IT-relevant work experience without a Bachelor’s degree. 3+ years of any combination of academic or work experience with Full-stack Application Development (e.g., Java, Python, JavaScript, etc.) 1+ year of any combination of academic or work experience with Data Structures, algorithms, and data stores. Develop, Debug and support end to end components of large-scale Generative AI platform. 
Experience working with cloud-native technologies, such as containers, orchestration, and service meshes Strong problem-solving skills, with the ability to debug complex issues and optimize system performance Excellent communication and collaboration skills, with experience working with cross-functional teams and customers Education: Bachelor's (Engineering) or Master's. Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location.
Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3072987

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

This role is for one of Weekday's clients Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA) Min Experience: 3 years Location: Bengaluru JobType: full-time Requirements About the Role We are seeking a passionate and skilled AI Engineer to join our innovative engineering team. In this role, you will play a pivotal part in designing, developing, and deploying cutting-edge artificial intelligence solutions with a focus on natural language processing (NLP), computer vision, and machine learning models using TensorFlow and related frameworks. You will work on challenging projects that leverage large-scale data, deep learning, and advanced AI techniques, helping transform business problems into smart, automated, and scalable solutions. If you're someone who thrives in a fast-paced, tech-driven environment and loves solving real-world problems with AI, we'd love to hear from you. Key Responsibilities Design, develop, train, and deploy AI/ML models using frameworks such as TensorFlow, Keras, and PyTorch. Implement solutions across NLP, computer vision, and deep learning domains, using advanced techniques such as transformers, CNNs, LSTMs, OCR, image classification, and object detection. Collaborate closely with product managers, data scientists, and software engineers to identify use cases, define architecture, and integrate AI solutions into products. Optimize model performance for speed, accuracy, and scalability, using industry best practices in model tuning, validation, and A/B testing. Deploy AI models to cloud platforms such as AWS, GCP, and Azure, leveraging their native AI/ML services for efficient and reliable operation. Stay up to date with the latest AI research, trends, and technologies, and propose how they can be applied within the company's context. Ensure model explainability, reproducibility, and compliance with ethical AI standards.
Contribute to the development of MLOps pipelines for managing model versioning, CI/CD for ML, and monitoring deployed models in production. Required Skills & Qualifications 3+ years of hands-on experience building and deploying AI/ML models in production environments. Proficiency in TensorFlow and deep learning workflows; experience with PyTorch is a plus. Strong foundation in natural language processing (e.g., NER, text classification, sentiment analysis, transformers) and computer vision (e.g., image processing, object recognition). Experience deploying and managing AI models on AWS, Google Cloud Platform (GCP), and Microsoft Azure. Skilled in Python and relevant libraries such as NumPy, Pandas, OpenCV, Scikit-learn, Hugging Face Transformers, etc. Familiarity with model deployment tools such as TensorFlow Serving, Docker, and Kubernetes. Experience working in cross-functional teams and agile environments. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or related field. Preferred Qualifications Experience with MLOps tools and pipelines (MLflow, Kubeflow, SageMaker, etc.). Knowledge of data privacy and ethical AI practices. Exposure to edge AI or real-time inference systems.
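The object-detection work listed above is typically evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal, library-free sketch, assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

Detection metrics such as mAP build on exactly this primitive, thresholding IoU (commonly at 0.5) to decide whether a prediction counts as a true positive.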

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Position Summary... As the Senior Data Analyst, Product Analytics, Marketplace Analytics & Data Science, you will be part of the team with an aim to evaluate the effectiveness and efficiency of the Marketplace platform. Your focus will be to support the Product team as they prioritize and build capabilities and tools to allow Sellers to both sell and ship products to Customers. What you'll do... About Team As the Senior Data Analyst, Product Analytics, Marketplace Analytics & Data Science, you will be part of the team with an aim to evaluate the effectiveness and efficiency of the Marketplace platform. Your focus will be to support the Product team as they prioritize and build capabilities and tools to allow Sellers to both sell and ship products to Customers. You will be responsible for building out a holistic view of performance as well as a detailed analytics roadmap across the entire Product Lifecycle by leveraging state-of-the-art analytics tools (e.g., SQL/Hive/Hadoop/Cloud, Mixpanel, Quantum Metrics, Tableau/ThoughtSpot/Looker, etc.). Key projects include supporting Product Discovery, providing business impact sizing, assessing the performance of Product Features, and understanding Seller Behavior through Clickstream Analytics. This role is highly visible since you must work cross-functionally with Product, Engineering, Business, and Operations. Given the size and scale of Walmart and the advanced capabilities being built by Product for Marketplace, this position will have a significant impact. What You Will Do Serve as thought leader and trusted advisor to Product leadership and the larger product management team by collaborating with them through the entire product lifecycle from discovery, A/B testing, to post-launch insights and learnings.
Demonstrate proactive, solution-oriented thought leadership -- you are always looking ahead to what's coming next, you solve problems within and outside of your core area, and you are comfortable actively influencing leaders to ensure you drive key decisions and priorities. Proactively identify product opportunities within and beyond product ownership areas through a hypothesis-driven culture and data-driven deep dives. Responsible for organizing and assembling the resources, technology, and processes to support the Product Analytics needs of the Marketplace Product teams. Successfully work with cross-functional groups consisting of Product, Engineering and Business to drive data-based decisions. Interface with product & business stakeholders across geographies to proactively identify opportunities, develop business acumen, cultivate stakeholder relationships, and develop best-in-class data analytics solutions. Define the product engagement data capture strategy and collaborate with Engineering to ensure the accuracy of the data. Have a strong understanding of various data sources and how to organize and utilize them to deliver critical insights to the broader organization. Leverage clickstream data to identify opportunities for improving customer experience and influence product roadmaps. Perform conversion analysis, funnel analysis, and impact sizing to influence decision-making. Extensive hands-on experience with SQL to query from different databases. Define and monitor KPIs to measure product performance and monitor product health. Create effective reporting and dashboards by applying expertise in data visualization tools such as Mixpanel, Quantum Metric, Looker/Tableau, and Splunk to monitor product performance.
Design and execute A/B tests, observational inference, and predictive analytics to identify and quantify the impact of new product features as an ongoing discipline to constantly improve product features and provide better experiences for customers across all platforms such as Desktop, Mobile, etc. What You Will Bring MBA or Master's degree in Mathematics, Engineering, Statistics or a related technical field 4-9 years of experience in data analysis or an analytical capacity 3+ years of experience in Product Analytics, Digital Analytics, or eCommerce Analytics Proficient SQL programming skills with an understanding of database capabilities and experience of integrating, structuring, and analyzing large amounts of data from diverse sources. Design A/B tests to test and quantify the impact of new product features. Drive A/B testing as an ongoing discipline to constantly improve product features and provide better experiences for customers across all platforms (e.g., Desktop, Mobile, etc.). Experience leveraging big data technologies (Hive/Hadoop) and modern data visualization tools (Tableau, ThoughtSpot, Looker) to blend data from multiple sources to help answer multi-dimensional business questions Expert-level understanding of Microsoft Office suite, especially Excel and PowerPoint Strong analytical and quantitative skills and ability to synthesize findings into tangible actions that help drive business outcomes Strong organizational skills, a strong sense of ownership and accountability, and the ability to lead projects, communicate effectively, and be a self-starter. You can communicate technical material to a range of audiences and tell a story that provides insight into the business You embrace tackling complex problems with a high degree of ambiguity.
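The A/B-testing work described above usually reduces to comparing conversion rates between variants. A minimal two-proportion z-test in plain Python, with illustrative numbers rather than real experiment data:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (A vs. B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 2.6% vs. control's 2.0% on 10k users each.
z, p = two_proportion_ztest(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2), round(p, 4))
```

With these illustrative counts the difference is significant at the usual 0.05 level; in practice the sample sizes would be fixed in advance via a power calculation.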
PREFERRED QUALIFICATIONS: Background in Product Analytics, ideally experience with two-sided businesses (Buyer & Seller) like a Marketplace (eBay), Rideshare (Uber), or other sharing business models. Experience with A/B and Multivariate test design and implementation and Regression modelling. Retail and/or eCommerce industry experience in a heavily data-driven environment preferred. Working knowledge of Digital Product Analytics methodologies. Preference will be given to candidates with experience in both B2B and B2C digital products. Experience using enterprise-level product analytics platforms (e.g. Mixpanel, Quantum Metric, Splunk, etc.). You have a passion for working in a fast-paced agile environment. About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities.
This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications: Option 1: Bachelor's degree in Business, Engineering, Statistics, Economics, Analytics, Mathematics, Arts, Finance or related field and 2 years' experience in data analysis, data science, statistics, or related field. Option 2: Master's degree in Business, Engineering, Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field. Option 3: 4 years' experience in data analysis, data science, statistics, or related field. Preferred Qualifications... Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Primary Location...
G, 1, 3, 4, 5 Floor, Building 11, Sez, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India R-2106424

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

Flexera saves customers billions of dollars in wasted technology spend. A pioneer in Hybrid ITAM and FinOps, Flexera provides award-winning, data-oriented SaaS solutions for technology value optimization (TVO), enabling IT, finance, procurement and cloud teams to gain deep insights into cost optimization, compliance and risks for each business service. Flexera One solutions are built on a set of definitive customer, supplier and industry data, powered by our Technology Intelligence Platform, that enables organizations to visualize their Enterprise Technology Blueprint™ in hybrid environments—from on-premises to SaaS to containers to cloud. We’re transforming the software industry. We’re Flexera. With more than 50,000 customers across the world, we’re achieving that goal. But we know we can’t do any of that without our team. Ready to help us re-imagine the industry during a time of substantial growth and ambitious plans? Come and see why we’re consistently recognized by Gartner, Forrester and IDC as a category leader in the marketplace. Learn more at flexera.com Job Summary: We are seeking a skilled and motivated Senior Data Engineer to join our Automation, AI/ML team. In this role, you will work on designing, building, and maintaining data pipelines and infrastructure to support AI/ML initiatives, while contributing to the automation of key processes. This position requires expertise in data engineering, cloud technologies, and database systems, with a strong emphasis on scalability, performance, and innovation. Key Responsibilities: Identify and automate manual processes to improve efficiency and reduce operational overhead. Design, develop, and optimize scalable data pipelines to integrate data from multiple sources, including Oracle and SQL Server databases. Collaborate with data scientists and AI/ML engineers to ensure efficient access to high-quality data for training and inference models. 
Implement automation solutions for data ingestion, processing, and integration using modern tools and frameworks. Monitor, troubleshoot, and enhance data workflows to ensure performance, reliability, and scalability. Apply advanced data transformation techniques, including ETL/ELT processes, to prepare data for AI/ML use cases. Develop solutions to optimize storage and compute costs while ensuring data security and compliance. Required Skills and Qualifications: Experience in identifying, streamlining, and automating repetitive or manual processes. Proven experience as a Data Engineer, working with large-scale database systems (e.g., Oracle, SQL Server) and cloud platforms (AWS, Azure, Google Cloud). Expertise in building and maintaining data pipelines using tools like Apache Airflow, Talend, or Azure Data Factory. Strong programming skills in Python, Scala, or Java for data processing and automation tasks. Experience with data warehousing technologies such as Snowflake, Redshift, or Azure Synapse. Proficiency in SQL for data extraction, transformation, and analysis. Familiarity with tools such as Databricks, MLflow, or H2O.ai for integrating data engineering with AI/ML workflows. Experience with DevOps practices and tools, such as Jenkins, GitLab CI/CD, Docker, and Kubernetes. Knowledge of AI/ML concepts and their integration into data workflows. Strong problem-solving skills and attention to detail. Preferred Qualifications: Knowledge of security best practices, including data encryption and access control. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Exposure to Databricks for data engineering and advanced analytics workflows. Flexera is proud to be an equal opportunity employer. 
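The ETL/ELT responsibilities above follow a common extract → transform → load shape. A deliberately tiny in-memory sketch of that pattern; the field names (`ID`, `Amt`) are invented for illustration, not taken from any Flexera schema:

```python
def extract(rows):
    """Extract: yield raw records (here, from an in-memory source)."""
    yield from rows

def transform(record):
    """Transform: normalise field names and coerce types (illustrative fields)."""
    return {"id": int(record["ID"]), "amount": round(float(record["Amt"]), 2)}

def load(records, sink):
    """Load: append cleaned records to the target store."""
    for rec in records:
        sink.append(rec)
    return sink

raw = [{"ID": "1", "Amt": "10.5"}, {"ID": "2", "Amt": "3.25"}]
warehouse = load((transform(r) for r in extract(raw)), sink=[])
print(warehouse)  # [{'id': 1, 'amount': 10.5}, {'id': 2, 'amount': 3.25}]
```

In an orchestrator such as Airflow, each of these three stages would typically become a task in a DAG, with the generator pipeline replaced by staged reads and writes so failed steps can be retried independently.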
Qualified applicants will be considered for open roles regardless of age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by local/national laws, policies and/or regulations. Flexera understands the value that results from employing a diverse, equitable, and inclusive workforce. We recognize that equity necessitates acknowledging past exclusion and that inclusion requires intentional effort. Our DEI (Diversity, Equity, and Inclusion) council is the driving force behind our commitment to championing policies and practices that foster a welcoming environment for all. We encourage candidates requiring accommodations to please let us know by emailing careers@flexera.com.

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Sukhlia, Indore, Madhya Pradesh

Remote

Indeed logo

Job Title: AWS & DevOps Engineer Department: DevOps Location: Indore Job Type: Full-time Experience: 3-5 years Notice Period: 0-15 days (immediate joiners preferred) Work Arrangement: On-site (Work from Office) Advantal Technologies is looking for a skilled AWS & DevOps Engineer to help build and manage the cloud infrastructure. This role involves designing scalable infrastructure, automating deployments, enforcing security, and supporting a hybrid (AWS + open-source) deployment strategy. Key Responsibilities: AWS Cloud Infrastructure: · Design, provision, and manage secure and scalable cloud architecture on AWS. · Configure and manage core services: VPC, EC2, S3, RDS (PostgreSQL), Lambda, CloudFront, Cognito, and IAM. · Deploy AI models using Amazon SageMaker for inference at scale. · Manage API integrations via Amazon API Gateway and AWS WAF. DevOps & Automation: · Implement CI/CD pipelines using AWS CodePipeline, GitHub Actions, or GitLab CI. · Containerize backend applications using Docker and orchestrate with AWS ECS/Fargate or Kubernetes (for on-prem/hybrid). · Use Terraform or AWS CloudFormation for Infrastructure as Code (IaC). · Monitor applications using CloudWatch, Security Hub, and CloudTrail. Security & Compliance: · Implement IAM policies and KMS key management, and enforce Zero Trust architecture. · Configure S3 object lock, audit logs, and data classification controls. · Support GDPR/HIPAA-ready compliance setup via AWS Config, GuardDuty, and Security Hub. Required Skills & Experience: Must-Have · 3–5 years of hands-on experience in AWS infrastructure and services. · Proficiency with Terraform, CloudFormation, or other IaC tools. · Experience with Docker, CI/CD pipelines, and cloud networking (VPC, NAT, Route 53). · Strong understanding of DevSecOps principles and AWS security best practices. · Experience supporting production-grade SaaS applications. 
Nice-to-Have: · Exposure to AI/ML model deployment (especially via SageMaker or containerized APIs). · Knowledge of multi-tenant SaaS infrastructure patterns. · Experience with Vault, Keycloak, or open-source IAM/security stacks for non-AWS environments. · Familiarity with Kubernetes (EKS or self-hosted). Tools & Stack You'll Use: · AWS (Lambda, RDS, S3, SageMaker, Cognito, CloudFront, CloudWatch, API Gateway) · Terraform, Docker, GitHub Actions · CI/CD: GitHub, GitLab, AWS CodePipeline · Monitoring: CloudWatch, GuardDuty, Prometheus (non-AWS) · Security: KMS, IAM, Vault Please share resume to hr@advantal.net Job Types: Full-time, Permanent Pay: ₹261,624.08 - ₹1,126,628.25 per year Benefits: Paid time off Provident Fund Work from home Schedule: Day shift Monday to Friday Ability to commute/relocate: Sukhlia, Indore, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred) Experience: AWS DevOps: 3 years (Required) Work Location: In person Speak with the employer +91 9131295441 Expected Start Date: 02/06/2025
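The Infrastructure-as-Code tooling this role calls for (Terraform, CloudFormation) works by diffing desired state against actual state and emitting a plan. A toy sketch of that reconciliation idea in plain Python; the resource names are invented, and a real planner also diffs attributes, not just presence:

```python
def plan(desired, actual):
    """Compute an IaC-style plan: resources to create, delete, or leave as-is."""
    desired_set, actual_set = set(desired), set(actual)
    return {
        "create": sorted(desired_set - actual_set),   # in config, not in cloud
        "delete": sorted(actual_set - desired_set),   # in cloud, not in config
        "keep": sorted(desired_set & actual_set),     # already converged
    }

result = plan(desired=["vpc-main", "s3-logs", "rds-app"],
              actual=["vpc-main", "ec2-legacy"])
print(result)
```

Applying the plan (create/delete in dependency order) is the `terraform apply` half of the workflow; idempotency falls out naturally, since a converged state yields an empty create and delete set.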

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

We’re on the lookout for a Data Science Manager with deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), and Generative AI to lead a high-impact Conversational AI initiative for one of our premier EMEA-based clients. You’ll not only guide a team of data scientists and ML engineers but also work hands-on to build cutting-edge systems for real-time transcription, sentiment analysis, summarization, and intelligent decision-making. Your solutions will enable smarter engagement strategies, unlock valuable insights, and directly impact client success. What You'll Do: Strategic Leadership & Delivery: Lead the end-to-end delivery of AI solutions for transcription and conversation analytics. Collaborate with client stakeholders to understand business problems and translate them into AI strategies. Provide mentorship to team members, foster best practices, and ensure high-quality technical delivery. Conversational AI Development: Oversee development and tuning of ASR models using tools like Whisper, DeepSpeech, Kaldi, AWS/GCP STT. Guide implementation of speaker diarization for multi-speaker conversations. Ensure solutions are domain-tuned and accurate in real-world conditions. Generative AI & NLP Applications: Architect LLM-based pipelines for summarization, topic extraction, and conversation analytics. Design and implement custom RAG pipelines to enrich conversational insights using external knowledge bases. Apply prompt engineering and NER techniques for context-aware interactions. Decision Intelligence & Sentiment Analysis: Drive the development of models for sentiment detection, intent classification, and predictive recommendations. Enable intelligent workflows that suggest next-best actions and enhance customer experiences. AI at Scale: Oversee deployment pipelines using Docker, Kubernetes, FastAPI, and cloud-native tools (AWS/GCP/Azure AI). Champion cost-effective model serving using ONNX, TensorRT, or Triton.
Implement and monitor MLOps workflows to support continuous learning and model evolution. What You'll Bring to the Table: Technical Excellence 8+ years of proven experience leading teams in Speech-to-Text, NLP, LLMs, and Conversational AI domains. Strong Python skills and experience with PyTorch, TensorFlow, Hugging Face, LangChain. Deep understanding of RAG architectures, vector DBs (FAISS, Pinecone, Weaviate), and cloud deployment practices. Hands-on experience with real-time applications and inference optimization. Leadership & Communication Ability to balance strategic thinking with hands-on execution. Strong mentorship and team management skills. Exceptional communication and stakeholder engagement capabilities. A passion for transforming business needs into scalable AI systems. Bonus Points For: Experience in healthcare, pharma, or life sciences conversational use cases. Exposure to knowledge graphs, RLHF, or multimodal AI. Demonstrated impact through cross-functional leadership and client-facing solutioning. What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers, it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing. Recognition & Rewards: We believe great work deserves to be recognized.
Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program. Fuel Your Growth Journey with Certifications: We’re all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
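The RAG pipelines this role covers hinge on one retrieval primitive: ranking documents by similarity between their embeddings and the query embedding. A minimal cosine-similarity top-k sketch in plain Python; the 3-dimensional vectors and doc IDs are toy stand-ins for real embeddings from a vector DB:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=2):
    """Rank (doc_id, vector) pairs by cosine similarity to the query."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in corpus]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

corpus = [("doc-a", [1.0, 0.0, 0.0]),
          ("doc-b", [0.7, 0.7, 0.0]),
          ("doc-c", [0.0, 0.0, 1.0])]
print(top_k([1.0, 0.1, 0.0], corpus, k=2))  # doc-a ranks first, then doc-b
```

Production vector DBs like FAISS or Pinecone replace this linear scan with approximate nearest-neighbour indexes, but the scoring function is the same idea.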

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

In Norconsulting we are currently looking for an AI developer to join us in Chennai in a freelancer opportunity for a major Banking organization. Duration: long term Location: Chennai Rate: 110 USD/day (around 2200 USD per month) Type of assignment: Full-time (8h/day, Monday to Friday) SKILLS / EXPERIENCE REQUIRED AI Developer • Large Language Models (LLMs) & Prompt Engineering: Experience working with transformer-based models (e.g., GPT, BERT) and crafting effective prompts for tasks like summarization, text classification and document understanding. • Azure Document Intelligence: Hands-on experience with Azure AI Document Intelligence for extracting structured data from unstructured documents (invoices, forms, contracts). • Model Development & Evaluation: Strong foundation in ML algorithms, model evaluation metrics, and hyperparameter tuning using tools like Scikit-learn, XGBoost, or PyTorch. • MLOps (Machine Learning Operations): Proficient in building and managing ML pipelines using Azure ML, MLflow, and CI/CD tools for model training, deployment, and monitoring. • Azure Machine Learning (Azure ML): Experience with Azure ML Studio, automated ML, model registry, and deployment to endpoints or containers. • Azure Functions & Serverless AI: Building event-driven AI workflows using Azure Functions for real-time inference, data processing, and integration with other Azure services. • Programming Languages: Strong coding skills in Python (preferred), with knowledge of libraries like NumPy, Pandas, Scikit-learn, and Matplotlib. • Database & Data Lakes: Experience with SQL and NoSQL databases, and integration with data lakes for AI pipelines. • DevOps & Git Integration: Experience with Azure DevOps for version control, testing, and continuous integration of AI workflows. WBGJP00012309
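The prompt-engineering skill listed above often amounts to assembling few-shot prompts programmatically for tasks like text classification. A minimal, library-free sketch; the template layout is an assumption for illustration, not any bank's actual format:

```python
def build_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [f"Task: {task}", ""]
    for text, label in examples:
        lines += [f"Input: {text}", f"Output: {label}", ""]
    # End with an open "Output:" line for the model to complete.
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_prompt(
    task="Classify the sentiment of the input as positive or negative.",
    examples=[("Great service!", "positive"), ("Slow and buggy.", "negative")],
    query="The invoice parser works flawlessly.",
)
print(prompt)
```

Keeping the template in code like this makes prompt variants versionable and testable, which matters once prompts feed automated document-understanding pipelines.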

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: AI Full Stack Developer – GenAI & NLP Location: Pune, India (Hybrid) Work Mode: Remote Experience Required: 2+ Years (Relevant AI/ML with GenAI & NLP) Salary: Up to ₹15 LPA (CTC) Employment Type: Full-time Department: AI Research & Development Role Overview We are looking for a passionate AI Developer with strong hands-on experience in Generative AI and Natural Language Processing (NLP) to help build intelligent and scalable solutions. In this role, you will design and deploy advanced AI models for tasks such as language generation, summarization, chatbot development, document analysis, and more. You’ll work with cutting-edge LLMs (Large Language Models) and contribute to impactful AI initiatives. Key Responsibilities Design, fine-tune, and deploy NLP and GenAI models using LLMs like GPT, BERT, LLaMA, or similar. Build applications for tasks like text generation, question-answering, summarization, sentiment analysis, and semantic search. Integrate language models into production systems using RESTful APIs or cloud services. Evaluate and optimize models for accuracy, latency, and cost. Collaborate with product and engineering teams to implement intelligent user-facing features. Preprocess and annotate text data, create custom datasets, and manage model pipelines. Stay updated on the latest advancements in generative AI, transformer models, and NLP frameworks. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, AI/ML, or a related field. Minimum 2 years of experience in fullstack development and AI/ML development, with recent work in NLP or Generative AI. Hands-on experience with models such as GPT, T5, BERT, or similar transformer-based architectures. Proficient in Python and libraries such as Hugging Face Transformers, spaCy, NLTK, or OpenAI APIs. Hands-on experience in any frontend/backend technologies for software development. Experience with deploying models using Flask, FastAPI, or similar frameworks.
Strong understanding of NLP tasks, embeddings, vector databases (e.g., FAISS, Pinecone), and prompt engineering. Familiarity with MLOps tools and cloud platforms (AWS, Azure, or GCP). Preferred Qualifications Experience with LangChain, RAG (Retrieval-Augmented Generation), or custom LLM fine-tuning. Knowledge of model compression, quantization, or inference optimization. Exposure to ethical AI, model interpretability, and data privacy practices. What We Offer Competitive salary package up to ₹15 LPA. Remote work flexibility with hybrid team collaboration in Pune. Opportunity to work on real-world generative AI and NLP applications. Access to resources for continuous learning and certification support. Inclusive, fast-paced, and innovative work culture. Skills: nltk,computer vision,inference optimization,model interpretability,gpt,bert,mlops,artificial intelligence,next.js,tensorflow,ai development,machine learning,generative ai,ml,openai,node.js,kubernetes,large language models (llms),openai apis,natural language processing,machine learning (ml),fastapi,natural language processing (nlp),java,azure,nlp tasks,model compression,embeddings,vector databases,aws,typescript,r,hugging face transformers,google cloud,hugging face,llama,ai tools,mlops tools,rag architectures,langchain,spacy,docker,retrieval-augmented generation (rag),pytorch,gcp,cloud,large language models,react.js,deep learning,python,ai technologies,flask,ci/cd,data privacy,django,quantization,javascript,ethical ai,nlp

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Experience : 3+ years Job Location : Bengaluru, Karnataka Work Modality : Full-time work from office Job Description : You will develop LLM-driven products from the ground up. We are looking for enthusiastic members who would like to design cutting-edge systems and implement AI solutions that scale globally. Strong communication skills Problem-solving abilities Strong programming background Understanding of Transformer architecture Required Qualifications : 3+ years of hands-on experience in AI/ML, with proven projects using Transformers (e.g., BERT, GPT, T5, ViTs, Small LLMs) Strong proficiency in Python and deep learning frameworks (PyTorch or TensorFlow) Ability to independently analyze open sources and code repositories Experience in fine-tuning Transformer models for NLP (e.g., text classification, summarization) or Computer Vision (e.g., image generation, recognition) Knowledge of GPU acceleration, optimization techniques, and model quantization Experience in deploying models using Flask, FastAPI, or cloud-based inference services Familiarity with data pre-processing, feature engineering, and training workflows

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role - Senior AI Engineer Experience: 3+ years in AI/ML/Data Science Location: Gurgaon, work from office About Tap Health: Tap Health is a deep-tech startup transforming chronic care with AI and changing how people access health information. We build next-generation, AI-driven digital therapeutics for diabetes, PCOS, hypertension, asthma, pregnancy, obesity and more, eliminating the need for human support while significantly reducing costs, improving engagement and boosting outcomes. Tap Health's fully autonomous digital therapeutic for diabetes simplifies management by delivering real-time, daily guidance to optimize health outcomes at less than 10% of the cost of legacy products. Powered by adaptive AI and clinical protocols, it dynamically personalizes each user’s care journey, delivering tailored insights, lifestyle interventions, motivational nudges, adherence support, and improved clinical outcomes. Beyond digital therapeutics, Tap Health’s Health Assistant assists users in primary symptom diagnosis based on their inputs and provides instant health advice through a seamless, voice-first experience. www.tap.health Role Overview: Lead AI Engineer We are hiring a Senior AI Engineer in Gurgaon to drive AI-driven healthcare innovations. The ideal candidate has 3+ years of AI/ML experience, 1+ year of GenAI production experience, and 1+ year of hands-on GenAI product development. You need a strong data science background and expertise in GenAI, Agentic AI deployments, causal inference, and Bayesian modelling, with a strong foundation in LLMs and traditional models. You will be collaborating with the AI, Engineering, and Product teams to build scalable, consumer-focused healthcare solutions. As a Lead AI Engineer, you will be the go-to expert—the engineer others turn to when they hit roadblocks. You will mentor, collaborate, and enable high product velocity while fostering a culture of continuous learning and innovation. 
Skills & Experience The ideal candidate should have the following qualities: Strong understanding of fine-tuning, optimisation, and neural architectures. Hands-on experience with Python, PyTorch, and FastAI frameworks. Experience running production workloads on one or more hyperscalers (AWS, GCP, Azure, Oracle, DigitalOcean, etc.). In-depth knowledge of LLMs—how they work and their limitations. Ability to assess the advantages of fine-tuning, including dataset selection strategies. Understanding of Agentic AI frameworks, MCP (Model Context Protocol), ACP (Agent Communication Protocol), and autonomous workflows. Familiarity with evaluation metrics for fine-tuned models and industry-specific public benchmarking standards in healthcare. Knowledge of advanced statistical models, reinforcement learning, and Bayesian inference methods. Experience in Causal Inference and Experimentation Science to improve product and marketing outcomes. Proficiency in querying and analysing diverse datasets from multiple sources to build custom ML and optimisation models. Comfortable with code reviews and standard coding practices using Python, Git, Cursor, and CodeRabbit.

Posted 3 weeks ago

Apply

Exploring Inference Jobs in India

With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.
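The "drawing conclusions from data" described above can be sketched in a few lines of Python. This is a minimal, illustrative example (the measurements are invented, not from any real dataset): a Welch's t-test comparing two sample means using only the standard library.

```python
import math
import statistics

# Invented measurements for two groups, purely for illustration.
group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
group_b = [12.9, 13.1, 12.7, 13.0, 12.8, 13.2]

def welch_t(x, y):
    """Welch's t statistic: compares two means without assuming equal variances."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / len(x) + vy / len(y))

t = welch_t(group_a, group_b)
# |t| well above ~2.23 (the 5% two-sided critical value near 10 degrees of
# freedom) supports rejecting "both groups share the same mean".
print(f"t = {t:.2f}, significant at 5% = {abs(t) > 2.23}")
```

In practice you would use a library routine such as `scipy.stats.ttest_ind(..., equal_var=False)` to get an exact p-value rather than comparing against a hard-coded critical value.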

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These major cities are known for their thriving tech industries and are actively hiring professionals with expertise in inference.

Average Salary Range

The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.

Related Skills

In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.

Interview Questions

  • What is the difference between inferential statistics and descriptive statistics? (basic)
  • How do you handle missing data in a dataset when performing inference? (medium)
  • Can you explain the bias-variance tradeoff in the context of inference? (medium)
  • What are the assumptions of linear regression and how do you test them? (advanced)
  • How would you determine the significance of a coefficient in a regression model? (medium)
  • Explain the concept of p-value and its significance in hypothesis testing. (basic)
  • Can you discuss the difference between frequentist and Bayesian inference methods? (advanced)
  • How do you handle multicollinearity in a regression model? (medium)
  • What is the Central Limit Theorem and why is it important in statistical inference? (medium)
  • How would you choose between different machine learning algorithms for a given inference task? (medium)
  • Explain the concept of overfitting and how it can affect inference results. (medium)
  • Can you discuss the difference between parametric and non-parametric inference methods? (advanced)
  • Describe a real-world project where you applied inference techniques to draw meaningful conclusions from data. (advanced)
  • How do you assess the goodness of fit of a regression model in inference? (medium)
  • What is the purpose of cross-validation in machine learning and how does it impact inference? (medium)
  • Can you explain the concept of Type I and Type II errors in hypothesis testing? (basic)
  • How would you handle outliers in a dataset when performing inference? (medium)
  • Discuss the importance of sample size in statistical inference and hypothesis testing. (basic)
  • How do you interpret confidence intervals in an inference context? (medium)
  • Can you explain the concept of statistical power and its relevance in inference? (medium)
  • What are some common pitfalls to avoid when performing inference on data? (basic)
  • How do you test the normality assumption in a dataset for conducting inference? (medium)
  • Explain the difference between correlation and causation in the context of inference. (medium)
  • How would you evaluate the performance of a classification model in an inference task? (medium)
  • Discuss the importance of feature selection in building an effective inference model. (medium)
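Several of the questions above (confidence intervals, the Central Limit Theorem, sample size) come down to one operational idea: a 95% interval procedure should cover the true parameter in about 95% of repeated samples. A quick simulation with the standard library makes this concrete; the population parameters and sample size below are arbitrary choices for illustration.

```python
import math
import random
import statistics

random.seed(0)

# Repeatedly sample from a known population, build a 95% CI for the mean
# from each sample, and count how often the interval covers the true mean.
true_mean, true_sd, n, trials = 50.0, 10.0, 40, 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)          # standard error
    lo, hi = m - 1.96 * se, m + 1.96 * se                 # normal-approx CI
    if lo <= true_mean <= hi:
        covered += 1

print(f"coverage ≈ {covered / trials:.3f}")  # should land near 0.95
```

Note the coverage runs slightly under 0.95 because 1.96 is the normal critical value; using the t-distribution critical value for n−1 degrees of freedom tightens this up, which is a useful talking point for the sample-size question.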

Closing Remark

As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies