Home
Jobs

1024 Inference Jobs - Page 18

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 years

9 - 9 Lacs

Hyderābād

On-site

Source: Glassdoor

Kore.ai is a pioneering force in enterprise AI transformation, empowering organizations through our comprehensive agentic AI platform. With innovative offerings across "AI for Service," "AI for Work," and "AI for Process," we're enabling more than 400 Global 2000 companies to fundamentally reimagine their operations, customer experiences and employee productivity. Our end-to-end platform enables enterprises to build, deploy, manage, monitor, and continuously improve agentic applications at scale. We've automated over 1 billion interactions every year with voice and digital AI in customer service, and transformed employee experiences for tens of thousands of employees through productivity and AI-driven workflow automation. Recognized as a leader by Gartner, Forrester, IDC, ISG, and Everest, Kore.ai has secured Series D funding of $150M, including strategic investment from NVIDIA to drive enterprise AI innovation. Founded in 2014 and headquartered in Florida, we maintain a global presence with offices in India, the UK, Germany, Korea, and Japan. You can find full press coverage at https://kore.ai/press/

POSITION / TITLE: QA Lead

POSITION SUMMARY: As a QA Automation Technical Lead for the Agent-Platform product at Kore.ai, you will be responsible for driving the end-to-end test automation strategy to ensure a high-quality and reliable enterprise-grade AI platform. You will lead a team of automation engineers and collaborate with cross-functional teams to validate complex features, services, and workflows at scale.

LOCATION: Hyderabad (Work from Office)

RESPONSIBILITIES:
- Lead the daily QA stand-up, assigning priorities, removing blockers, and mentoring the team on test automation practices.
- Define and continuously evolve the automation strategy for web, API and data validation across our products.
- Drive the design, implementation, and maintenance of scalable test automation frameworks in Python and open-source tools such as Selenium and Behave.
- Lead test coverage and BDD test development with tools like Behave, ensuring reliable and maintainable tests (a minimal sketch follows this listing).
- Collaborate closely with developers, product managers, and DevOps teams to integrate automated tests into CI/CD pipelines (Jenkins, Git).
- Perform code reviews, monitor test effectiveness, and ensure tests consistently deliver accurate feedback on release readiness.
- Analyze failed test results, troubleshoot regressions, and proactively address quality risks early in development.
- Stay on top of emerging trends in QA, AI/ML testing, and generative automation tools to enhance the QA tech stack.
- Contribute to strategic quality initiatives such as test data management, environment stability, and performance validation.

SKILLS REQUIRED:
- Minimum 7-8 years of experience in test automation, including 2 years of hands-on experience guiding or coordinating test automation efforts within a team.
- Strong hands-on experience with Python, Selenium WebDriver, and API testing tools.
- Proficiency in BDD using Cucumber and test execution frameworks like pytest.
- Experience designing test automation strategies for large-scale platforms with CI/CD pipelines (Jenkins, Git).
- Ability to troubleshoot complex systems, isolate automation failures, and mentor junior engineers.
- Familiarity with test reporting, test data generation, and defect lifecycle management.
- Excellent communication and collaboration skills to work with cross-functional product and engineering teams.
- Familiarity with AI/ML concepts and their validation, including model inference testing and chatbot workflows, is a plus.

EDUCATION QUALIFICATION: Bachelor's in Engineering or Master's in Computer Applications.

Technologies We Use: Python, Selenium, Cucumber, Behave, Jenkins, Git, Postman, JIRA, MongoDB (for test data), cloud platforms (AWS preferred).

Why Join Us? At Kore.ai, you won't be maintaining quality for conventional software; you'll be defining what quality means for an entirely new category of platform technology that enables enterprise-scale agentic applications. Your work will directly influence how the world's leading organizations build, deploy, and trust AI systems, establishing standards that could transform the industry. Join us in building not just a better platform, but the frameworks that ensure enterprise agentic applications deliver on their transformative promise safely, effectively, and responsibly at scale.
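For context on the BDD tooling this role names (Behave driving Selenium WebDriver), here is a minimal sketch of a step-definition file. It is illustrative only: the feature wording, URL, and element locators are hypothetical, not Kore.ai's actual test suite.

```python
# steps/login_steps.py - minimal Behave step definitions driving Selenium WebDriver.
# The URL and element locators below are illustrative placeholders.
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By


@given("the user is on the login page")
def step_open_login_page(context):
    # A real suite would usually create/teardown the driver in environment.py hooks.
    context.driver = webdriver.Chrome()
    context.driver.get("https://example.test/login")


@when('the user signs in as "{username}" with password "{password}"')
def step_sign_in(context, username, password):
    context.driver.find_element(By.ID, "username").send_keys(username)
    context.driver.find_element(By.ID, "password").send_keys(password)
    context.driver.find_element(By.ID, "submit").click()


@then("the dashboard is displayed")
def step_dashboard_visible(context):
    heading = context.driver.find_element(By.CSS_SELECTOR, "h1.dashboard-title")
    assert heading.is_displayed()
    context.driver.quit()
```

Each step matches a line in a Gherkin .feature file and can be reused across scenarios; teams typically run such suites from the CI pipeline the posting describes.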

Posted 1 week ago

Apply

2.0 years

7 - 9 Lacs

Hyderābād

On-site

Source: Glassdoor

Location: Hyderabad (In-Office)
Faculty Name: Professor Gurvinder Sandhu
Academic Area: Accounting

The ISB Research Associate Program: ISB hosts a cutting-edge two-year Research Associate Program within its Accounting area. This is akin to the pre-doctoral programs run by many top research universities (especially in the U.S.), where students work alongside professors and typically go on to pursue a Ph.D. in Business Administration (with a specialization in Accounting). It is a unique setting in which candidates can attend Ph.D. courses on research methods (such as Panel Data Econometrics and Causal Inference) as well as seminar courses focusing on empirical archival research in Accounting, Auditing, and Corporate Finance. We invite applications from motivated individuals with a solid analytical academic background to work as full-time Research Associates at ISB. Research Associates (RAs) are expected to work for about two years and generally apply to Ph.D. programs in the second year of the RA program.

Research Summary of the Faculty: Gurvinder Sandhu is an Assistant Professor of Accounting at the Indian School of Business (ISB). He holds a PhD in Management Science from the University of Texas at Dallas, an MBA from Melbourne Business School (University of Melbourne), and a B.Com from Kurukshetra University. Professor Sandhu's research explores financial institutions, credit markets, and firms' voluntary disclosures. His empirical work examines the forces that shape banks' loan portfolios, and he has developed a bank diversification measure that captures how diversified banks are in their commercial loan portfolios. His research has been accepted at academic conferences including the American Accounting Association and the European Accounting Association. His teaching interests lie in financial accounting, specifically financial statement analysis and introductory financial accounting. View Profile

About ISB: The Indian School of Business (ISB) evolved from the need for a world-class business school in Asia. The founders, some of the best minds from the corporate and academic worlds, anticipated the leadership needs of the emerging Asian economies. ISB is committed to creating such leaders through its innovative programs, outstanding faculty, and thought leadership, and it provides a robust environment that generates high-quality research that is both contemporary and rigorous.

Roles and Responsibilities: Work with the faculty on research projects of common interest. The candidate will assist the professor in his ongoing research, including support through data collection, data cleaning, literature review, and preliminary data analysis. You will be exposed to creating and handling big data sets, working extensively in statistical software and programming languages (such as Python, Stata, R, and SAS), and learning state-of-the-art research design methodologies, e.g., panel data regressions with fixed effects (a minimal sketch follows this listing). This position is a good fit for a candidate looking to pursue a PhD in Accounting or to excel in research (post-doctoral).

Required Skills and Qualifications: Master's degree in Economics, Finance, Statistics, Econometrics, Mathematics, Physics, or Engineering (electrical, signal processing, computer science), or a four-year bachelor's degree in mathematics, physics, or economics from a premier institute. Strong background in mathematics. Python/Stata/R/SAS coding skills are essential; the ideal candidate should be proficient in at least one of these languages or software packages. Knowledge of writing API queries, web-scraping algorithms, machine learning/AI, or textual analysis is an added advantage.

Our Commitment towards You: ISB is a research-focused business school. It offers a variety of opportunities to understand current management phenomena in depth through research brown-bag seminars, workshops, and PhD-level courses, and provides several options to hone your analytical skills. Along with a competitive salary and a plethora of employee benefits, ISB hosts a world-class Learning Resource Centre and offers comprehensive health and personal accident cover for you and your family members. ISB believes in creating a truly inclusive culture that values diversity, equity, and inclusion for everyone through our ideas and collaborations.

If this role is your true calling, please complete the form at the link below. For any questions, contact careers_ra_fd@isb.edu. Please do not send your resume by email, as it is difficult to track; use only the link below to apply. We will connect with you shortly.
https://www.cognitoforms.com/IndianSchoolOfBusiness9/FDOHiringForm

Hyderabad Campus: Indian School of Business, Gachibowli, Hyderabad - 500111 | 040 23187777 | careers_hyderabad@isb.edu
Mohali Campus: Indian School of Business, Knowledge City, Sector 81, SAS Nagar, Mohali - 140 306 | 0172 4591800 | careers_mohali@isb.edu
RA program queries: careers_ra@isb.edu | Timings: Monday-Friday, 08:00 AM to 06:00 PM IST
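To illustrate the kind of research design the posting mentions (panel data regressions with fixed effects), below is a minimal sketch using statsmodels' formula API. The dataset, column names, and clustering choice are hypothetical placeholders, not part of the program's actual materials.

```python
# Minimal sketch: firm- and year-fixed-effects regression with clustered standard errors.
# Column names (roa, disclosure, size, firm, year) are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_year_panel.csv")  # hypothetical firm-year panel

# C(firm) and C(year) absorb firm and year fixed effects as dummy variables;
# standard errors are clustered at the firm level.
model = smf.ols("roa ~ disclosure + size + C(firm) + C(year)", data=panel)
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["firm"]})
print(result.params[["disclosure", "size"]])
```

For large panels, researchers often use a within-transformation (e.g., linearmodels' PanelOLS) rather than explicit dummies, but the specification being estimated is the same.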

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description

Role Purpose: The purpose of the role is to create exceptional architectural solution design and thought leadership and to enable delivery teams to provide exceptional client engagement and satisfaction.

Mandatory Skills: Data Science, ML, DL, Python for Data Science, TensorFlow, PyTorch, Django, SQL, MLOps
Preferred Skills: NLP, Gen AI, LLM, Power BI, Advanced Analytics, Banking exposure

- Strong understanding of data science, machine learning, and deep learning principles and algorithms.
- Proficiency in Python and frameworks such as TensorFlow and PyTorch.
- Experienced data scientist who can use Python to build AI models for banking product acquisition, deepening, and retention (a minimal propensity-model sketch follows this listing).
- Drive data-driven personalisation and customer segmentation in accordance with the bank's data privacy and security standards.
- Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques.
- Optimize model performance and scalability for real-time inference and deployment.
- Experiment with different hyperparameters and model configurations to improve AI model quality.
- Ensure AI/ML solutions are developed and validated in accordance with Responsible AI guidelines and standards.
- Working knowledge of and experience in MLOps is a must; an engineering background is preferred.
- Excellent command of data warehousing concepts and SQL.
- Knowledge of personal banking products is a plus.

Mandatory Skills: AI Cognitive. Experience: 8-10 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
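As a rough illustration of the propensity-modelling work described above (predicting product acquisition from customer features), here is a minimal scikit-learn sketch. The feature names and data file are hypothetical; no real banking data or bank-specific pipeline is implied.

```python
# Minimal sketch: product-acquisition propensity model with scikit-learn.
# Feature and column names are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_snapshot.csv")  # hypothetical customer-level snapshot
features = ["tenure_months", "avg_balance", "num_products", "digital_logins_90d"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features],
    df["acquired_product"],
    test_size=0.2,
    random_state=42,
    stratify=df["acquired_product"],
)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Propensity scores used to rank customers for an acquisition campaign.
scores = pipeline.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```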

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderābād

On-site

Source: Glassdoor

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a skilled and motivated AI/ML Engineer with 3-5 years of experience to join our team. The ideal candidate will have hands-on expertise in building and deploying AI/ML solutions on the Azure platform, with a solid focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and Azure ML Studio. You will play a key role in designing intelligent systems, deploying scalable models, and integrating advanced AI capabilities into enterprise applications.

Primary Responsibilities:
- AI/ML Development & Deployment: Design, develop, and deploy machine learning models using Azure ML Studio and Azure Machine Learning services. Build and fine-tune LLM-based solutions for enterprise use cases. Develop and implement RAG pipelines using Azure services and vector databases. Deploy and monitor AI/ML models in production environments, ensuring scalability and performance.
- Azure Platform Engineering: Leverage Azure services such as Azure Data Lake, Azure Synapse, Azure Blob Storage, and Azure Cognitive Search for data ingestion and processing. Integrate AI models with Azure-based data pipelines and APIs. Use Azure DevOps for CI/CD of ML workflows and model versioning.
- Data Engineering & Processing: Build and maintain ETL/ELT pipelines for structured and unstructured data using Databricks and Apache Spark. Prepare and transform data for training and inference using Python, PySpark and SQL.
- LLM & RAG System Implementation: Implement LLM-based agents and chatbots using frameworks like LangChain. Design and optimize RAG architectures for domain-specific knowledge retrieval. Work with vector databases (e.g., Azure Cognitive Search, FAISS) for embedding-based search (a minimal retrieval sketch follows this listing).
- Collaboration & Innovation: Collaborate with data scientists, product managers, and engineers to deliver AI-driven features. Stay current with advancements in generative AI, LLMs, and Azure AI services. Contribute to the continuous improvement of AI/ML pipelines and best practices.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- 3+ years of hands-on experience in AI/ML engineering with a focus on Azure
- Proven experience in deploying ML models using Azure ML Studio and Azure Machine Learning
- Experience working with LLMs, RAG systems, and AI agents
- Experience with Databricks, Apache Spark, and Azure Data services
- Knowledge of Azure DevOps and CI/CD for ML workflows
- Understanding of data governance and security in cloud environments
- Familiarity with MLOps practices and model monitoring tools
- Familiarity with vector databases and embedding models
- Proficiency in Python, SQL, and PySpark
- Proven solid analytical and problem-solving skills
- Proven effective communication and collaboration with cross-functional teams
- Proven ability to translate business requirements into technical solutions

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
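For readers unfamiliar with the embedding-based retrieval step of a RAG pipeline mentioned above, the sketch below shows the core idea with FAISS. The random vectors stand in for embeddings that would normally come from an embedding model (for example, an Azure OpenAI embedding deployment), and the document snippets are hypothetical.

```python
# Minimal sketch of the retrieval step in a RAG pipeline using FAISS.
# Random vectors stand in for real embeddings produced by an embedding model.
import numpy as np
import faiss

dim = 384  # embedding dimensionality (model-dependent)
docs = ["claims policy excerpt", "formulary update", "prior-auth guideline"]  # placeholder corpus

rng = np.random.default_rng(0)
doc_vectors = rng.standard_normal((len(docs), dim)).astype("float32")
faiss.normalize_L2(doc_vectors)      # normalize so inner product equals cosine similarity

index = faiss.IndexFlatIP(dim)       # exact inner-product index
index.add(doc_vectors)

query_vector = rng.standard_normal((1, dim)).astype("float32")
faiss.normalize_L2(query_vector)
scores, ids = index.search(query_vector, 2)

# The retrieved chunks would be inserted into the LLM prompt as grounding context.
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[doc_id]}")
```

In production the same pattern is typically served through Azure Cognitive Search or another managed vector store rather than an in-process index.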

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon

On-site

Source: Glassdoor

About the Role: Grade Level (for internal use): 10 S&P Global Commodity Insights The Role: Senior Cloud Engineer The Location: Hyderabad, Gurgaon The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions. The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments. What’s in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global. Responsibilities: Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance. Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization. Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints. Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response. Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies. Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution. Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring. Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines. Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans. Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments. Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets. Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments. Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving. Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features. Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages. Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure. 
Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes. Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals. Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable. What We’re Looking For: Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes. Proven experience in deploying and managing cloud solutions in a collaborative Agile environment. Ability to present technical concepts to both business and technical audiences. Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines. Basic Qualifications: BA/BS in computer science, information technology, or a related field. 5+ years of experience in cloud engineering or related roles, specifically with AWS. Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Knowledge of container orchestration and microservices architecture. Familiarity with security best practices in cloud environments. Preferred Qualifications: Extensive Hands-on Experience with AWS Services. Excellent problem-solving skills and the ability to work independently as well as part of a team. Strong communication skills and the ability to influence stakeholders at all levels. Experience with greenfield projects and building cloud infrastructure from scratch. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. 
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315801 Posted On: 2025-06-05 Location: Hyderabad, Telangana, India
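One concrete slice of the real-time inference work this role describes (models served behind SageMaker endpoints and invoked from Lambda or other services) is sketched below with boto3. The endpoint name, region, and payload schema are hypothetical placeholders.

```python
# Minimal sketch: invoking a deployed SageMaker real-time endpoint with boto3.
# The endpoint name, region, and payload schema are illustrative placeholders.
import json
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {"features": [[0.42, 1.7, 3]]}  # shape depends on how the model was packaged
response = runtime.invoke_endpoint(
    EndpointName="commodity-price-model",   # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```

In a setup like the one described, this call usually sits behind API Gateway and Lambda, with IAM policies scoping who may invoke the endpoint and CloudWatch capturing latency and error metrics.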

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

Gurgaon

On-site

Source: Glassdoor

Job Summary: We are looking for a skilled MLOps Engineer who specializes in deploying and managing machine learning models using cloud-native CI/CD pipelines, FastAPI, and Kubernetes, without Docker. The ideal candidate should be well-versed in scalable model serving, API development, and infrastructure automation on the cloud using native container alternatives or pre-built images.

Key Responsibilities:
- Design, develop, and maintain CI/CD pipelines for ML model training, testing, and deployment on cloud platforms (Azure/AWS/GCP).
- Develop REST APIs using FastAPI for model inference and data services (a minimal sketch follows this listing).
- Deploy and orchestrate microservices and ML workloads on Kubernetes clusters (EKS, AKS, GKE, or on-prem K8s).
- Implement model monitoring, logging, and version control without Docker-based containers; utilize alternatives such as Singularity, Buildah, or cloud-native container orchestration.
- Automate deployment pipelines using tools like GitHub Actions, GitLab CI, Jenkins, Azure DevOps, etc.
- Manage secrets, configurations, and infrastructure using Kubernetes Secrets, ConfigMaps, Helm, or Kustomize.
- Work closely with Data Scientists and Backend Engineers to integrate ML models with APIs and UIs.
- Optimize performance, scalability, and reliability of ML services in production.

Required Skills:
- Strong experience with Kubernetes (deployment, scaling, Helm/Kustomize).
- Deep understanding of CI/CD tools like Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps.
- Experience with FastAPI for high-performance ML/REST APIs.
- Proficient in cloud platforms (AWS, GCP, or Azure) for ML pipeline orchestration.
- Experience with non-Docker containerization or deployment tools (e.g., Singularity, Podman, or other OCI-compliant methods).
- Strong Python skills and familiarity with ML libraries and model serialization (e.g., Pickle, ONNX, TorchServe).
- Good understanding of DevOps principles, GitOps, and IaC (Terraform or similar).

Preferred Qualifications:
- Experience with Kubeflow, MLflow, or similar tools.
- Familiarity with model monitoring tools like Prometheus, Grafana, or Seldon Core.
- Understanding of security and compliance in production ML systems.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Industry: Technology, Information and Internet
Employment Type: Full-time
Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹50,000.00 per month
Work Location: In person
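Since the posting centres on FastAPI-based model serving, here is a minimal sketch of an inference endpoint. The model file name and feature schema are hypothetical; in the Docker-less setup described, an app like this would typically run under uvicorn inside a Kubernetes pod built from a pre-built base image.

```python
# main.py - minimal FastAPI inference service (run with: uvicorn main:app).
# The model file name and feature schema are illustrative placeholders.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")

with open("model.pkl", "rb") as f:   # hypothetical pre-trained scikit-learn model
    model = pickle.load(f)


class PredictRequest(BaseModel):
    features: list[float]


class PredictResponse(BaseModel):
    prediction: float


@app.get("/healthz")
def healthz() -> dict:
    # Used by Kubernetes liveness/readiness probes.
    return {"status": "ok"}


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    y = model.predict([req.features])[0]
    return PredictResponse(prediction=float(y))
```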

Posted 1 week ago

Apply

5.0 years

50 Lacs

Cuttack, Odisha, India

Remote

Source: LinkedIn

Experience: 5.00+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Precanto)
(Note: This is a requirement for one of Uplers' clients: a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity?
Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

Our client is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.

Job Description - Full-time
Team: Data & ML Engineering
We're looking for someone with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do:
- Build and optimize machine learning models, from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn (a minimal tracking sketch follows this listing)
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

TECH STACK
Languages: Python, SQL
Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
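To make the Ray Tune / MLflow part of the stack concrete, below is a minimal sketch of logging a scikit-learn training run to MLflow. The experiment name, synthetic data, and metric are hypothetical; a real setup would wrap this training code in a Ray Tune trainable for hyperparameter search.

```python
# Minimal sketch: tracking a training run with MLflow.
# Experiment name, synthetic data, and hyperparameters are illustrative placeholders.
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.5, -2.0, 0.3, 0.0]) + rng.normal(scale=0.1, size=500)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=7)

mlflow.set_experiment("forecasting-baselines")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestRegressor(**params, random_state=7).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("val_mae", mean_absolute_error(y_val, model.predict(X_val)))
    mlflow.sklearn.log_model(model, "model")  # stored as a run artifact for later serving
```

With Ray Tune, the body of the `with` block becomes the trainable function and Tune sweeps the `params` dictionary, while MLflow keeps the comparable run history.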

Posted 1 week ago

Apply

5.0 years

50 Lacs

Kolkata, West Bengal, India

Remote

Source: LinkedIn

This is the same remote Machine Learning Engineer opening via Uplers (payroll and compliance managed by Precanto) described in full in the Cuttack, Odisha listing above; the role, requirements, tech stack, and application steps are identical, and only the advertised location differs.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Guwahati, Assam, India

Remote

Source: LinkedIn

This is the same remote Machine Learning Engineer opening via Uplers (payroll and compliance managed by Precanto) described in full in the Cuttack, Odisha listing above; the role, requirements, tech stack, and application steps are identical, and only the advertised location differs.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Ranchi, Jharkhand, India

Remote

Source: LinkedIn

This is the same remote Machine Learning Engineer opening via Uplers (payroll and compliance managed by Precanto) described in full in the Cuttack, Odisha listing above; the role, requirements, tech stack, and application steps are identical, and only the advertised location differs.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Hello, Truecaller is calling you from Bangalore, India! Ready to pick up?

Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, and scam calls or messages, so you can focus on the conversations that matter. Truecaller is among the top 20 most downloaded apps globally and the world's #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million active users per month. Founded in 2009, the company is listed on Nasdaq OMX Stockholm and categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. We are a team of 400 people from ~35 different nationalities, spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon, and Tel Aviv, with high ambitions.

We in the Insights Team are responsible for SMS categorization, fraud detection, and other Smart SMS features within the Truecaller app. OTP and bank notifications and bill and travel reminder alerts are some examples of the Smart SMS features. The team has developed a patented offline text parser that powers all these features and is also exploring cutting-edge technologies like LLMs to enhance the Smart SMS features. The team's mission is to become the world's most loved and trusted SMS app, which is aligned with Truecaller's vision to make communication safe and efficient. Smart SMS is used by over 90M users every day.

As an ML Engineer, you will be responsible for collecting, organizing, analyzing, and interpreting Truecaller data with a focus on NLP. In this role, you will work hands-on to make the training and deployment of ML models quick and cost-efficient. You will also be pivotal in advancing our work with large language models and on-device models across diverse regions. Your expertise will enhance our natural language processing, machine learning, and predictive analytics capabilities.

What You Bring In:
- 3+ years in machine learning engineering, with hands-on involvement in feature engineering, model development, and deployment.
- Experience in Natural Language Processing (NLP), with a deep understanding of text processing, model development, and deployment challenges in the domain (a minimal SMS-classification sketch follows this listing).
- Proven ability to develop, deploy, and maintain machine learning models in production environments, ensuring scalability, reliability, and performance.
- Strong familiarity with ML frameworks like TensorFlow, PyTorch, and ONNX, and experience with a tech stack such as Kubernetes, Docker, APIs, Vertex AI, and GCP.
- Experience deploying models across backend and mobile platforms.
- Ability to fine-tune and optimize LLM prompts for domain-specific applications.
- Ability to optimize feature engineering, model training, and deployment strategies for performance and efficiency.
- Strong SQL and statistical skills.
- Programming knowledge in at least one language such as Python or R (preferably Python).
- Knowledge of machine learning algorithms.
- Excellent teamwork and communication skills, with the ability to work cross-functionally with product, engineering, and data science teams.
- Good to have: knowledge of retrieval-based pipelines to enhance LLM performance.

The Impact You Will Create:
- Collaborate with Product and Engineering to scope, design, and implement systems that solve complex business problems, ensuring they are delivered on time and within scope.
- Design, develop, and deploy state-of-the-art NLP models, contributing directly to message classification and fraud detection at scale for millions of users.
- Leverage cutting-edge NLP techniques to enhance message understanding, spam filtering, and fraud detection, ensuring a safer and more efficient messaging experience.
- Build and optimize ML models that can efficiently handle large-scale data processing while maintaining accuracy and performance.
- Work closely with data scientists and data engineers to enable rapid experimentation, development, and productionization of models in a cost-effective manner.
- Streamline the ML lifecycle, from training to deployment, by implementing automated workflows, CI/CD pipelines, and monitoring tools for model health and performance.
- Stay ahead of advancements in ML and NLP, proactively identifying opportunities to enhance model performance, reduce latency, and improve user experience.
- Your work will directly impact millions of users, improving message classification, fraud detection, and the overall security of messaging platforms.

It Would Be Great If You Also Have:
- Understanding of conversational AI
- Experience deploying NLP models in production
- Working knowledge of GCP components
- Cloud-based LLM inference with Ray, Kubernetes, and serverless architectures

Life at Truecaller - behind the code: https://www.instagram.com/lifeattruecaller/

Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore, India. We only accept applications in English.

What We Offer:
- A smart, talented and agile team: an international team where ~35 nationalities work together across several locations and time zones in a learning, sharing and fun environment.
- A great compensation package: competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, a Udemy membership to keep learning and improving, and a wellness allowance.
- Great tech tools: pick the computer and phone that you fancy the most within our budget ranges.
- Office life: we strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break or join our exciting parties and team activities such as Lab days, sports meetups etc. There's something for everyone!
- Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age or marital status, all those things make you who you are, and that's why we would love to meet you.

Job info: Location: Bengaluru, Karnataka, India | Category: Data Science | Team: Insights | Posted 15 days ago
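As a stripped-down view of the SMS categorization problem described above, here is a TF-IDF plus logistic regression baseline in scikit-learn. The tiny inline dataset and labels are made up for illustration; a production system at this scale would use far richer models and data.

```python
# Minimal sketch: SMS category classification baseline with scikit-learn.
# The tiny inline dataset and labels are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

messages = [
    "Your OTP for login is 482913",
    "Your account was debited INR 2,500 on 12-Jun",
    "Electricity bill of INR 1,240 is due on 20-Jun",
    "Congratulations! You won a free prize, click this link now",
]
labels = ["otp", "bank", "bill", "spam"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(messages, labels)

print(clf.predict(["Your OTP is 771234", "Water bill due tomorrow"]))
```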

Posted 1 week ago

Apply

2.0 years

0 Lacs

Coimbatore

On-site

Source: Glassdoor

Key Responsibilities:
- Develop, fine-tune, and evaluate vision-language models (e.g., CLIP, Flamingo, BLIP, GPT-4V, LLaVA, etc.).
- Design and build multimodal pipelines that integrate image/video input with natural language understanding or generation.
- Work with large-scale image-text datasets (e.g., LAION, COCO, Visual Genome) for training and validation.
- Implement zero-shot/few-shot multimodal inference, retrieval, captioning, VQA (Visual Question Answering), grounding, etc. (a minimal zero-shot sketch follows this listing).
- Collaborate closely with product teams, ML engineers, and data scientists to deliver real-world multimodal applications.
- Optimize model inference performance and resource utilization in production environments (ONNX, TensorRT, etc.).
- Conduct error analysis and ablation studies, and propose improvements in visual-language alignment.
- Contribute to research papers, documentation, or patents if in a research-driven team.

Required Skills & Qualifications:
- Bachelor's/Master's/PhD in Computer Science, AI, Machine Learning, or a related field.
- 2+ years of experience in computer vision or NLP, with at least 1+ year in multimodal ML or VLMs.
- Strong programming skills in Python, with experience in libraries like PyTorch, HuggingFace Transformers, OpenCV, and torchvision.
- Familiarity with VLM architectures: CLIP, BLIP, Flamingo, LLaVA, Kosmos, GPT-4V, etc.
- Experience with dataset curation, image-caption pair processing, and image-text embedding strategies.
- Solid understanding of transformers, cross-attention mechanisms, and contrastive learning.

Job Type: Full-time
Pay: ₹25,000.00 - ₹40,000.00 per year
Schedule: Day shift
Work Location: In person
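As an illustration of the zero-shot multimodal inference the role mentions, here is a minimal CLIP sketch using Hugging Face Transformers. The image path and candidate captions are hypothetical; the checkpoint shown is the public openai/clip-vit-base-patch32 model.

```python
# Minimal sketch: zero-shot image classification with CLIP (Hugging Face Transformers).
# The image path and candidate captions are illustrative placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
captions = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores, softmaxed over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```

The same contrastive image-text embedding space also supports the retrieval and captioning-evaluation tasks listed in the responsibilities.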

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You Will Be Doing...

The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused on three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions that help drive actionable business decisions, and you will apply advanced analytical techniques and algorithms to help us solve some of Verizon's most pressing challenges.

- Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights.
- Envision and test for corner cases.
- Build analytical solutions and models by manipulating large data sets and integrating diverse data sources.
- Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders.
- Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis.
- Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
- Assist in building data views from disparate data sources which power insights and business cases.
- Apply statistical modeling and ML techniques to data, and perform root cause analysis and forecasting (a minimal forecasting sketch follows this listing).
- Develop and implement rigorous frameworks for effective base management.
- Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business need.
- Design modular, reusable Python scripts to automate data processing.
- Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.

What We're Looking For...

You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.

You Will Need To Have:
- Bachelor's degree in computer science or another technical field, or four or more years of work experience.
- Four or more years of relevant work experience.
- Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.).
- Curiosity to dive deep into data inconsistencies and perform root cause analysis.
- Programming experience in Python (Pandas, NumPy, SciPy and scikit-learn).
- Experience with visualization tools such as matplotlib, seaborn, Tableau, Grafana, etc.
- A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning.
- Understanding of time series modeling and forecasting techniques.

Even better if you have one or more of the following:
- Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI.
- Experience in applying statistical ideas and methods to data sets to answer business problems.
- Ability to collaborate effectively across teams for data discovery and validation.
- Experience in deep learning, recommendation systems, conversational systems, information retrieval, or computer vision.
- Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
- Excellent interpersonal, verbal and written communication skills.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Kochi, Kerala, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Family Data Science & Analysis (India) Travel Required Up to 10% Clearance Required None What You Will Do Design, train, and fine-tune advanced foundational models (text, audio, vision) using healthcare-and other relevant datasets, focusing on accuracy and context relevance. Collaborate with cross-functional teams (Business, engineering, IT) to seamlessly integrate AI/ML technologies into our solution offerings. Deploy, monitor, and manage AI models in a production environment, ensuring high availability, scalability, and performance. Continuously research and evaluate the latest advancements in AI/ML and industry trends to drive innovation. Develop and maintain comprehensive documentation for AI models, including development, training, fine-tuning, and deployment procedures. Provide technical guidance and mentorship to junior AI engineers and team members. Collaborate with stakeholders to understand business needs and translate them into technical requirements for model fine-tuning and development. Select and curate appropriate datasets for fine-tuning foundational models to address specific use cases. Ensure AI solutions can seamlessly integrate with existing systems and applications. What You Will Need Bachelors or master’s in computer science, Artificial Intelligence, Machine Learning, or a related field. 4 to 6 years of hands-on experience in AI/ML, with a demonstrable track record of training and deploying LLMs and other machine learning models. Strong proficiency in Python and familiarity with popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers, etc.). Practical experience deploying and managing AI models in production environments, including expertise in serving and inference frameworks (Triton, TensorRT, VLLM, TGI, etc.). Experience in Voice AI applications, a solid understanding of healthcare data standards (FHIR, HL7, EDI) and regulatory compliance (HIPAA, SOC2) is preferred. Excellent problem-solving and analytical abilities, capable of tackling complex challenges and evaluating multiple factors. Exceptional communication and collaboration skills, enabling effective teamwork in a dynamic environment. Worked on a minimum of 2 AI/LLM projects from the beginning to the end with proven value for business. What Would Be Nice To Have Experience with cloud computing platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes) is a plus. Familiarity with MLOps practices for continuous integration, continuous deployment (CI/CD), and automated monitoring of AI models. What We Offer Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. About Guidehouse Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. 
All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee. Show more Show less

Posted 1 week ago

Apply

5.0 years

50 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

50 Lacs

Indore, Madhya Pradesh, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

50 Lacs

Vijayawada, Andhra Pradesh, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You Will Be Doing... The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization.The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges. Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights Envision and test for corner cases. Build analytical solutions and models by manipulating large data sets and integrating diverse data sources Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions. Assist in building data views from disparate data sources which powers insights and business cases Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting Develop and implement rigorous frameworks for effective base management. Collaborate with cross-functional teams to discover the most appropriate data sources, fields which caters to the business need Design modular, reusable Python scripts to automate data processing Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders. What We’re Looking For... You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end to end analytical solutions and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross functional teams to implement data science driven business solutions. 
You Will Need To Have Bachelor’s degree in computer science or another technical field or four or more years of work experience Four or more years of relevant work experience Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk etc) Curiosity to dive deep into data inconsistencies and perform root cause analysis Programming experience in Python (Pandas, NumPy, Scipy and Scikit-Learn) Experience with Visualization tools matplotlib, seaborn, tableau, grafana etc A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning Understanding of time series modeling and forecasting techniques Even better if you have one or more of the following: Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI Experience in applying statistical ideas and methods to data sets to answer business problems. Ability to collaborate effectively across teams for data discovery and validation Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference. Excellent interpersonal, verbal and written communication skills. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics. Show more Show less

Posted 1 week ago

Apply

5.0 years

50 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

50 Lacs

Noida, Uttar Pradesh, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

About Us Do you want to help transform the global economy? Join the movement disrupting the financial world and changing how businesses gain access to the working capital they need to grow. As the largest online platform for working capital, we are named one of Forbes’ “Fintech 50” and we serve over one million businesses in 180 countries, representing more than $10.5 trillion in annual sales. Headquartered in Kansas City, C2FO has more than 700 employees worldwide, with operations throughout Europe, India, Asia Pacific, and Australia. For more information, visit www.c2fo.com. Here at C2FO, we value the quality of our technical solutions and are passionate about building the right thing, the right way to best solve the problem at hand. But beyond that, we also value our employees' work-life balance and promote a continuous learning culture. We host bi-annual hackathons, have multiple book clubs focused on constant growth, and embrace a hyrbrid working environment. If you want to work at a place where your voice will be heard and can make a real impact, C2FO is the place for you. Role Summary We are seeking an experienced and talented Senior Engineer to join our MLOps team. In this role, you will play a crucial part in designing, developing, and maintaining scalable and reliable machine learning operations (MLOps) pipelines and infrastructure. You will collaborate closely with data scientists, software engineers, and other stakeholders to ensure the successful deployment and monitoring of machine learning models in production environments. Responsibilities Design and implement robust MLOps pipelines for model training, evaluation, deployment, and monitoring using industry-standard tools and frameworks Collaborate with data scientists to streamline the model development process and ensure seamless integration with MLOps pipelines. Optimize and scale machine learning infrastructure to support high-performance model training and inference. Contribute to the development of MLOps standards, processes, and documentation within the organization. Mentor and support junior team members in MLOps practices and technologies. Stay up-to-date with the latest trends and best practices in MLOps, and explore opportunities for continuous improvement. Qualifications Bachelor's or Master's degree in Computer Science, Statistics, or a related field. 5+ years of experience in software engineering, with 2+ years experience in ML Proficient in Python and at least one other programming language (e.g., Java, Go, C++). Extensive experience with containerization technologies (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure). Familiarity with machine learning frameworks and MLOps tools Experience with big data technologies Strong understanding of CI/CD principles and practices. Preferred Qualifications Familiarity with model serving frameworks Knowledge of infrastructure as code (IaC) tools Experience with monitoring and observability tools Contributions to open-source MLOps projects or communities. Benefits At C2FO, we care for our customers and people – the vital human capital that helps our customers thrive. That's why we offer a comprehensive benefits package, flexible work options for work/life balance, volunteer time off, and more. Learn more about our benefits here. Commitment To Diversity And Inclusion As an Equal Opportunity Employer, we value diversity and equality and empower our team members to bring their authentic selves to work daily. 
We recognize the power of inclusion, emphasizing that each team member was chosen for their unique ability to contribute to the overall success of our mission. Our goal is to create a workplace that reflects the communities we serve and our global, multicultural clients. We do not discriminate based on race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment decisions are based on qualifications, merit, and business needs.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Surat, Gujarat, India

Remote

Experience : 5.00 + years Salary : INR 5000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' client - A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams. is Looking for: We are a fast-moving startup building AI-driven solutions to the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description- Full-time Team: Data & ML Engineering We’re looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

4.0 years

0 Lacs

Tamil Nadu, India

On-site

Job Description We are hiring a Computer Vision Engineer who is not only skilled in the theoretical aspects of AI/ML but can demonstrate strong coding capabilities, especially in Python, through hands-on problem-solving. This role is critical to real-time, production-grade system development and requires practical experience with Triton Inference Server, NVIDIA DeepStream, and end-to-end model deployment. Key Responsibilities Develop and deploy computer vision applications for real-world use cases like object detection, action recognition, and multi-camera human tracking. Write robust, testable Python code and implement real-time inference pipelines. Optimize ML models using Triton Inference Server and integrate them with NVIDIA DeepStream SDK. Troubleshoot model deployment issues and tune models for production scalability. Collaborate with ML engineers, data scientists, and DevOps for solution delivery. Ensure deliverables meet quality benchmarks through code reviews and performance analysis. Required Qualifications Minimum 4 years of solid hands-on experience in Python – must be able to solve real coding problems without assistance. At least 3 years of experience working in computer vision projects. Deep expertise in object detection, action recognition, and human tracking from multiple cameras. Strong knowledge of Machine Learning concepts: Overfitting, Underfitting, TP/FN, Precision, Recall, etc. Proven experience with Triton Inference Server and NVIDIA DeepStream for real-time deployment. Familiarity with NumPy, Pandas, Scikit-learn (sklearn), and related ML libraries. Clear and confident communicator – must be able to explain technical solutions and assessment answers. Additional Information Immediate or short notice availability preferred. Must have strong coding skills in Python – this will be a key focus area during screening and assessments.
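
As a quick refresher on the evaluation vocabulary this posting names (TP/FN, precision, recall), the toy snippet below computes precision and recall both by hand from a confusion matrix and via scikit-learn. The labels are invented for illustration and are unrelated to the employer's systems.

  from sklearn.metrics import confusion_matrix, precision_score, recall_score

  y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # toy ground-truth labels
  y_pred = [1, 0, 1, 0, 1, 0, 1, 0]   # toy model predictions

  # confusion_matrix for binary 0/1 labels returns [[tn, fp], [fn, tp]]
  tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
  print("precision:", tp / (tp + fp), "==", precision_score(y_true, y_pred))
  print("recall:   ", tp / (tp + fn), "==", recall_score(y_true, y_pred))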

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Nielsen is seeking an organized, detail oriented, team player, to join the Engineering team in the role of Machine learning Engineer . Nielsen’s Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. The Software Engineer will be responsible to define, develop, test, analyze, and deliver technology solutions within Nielsen’s Collections platforms. Qualifications: Experience having led multiple projects leveraging LLMs, GenAI and Prompt Engineering Exposure to real-world MLOps deploying models into production adding features to products Knowledge of working in a cloud environment Strong understanding of LLMs, GenAI, Prompt Engineering and Copilot Responsibilities: Bachelor's degree in Computer Science or equivalent degree 6+ years of software experience Experience with Machine learning frameworks and models The ML Engineer is expected to fully own the services that are built with the ML Scientists. This cuts across scalability, availability, having the metrics in place, alarms/alerts in place – and be responsible for the latency of the services Data quality checks & onboarding the data on to the cloud for modeling purposes Prompt Engineering, FT work, Evaluation, Data End-end AI Solution architecture, latency tradeoffs, LLM Inference Optimization, Control Plane, Data Plate, Platform Engineering Comfort in Python and Java is highly desirable The ML Engineer will head the ML engineering for a pod and be the technical leader for all ML/AI Engineering issues in a delivery pod Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @ nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Show more Show less

Posted 1 week ago

Apply

Exploring Inference Jobs in India

With the rapid growth of technology and data-driven decision-making, demand for professionals with inference expertise is rising in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions from the available evidence, and roles ranging from data analyst to machine learning engineer depend on these skills.
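
To make that concrete, the short sketch below runs a textbook inference task in Python: deciding whether an observed difference between two groups is statistically meaningful. The scenario and numbers are synthetic, and only standard NumPy/SciPy calls are used.

  import numpy as np
  from scipy import stats

  # Synthetic example: did a product change move average order value?
  rng = np.random.default_rng(42)
  control = rng.normal(loc=100, scale=15, size=500)  # baseline group
  variant = rng.normal(loc=103, scale=15, size=500)  # treated group

  # Welch two-sample t-test: is the difference in means significant?
  t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)

  # 95% confidence interval for the lift, using a normal approximation
  lift = variant.mean() - control.mean()
  se = np.sqrt(variant.var(ddof=1) / len(variant) + control.var(ddof=1) / len(control))
  ci = (lift - 1.96 * se, lift + 1.96 * se)

  print(f"t={t_stat:.2f}, p={p_value:.4f}, 95% CI for lift: ({ci[0]:.2f}, {ci[1]:.2f})")

A small p-value together with a confidence interval that excludes zero would support acting on the change; that draw-a-conclusion-from-data loop is what the roles listed above revolve around.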

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These major cities are known for their thriving tech industries and are actively hiring professionals with expertise in inference.

Average Salary Range

Salaries for inference professionals in India scale with experience. Entry-level positions typically start at around INR 4-6 lakhs per annum, while experienced professionals can earn INR 12-15 lakhs per annum or more.

Career Path

In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.

Related Skills

In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.
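
To show how these pieces fit together, here is a minimal Python sketch that trains and evaluates a simple supervised model with scikit-learn; the dataset and classifier are arbitrary stand-ins chosen only for illustration.

  from sklearn.datasets import load_breast_cancer
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split

  # Load a small built-in dataset and hold out 20% for evaluation
  X, y = load_breast_cancer(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

  # A plain logistic regression is a reasonable first baseline
  # (max_iter is raised because the unscaled features converge slowly)
  model = LogisticRegression(max_iter=5000)
  model.fit(X_train, y_train)

  print("held-out accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))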

Interview Questions

  • What is the difference between inferential statistics and descriptive statistics? (basic)
  • How do you handle missing data in a dataset when performing inference? (medium)
  • Can you explain the bias-variance tradeoff in the context of inference? (medium)
  • What are the assumptions of linear regression and how do you test them? (advanced)
  • How would you determine the significance of a coefficient in a regression model? (medium)
  • Explain the concept of p-value and its significance in hypothesis testing. (basic)
  • Can you discuss the difference between frequentist and Bayesian inference methods? (advanced)
  • How do you handle multicollinearity in a regression model? (medium)
  • What is the Central Limit Theorem and why is it important in statistical inference? (medium)
  • How would you choose between different machine learning algorithms for a given inference task? (medium)
  • Explain the concept of overfitting and how it can affect inference results. (medium)
  • Can you discuss the difference between parametric and non-parametric inference methods? (advanced)
  • Describe a real-world project where you applied inference techniques to draw meaningful conclusions from data. (advanced)
  • How do you assess the goodness of fit of a regression model in inference? (medium)
  • What is the purpose of cross-validation in machine learning and how does it impact inference? (medium) (see the short sketch after this list)
  • Can you explain the concept of Type I and Type II errors in hypothesis testing? (basic)
  • How would you handle outliers in a dataset when performing inference? (medium)
  • Discuss the importance of sample size in statistical inference and hypothesis testing. (basic)
  • How do you interpret confidence intervals in an inference context? (medium)
  • Can you explain the concept of statistical power and its relevance in inference? (medium)
  • What are some common pitfalls to avoid when performing inference on data? (basic)
  • How do you test the normality assumption in a dataset for conducting inference? (medium)
  • Explain the difference between correlation and causation in the context of inference. (medium)
  • How would you evaluate the performance of a classification model in an inference task? (medium)
  • Discuss the importance of feature selection in building an effective inference model. (medium)
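
Several of these questions (overfitting, cross-validation, model evaluation) are easiest to internalize by running a few lines of code. The sketch below is a minimal illustration using synthetic data and scikit-learn; the estimator and parameters are placeholders, not a recommended setup.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.model_selection import cross_val_score
  from sklearn.tree import DecisionTreeClassifier

  # Synthetic binary classification problem with a few informative features
  X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, random_state=0)

  # Compare a shallow tree with an unconstrained one that can memorize the data
  for depth in (3, None):
      scores = cross_val_score(DecisionTreeClassifier(max_depth=depth, random_state=0), X, y, cv=5)
      print(f"max_depth={depth}: mean CV accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

The shallow tree usually scores higher on held-out folds than the unconstrained one, even though the latter fits its training folds perfectly; that gap is the overfitting these questions probe for.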

Closing Remark

As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies