
1020 Inference Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

3 - 7 Lacs

Bengaluru

On-site

Source: Glassdoor

Role: Backend Developer
Industry Type: Space Technology
Location: Bangalore / Ahmedabad
Employment Type: Full-time

Role Overview: As a Backend Developer, you will architect and implement the core server-side logic of our satellite intelligence platform. You will create scalable, secure APIs and manage data-intensive workflows across distributed systems. The ideal candidate has deep expertise in backend technologies, asynchronous processing, and geospatial data handling, and is motivated to work in a fast-paced environment where performance and reliability are paramount.

Key Responsibilities:

1. API & Scalable Backend Development
- Design and implement robust RESTful and GraphQL APIs to support frontend and third-party clients.
- Architect and maintain a multi-tenant SaaS backend that supports modular features for different user roles and subscriptions.
- Optimize API performance through intelligent caching (Redis), asynchronous I/O, and pagination strategies (a minimal caching sketch follows this listing).
- Implement API throttling, versioning, and rate limiting for secure and scalable usage.

2. Real-Time Data Processing & Task Orchestration
- Develop asynchronous task queues using Celery, RabbitMQ, or Kafka to handle satellite image processing workflows.
- Design event-driven services to trigger analytics and anomaly detection pipelines in real time.
- Tune worker concurrency and manage queue priorities to adapt to fluctuating data loads.

3. Database & Data Management
- Design normalized, spatially indexed schemas in PostgreSQL/PostGIS for satellite and AIS metadata storage.
- Integrate and query MongoDB for storing unstructured sensor, detection, and inference data.

4. Security, Observability & Compliance
- Implement secure user authentication (OAuth2, JWT) and RBAC to control feature access and data visibility.
- Set up structured logging, tracing, and monitoring (e.g., with Prometheus, Grafana, OpenTelemetry) for reliability and debugging.
- Ensure compliance with enterprise SaaS security standards (SOC 2, GDPR, ISO 27001), including audit logging and data encryption.

Required Skills & Experience:
1. Backend Technologies: Proficiency in Python (FastAPI, Django) and/or Node.js (Express.js); experience building microservices and API-first systems at scale.
2. Databases: Strong command of PostgreSQL/PostGIS for spatial data; working knowledge of MongoDB and Redis.
3. Task Orchestration: Proven experience with Celery, RabbitMQ, Kafka, or similar asynchronous frameworks; ability to implement task queues, event triggers, and parallel processing pipelines.
4. Cloud-Native Development: Experience working in Docker/Kubernetes environments; familiarity with cloud-agnostic backend development and infrastructure-as-code principles.
5. Security & Observability: Understanding of API security, access control, and encryption techniques; experience with monitoring tools and log aggregation for debugging and performance optimization.

Preferred Qualifications:
- Built or contributed to multi-tenant SaaS platforms with tiered billing and access controls.
- Experience with cost-optimized backend design, including use of spot/preemptible instances.
- Knowledge of geospatial processing frameworks.

This role is ideal for backend engineers passionate about building high-performance, scalable SaaS solutions for real-time geospatial intelligence and satellite data processing. You will play a central role in shaping the data and processing engine of a platform trusted for maritime surveillance, national security, and environmental monitoring.
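To make the caching responsibility above concrete, here is a minimal sketch of a FastAPI endpoint fronted by Redis, using the stack the listing names. The endpoint path, cache-key format, and 60-second TTL are illustrative assumptions, not the employer's actual API.

```python
# Sketch: Redis-backed response caching on an async FastAPI endpoint.
import json

import redis.asyncio as redis
from fastapi import FastAPI

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.get("/satellites/{sat_id}/passes")  # hypothetical endpoint
async def get_passes(sat_id: str):
    key = f"passes:{sat_id}"
    cached = await cache.get(key)  # serve from Redis when the key is warm
    if cached is not None:
        return json.loads(cached)
    # Placeholder for the real (expensive) PostGIS query.
    result = {"satellite": sat_id, "passes": []}
    await cache.set(key, json.dumps(result), ex=60)  # expire after 60 s
    return result
```

The same pattern extends to pagination: include the page cursor in the cache key so each page is cached independently.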

Posted 1 week ago

Apply

5.0 years

0 - 0 Lacs

India

On-site

Source: Glassdoor

About the Role:
We are looking for a Senior LLM (Large Language Models) Engineer to design, build, and optimize intelligent agents powered by LLMs. You will work on cutting-edge AI applications: pre-training LLMs, fine-tuning open-source models, integrating multi-agent systems, and deploying scalable solutions in production environments.

Key Responsibilities (Must Have):
- Develop and fine-tune LLM-based models and AI agents for automation, reasoning, and decision-making.
- Build multi-agent systems that coordinate tasks efficiently.
- Design prompt engineering, retrieval-augmented generation (RAG), and memory architectures (a minimal retrieval sketch follows this listing).
- Optimize inference performance and reduce hallucinations in LLMs.
- Integrate LLMs with APIs, databases, and external tools for real-world applications.
- Implement reinforcement learning from human feedback (RLHF) and continual learning strategies.
- Collaborate with research and engineering teams to enhance model capabilities.

Requirements:
- 5+ years in AI/ML, with at least 2 years in LLMs or AI agents.
- Strong experience with Python, LangChain, LlamaIndex, AutoGen, Hugging Face, etc.
- Experience with open-source LLMs (LLaMA, Mistral, Falcon, etc.).
- Hands-on experience deploying LLMs with strong inference capabilities using robust frameworks such as vLLM, and building multi-modal RAG systems.
- Knowledge of vector databases (FAISS, Chroma) for retrieval-based systems.
- Experience with LLM fine-tuning, downscaling, prompt engineering, and model inference optimization.
- Familiarity with multi-agent systems, cognitive architectures, or autonomous AI workflows.
- Expertise in cloud platforms (AWS, GCP, Azure) and scalable AI deployments.
- Strong problem-solving and debugging skills.

Nice to Have:
- Contributions to AI research, GitHub projects, or open-source communities.
- Knowledge of neuro-symbolic AI, AutoGPT, BabyAGI, or similar frameworks.

Job Type: Full-time
Pay: ₹20,000.00 - ₹70,000.00 per month
Schedule: Day shift
Work Location: In person
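As a concrete illustration of the retrieval side of RAG mentioned above, here is a hedged sketch using FAISS with sentence-transformers embeddings; the model name, documents, and query are invented for the example.

```python
# Sketch: embed a tiny corpus, index it in FAISS, retrieve context for a prompt.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "vLLM serves LLMs with high-throughput batched inference.",
    "FAISS provides fast nearest-neighbour search over dense vectors.",
    "RAG grounds model answers in retrieved documents to curb hallucination.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used encoder
embeddings = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product = cosine on unit vectors
index.add(embeddings)

query = encoder.encode(["How can hallucinations be reduced?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)  # top-2 passages
context = "\n".join(docs[i] for i in ids[0])  # would be prepended to the LLM prompt
print(context)
```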

Posted 1 week ago

Apply

5.0 years

50 Lacs

Bhubaneswar, Odisha, India

Remote

Source: LinkedIn

Experience: 5.00+ years
Salary: INR 5,000,000.00 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by Precanto)
(Note: This is a requirement for one of Uplers' clients, a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

About the client:
We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.

Job Description (Full-time; Team: Data & ML Engineering)
We're looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do:
- Build and optimize machine learning models, from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn (a small tuning sketch follows this listing)
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Writes clean, tested, and scalable Python and SQL code
- Leverages async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

Tech Stack:
- Languages: Python, SQL
- Frameworks & tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)
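The Ray Tune / MLflow / scikit-learn combination in this stack can be sketched in a few lines. This is a toy example on synthetic data, not the client's pipeline; the search space, metric, and trial count are assumptions (Ray 2.x Tuner API).

```python
# Sketch: random-forest hyperparameter search with Ray Tune, logged to MLflow.
import mlflow
from ray import tune
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, random_state=0)

def train(config):
    model = RandomForestRegressor(
        n_estimators=config["n_estimators"],
        max_depth=config["max_depth"],
        random_state=0,
    )
    score = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    with mlflow.start_run():  # each trial logs its own run
        mlflow.log_params(config)
        mlflow.log_metric("r2", score)
    return {"r2": score}  # Ray Tune records the returned metrics

tuner = tune.Tuner(
    train,
    param_space={
        "n_estimators": tune.randint(50, 300),
        "max_depth": tune.randint(3, 12),
    },
    tune_config=tune.TuneConfig(metric="r2", mode="max", num_samples=8),
)
print(tuner.fit().get_best_result().config)
```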
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

50 Lacs

Jamshedpur, Jharkhand, India

Remote

Source: LinkedIn

Duplicate posting: the description is identical to the Bhubaneswar listing above; see that entry for the full role details, tech stack, and application steps.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Raipur, Chhattisgarh, India

Remote

Source: LinkedIn

Duplicate posting: the description is identical to the Bhubaneswar listing above; see that entry for the full role details, tech stack, and application steps.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Greater Hyderabad Area

Remote

Source: LinkedIn

Duplicate posting: the description is identical to the Bhubaneswar listing above; see that entry for the full role details, tech stack, and application steps.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

Duplicate posting: the description is identical to the Bhubaneswar listing above; see that entry for the full role details, tech stack, and application steps.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection (a toy detector sketch follows this listing), and root cause analysis (RCA). The architect will lead development of a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid-cloud and multi-cloud setups.

Responsibilities:
- Assist in the implementation and maintenance of cloud infrastructure and services
- Contribute to the development and deployment of automation tools for cloud operations
- Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
- Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
- Support the design and implementation of scalable and reliable cloud architectures
- Conduct research and evaluation of new cloud technologies and tools
- Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
- Document cloud infrastructure configurations, processes, and procedures
- Adhere to security best practices and compliance requirements in cloud operations

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 12+ years of experience in DevOps, AIOps, or cloud architecture roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, and EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, and Azure AKS
- Strong experience with infrastructure as code (IaC) using Terraform or CloudFormation
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams or Slack and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to have:
- Certifications in the AI/ML/GenAI space
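As a toy illustration of automated incident detection (not the client's platform), a rolling z-score check over a metric stream is about the simplest detector an AIOps pipeline might run before kicking off RCA; the window size and threshold below are assumptions.

```python
# Sketch: flag metric points that deviate strongly from a sliding window.
from collections import deque
from statistics import mean, stdev

class ZScoreDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent metric values
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.history), stdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.history.append(value)
        return anomalous

detector = ZScoreDetector()
for latency_ms in [12, 11, 13, 12, 11, 12, 13, 11, 12, 12, 95]:
    if detector.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")  # a real pipeline would page ChatOps here
```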

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The ideal candidate has a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning, and will develop and deploy sophisticated models. The role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production to support a reporting, insights, and recommendations platform for measuring and optimizing online marketing campaigns. The candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies.

Key Responsibilities:

Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
- Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect.
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
- Implement data modeling and feature engineering for ML use cases.

Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis (a small forecasting sketch follows this listing).
- Optimize models for scalability, efficiency, and interpretability.

MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.

Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
- Implement cost-efficient, serverless data solutions where applicable.

Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
- Translate complex model insights into actionable business recommendations.
- Present findings and performance metrics to both technical and non-technical stakeholders.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certification in Google Cloud (Professional Data Engineer, ML Engineer) is a plus.

Must-Have Skills:
- Experience: 5-10 years with the mentioned skill set and relevant hands-on experience
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
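One of the techniques this role names, time-series forecasting, can be sketched with Prophet. The synthetic daily series below stands in for campaign KPI data; the 30-day horizon is an assumption.

```python
# Sketch: fit Prophet on a toy daily series and forecast 30 days ahead.
import pandas as pd
from prophet import Prophet

dates = pd.date_range("2024-01-01", periods=180, freq="D")
df = pd.DataFrame({"ds": dates, "y": range(180)})  # stand-in for a campaign KPI

model = Prophet(weekly_seasonality=True)
model.fit(df)

future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```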

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

FORD Requirement - Order Number: 33929-26 L PA Chennai - Contract - Non-HackerRank
Notice Period: Immediate joiners or those serving up to 30 days
Position Title: Specialty Development Consultant
Duration: 658 days
Interview Required: N
Estimated Regular Hours: 40
Estimated Overtime Hours: 0
Division: Global Data Insight & Analytics

Position Description:
- Train, build, and deploy ML and DL models
- Software development using Python
- Work with Tech Anchors, Product Managers, and the team internally and across other teams
- Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
- Software development using a TDD approach
- Experience using GCP products and services
- Ability to adapt quickly to open-source products and tools to integrate with ML platforms

Skills Required:
- 3+ years of experience in Python software development
- 3+ years of experience in cloud technologies and services, preferably GCP
- 3+ years of experience practicing statistical methods and their accurate application, e.g., ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc. (a short PCA and k-means sketch follows this listing)
- 3+ years of experience with Python, SQL, and BigQuery
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI

Skills Preferred: Good communication, presentation, and collaboration skills
Experience Required: 2 to 5 years
Experience Preferred: GCP products and services
Education Required: BE, BTech, MCA, M.Sc, ME

Additional Information: A HackerRank test on Python, Cloud, and Machine Learning is required.

Skills: ML, CI/CD, BigQuery, PyTorch, Python, SonarQube, Kubernetes, GCP Looker, scikit-learn, Tekton, GCS, cloud, Vertex AI, GCP, TensorFlow, DataRobot, OpenShift, SQL, cloud technologies, Airflow, statistical methods, Terraform
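Two of the statistical methods this posting names, principal component analysis and k-means clustering, chain naturally in a scikit-learn pipeline; the Iris data and cluster count below are illustrative assumptions.

```python
# Sketch: standardize, project with PCA, then cluster with k-means.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)

pipeline = make_pipeline(
    StandardScaler(),                                 # normalize features
    PCA(n_components=2),                              # principal component analysis
    KMeans(n_clusters=3, n_init=10, random_state=0),  # k-means clustering
)
labels = pipeline.fit_predict(X)
print(labels[:10])
```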

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About The Role
The Core Analytics & Science Team (CAS) is Uber's primary science organisation, covering both our main lines of business and the underlying platform technologies on which those businesses are built. We are a key part of Uber's cross-functional product development teams, helping to drive every stage of product development through data analytic, statistical, and algorithmic expertise. CAS owns the experience and algorithms powering Uber's global Mobility and Delivery products. We optimise and personalise the rider experience, target incentives, and introduce customizations for routing and matching for products and use cases that go beyond the core Uber capabilities.

What the Candidate Will Do:
- Refine ambiguous questions, generate new hypotheses, and design ML-based solutions that benefit the product through a deep understanding of the data, our customers, and our business
- Deliver end-to-end solutions rather than algorithms, working closely with the engineers on the team to productionize, scale, and deploy models worldwide
- Use statistical techniques to measure success, and develop northstar metrics and KPIs to provide a more rigorous data-driven approach in close partnership with Product and other subject areas such as engineering, operations, and marketing
- Design experiments and interpret the results to draw detailed and impactful conclusions (a minimal significance-test sketch follows this listing)
- Collaborate with data scientists and engineers to build and improve the availability, integrity, accuracy, and reliability of data logging and data pipelines
- Develop data-driven business insights and work with cross-functional partners to find opportunities and recommend prioritisation of product, growth, and optimisation initiatives
- Present findings to senior leadership to drive business decisions

Basic Qualifications:
- Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or another quantitative field
- 6+ years of experience as a Data Scientist, Machine Learning Engineer, or in another data science-focused function
- Knowledge of the underlying mathematical foundations of machine learning, statistics, optimization, economics, and analytics
- Hands-on experience building and deploying ML models
- Ability to use a language like Python or R to work efficiently at scale with large data sets
- Significant experience in setting up and evaluating complex experiments
- Experience with exploratory data analysis, statistical analysis and testing, and model development
- Knowledge of modern machine learning techniques applicable to marketplaces and platforms
- Proficiency in one or more of the following technologies: SQL, Spark, Hadoop

Preferred Qualifications:
- Advanced SQL expertise
- Proven track record of wrangling large datasets, extracting insights from data, and summarising learnings/takeaways
- Proven aptitude for data storytelling and root cause analysis using data
- Advanced understanding of statistics, causal inference, and machine learning
- Experience designing and analyzing large-scale online experiments
- Ability to deliver on tight timelines and prioritise multiple tasks while maintaining quality and detail
- Ability to work in a self-guided manner
- Ability to mentor, coach, and develop junior team members
- Superb communication and organisation skills
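The experiment-evaluation work described above often comes down to tests like the following two-proportion z-test; the conversion counts are invented for illustration.

```python
# Sketch: significance of a conversion-rate lift between control and treatment.
from statsmodels.stats.proportion import proportions_ztest

conversions = [540, 610]      # control, treatment (hypothetical)
exposures = [10_000, 10_000]  # users per arm

stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("treatment lift is statistically significant at the 5% level")
```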

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Autodesk | Software Engineering, Product, Data Science | Pune, Maharashtra, India | Posted on Jun 6, 2025
Job Requisition ID: 25WD88152

Position Overview
As an ML Platform Product Manager, you will help define and execute the multi-year product roadmap for the Autodesk Machine Learning Platform, ensuring understanding of our strategic vision and the evolving needs of our diverse customer base. You will inspire creativity and lead enhanced user experiences for our customers. You will collaborate closely with teams including data scientists, engineers, and partners to develop and launch platform tools that enable the seamless integration of ML and AI features into Autodesk's software products, driving value and great user experiences for our platform users.

Responsibilities
- Define and execute the product roadmap for the ML platform, with a focus on our inference service and AI/ML observability
- Collaborate with other product managers to identify ML/AI opportunities at Autodesk and guide platform adoption
- Maintain documentation for the ML platform, including user guides, API documentation, and release notes
- Analyze platform performance, usage, and customer feedback to inform product improvements and feature enhancements
- Establish relationships with internal and external stakeholders, including customers, partners, and vendors
- Stay current on industry trends, emerging technologies, and best practices in the ML and AI space to inform product strategy and maintain a competitive edge
- Promote a culture of innovation, experimentation, and data-driven decision-making within the product management team and across the organization

Minimum Qualifications
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field
- 5+ years of experience in product management, with a focus on ML or AI platforms, tools, and neural network technologies
- Deep understanding of machine learning and AI concepts, algorithms, and technologies, and of the challenges faced by ML and AI practitioners
- Experience drawing relevant insights from ambiguous data, with the ability to synthesize complex information and support data-driven decision-making
- Experience communicating product updates, milestones, and progress to internal and external partners, with the ability to collaborate across diverse teams and influence partners at all levels
- Understanding of the risks and dependencies within a product area needed to drive complex projects
- Familiarity with large foundation models (LLMs), generative models, and transformer-based architectures
- Works collaboratively with architecture, engineering, and XD to tackle risks up front before engineers are asked to build
- Proficiency in several qualitative and quantitative techniques, with a continually growing toolkit
- Exposure to ethical or trustworthy AI

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software, from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk; our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all.

Ready to shape the world and your future? Join us!

Salary Transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Location: In-person (sftwtrs.ai Lab)
Experience Level: Early career / 1-3 years

About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.

Role Overview
As a Research Engineer I, you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you'll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills.

Key Responsibilities:

Research & Prototyping
- Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
- Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
- Design experiments, run benchmarks, and analyze results to validate research hypotheses.

Software Development & Integration
- Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
- Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
- Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.

Collaboration & Documentation
- Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, security analysts, QA) to align on research objectives and engineering requirements.
- Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
- Participate in regular code reviews, design discussions, and sprint planning sessions.

Model Deployment & Monitoring
- Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or a custom MLOps stack).
- Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
- Troubleshoot deployment issues; optimize inference pipelines for latency and throughput.

Continuous Learning & Contribution
- Stay current with AI/ML trends; present findings to the team and propose new research directions.
- Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
- Mentor interns or junior engineers on machine learning best practices and coding standards.

Qualifications:

Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.

Research Experience:
- 1-3 years of hands-on experience in AI/ML research or equivalent internships.
- Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training); an FGSM sketch follows this listing.
- Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.

Development Skills:
- Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
- Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
- Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.

Software Engineering Practices:
- Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
- Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
- Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.

Analytical & Collaborative Skills:
- Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines.
- Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
- Demonstrated ability to collaborate effectively in a small, agile team.

Preferred Skills (Not Mandatory):
- Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
- Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
- Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
- Exposure to Rust or Go for high-performance inference code.
- Contributions to open-source AI or security automation projects.

Why Join Us?
- Cutting-edge research and production impact: work on adversarial ML and security-automation projects that go from concept to real-world deployment.
- Hands-on mentorship: collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
- Innovative environment: join a lean, highly specialized team where your contributions are immediately visible and valued.
- Professional growth: access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
- Competitive compensation and benefits: attractive salary, health insurance, and opportunities for performance-based bonuses.

How to Apply
Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line "RE: Research Engineer I Application."

sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
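For a flavour of the adversarial-ML concepts in the qualifications above, here is a minimal fast gradient sign method (FGSM) evasion example in PyTorch; the toy untrained model, random input, and epsilon are illustrative assumptions.

```python
# Sketch: FGSM perturbs an input in the direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
y = torch.tensor([3])                             # its true label

loss = loss_fn(model(x), y)
loss.backward()                                   # gradient of loss w.r.t. x

epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```

Adversarial training then folds such perturbed examples back into the training set so the model learns to resist them.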

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Position: Senior Associate
Industry: Supply Chain / Forecasting / Financial Analytics
Years of Experience: Candidates with 4+ years of hands-on experience

Required Skills. Successful candidates will have demonstrated the following skills and characteristics:

Must Have:
- Strong supply chain domain knowledge (inventory planning, demand forecasting, logistics)
- Well-versed, hands-on experience with optimization methods such as linear programming, mixed-integer programming, and scheduling optimization (a tiny solver sketch follows this listing); an understanding of third-party optimization solvers like Gurobi is an added advantage
- Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised)
- Experience using at least one major cloud platform (AWS, Azure, GCP), such as:
  - AWS: SageMaker, Redshift, Glue, Lambda, QuickSight
  - Azure: Azure ML Studio, Synapse Analytics, Data Factory, Power BI
  - GCP: BigQuery, Vertex AI, Dataflow, Cloud Composer, Looker
- Experience developing, deploying, and monitoring ML models on cloud infrastructure
- Expertise in Python, SQL, data orchestration, and cloud-native data tools
- Hands-on experience with cloud-native data lakes and lakehouses (e.g., Delta Lake, BigLake)
- Familiarity with infrastructure-as-code (Terraform/CDK) for cloud provisioning
- Knowledge of visualization tools (Power BI, Tableau, Looker) integrated with cloud backends
- Strong command of statistical modeling, testing, and inference
- Advanced capabilities in data wrangling, transformation, and feature engineering
- Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Airflow)
- Strong communication and stakeholder engagement skills at the executive level

Roles and Responsibilities:
- Assist analytics projects within the supply chain domain, driving the design, development, and delivery of data science solutions
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement quality control measures to ensure deliverable integrity, including data quality, model robustness, and explainability for deployments
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or PwC project leadership
- Contribute to knowledge-building and firm-building activities

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
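The optimization-methods requirement above can be illustrated with a tiny mixed-integer program. This sketch uses the open-source PuLP modeller with its bundled CBC solver rather than Gurobi, and every coefficient is invented.

```python
# Sketch: maximize profit over two products under two capacity constraints.
import pulp

prob = pulp.LpProblem("production_plan", pulp.LpMaximize)
a = pulp.LpVariable("units_a", lowBound=0, cat="Integer")
b = pulp.LpVariable("units_b", lowBound=0, cat="Integer")

prob += 40 * a + 30 * b        # objective: total profit
prob += 2 * a + 1 * b <= 100   # machine-hours capacity
prob += 1 * a + 2 * b <= 80    # labour-hours capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], a.value(), b.value())
```

Swapping in a commercial solver such as Gurobi is a one-line change to the solver argument once a licence is configured.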

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

This role is for one of Weekday's clients.
Min Experience: 8 years
Location: Bengaluru
Job Type: Full-time

Requirements
As an SDE-3 in AI/ML, you will:
- Translate business asks and requirements into technical requirements, solutions, architectures, and implementations
- Define clear problem statements and technical requirements by aligning business goals with AI research objectives
- Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets
- Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows
- Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructure
- Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met
- Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team
- Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation
- Collaborate closely with product management, QA, data engineering, DevOps, and customer-facing teams to deliver cohesive AI-powered product features

Key Responsibilities:

Problem Definition & Requirements
- Translate business use cases into detailed AI/ML problem statements and success metrics
- Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle

Architecture & Prototyping
- Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations
- Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors

Development & Productionization
- Write clean, maintainable code in Python, Java, or Go, following software engineering best practices
- Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments
- Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes)

Post-Deployment Monitoring
- Define and implement monitoring dashboards and alerting for model drift, latency, and throughput (a drift-check sketch follows this listing)
- Conduct regular performance tuning and cost analysis to maintain operational efficiency

Mentorship & Collaboration
- Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support
- Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics
- Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems
- Proven track record of architecting and deploying production AI applications at scale
- Strong programming skills in Python and one or more of Java, Go, or C++
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments
- Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering
- Expertise in CI/CD, automated testing frameworks, and MLOps best practices
- Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences

Preferred Experience:
- Prior experience building agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI, or similar)
- Familiarity with open-source LLMs (e.g., hosted on Hugging Face) and custom fine-tuning
- Familiarity with ASR (speech-to-text), TTS (text-to-speech), and other multi-modal systems
- Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana)
- Publications or patents in AI/ML, or related conference presentations
- Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML)
- Proven experience designing, implementing, and rigorously testing AI-driven voice agents, integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot, and ensuring high performance and reliability

What we offer:
- Opportunity to work at the forefront of GenAI, LLMs, and agentic AI in a fast-growing SaaS environment
- Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth
- Competitive compensation, comprehensive benefits, and equity options
- Flexible work arrangements and support for professional development
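As a concrete example of the drift monitoring referenced in the post-deployment duties above, here is a small population stability index (PSI) check; the 0.2 alert threshold is a common rule of thumb, and the score distributions are synthetic.

```python
# Sketch: PSI between training-time and live model-score distributions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI over quantile bins of the expected distribution; higher = more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e = np.histogram(expected, edges)[0] / len(expected)
    clipped = np.clip(actual, edges[0], edges[-1])  # keep live scores in range
    a = np.histogram(clipped, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.50, 0.10, 5_000)
live_scores = rng.normal(0.58, 0.12, 5_000)  # drifted distribution
if psi(train_scores, live_scores) > 0.2:
    print("drift alert: consider retraining")
```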

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

India

On-site

Linkedin logo

About Oportun Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009. WORKING AT OPORTUN Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups. Company Overview At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change. Position Overview We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building platforms that enable self-serve ML pipelines with seamless deployments. Responsibilities Platform Engineering Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines. Real-Time ML Deployment Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic. Data Engineering Build and optimize ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB. CI/CD and Automation Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows. Documentation and Collaboration Create and maintain detailed technical documentation, including high-level and low-level architecture designs. Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira. Required Experience & Qualifications 5-10 years of experience in IT, including 5-8 years in platform/backend engineering and at least 1 year in DevOps and data engineering roles. Hands-on experience with real-time ML model deployment and data engineering workflows. Technical Skills Strong expertise in Python and experience with Pandas, PySpark, and FastAPI. Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker. Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3. Proven experience building and optimizing distributed data pipelines using Databricks and PySpark. Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL. Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks. Hands-on experience with observability tools like New Relic for monitoring and troubleshooting. We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI’s Internet Crime Complaint Center (IC3).
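To ground the "APIs and backend services using Python and FastAPI" requirement, here is a minimal sketch assuming FastAPI, Pydantic, and a scikit-learn-style model artifact. The model path, feature contract, and endpoint are hypothetical, not Oportun's actual service.

```python
# Minimal self-serve inference endpoint sketch: load a model once at startup,
# expose a typed scoring route. Run with: uvicorn app:app
from contextlib import asynccontextmanager

import joblib
from fastapi import FastAPI
from pydantic import BaseModel


class ScoreRequest(BaseModel):
    features: list[float]  # pre-validated feature vector


class ScoreResponse(BaseModel):
    score: float
    model_version: str


MODEL_PATH = "model.joblib"  # hypothetical artifact produced by training
state = {}


@asynccontextmanager
async def lifespan(app: FastAPI):
    state["model"] = joblib.load(MODEL_PATH)  # load once, reuse per request
    yield
    state.clear()


app = FastAPI(lifespan=lifespan)


@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Assumes a binary classifier exposing predict_proba.
    proba = state["model"].predict_proba([req.features])[0][1]
    return ScoreResponse(score=float(proba), model_version="v1")
```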

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Solutions Architect / Technical Lead - AI & Automation Key Responsibilities Solution Architecture & Development: Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows. AI/ML Engineering Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment. Cloud & Database Management Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments. Technical Leadership Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration. Client Enablement Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers. Required Technical Expertise Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs). AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization). Layout Engineering: LP/LI/LT for complex documents (invoices, contracts). OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer. Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS). Infrastructure-as-Code (Terraform/Bicep), CI/CD pipelines (Azure DevOps). Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred). Preferred Qualifications Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML. Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker). Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations. Familiarity with Kubernetes, Docker, and MLOps practices.
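As a small illustration of the "MongoDB (sharding, aggregation, indexing)" expertise this role calls for, here is a minimal sketch assuming PyMongo; the connection string, collection, and field names are hypothetical, not the client's actual schema.

```python
# Index + aggregation sketch: per-client OCR confidence over processed docs.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
docs = client["automation"]["documents"]

# Compound index to support lookups by client and extraction status.
docs.create_index([("client_id", ASCENDING), ("status", ASCENDING)])

pipeline = [
    {"$match": {"status": "processed"}},
    {"$group": {
        "_id": "$client_id",
        "avg_ocr_confidence": {"$avg": "$ocr.confidence"},
        "docs": {"$sum": 1},
    }},
    {"$sort": {"avg_ocr_confidence": -1}},
]
for row in docs.aggregate(pipeline):
    print(row["_id"], round(row["avg_ocr_confidence"], 3), row["docs"])
```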

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Overview: Seeking an engineer to build and optimize high-throughput, low-latency LLM inference infrastructure using open-source models (Qwen, LLaMA, Mixtral) on multi-GPU systems (A100/H100). You’ll own performance tuning, model hosting, routing logic, speculative decoding, and cost-efficiency tooling. Must-Have Skills: Deep experience with vLLM, tensor/pipeline parallelism, KV cache management Strong grasp of CUDA-level inference bottlenecks, FlashAttention-2, quantization Familiarity with FP8, INT4, speculative decoding (e.g., TwinPilots, PowerInfer) Proven ability to scale LLMs across multi-GPU nodes (TP, DDP, inference routing) Python (systems-level), containerized deployments (Docker, GCP/AWS), load testing (Locust) Bonus: Experience with any-to-any model routing (e.g., text2sql, speech2text) Exposure to LangGraph, Triton kernels, or custom inference engines Has tuned models for under $0.50 per million tokens of inference at scale Highlight: Very good rate card for the best candidate fit.
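For context on the vLLM and tensor-parallelism skills listed above, here is a minimal sketch using vLLM's offline API; the model name and settings are illustrative, and production serving would more likely use vLLM's OpenAI-compatible server.

```python
# Shard an open model across 2 GPUs with tensor parallelism and generate.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",  # hypothetical choice of open model
    tensor_parallel_size=2,            # shard weights across 2 GPUs (TP)
    gpu_memory_utilization=0.90,       # leave headroom for the KV cache
)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(
    ["Explain why KV-cache size bounds the maximum batch size."], params
)
for out in outputs:
    print(out.outputs[0].text)
```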

Posted 1 week ago

Apply

10.0 years

2 - 7 Lacs

Hyderābād

On-site

GlassDoor logo

Key Responsibilities Solution Architecture & Development: Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows. AI/ML Engineering: Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment. Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments. Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration. Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers. Required Technical Expertise Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs). AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization). Layout Engineering: LP/LI/LT for complex documents (invoices, contracts). OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer. Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS). Infrastructure-as-Code (Terraform/Bicep), CI/CD pipelines (Azure DevOps). Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred). Preferred Qualifications Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML. Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker). Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations. Familiarity with Kubernetes, Docker, and MLOps practices.
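To illustrate the NER step in the "NLP pipelines for document extraction" requirement, here is a minimal sketch assuming spaCy with its small English model installed (python -m spacy download en_core_web_sm); the invoice text is fabricated.

```python
# Named-entity extraction over a document snippet, as one stage of a
# larger extraction/classification pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

invoice_text = (
    "Invoice 4417 from Acme Corp, issued 12 March 2024, "
    "total due $18,250 to be paid by Globex Ltd."
)

doc = nlp(invoice_text)
for ent in doc.ents:
    # e.g. Acme Corp -> ORG, 12 March 2024 -> DATE, $18,250 -> MONEY
    print(f"{ent.text:>15}  {ent.label_}")
```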

Posted 1 week ago

Apply

5.0 years

50 Lacs

Dehradun, Uttarakhand, India

Remote

Linkedin logo

Experience: 5+ years Salary: INR 5,000,000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Precanto) (*Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.) What do you need for this opportunity? Must-have skills required: async workflows, MLOps, Ray Tune, Data Engineering, MLFlow, Supervised Learning, Time-Series Forecasting, Docker, machine_learning, NLP, Python, SQL A fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We’re looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product. Job Description - Full-time Team: Data & ML Engineering We’re looking for a candidate with 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus) What You Will Do- Build and optimize machine learning models — from regression to time-series forecasting Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn Design and deploy LLM-powered features and workflows Collaborate closely with product managers to turn ideas into experiments and production-ready solutions Partner with Software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform Basic Skills Proven ability to work creatively and analytically in a problem-solving environment Excellent communication (written and oral) and interpersonal skills Strong understanding of supervised learning and time-series modeling Experience deploying ML models and building automated training/inference pipelines Ability to work cross-functionally in a collaborative and fast-paced environment Comfortable wearing many hats and owning projects end-to-end Write clean, tested, and scalable Python and SQL code Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing. Advanced Skills Familiarity with MLOps best practices Prior experience with LLM-based features or production-level NLP Experience with LLMs, vector stores, or prompt engineering Contributions to open-source ML or data tools TECH STACK Languages: Python, SQL Frameworks & Tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter Infra: Docker, Airflow, S3, asyncio, Pydantic How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). 
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
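To make the "train, tune, and evaluate models using tools like Ray Tune" requirement concrete, here is a minimal sketch assuming Ray Tune's function-trainable API; the objective is a stand-in for real model training and all hyperparameter ranges are illustrative.

```python
# Hyperparameter search sketch: a function trainable may return a final
# metrics dict, which Tune records as the trial result.
from ray import tune


def objective(config):
    # Stand-in for training a forecasting/regression model and computing
    # validation loss from config["lr"] and config["num_layers"].
    loss = (config["lr"] - 0.01) ** 2 + config["num_layers"] * 1e-4
    return {"loss": loss}


tuner = tune.Tuner(
    objective,
    param_space={
        "lr": tune.loguniform(1e-4, 1e-1),
        "num_layers": tune.choice([1, 2, 3]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)
```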

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

About the Role: Grade Level (for internal use): 10 S&P Global Commodity Insights The Role: Senior Cloud Engineer The Location: Hyderabad, Gurgaon The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions. The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments. What’s in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global. Responsibilities: Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance. Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization. Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints. Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response. Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies. Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution. Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring. Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines. Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans. Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments. Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets. Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments. Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving. Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features. Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages. Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure. 
Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes. Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals. Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable. What We’re Looking For: Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes. Proven experience in deploying and managing cloud solutions in a collaborative Agile environment. Ability to present technical concepts to both business and technical audiences. Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines. Basic Qualifications: BA/BS in computer science, information technology, or a related field. 5+ years of experience in cloud engineering or related roles, specifically with AWS. Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Knowledge of container orchestration and microservices architecture. Familiarity with security best practices in cloud environments. Preferred Qualifications: Extensive Hands-on Experience with AWS Services. Excellent problem-solving skills and the ability to work independently as well as part of a team. Strong communication skills and the ability to influence stakeholders at all levels. Experience with greenfield projects and building cloud infrastructure from scratch. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. 
We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315801 Posted On: 2025-06-05 Location: Hyderabad, Telangana, India
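As a small illustration of the "real-time inference pipelines using SageMaker Endpoints" responsibility above, here is a minimal sketch assuming boto3 with credentials configured; the endpoint name, region, and payload schema are hypothetical.

```python
# Invoke a deployed real-time SageMaker endpoint and parse the prediction.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [0.12, 3.4, 1.0]}  # must match the model's contract

response = runtime.invoke_endpoint(
    EndpointName="demo-forecast-endpoint",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```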

Posted 1 week ago

Apply

7.0 years

9 - 9 Lacs

Hyderābād

On-site

GlassDoor logo

Kore.ai is a pioneering force in enterprise AI transformation, empowering organizations through our comprehensive agentic AI platform. With innovative offerings across "AI for Service," "AI for Work," and "AI for Process," we're enabling 400+ Global 2000 companies to fundamentally reimagine their operations, customer experiences and employee productivity. Our end-to-end platform enables enterprises to build, deploy, manage, monitor, and continuously improve agentic applications at scale. We've automated over 1 billion interactions every year with voice and digital AI in customer service, and transformed employee experiences for tens of thousands of employees through productivity and AI-driven workflow automation. Recognized as a leader by Gartner, Forrester, IDC, ISG, and Everest, Kore.ai has secured Series D funding of $150M, including strategic investment from NVIDIA to drive Enterprise AI innovation. Founded in 2014 and headquartered in Florida, we maintain a global presence with offices in India, UK, Germany, Korea, and Japan. You can find full press coverage at https://kore.ai/press/ POSITION / TITLE: QA Lead POSITION SUMMARY: As a QA Automation Technical Lead for the Agent-Platform product at Kore.ai, you will be responsible for driving the end-to-end test automation strategy to ensure a high-quality and reliable enterprise-grade AI platform. You will lead a team of automation engineers and collaborate with cross-functional teams to validate complex features, services, and workflows at scale. LOCATION: Hyderabad (Work from Office) RESPONSIBILITIES: Lead the daily QA stand-up, assigning priorities, removing blockers, and mentoring the team on test automation practices. Define and continuously evolve the automation strategy for web, API, and data validation across our products. Drive the design, implementation, and maintenance of scalable test automation frameworks in Python, using open-source tools such as Selenium and Behave. Lead test coverage and BDD test development with tools like Behave, ensuring reliable and maintainable tests. Collaborate closely with developers, product managers, and DevOps teams to integrate automated tests into CI/CD pipelines (Jenkins, Git). Perform code reviews, monitor test effectiveness, and ensure tests are consistently delivering accurate feedback on release readiness. Analyze failed test results, troubleshoot regressions, and proactively address quality risks in early development stages. Stay on top of emerging trends in QA, AI/ML testing, and generative automation tools to enhance the QA tech stack. Contribute to strategic quality initiatives such as test data management, environment stability, and performance validation. SKILLS REQUIRED: Minimum 7–8 years of experience in test automation, including 2 years of hands-on experience guiding or coordinating test automation efforts within a team Strong hands-on experience with Python, Selenium WebDriver, and API testing tools. Proficiency in BDD using Cucumber and test execution frameworks like pytest. Experience in designing test automation strategies for large-scale platforms with CI/CD pipelines (Jenkins, Git). Ability to troubleshoot complex systems, isolate automation failures, and mentor junior engineers. Familiarity with test reporting, test data generation, and defect lifecycle management. Excellent communication and collaboration skills to work with cross-functional product and engineering teams. 
Familiarity with AI/ML concepts and their validation, including model inference testing and chatbot workflows, is a plus EDUCATION QUALIFICATION: Bachelor’s in engineering or Master’s in computer applications Technologies We Use Python, Selenium, Cucumber, Behave, Jenkins, Git, Postman, JIRA, MongoDB (for test data), Cloud platforms (AWS preferred) Why Join Us? At Kore.ai, you won't be maintaining quality for conventional software—you'll be defining what quality means for an entirely new category of platform technology that enables enterprise-scale agentic applications. Your work will directly influence how the world's leading organizations build, deploy, and trust AI systems, establishing standards that could transform the industry. Join us in building not just a better platform, but the frameworks that ensure enterprise agentic applications deliver on their transformative promise safely, effectively, and responsibly at scale.
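For a flavor of the BDD-style automation this role leads, here is a minimal sketch assuming Behave; the bot client is a stub and the feature file (shown in the comment) is illustrative.

```python
# Step definitions (features/steps/chat_steps.py) for a feature file like:
#
#   Feature: Bot greeting
#     Scenario: User opens a chat
#       Given the bot is online
#       When the user says "hello"
#       Then the reply contains "Hi"
#
from behave import given, then, when


class StubBot:
    # Stand-in for the real platform client under test.
    def reply(self, text: str) -> str:
        return "Hi there!" if "hello" in text.lower() else "Sorry?"


@given("the bot is online")
def step_bot_online(context):
    context.bot = StubBot()


@when('the user says "{message}"')
def step_user_says(context, message):
    context.reply = context.bot.reply(message)


@then('the reply contains "{fragment}"')
def step_reply_contains(context, fragment):
    assert fragment in context.reply, context.reply
```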

Posted 1 week ago

Apply

2.0 years

7 - 9 Lacs

Hyderābād

On-site

GlassDoor logo

Location: Hyderabad (In-Office) Faculty Name: Professor Gurvinder Sandhu Academic Area: Accounting The ISB Research Associate Program: ISB hosts a cutting-edge 2-year Research Associate Program within its Accounting area. This is akin to pre-doctoral programs run by many top research universities (especially in the U.S.), where students work alongside professors and typically pursue a Ph.D. in Business Administration (with a specialization in accounting) after that. This is a unique setting where candidates can attend Ph.D. courses on research methods (such as Panel Data Econometrics and Causal Inference) as well as seminar courses focusing on empirical archival research in Accounting, Auditing, and Corporate Finance. We invite applications from motivated individuals with a solid analytical academic background to work as a full-time Research Associate at ISB. Research Associates (RA) are expected to work for about two years and generally apply for Ph.D. programs in their second year of the RA program. Research Summary of the Faculty Professor Gurvinder Sandhu Gurvinder Sandhu is an Assistant Professor of Accounting at the Indian School of Business (ISB). He holds a PhD in Management Science from the University of Texas at Dallas. He also holds an MBA from Melbourne Business School (University of Melbourne) and a B.Com from Kurukshetra University. Professor Sandhu’s research explores financial institutions, credit markets, and firms’ voluntary disclosures. His empirical work looks at what forces shape banks’ loan portfolios. He has developed a bank diversification measure that captures how diversified banks are in their commercial loan portfolios. His research has been accepted at various academic conferences, including the American Accounting Association and the European Accounting Association. His teaching interests lie in financial accounting, specifically financial statement analysis and introductory financial accounting. About ISB The Indian School of Business (ISB) evolved from the need for a world-class business school in Asia. The founders, some of the best minds from the corporate and academic worlds, anticipated the leadership needs of the emerging Asian economies. The ISB is committed to creating such leaders through its innovative programs, outstanding faculty, and thought leadership. The Indian School of Business (ISB) provides a robust environment that generates high-quality research that is both contemporary and rigorous. Roles and Responsibilities: Work Description Work with the faculty on research projects of common interest. The candidate will assist the professor in his ongoing research, including support through data collection, data cleaning, literature review, and preliminary data analysis. You will be exposed to creating and handling big data sets, working extensively in statistical software/programming languages (such as Python, Stata, R, and SAS), and learning state-of-the-art research design methodologies (e.g., panel data regressions with fixed effects). This position is a good fit for a candidate looking to pursue a PhD in Accounting or excel in research (post-doctoral). Required Skills and Qualifications: Master’s degree in Economics, Finance, Statistics, Econometrics, Mathematics, or Physics, or an Engineering degree (electrical, signal processing, computer science), or a 4-year bachelor’s degree in Mathematics/Physics/Economics from a premier institute. Strong background in Mathematics. Python/Stata/R/SAS coding skills are essential. 
The ideal candidate should be proficient in at least one of these languages/software. Knowledge of writing API queries/web-scraping algorithms, Machine Learning/AI, or textual analysis is an added advantage. Our Commitment towards You ISB is a research-focused business school. It offers a variety of opportunities to understand current management phenomena in depth through research brown-bag seminars, workshops, and PhD-level courses. It provides several options to hone a person’s analytical skills. Along with a competitive salary and a plethora of employee benefits, ISB hosts a world-class Learning Resource Centre and provides comprehensive health and personal accident cover for you and your family members. ISB believes in creating a truly inclusive culture that values diversity, equity, and inclusion for everyone through our ideas and collaborations. If this role is your true calling, please complete the form using the link below. For any queries, please contact careers_ra_fd@isb.edu. Kindly do not send your resume by email, as it becomes difficult to keep track; use the link given below only to apply. https://www.cognitoforms.com/IndianSchoolOfBusiness9/FDOHiringForm We will connect with you shortly. Hyderabad Campus Indian School of Business Gachibowli, Hyderabad - 500111 Timings: Monday-Friday, 08:00 AM IST to 06:00 PM IST 040 23187777 0172 4591800 careers_hyderabad@isb.edu careers_mohali@isb.edu careers_ra@isb.edu Mohali Campus Indian School of Business Knowledge City Sector 81, SAS Nagar, Mohali - 140 306
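As a taste of the "panel data regressions with fixed effects" mentioned above, here is a minimal sketch assuming pandas, NumPy, and statsmodels; the panel is simulated and the variable names are placeholders, not actual research data.

```python
# Two-way fixed effects via dummy variables (fine for small panels), with
# standard errors clustered by firm.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, years = range(50), range(2015, 2020)
panel = pd.DataFrame(
    [(f, y) for f in firms for y in years], columns=["firm", "year"]
)
panel["disclosure"] = rng.normal(size=len(panel))
panel["leverage"] = 0.3 * panel["disclosure"] + rng.normal(size=len(panel))

model = smf.ols("leverage ~ disclosure + C(firm) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["firm"]}
)
print(model.params["disclosure"], model.bse["disclosure"])
```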

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description Role Purpose The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction. Mandatory Skills: Data Science, ML, DL, Python for Data Science, TensorFlow, PyTorch, Django, SQL, MLOps Preferred Skills: NLP, Gen AI, LLM, PowerBI, Advanced Analytics, Banking exposure Strong understanding of Data Science, machine learning, and deep learning principles and algorithms. Proficiency in Python and frameworks such as TensorFlow and PyTorch. Experienced data scientist who can use Python to build AI models for banking product acquisition, deepening, and retention. Drive data-driven personalization and customer segmentation in accordance with the bank's data privacy and security standards. Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques. Optimize model performance and scalability for real-time inference and deployment. Experiment with different hyperparameters and model configurations to improve AI model quality. Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines and standards. Working knowledge of and experience in MLOps is a must; an engineering background is preferred. Excellent command of data warehousing concepts and SQL. Knowledge of personal banking products is a plus. Mandatory Skills: AI Cognitive. Experience: 8-10 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
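To ground the customer-segmentation work this listing describes, here is a minimal sketch assuming scikit-learn and NumPy; the features and data are synthetic placeholders, not banking data.

```python
# K-means segmentation over toy customer features:
# [monthly_balance, txn_count, product_holdings]
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(loc=[5_000, 40, 2], scale=[2_000, 15, 1], size=(1_000, 3))

X_scaled = StandardScaler().fit_transform(X)  # k-means is scale-sensitive
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

for label in range(4):
    seg = X[kmeans.labels_ == label]
    print(f"segment {label}: n={len(seg)}, mean balance={seg[:, 0].mean():,.0f}")
```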

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a skilled and motivated AI/ML Engineer with 3–5 years of experience to join our team. The ideal candidate will have hands-on expertise in building and deploying AI/ML solutions on the Azure platform, with a strong focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and Azure ML Studio. You will play a key role in designing intelligent systems, deploying scalable models, and integrating advanced AI capabilities into enterprise applications. Primary Responsibilities: AI/ML Development & Deployment: Design, develop, and deploy machine learning models using Azure ML Studio and Azure Machine Learning services Build and fine-tune LLM-based solutions for enterprise use cases Develop and implement RAG pipelines using Azure services and vector databases Deploy and monitor AI/ML models in production environments ensuring scalability and performance Azure Platform Engineering: Leverage Azure services such as Azure Data Lake, Azure Synapse, Azure Blob Storage, and Azure Cognitive Search for data ingestion and processing Integrate AI models with Azure-based data pipelines and APIs Use Azure DevOps for CI/CD of ML workflows and model versioning Data Engineering & Processing: Build and maintain ETL/ELT pipelines for structured and unstructured data using Databricks and Apache Spark Prepare and transform data for training and inference using Python, PySpark, and SQL LLM & RAG System Implementation: Implement LLM-based agents and chatbots using frameworks like LangChain Design and optimize RAG architectures for domain-specific knowledge retrieval Work with vector databases (e.g., Azure Cognitive Search, FAISS) for embedding-based search Collaboration & Innovation: Collaborate with data scientists, product managers, and engineers to deliver AI-driven features Stay current with advancements in generative AI, LLMs, and Azure AI services Contribute to the continuous improvement of AI/ML pipelines and best practices Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: 3+ years of hands-on experience in AI/ML engineering with a focus on Azure Proven experience in deploying ML models using Azure ML Studio and Azure Machine Learning Experience working with LLMs, RAG systems, and AI agents Experience with Databricks, Apache Spark, and Azure Data services Knowledge of Azure DevOps and CI/CD for ML workflows Understanding of data governance and security in cloud environments Familiarity with MLOps practices and model monitoring tools Familiarity with vector databases and embedding models Proficiency in Python, SQL, and PySpark Proven analytical and problem-solving skills Proven effective communication and collaboration with cross-functional teams Proven ability to translate business requirements into technical solutions At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission. #Nic
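To illustrate the retrieval step of the RAG pipelines this role builds, here is a minimal, framework-free sketch; the embeddings are random stand-ins, and a real system would use an embedding model plus a vector index such as Azure Cognitive Search or FAISS.

```python
# Embedding-based top-k retrieval via cosine similarity; retrieved passages
# would be packed into the LLM prompt as grounding context.
import numpy as np

rng = np.random.default_rng(7)
DOCS = ["prior auth policy", "pharmacy benefits FAQ", "claims appeal process"]
DOC_VECS = rng.normal(size=(len(DOCS), 384))  # stand-in document embeddings


def embed(text: str) -> np.ndarray:
    return rng.normal(size=384)  # placeholder for a real embedding call


def top_k(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(sims)[::-1][:k]]


print(top_k("How do I appeal a denied claim?"))
```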

Posted 1 week ago

Apply

Exploring Inference Jobs in India

With the rapid growth of technology and data-driven decision making, the demand for professionals with expertise in inference is on the rise in India. Inference jobs involve using statistical methods to draw conclusions from data and make predictions based on available information. From data analysts to machine learning engineers, there are various roles in India that require inference skills.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These major cities are known for their thriving tech industries and are actively hiring professionals with expertise in inference.

Average Salary Range

The average salary range for inference professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the field of inference, a typical career path may start as a Data Analyst or Junior Data Scientist, progress to a Data Scientist or Machine Learning Engineer, and eventually lead to roles like Senior Data Scientist or Principal Data Scientist. With experience and expertise, professionals can also move into leadership positions such as Data Science Manager or Chief Data Scientist.

Related Skills

In addition to expertise in inference, professionals in India may benefit from having skills in programming languages such as Python or R, knowledge of machine learning algorithms, experience with data visualization tools like Tableau or Power BI, and strong communication and problem-solving abilities.

Interview Questions

  • What is the difference between inferential statistics and descriptive statistics? (basic)
  • How do you handle missing data in a dataset when performing inference? (medium)
  • Can you explain the bias-variance tradeoff in the context of inference? (medium)
  • What are the assumptions of linear regression and how do you test them? (advanced)
  • How would you determine the significance of a coefficient in a regression model? (medium)
  • Explain the concept of p-value and its significance in hypothesis testing. (basic; a worked example follows this list)
  • Can you discuss the difference between frequentist and Bayesian inference methods? (advanced)
  • How do you handle multicollinearity in a regression model? (medium)
  • What is the Central Limit Theorem and why is it important in statistical inference? (medium)
  • How would you choose between different machine learning algorithms for a given inference task? (medium)
  • Explain the concept of overfitting and how it can affect inference results. (medium)
  • Can you discuss the difference between parametric and non-parametric inference methods? (advanced)
  • Describe a real-world project where you applied inference techniques to draw meaningful conclusions from data. (advanced)
  • How do you assess the goodness of fit of a regression model in inference? (medium)
  • What is the purpose of cross-validation in machine learning and how does it impact inference? (medium)
  • Can you explain the concept of Type I and Type II errors in hypothesis testing? (basic)
  • How would you handle outliers in a dataset when performing inference? (medium)
  • Discuss the importance of sample size in statistical inference and hypothesis testing. (basic)
  • How do you interpret confidence intervals in an inference context? (medium)
  • Can you explain the concept of statistical power and its relevance in inference? (medium)
  • What are some common pitfalls to avoid when performing inference on data? (basic)
  • How do you test the normality assumption in a dataset for conducting inference? (medium)
  • Explain the difference between correlation and causation in the context of inference. (medium)
  • How would you evaluate the performance of a classification model in an inference task? (medium)
  • Discuss the importance of feature selection in building an effective inference model. (medium)
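
Several of the concepts above (p-values, confidence intervals, hypothesis testing) can be grounded in a few lines of code. Here is a worked example assuming NumPy and SciPy; the data are simulated and the numbers are illustrative.

```python
# Two-sample t-test with a p-value and a 95% confidence interval for the
# difference in means, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=100, scale=15, size=200)
treated = rng.normal(loc=104, scale=15, size=200)

# p-value: probability of observing a difference at least this large if the
# true means were equal (the null hypothesis).
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence interval for the mean difference (normal approximation).
diff = treated.mean() - control.mean()
se = np.sqrt(
    treated.var(ddof=1) / len(treated) + control.var(ddof=1) / len(control)
)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"95% CI for mean difference: [{lo:.2f}, {hi:.2f}]")
```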

Closing Remark

As you explore opportunities in the inference job market in India, remember to prepare thoroughly by honing your skills, gaining practical experience, and staying updated with industry trends. With dedication and confidence, you can embark on a rewarding career in this field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies