
657 XGBoost Jobs - Page 23

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION
• Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies
• Must have: Strong understanding of user modeling, item representations, temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have experience with Vertex AI Matching Engine or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real-time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices — version control, CI/CD, monitoring, and infrastructure automation

GCP Tools Experience:
ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform
Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus)
Storage: BigQuery, Cloud Storage, Firestore
Processing: Dataproc (PySpark), Dataflow (batch & stream)
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run
CI/CD & IaC: GitHub Actions, GitLab CI
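The retrieve-then-re-rank pattern this listing asks for can be sketched very roughly as follows: item embeddings go into an ANN index, a user embedding pulls back candidates, and a separate ranking model re-orders them. The embeddings, the FAISS inner-product index, and the XGBoost re-ranker below are synthetic placeholders, not the employer's actual stack.

```python
import numpy as np
import faiss                      # ANN index for embedding retrieval
import xgboost as xgb             # gradient-boosted re-ranker

rng = np.random.default_rng(0)
dim, n_items = 64, 10_000

# Hypothetical item embeddings (e.g. output of a two-tower item encoder).
item_emb = rng.normal(size=(n_items, dim)).astype("float32")
faiss.normalize_L2(item_emb)                 # cosine similarity via inner product
index = faiss.IndexFlatIP(dim)
index.add(item_emb)

# Hypothetical user embedding from the user tower.
user_emb = rng.normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(user_emb)

# Stage 1: retrieve top-200 candidates by embedding similarity.
scores, cand_ids = index.search(user_emb, 200)

# Stage 2: re-rank candidates with a gradient-boosted model over richer
# features (random placeholders standing in for price, recency, CTR, ...).
cand_features = rng.normal(size=(200, 10))
reranker = xgb.XGBRegressor(n_estimators=50).fit(
    rng.normal(size=(1_000, 10)), rng.normal(size=1_000)  # toy training data
)
rerank_scores = reranker.predict(cand_features)
final_order = cand_ids[0][np.argsort(-rerank_scores)]
print(final_order[:10])           # top-10 recommendations after re-ranking
```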

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JOB DESCRIPTION
• Strong in Python and experience with Jupyter notebooks, Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.
• Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment
• Must have: Hands-on experience with GCP services for ML & data science
• Must have: Experience with Vector Search and Hybrid Search techniques
• Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models
• Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy)
• Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation)
• Must have: Understanding of semantic vs lexical search paradigms
• Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support)
• Should be proficient in SQL and BigQuery for analytics and feature generation
• Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
• Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions
• Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches
• Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval
• Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories
• Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems
• Should understand how to build end-to-end ML pipelines for search and ranking applications
• Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR)
• Should have exposure to CI/CD pipelines and model versioning practices

GCP Tools Experience:
ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform
Storage: BigQuery, Cloud Storage, Firestore
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Search: Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch
Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark)
CI/CD & IaC: GitLab/GitHub Actions
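The Learning to Rank requirement above can be illustrated with xgboost's ranking interface. The queries, features, and graded relevance labels below are synthetic placeholders; the sketch only assumes the rank:ndcg objective that XGBRanker exposes.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)

# Synthetic search data: 100 queries, 10 candidate documents each.
n_queries, docs_per_query, n_features = 100, 10, 20
X = rng.normal(size=(n_queries * docs_per_query, n_features))
y = rng.integers(0, 4, size=n_queries * docs_per_query)   # graded relevance 0-3
qid = np.repeat(np.arange(n_queries), docs_per_query)     # query id per row, sorted

# XGBRanker optimizes a listwise ranking objective (here NDCG).
ranker = xgb.XGBRanker(objective="rank:ndcg", n_estimators=100, max_depth=4)
ranker.fit(X, y, qid=qid)

# Score the candidates of one query and sort them for presentation.
query_docs = X[qid == 0]
order = np.argsort(-ranker.predict(query_docs))
print("ranked doc positions for query 0:", order)
```

In a hybrid setup the same re-ranker would typically consume both BM25 and vector-similarity scores as features, but that wiring is omitted here.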

Posted 2 months ago

Apply

0 years

0 Lacs

India

On-site

About the Role
We’re looking for an experienced AI Developer with hands-on expertise in Large Language Models (LLMs), Azure AI services, and end-to-end ML pipeline deployment. If you’re passionate about building scalable AI solutions, integrating document intelligence, and deploying models in production using Azure, this role is for you.

💡 Key Responsibilities
• Design and develop AI applications leveraging LLMs (e.g., GPT, BERT) for tasks like summarization, classification, and document understanding
• Implement solutions using Azure Document Intelligence to extract structured data from forms, invoices, and contracts
• Train, evaluate, and tune ML models using Scikit-learn, XGBoost, or PyTorch
• Build ML pipelines and workflows using Azure ML and MLflow, and integrate with CI/CD tools
• Deploy models to production using Azure ML endpoints, containers, or Azure Functions for real-time AI workflows
• Write clean, efficient, and scalable code in Python and manage code versioning using Git
• Work with structured and unstructured data from SQL/NoSQL databases and Data Lakes
• Ensure performance monitoring and logging for deployed models

✅ Skills & Experience Required
• Proven experience with LLMs and Prompt Engineering (e.g., GPT, BERT)
• Hands-on with Azure Document Intelligence for OCR and data extraction
• Solid background in ML model development, evaluation, and hyperparameter tuning
• Proficient in Azure ML Studio, model registry, and automated ML workflows
• Familiar with MLOps tools such as Azure ML pipelines, MLflow, and CI/CD practices
• Experience with Azure Functions for building serverless, event-driven AI apps
• Strong coding skills in Python; familiarity with libraries like NumPy, Pandas, Scikit-learn, Matplotlib
• Working knowledge of SQL/NoSQL databases and Data Lakes
• Proficiency with Azure DevOps, Git version control, and testing frameworks
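As a rough illustration of the "train, evaluate, and tune" responsibility above, the sketch below runs a randomized hyperparameter search over an XGBoost classifier. The dataset and parameter ranges are invented placeholders and nothing here is specific to this role or to Azure ML.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for a real labeled dataset.
X, y = make_classification(n_samples=5_000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Hypothetical search space; real ranges depend on the data and model.
param_dist = {
    "max_depth": [3, 4, 5, 6],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [100, 200, 400],
    "subsample": [0.7, 0.9, 1.0],
}

search = RandomizedSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_distributions=param_dist,
    n_iter=10, scoring="roc_auc", cv=3, random_state=0,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out AUC:",
      roc_auc_score(y_test, search.best_estimator_.predict_proba(X_test)[:, 1]))
```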

Posted 2 months ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We’re seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and Commercial Analytics to contribute to our team. You’ll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value.

Key Responsibilities
• Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions.
• Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data).
• Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities.
• Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows.
• Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning.
• Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations.
• Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends.

Qualifications
• Education: Bachelor’s/Master’s in Data Science, Computer Science, Statistics, or a related field.
• Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors).
• Technical Skills: Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn). Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2). Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies.
• Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data.
• Communication: Ability to articulate technical concepts to business stakeholders.

Preferred Qualifications
• AWS Certified Machine Learning Specialty or similar certifications.
• Experience with big data tools (Spark, Redshift) or ML Ops practices.
• Knowledge of NLP, reinforcement learning, or real-time recommendation systems.
• Exposure to BI tools (Tableau, Power BI) for dashboarding.
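One of the listed skills, A/B testing methodology, reduces to something like the two-proportion z-test sketched below. The conversion counts and sample sizes are made up purely for illustration.

```python
from statsmodels.stats.proportion import proportion_confint, proportions_ztest

# Hypothetical campaign results: conversions and exposures per variant.
conversions = [420, 480]        # control, treatment
exposures = [10_000, 10_000]

# Two-proportion z-test on the difference in conversion rates.
stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
ci_control = proportion_confint(conversions[0], exposures[0], alpha=0.05)
ci_treatment = proportion_confint(conversions[1], exposures[1], alpha=0.05)

print(f"z = {stat:.2f}, p = {p_value:.4f}")
print("control 95% CI:", ci_control)
print("treatment 95% CI:", ci_treatment)
```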

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: AI/ML Developer (5 Years Experience)
Location: Remote
Job Type: Full-time
Experience: 5 Years

Job Summary:
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, model evaluation, and experience with production-level ML pipelines.

Key Responsibilities
• Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
• Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
• ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker.
• Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
• Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques.
• Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
• Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications
• Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
• Minimum 5 years of experience in AI/ML development.
• Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
• Strong understanding of statistics, data structures, and ML/DL algorithms.
• Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
• Experience with CI/CD tools and containerization (Docker, Kubernetes).
• Familiarity with SQL and NoSQL databases.
• Excellent problem-solving and communication skills.

Preferred Qualifications
• Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
• Knowledge of MLOps best practices and tools.
• Experience with version control systems like Git.
• Familiarity with big data technologies (Spark, Hadoop).
• Contributions to open-source AI/ML projects or publications in relevant fields.

Posted 2 months ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Job summary:
JPMorgan Chase & Co. (NYSE: JPM) is a leading global financial services firm with operations worldwide. The firm is a leader in investment banking, financial services for consumers and small business, commercial banking, financial transaction processing, and asset management. A component of the Dow Jones Industrial Average, JPMorgan Chase & Co. serves millions of consumers in the United States and many of the world's most prominent corporate, institutional and government clients under its J.P. Morgan and Chase brands. Information about JPMorgan Chase & Co. is available on the company's website.

Chase Consumer & Community Banking (CCB) serves consumers and small businesses with a broad range of financial services. CCB Risk Management partners with each CCB sub-line of business to identify, assess, prioritize, and remediate risk. We are currently seeking an Applied ML/AI Executive Director as the Head of the Credit Card Collections Risk Modeling team. In this critical role you will manage a team of applied machine learning modelers across multiple working locations who are responsible for developing and maintaining best-in-class credit risk models catering to the collections and recovery functions within Chase Card Services. You will be responsible for identifying business opportunities for applying suitable machine learning algorithms to develop ML models that enhance the effectiveness of credit loss control. Your expertise and thought leadership in big data platforms (Hadoop/Cloud) and advanced ML techniques, such as deep learning, reinforcement learning and graph ML, will substantially influence the direction of the next generation of risk models. In this highly visible role, the successful candidate will demonstrate analytic leadership through business acumen, collaboration, and effective communication with senior management. Success in this role requires a strong foundation in machine learning and artificial intelligence, along with a deep understanding of credit risk management. The candidate should have a proven ability to manage end-to-end ML/AI solutions, especially deploying ML models that harness vast amounts of data and computation on distributed systems.

Job Responsibilities
• Collaborate with risk strategy teams and operations to understand business needs, data generating processes, system capability, and potential model impact.
• Design machine learning solutions to address business needs, including explainable machine learning models and reinforcement learning models.
• Manage multiple model development projects.
• Collaborate with various partners in Marketing, Finance, Technology, Model Governance, Compliance, Risk, Legal, etc. throughout the entire modeling lifecycle.
• Manage model risk and related governance and controls.
• Synthesize findings at various points through the model development process to share actionable insights with senior leadership and other stakeholders.
• Drive constant innovation to deliver sustained improvement in the collections and recovery capabilities of the firm.

Required Qualifications, Capabilities, And Skills
• Ph.D. or MS degree in Mathematics, Statistics, Computer Science, Operational Research, Econometrics, Physics, or other related quantitative fields
• Minimum of 10 years of experience in developing and managing ML or predictive risk models in financial institutions
• Hands-on experience in developing and deploying real-time transaction models with massive data from various sources, internally and externally
• Developed ML/AI models on big data platforms (Hadoop and Cloud) and deployed them into real-time scoring engines, such as mainframe, cloud or distributed computing systems
• Experience in developing and deploying commercial applications for machine learning that are interpretable
• Experience in open source programming languages for large scale data analysis such as Python / Scala / Java / PySpark
• Experience with supervised and unsupervised machine learning algorithms such as XGBoost, CNN, RNN, SVM, Reinforcement Learning, Markov Process
• Minimum of 3 years of experience managing a sizable team of data scientists / modelers / machine learning engineers
• Experience in managing a team in a dynamic environment of high mobility
• Polished and clear communication with senior management
• Proven leadership in client/stakeholder/partner relationship management and high-performance team development

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Winbold is a startup working on an AI-driven investment platform.

Job Description
This requires immediate joining. Do not apply if you have a notice period. We are looking for a highly motivated Data Science and Machine Learning Engineer with hands-on experience in feature engineering, model training, and building models that run and provide real-time predictions and forecasts. The candidate must be well versed with the following:
• ML packages: sklearn, xgboost, time-series models
• Feature engineering and feature stores
• MLflow for model experimentation
• Model visualization
• Model deployment in the cloud and in hosted environments at scale

For immediate consideration you can send an email to me directly with a note on why you think you are best suited for this role at madhu at winbolddatasystems.com

Qualifications
We don't care what degree you have or if you have one. We want a hands-on developer who can get shit done.

Additional Information
All your information will be kept confidential according to EEO guidelines.
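The MLflow model-experimentation requirement above can be sketched as follows. The dataset, model, parameters, and metric are placeholders, and the run is logged to whatever tracking location MLflow defaults to locally.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a forecasting feature table.
X, y = make_regression(n_samples=2_000, n_features=15, noise=0.3, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

with mlflow.start_run(run_name="gbm-baseline"):
    params = {"n_estimators": 300, "learning_rate": 0.05, "max_depth": 3}
    model = GradientBoostingRegressor(**params).fit(X_train, y_train)

    mlflow.log_params(params)                       # record hyperparameters
    mae = mean_absolute_error(y_test, model.predict(X_test))
    mlflow.log_metric("mae", mae)                   # record evaluation metric
    mlflow.sklearn.log_model(model, "model")        # store the fitted model artifact

print("logged run with MAE:", mae)
```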

Posted 2 months ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Job Description
This is a remote position.

AI Engineer
Duration: 6 months
Location: Remote
Timings: Full Time (as per company timings)
Notice Period: Immediate joiners only
Experience: 3-5 Years

JD
• AI/ML Models: Experience with Automated Valuation Models (AVM) and real-world deployment of machine learning models
• LangChain: Proficient in using LangChain for building LLM-powered applications
• Data Science Toolkit: Hands-on with Pandas, NumPy, Scikit-learn, XGBoost, LightGBM, and Jupyter
• Feature Engineering: Strong background in feature engineering and data intelligence extraction
• Data Handling: Comfortable with structured, semi-structured, and unstructured data
• Production Integration: Experience integrating models into APIs and production environments using Python-based frameworks

Additional Info
Should generally have worked with varied datasets across multiple projects and have strong problem-solving acumen.

Requirements
• AI/ML Models: Experience with Automated Valuation Models (AVM) and real-world deployment of machine learning models
• LangChain: Proficient in using LangChain for building LLM-powered applications
• Data Science Toolkit: Hands-on with Pandas, NumPy, Scikit-learn, XGBoost, LightGBM, and Jupyter
• Feature Engineering: Strong background in feature engineering and data intelligence extraction
• Data Handling: Comfortable with structured, semi-structured, and unstructured data
• Production Integration: Experience integrating models into APIs and production environments using Python-based frameworks

Posted 2 months ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

We're Hiring: AI DevOps Engineer – ML, LLM & Cloud for Battery & Livestock Intelligence
📍 Hyderabad / Remote | 🧠 3–8 Years Experience
💼 Full-time | Deep Tech | AI-Driven IoT

At Vanix Technologies, we’re solving real-world problems using AI built on top of IoT data — from predicting the health of electric vehicle batteries to monitoring livestock behavior with BLE sensors. We’re looking for a hands-on AI DevOps Engineer who understands not just how to build ML/DL models, but also how to turn them into intelligent cloud-native services. If you've worked on battery analytics or sensor-driven ML, and you're excited by the potential of LLMs + real-time IoT, this is for you.

What You’ll Work On

🔋 EV Battery Intelligence
• Build models for SOH, true SOH, SOC, RUL prediction, thermal event detection, and high-risk condition classification.
• Ingest and process time-series data from BMS, CAN bus, GPS, and environmental sensors.
• Deliver analytics that plug into our BatteryTelematicsPro SaaS for OEMs and fleet customers.

🐄 Livestock Monitoring AI
• Analyze BLE sensor data from our cattle wearables (motion, temp, rumination proxies).
• Develop models for health anomaly detection, estrus prediction, movement patterns, and outlier behaviors.
• Power actionable insights for farmers via mobile dashboards and alerts.

🤖 Agentic AI & LLM Integration
• Chain ML outputs with LLMs (e.g., GPT-4, Claude) using LangChain or similar frameworks.
• Build AI assistants that summarize events, auto-generate alerts, and respond to user queries using both structured and ML-derived data.
• Support AI-powered explainability and insight generation layers on top of raw telemetry.

☁️ Cloud ML Engineering & DevOps
• Deploy models on AWS (Lambda, SageMaker, EC2, ECS, CloudWatch).
• Design and maintain CI/CD pipelines for data, models, and APIs.
• Optimize performance, cost, and scalability of cloud workloads.

✅ You Must Have
• Solid foundation in ML/DL for time-series / telemetry data
• Hands-on with PyTorch / TensorFlow / Scikit-learn / XGBoost
• Experience with battery analytics or sensor-based animal behavior prediction
• Understanding of LangChain / OpenAI APIs / LLM orchestration
• AWS fluency: Lambda, EC2, S3, SageMaker, CloudWatch
• Python APIs

Nice to Have
• MLOps stack (MLflow, DVC, W&B)
• BLE signal processing or CAN bus protocol parsing
• Prompt engineering or fine-tuning experience
• Exposure to edge-to-cloud model deployment

Why Vanix Technologies?
Because we're not another AI lab — we're a deep-tech company building production-ready AI platforms that interact with real devices in the field, used by farmers, OEMs, and EV fleets. You’ll work at the intersection of: IoT + AI + LLMs, Hardware + Cloud, Mission-critical data + Everyday impact.
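As an illustration of the SOH/RUL-style modelling mentioned above, the sketch below derives simple rolling features from synthetic BMS-like telemetry and fits a gradient-boosted regressor. The column names, window size, and degradation target are invented placeholders, not Vanix's actual data or pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n = 5_000

# Synthetic stand-in for BMS telemetry (voltage, current, temperature).
telemetry = pd.DataFrame({
    "voltage": 3.7 + 0.1 * rng.standard_normal(n),
    "current": 10 + 2 * rng.standard_normal(n),
    "temp_c": 30 + 5 * rng.standard_normal(n),
})
# Fake, slowly degrading state-of-health target for illustration only.
telemetry["soh"] = 100 - 0.004 * np.arange(n) + rng.standard_normal(n)

# Rolling-window statistics, a common first step for telemetry models.
for col in ["voltage", "current", "temp_c"]:
    telemetry[f"{col}_mean_50"] = telemetry[col].rolling(50).mean()
    telemetry[f"{col}_std_50"] = telemetry[col].rolling(50).std()

data = telemetry.dropna()
X, y = data.drop(columns="soh"), data["soh"]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```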

Posted 2 months ago

Apply

1.5 - 2.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Summary:
We are looking for a highly motivated and analytical Data Scientist / Machine Learning (ML) Engineer / AI Specialist with 1.5-2 years of experience in health data analysis, particularly with data sourced from wearable devices such as smartwatches and fitness trackers. The ideal candidate will be proficient in developing data models, analyzing complex datasets, and translating insights into actionable strategies that enhance health-related applications.

Key Responsibilities:
• Develop and implement data models tailored to health data from wearable devices.
• Stay updated on industry trends and emerging technologies in health data analytics.
• Ensure data integrity and security throughout the analysis process, and identify correlations relevant to health metrics.
• Analyze large datasets to extract actionable insights using statistical methods and machine learning techniques.
• Develop, train, test, and deploy machine learning models for classification, regression, clustering, NLP, recommendation, or computer vision tasks.
• Collaborate with cross-functional teams including product, engineering, and domain experts to define problems and deliver solutions.
• Design and build scalable ML pipelines for model development and deployment.
• Conduct exploratory data analysis (EDA), data wrangling, feature engineering, and model validation.
• Monitor model performance in production and iterate based on feedback and data drift.
• Stay up to date with the latest research and trends in machine learning, deep learning, and AI.
• Document processes, code, and methodologies to ensure reproducibility and collaboration.

Required Qualifications:
• Bachelor's or Master’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
• 1.5-2 years of experience in data analysis, preferably within the health tech sector.
• Strong knowledge of Python or R and libraries such as NumPy, pandas, scikit-learn, TensorFlow, PyTorch, or XGBoost.
• Strong experience with data modeling, machine learning algorithms, and statistical analysis.
• Familiarity with health data privacy regulations (e.g., HIPAA) and data visualization tools (e.g., Tableau, Power BI).
• Proficiency in SQL and experience working with large-scale data systems (e.g., Spark, Hadoop, BigQuery, Snowflake).
• Ability to clearly communicate complex technical concepts to both technical and non-technical audiences.
• Experience with version control tools (e.g., Git) and ML pipeline tools (e.g., MLflow, Airflow, Kubeflow).
• Experience deploying models in cloud environments (AWS, GCP, Azure).
• Knowledge of NLP (e.g., Transformers, LLMs), computer vision, or reinforcement learning.
• Familiarity with MLOps, CI/CD for ML, and model monitoring tools.

Experience: 1.5-2 years (only local candidates)
Location: Mohali Phase 8b

Posted 2 months ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

AI Agent Development - Python (CrewAI + LangChain)
Location: Noida / Gwalior (On-site)
Experience Required: Minimum 3+ years
Employment Type: Full-time

🚀 About the Role
We're seeking an AI Agent Developer (Python) with hands-on experience in CrewAI and LangChain to join our cutting-edge AI product engineering team. If you thrive at the intersection of LLMs, agentic workflows, and autonomous tooling — this is your opportunity to build real-world AI agents that solve complex problems at scale. You’ll be responsible for designing, building, and deploying intelligent agents that leverage prompt engineering, memory systems, vector databases, and multi-step tool execution strategies.

🧠 Core Responsibilities
• Design and develop modular, asynchronous Python applications using clean code principles.
• Build and orchestrate intelligent agents using CrewAI: defining agents, tasks, memory, and crew dynamics.
• Develop custom chains and tools using LangChain (LLMChain, AgentExecutor, memory, structured tools).
• Implement prompt engineering techniques like ReAct, Few-Shot, and Chain-of-Thought reasoning.
• Integrate with APIs from OpenAI, Anthropic, HuggingFace, or Mistral for advanced LLM capabilities.
• Use semantic search and vector stores (FAISS, Chroma, Pinecone, etc.) to build RAG pipelines.
• Extend tool capabilities: web scraping, PDF/document parsing, API integrations, and file handling.
• Implement memory systems for persistent, contextual agent behavior.
• Leverage DSA and algorithmic skills to structure efficient reasoning and execution logic.
• Deploy containerized applications using Docker, Git, and modern Python packaging tools.

🛠️ Must-Have Skills
• Python 3.x (Async, OOP, Type Hinting, Modular Design)
• CrewAI (Agent, Task, Crew, Memory, Orchestration) – Must Have
• LangChain (LLMChain, Tools, AgentExecutor, Memory)
• Prompt Engineering (Few-Shot, ReAct, Dynamic Templates)
• LLMs & APIs (OpenAI, HuggingFace, Anthropic)
• Vector Stores (FAISS, Chroma, Pinecone, Weaviate)
• Retrieval-Augmented Generation (RAG) Pipelines
• Memory Systems: BufferMemory, ConversationBuffer, VectorStoreMemory
• Asynchronous Programming (asyncio, LangChain hooks)
• DSA / Algorithms (Graphs, Queues, Recursion, Time/Space Optimization)

💡 Bonus Skills
• Experience with Machine Learning libraries (Scikit-learn, XGBoost, TensorFlow basics)
• Familiarity with NLP concepts (Embeddings, Tokenization, Similarity scoring)
• DevOps familiarity (Docker, GitHub Actions, Pipenv/Poetry)

🧭 Why Join Us?
• Work on cutting-edge LLM agent architecture with real-world impact.
• Be part of a fast-paced, experiment-driven AI team.
• Collaborate with passionate developers and AI researchers.
• Opportunity to build from scratch and influence core product design.
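The RAG pipeline idea referenced in this listing boils down to: embed documents, retrieve the chunks most similar to a query, and place them in the prompt handed to an LLM. The sketch below uses a deliberately fake hash-based embedding and plain cosine similarity in place of a real embedding model, vector store, or LangChain/CrewAI components.

```python
import numpy as np

def fake_embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedding: a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

documents = [
    "CrewAI lets you compose agents into crews with shared tasks.",
    "LangChain chains combine prompts, LLM calls, and tools.",
    "FAISS provides approximate nearest-neighbour search over embeddings.",
]
doc_matrix = np.stack([fake_embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    sims = doc_matrix @ fake_embed(query)
    return [documents[i] for i in np.argsort(-sims)[:k]]

query = "How do I combine an LLM with external tools?"
context = "\n".join(retrieve(query))

# Retrieval-augmented prompt handed to whichever LLM API the stack uses.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```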

Posted 2 months ago

Apply

5.0 - 8.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

We are looking for an experienced and results-driven Data Scientist with a strong background in machine learning to join our analytics and AI team.

Title: Senior Data Scientist
Location: Noida (Sector 62) and Kolkata (New Town)
Experience: 5 to 8 years
Shift: Rotational shifts

The ideal candidate must have hands-on experience in building, deploying, and optimizing machine learning models to solve real-world problems and drive business value.

Key Responsibilities:
* Design, develop, and deploy machine learning models and algorithms to address business challenges and opportunities.
* Analyze large and complex datasets to extract actionable insights using statistical and ML techniques.
* Collaborate with data engineers, analysts, and product teams to implement data-driven strategies.
* Evaluate model performance and iterate based on feedback and new data.
* Stay current with the latest ML trends, tools, and research to drive innovation.
* Present findings and model results to technical and non-technical stakeholders.

Skills:
* Proficiency in Python (preferred) or R; experience with ML libraries such as scikit-learn, XGBoost, TensorFlow, PyTorch, etc.
* Strong understanding of supervised, unsupervised, and reinforcement learning methods.

Please share your resume at trishita.mistry@empaxis.com

Posted 2 months ago

Apply

3.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We’re seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and Commercial Analytics to contribute to our team. You’ll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value.

Key Responsibilities
• Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions.
• Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data).
• Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities.
• Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows.
• Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning.
• Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations.
• Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends.

Qualifications
• Education: Bachelor’s/Master’s in Data Science, Computer Science, Statistics, or a related field.
• Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors).
• Technical Skills: Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn). Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2). Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies.
• Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data.
• Communication: Ability to articulate technical concepts to business stakeholders.

Preferred Qualifications
• AWS Certified Machine Learning Specialty or similar certifications.
• Experience with big data tools (Spark, Redshift) or ML Ops practices.
• Knowledge of NLP, reinforcement learning, or real-time recommendation systems.
• Exposure to BI tools (Tableau, Power BI) for dashboarding.

Posted 2 months ago

Apply

4.0 - 9.0 years

14 - 24 Lacs

Gurugram, Bengaluru

Work from Office

Job Description:
We are seeking an experienced Data Scientist with expertise in advanced machine learning techniques to join our dynamic team. The ideal candidate will have hands-on experience developing and deploying models using ensemble methods and other cutting-edge ML algorithms, mostly in the US banking domain.

Key Responsibilities:
• Develop and help deploy advanced machine learning models, including ensemble techniques, for customer lifecycle use cases (e.g., prescreen, acquisition, account management, collections, and fraud).
• Collaborate with cross-functional teams to define, develop, and improve predictive models that drive business decisions.
• Work with large datasets, utilizing tools like Python and SQL, to extract, clean, and transform data for modeling purposes.
• Ensure model robustness, interpretability, and scalability within banking environments.
• Strong problem-solving skills with the ability to handle complex datasets and turn them into actionable insights.

Posted 2 months ago

Apply

3.0 - 7.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Data Scientist with experience in churn-prediction projects using machine learning (Random Forest, XGBoost, Logistic Regression). The candidate must be an immediate joiner or have a notice period of 30 days or less.
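A minimal sketch of the churn modelling this listing names, comparing the three algorithms it mentions on a synthetic dataset; nothing here reflects the employer's actual data or pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for a customer churn table (label 1 = churned, imbalanced).
X, y = make_classification(n_samples=4_000, n_features=25,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "xgboost": XGBClassifier(n_estimators=300, eval_metric="logloss"),
}

# Compare the three approaches on cross-validated ROC AUC.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```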

Posted 2 months ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description

Job summary:
Our Firmwide Risk Function is focused on cultivating a stronger, unified culture that embraces a sense of personal accountability for developing the highest corporate standards in governance and controls across the firm. Business priorities are built around the need to strengthen and guard the firm from the many risks we face, financial rigor, risk discipline, fostering a transparent culture and doing the right thing in every situation. We are equally focused on nurturing talent, respecting the diverse experiences that our team of Risk professionals bring and embracing an inclusive environment.

Chase Consumer & Community Banking serves consumers and small businesses with a broad range of financial services, including personal banking, small business banking and lending, mortgages, credit cards, payments, auto finance and investment advice. Consumer & Community Banking Risk Management partners with each CCB sub-line of business to identify, assess, prioritize and remediate risk. Types of risk that occur in consumer businesses include fraud, reputation, operational, credit, market and regulatory, among others.

Join our Model Insights Team, a Center of Excellence within Consumer & Community Banking (CCB) Risk Modeling, committed to tracking the comprehensive health of machine learning models. We are responsible for the sanity of model inputs and score performance tracking for CCB risk decision models. The team collaborates with model developers to identify and recommend potential opportunities for model calibration. We are constantly seeking opportunities to enhance the model performance tracking framework, with the aim of providing a feedback loop to risk strategies. We are seeking candidates who possess extensive knowledge of data science techniques, an appreciation for data combined with domain expertise, and a keen eye for detail and logic. It’s an opportunity to make an impact on model performance monitoring and governance practices for CCB risk models.

Job Responsibilities
• Drive synergy in model performance tracking across different sub-lines of business.
• Enhance the model performance framework to holistically capture model health, providing actionable insights to model users.
• Collaborate with model developers to identify potential opportunities for model calibration and conduct preliminary Root Cause Analysis in case of model performance decay.
• Design and build a robust framework to monitor the quality of model inputs.
• Explore opportunities to drive efficiency in model input and performance tracking through the use of Large Language Models (LLMs).
• Partner with teams across Risk, Technology, Data Governance, and Control to support effective model performance management and insights.
• Deliver regular updates on model health to senior leadership of the risk organization and the first line of defense.

Required Qualifications, Capabilities, And Skills
• Advanced degree in Mathematics, Statistics, Computer Science, Operations Research, Econometrics, Physics, or a related quantitative field.
• Minimum of 3 years of experience in developing and managing predictive risk models in the financial industry.
• Proficiency in programming languages such as Python, PySpark, and SQL, along with familiarity with cloud services like AWS SageMaker and Amazon EMR.
• Deep understanding of advanced machine learning algorithms (e.g., Decision Trees, Random Forest, XGBoost, Neural Networks, Clustering, etc.)
• Strong conceptual understanding of performance metrics used to monitor the health of machine learning models.
• Fundamental understanding of the consumer lending business and risk management practices.
• Experience working with large datasets with a strong ability to analyze, interpret, and derive insights from data.
• Advanced problem-solving and analytical skills, with a keen attention to detail.
• Excellent communication skills, with the ability to convey complex information clearly and effectively to senior management.

Preferred Qualifications, Capabilities, And Skills
• Experience with data wrangling and model building on a distributed Spark computation environment (with stability, scalability and efficiency).
• Proven expertise in designing, building, and deploying production-quality machine learning models.
• Ability to effectively collaborate with multiple stakeholders on projects of strategic importance, ensuring alignment and successful outcomes.
• Basic level of proficiency in Tableau

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
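Model-input monitoring of the kind this team describes is often summarized with the Population Stability Index (PSI). The sketch below computes PSI between a baseline and a current sample of one feature using quantile bins; the data are made up, and the 0.1/0.25 thresholds are a common rule of thumb rather than a universal standard.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    # Bin edges come from the baseline distribution (quantile bins),
    # opened up at the ends so out-of-range current values are still counted.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)

    # Small floor avoids log(0) / division by zero in empty bins.
    eps = 1e-6
    base_frac = np.clip(base_frac, eps, None)
    curr_frac = np.clip(curr_frac, eps, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

rng = np.random.default_rng(3)
baseline_scores = rng.normal(0.0, 1.0, 50_000)   # development-time feature sample
current_scores = rng.normal(0.2, 1.1, 50_000)    # recent production sample

value = psi(baseline_scores, current_scores)
print(f"PSI = {value:.3f}")   # roughly: <0.1 stable, >0.25 large shift (rule of thumb)
```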

Posted 2 months ago

Apply

175.0 years

0 Lacs

Gurugram, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

Function Description:
The ‘Prospect Direct Mail Analytics’ team is part of the Analytics, Investments and Marketing Enablement (AIM) team within Global Commercial Services Marketing, American Express. The AIM team is responsible for targeting, acquiring, engaging, and retaining commercial customers over online and offline channels and delivering world-class analytics, insights and data products for the Global Commercial Services (GCS) business. In this role, the incumbent will lead the Prospects Direct Mail Analytics team within AIM.

Purpose of the Role:
The Analyst will own the end-to-end analytics required for executing Commercial Prospects Direct Mail campaigns and will be responsible for profitably driving growth in commercial acquisition and charge volume through the Direct Mail channel. The Analyst will be challenged with designing and creating world-class prospect marketing analytics solutions by leveraging machine learning and advanced methodologies. The person will be responsible for performing strategic analyses, synthesizing conclusions, and communicating recommendations to partners aimed at driving revenue through acquisitions. The ideal candidate can drive strategic decision making and execute new strategies via advanced analytics, disciplined test & learn, and effective partnership. The position is part of a highly collaborative environment, interacting with and influencing partners across the Global Commercial Services business at American Express.

Responsibilities:
• Drive profitable acquisitions in the Direct Mail channel by meeting ROI / Acquisition / Revenue goals with optimization, experimentation and analytics-driven insights.
• Define, design, create, and implement data science & analytical solutions required throughout the life cycle of a Direct Mail campaign, from lead generation all the way to performance measurement.
• Collaborate with stakeholders within GCS Prospect Marketing, Finance and Investment Optimization on various initiatives including setting goals for the channel, influencing data-driven strategy changes, introducing offer personalization, etc.
• Research and evaluate new commercial data sources, working with external data vendors to improve data quality.
• Create data segmentation & optimization strategies for targeting profitable prospects with the right product/incentive in the Direct Mail channel.
• Translate business problems into Machine Learning problems. Collaborate with Decision Science teams to quantitatively determine the value of ML models, and ensure key insights are leveraged to create the most suitable ML models to solve the business problems.
• Collaborate with ML and Tech teams to manage, guide and build analytical solutions to improve targeting efficiency in the Direct Mail channel.

Minimum Qualifications:
• Bachelor's degree in a quantitative field (e.g. Mathematics, Computer Science, Physics, Engineering, Finance and Economics).
• Demonstrated ability to lead cross-functional teams directly or indirectly to achieve key business outcomes.
• Strong programming skills are required. Experience with big data programming languages (Hive, Pig, Spark), Python (or R or Java). Expertise in, or the ability to pick up, strong SQL skills.
• Strong technical and analytical skills with the ability to apply both quantitative methods and business skills to create insights and drive results, such as A/B testing analysis.
• Strong analytical/conceptual thinking acumen to solve unstructured and complex business problems and articulate key findings to senior leaders/stakeholders in a succinct and concise manner.
• Demonstrated ability to work independently and across a matrix organization, partnering with capabilities, marketing, decision sciences, risk teams and external vendors to deliver solutions at top speed.

Preferred Qualifications:
• Master’s in a quantitative field (e.g. Mathematics, Computer Science, Physics, Engineering, Finance and Economics) or MBA with a quantitative background.
• Strong knowledge of machine learning techniques, including XGBoost, Decision Trees and NLP models.
• Knowledge of commercial data is a plus.

We back our colleagues and their loved ones with benefits and programs that support their holistic well-being. That means we prioritize their physical, financial, and mental health through each stage of life. Benefits include:
• Competitive base salaries
• Bonus incentives
• Support for financial-well-being and retirement
• Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
• Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
• Generous paid parental leave policies (depending on your location)
• Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
• Free and confidential counseling support through our Healthy Minds program
• Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
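The "big data programming" requirement above typically means Spark-style aggregation over campaign data. The sketch below is a generic PySpark example with invented column names and toy rows; it assumes a local Spark installation and has no connection to American Express's actual data or tooling.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prospect-features").getOrCreate()

# Hypothetical prospect-level response data for a direct mail campaign.
rows = [
    ("p1", "offer_a", 1, 120.0),
    ("p2", "offer_a", 0, 0.0),
    ("p3", "offer_b", 1, 340.0),
    ("p4", "offer_b", 0, 0.0),
]
df = spark.createDataFrame(rows, ["prospect_id", "offer", "responded", "charge_volume"])

# Aggregate response rate and average charge volume per offer cell,
# the kind of segmentation summary used to steer targeting decisions.
summary = (
    df.groupBy("offer")
      .agg(F.avg("responded").alias("response_rate"),
           F.avg("charge_volume").alias("avg_charge_volume"))
)
summary.show()
```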

Posted 2 months ago

Apply

5.0 years

0 Lacs

India

Remote

FairMoney is a pioneering mobile banking institution specializing in extending credit to emerging markets. Established in 2017, the company currently operates primarily within Nigeria, and it has secured nearly €50 million in funding from renowned global investors, including Tiger Global, DST, and Flourish Ventures. In alignment with its vision, FairMoney is actively constructing the foremost mobile banking platform and point-of-sale (POS) solution tailored for emerging markets. The journey began with the introduction of a digital microcredit application exclusively available on Android and iOS devices. Today, FairMoney has significantly expanded its range of services, encompassing a comprehensive suite of financial products, such as current accounts, savings accounts, debit cards, and state-of-the-art POS solutions designed to meet the needs of both merchants and agents. FairMoney thrives on its diverse workforce, bringing together talent from over 27 nationalities. This multicultural team drives the company's mission of reshaping financial services for underserved communities. To gain deeper insights into FairMoney's pivotal role in reshaping Africa's financial landscape, we invite you to watch our informative video.

Job Summary:
Your mission is to develop data science-driven algorithms and applications to improve decisions in business processes like risk and debt collection, offering the best-tailored credit services to as many clients as possible.

Requirements
• Strong background in Mathematics / Statistics / Econometrics / Computer Science or a related field
• 5+ years of work experience in analytics, data mining, and predictive data modelling, preferably in the fintech domain
• Being best friends with Python and SQL
• Hands-on experience in handling large volumes of tabular data
• Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to a specific business problem
• Feeling confident working with key machine learning algorithms (GBM, XGBoost, Random Forest, Logistic Regression)
• Being at home building and deploying models around credit risk, debt collection, fraud, and growth
• Track record of designing, executing and interpreting A/B tests in a business environment
• Strong focus on business impact and experience driving it end-to-end using data science applications
• Strong communication skills
• Being passionate about all things data

Our tool stack
Programming language: Python
Production: Python API deployed on Amazon EKS (Docker, Kubernetes, Flask)
ML: Scikit-Learn, LightGBM, XGBoost, shap
ETL: Python, Apache Airflow
Cloud: AWS, GCP
Database: MySQL
DWH: BigQuery, Snowflake
BI: Tableau, Metabase, dbt
Streaming Applications: Flink, Kinesis

Role and Responsibilities
• Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions
• Mine and analyze data from company databases and external data sources to drive optimization and improvement of risk strategies, product development, marketing techniques, and other business decisions
• Assess the effectiveness and accuracy of new data sources and data gathering techniques
• Use predictive modelling to increase and optimize customer experiences, revenue generation, and other business outcomes
• Coordinate with different functional teams to make the best use of developed data science applications
• Develop processes and tools to monitor and analyze model performance and data quality
• Apply advanced statistical and data mining techniques in order to derive patterns from the data
• Own data science projects end-to-end and proactively drive improvements in both data

Benefits
• Paid Time Off (25 days Vacation, Sick & Public Holidays)
• Family Leave (Maternity, Paternity)
• Training & Development budget
• Paid company business trips (not mandatory)
• Remote work

Recruitment Process
1. Screening call with Senior Recruiter
2. Home Test assignment
3. Technical interview
4. Interview with the team and key stakeholders

Posted 2 months ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly motivated and experienced ML Engineer/Data Scientist to join our growing ML/GenAI team. You will play a key role in designing, developing and productionalizing ML applications by evaluating models and training and/or fine-tuning them. You will play a crucial role in developing GenAI-based solutions for our customers. As a senior member of the team, you will take ownership of projects, collaborating with engineers and stakeholders to ensure successful project delivery.

What we're looking for:
• At least 3 years of experience in designing & building AI applications for customers and deploying them into production
• At least 5 years of software engineering experience in building secure, scalable and performant applications for customers
• Experience with document extraction using AI, conversational AI, vision AI, NLP or GenAI
• Design, develop, and operationalize existing ML models by fine-tuning and personalizing them
• Evaluate machine learning models and perform necessary tuning
• Develop prompts that instruct LLMs to generate relevant and accurate responses
• Collaborate with data scientists and engineers to analyze and preprocess datasets for prompt development, including data cleaning, transformation, and augmentation
• Conduct thorough analysis to evaluate LLM responses, and iteratively modify prompts to improve LLM performance
• Hands-on customer experience with RAG solutions or fine-tuning of LLM models
• Build and deploy scalable machine learning pipelines on GCP or any equivalent cloud platform involving data warehouses, machine learning platforms, dashboards or CRM tools
• Experience working with the end-to-end steps including, but not limited to, data cleaning, exploratory data analysis, dealing with outliers, handling imbalances, analyzing data distributions (univariate, bivariate, multivariate), transforming numerical and categorical data into features, feature selection, model selection, model training and deployment
• Proven experience building and deploying machine learning models in production environments for real-life applications
• Good understanding of natural language processing, computer vision or other deep learning techniques
• Expertise in Python, NumPy, Pandas and various ML libraries (e.g., XGBoost, TensorFlow, PyTorch, Scikit-learn, LangChain)
• Familiarity with Google Cloud or any other cloud platform and its machine learning services
• Excellent communication, collaboration, and problem-solving skills

Good to Have
• Google Cloud Certified Professional Machine Learning or TensorFlow Certified Developer certifications or equivalent
• Experience working with one or more public cloud platforms, namely GCP, AWS or Azure
• Experience with Amazon Lex, Google Dialogflow CX or Microsoft Copilot Studio for CCAI agent workflows
• Experience with AutoML and vision techniques
• Master’s degree in statistics, machine learning or related fields

Posted 2 months ago

Apply

0 years

0 Lacs

India

Remote

Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% Data Scientists for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field then this is your chance to collaborate with industry leaders.

What’s in it for you?
• Pay above market standards
• The role is going to be contract based with project timelines from 2-12 months, or freelancing
• Be a part of an Elite Community of professionals who can solve complex AI challenges
• Work location could be: Remote (highly likely), onsite at a client location, or Deccan AI’s office in Hyderabad or Bangalore

Responsibilities:
• Lead design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
• Architect advanced machine learning models (deep learning, RL, ensemble) and apply statistical analysis for business insights.
• Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
• Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
• Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models.

Required Skills:
• Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience with machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
• Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
• Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD) along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
• Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
• Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
• Contributions to open-source projects or published research.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews) you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available project.

Skip the Noise. Focus on Opportunities Built for You!

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Who we are
Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, and we are leveraging state-of-the-art machine learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for Machine Learning Scientists to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for developing budget, tROAS, and SKU recommendations and other machine learning capabilities supporting our ads business. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your engineering and machine learning skills to solve some of our most impactful and intellectually challenging problems and directly impact Wayfair's revenue.

What you'll do
• Design, build, deploy, and refine large-scale machine learning models and algorithmic decision-making systems that solve real-world problems for customers.
• Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped analytical solutions.
• Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
• Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
• Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

What you'll need
• 3+ years of industry experience with a Bachelor's/Master's degree, or a minimum of 1-2 years of industry experience with a PhD, in Computer Science, Mathematics, Statistics, or a related field.
• Proficiency in Python or one other high-level programming language.
• Solid hands-on expertise deploying machine learning solutions into production.
• Strong theoretical understanding of statistical models such as regression and clustering, and machine learning algorithms such as decision trees, neural networks, etc.
• Strong written and verbal communication skills.
• Intellectual curiosity and enthusiasm for continuous learning.

Nice to have
• Experience with the Python machine learning ecosystem (NumPy, Pandas, scikit-learn, XGBoost, etc.) and/or the Apache Spark ecosystem (Spark SQL, MLlib/Spark ML).
• Familiarity with GCP (or AWS, Azure), machine learning model development frameworks, and machine learning orchestration tools (Airflow, Kubeflow, or MLflow).
• Experience in information retrieval, query/intent understanding, search ranking, recommender systems, etc.
• Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
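To make the "nice to have" Python ML ecosystem concrete, here is a small, generic gradient-boosted regression sketch. It is not Wayfair's actual approach to budget or tROAS recommendations; the features and data are synthetic assumptions used only to show the tooling.

```python
# Illustrative sketch: gradient-boosted regression on synthetic campaign-style features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))  # assumed features, e.g. budget, bid, historical CTR
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```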

Posted 2 months ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

• Develop strategies/solutions to solve problems in logical yet creative ways, leveraging state-of-the-art machine learning, deep learning, and Gen AI techniques.
• Technically lead a team of data scientists to produce project deliverables on time and with high quality.
• Identify and address client needs in different domains by analyzing large and complex data sets; processing, cleansing, and verifying the integrity of data; and performing exploratory data analysis (EDA) using state-of-the-art methods.
• Select features and build and optimize classifiers/regressors using machine learning and deep learning techniques.
• Enhance data collection procedures to include information relevant for building analytical systems, and ensure data quality and accuracy.
• Perform ad-hoc analysis and present results clearly to both technical and non-technical stakeholders.
• Create custom reports and presentations with strong data visualization and storytelling skills to effectively communicate analytical conclusions to senior company officials and other stakeholders.
• Expertise in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques.
• Strong programming skills in Python.
• Excellent communication and interpersonal skills, with the ability to present complex analytical concepts to both technical and non-technical stakeholders.

Primary Skills:
- Excellent understanding of, and hands-on experience with, data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference and testing, etc.
- Excellent understanding of, and hands-on experience with, building deep learning models for text and image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits such as NumPy, Pandas, Matplotlib, StatsModels, Scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models in production.
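As a rough illustration of the "deep learning models for text analytics" skill listed above (not the client's architecture), here is a minimal Keras text classifier built from an embedding layer and an LSTM; the vocabulary size, sequence length, and layer widths are arbitrary assumptions.

```python
# Minimal text-classification sketch: Input -> Embedding -> LSTM -> sigmoid, in Keras.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 20000, 100  # assumed sizes, for illustration only

model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),                        # padded token-id sequences
    layers.Embedding(input_dim=vocab_size, output_dim=64),   # learned word embeddings
    layers.LSTM(64),                                          # sequence encoder
    layers.Dense(1, activation="sigmoid"),                    # binary label, e.g. sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, labels, epochs=3) once tokenized data is prepared.
```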

Posted 2 months ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Working as an AI/ML Engineer at Navtech, you will:
• Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
• Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
• Work closely with data engineers to develop scalable and reliable data pipelines.
• Experiment with different algorithms and techniques to improve model performance.
• Monitor and maintain production ML models, including retraining and model drift detection.
• Collaborate with software engineers to integrate ML models into applications and services.
• Document processes, experiments, and decisions for reproducibility and transparency.
• Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
• 2-4 years of hands-on experience building and deploying ML models in real-world applications.
• Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
• Experience with data preprocessing, feature engineering, and model evaluation techniques.
• Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
• Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
• Familiarity with version control (Git) and basic software engineering practices.
• You consistently demonstrate strong verbal and written communication skills as well as strong analytical and problem-solving abilities.
• You should have a master's or bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education throughout in English medium.

We'll REALLY love you if you:
• Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
• Have knowledge of GenAI prompting and hosting of LLMs.
• Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
• Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
• Have exposure to deep learning and neural network architectures.
• Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
• Performance review and appraisal twice a year.
• Competitive pay package with additional bonus and benefits.
• Work with US, UK, and Europe based industry-renowned clients for exponential technical growth.
• Medical insurance cover for self and immediate family.
• Work with a culturally diverse team.

About us: Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US and Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe, and we are also a certified AWS partner.
You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. (ref:hirist.tech)
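The wish list above mentions serving ML models with frameworks such as FastAPI. Purely as an illustration (the endpoint name, request schema, and model file are assumptions, not Navtech's stack), a minimal serving sketch could look like this:

```python
# Minimal model-serving sketch with FastAPI: load a saved scikit-learn model, expose /predict.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed: a trained scikit-learn estimator saved earlier

class Features(BaseModel):
    values: List[float]  # flat feature vector; length must match what the model was trained on

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn serve:app --reload   (assuming this file is saved as serve.py)
```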

Posted 2 months ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Title: Consultant/Senior Consultant
Location: Gurgaon

Job Description: We are hiring a consultant/senior consultant with expertise in SQL, Hive, Python, and Tableau; good exposure to machine learning would be an advantage. You must be able to work independently with multiple stakeholders. For BI tasks, you will design, develop, and maintain Excel and Tableau solutions, transforming raw data into actionable insights, and work independently. You will collaborate with stakeholders, write SQL queries, and utilize Hive and Python for data manipulation; ensure data accuracy; provide technical support; and stay updated with the latest Tableau features. Strong problem-solving skills and attention to detail are required. Prior experience in the payments and fraud domain is preferred.

Responsibilities
• Collaborate with stakeholders to understand their data requirements and translate them into Tableau dashboards and reports.
• Conduct ad-hoc analysis based on business trends and recommend the next action items.
• Develop and maintain Tableau visualizations, interactive dashboards, and reports to provide actionable insights to business users.
• Design and optimize data models and data structures to support efficient data retrieval and analysis.
• Write complex SQL queries to extract, transform, and load data from various data sources into Tableau.
• Utilize Hive and Python for data manipulation, transformation, and automation tasks.
• Conduct thorough testing and debugging of Tableau solutions to ensure data accuracy and dashboard performance.
• Stay updated with the latest Tableau features, techniques, and best practices, and apply them to enhance existing reports and dashboards.
• Collaborate with data engineers and data scientists to ensure seamless integration and alignment of Tableau solutions with the overall data ecosystem.

Requirements
• Tableau development experience.
• Strong proficiency in SQL and Hive.
• Prior experience in SQL for data extraction, manipulation, and analysis.
• Proficiency in Python.
• Understanding of the big data ecosystem.
• Strong problem-solving skills and attention to detail.
• Prior experience in payments and fraud is good to have.
• An understanding of machine learning classification algorithms such as random forest, XGBoost, etc. is an added advantage.
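Since the requirements note that familiarity with classification algorithms such as random forest or XGBoost is an advantage in the payments/fraud domain, here is a generic, illustrative fraud-flagging sketch on synthetic, imbalanced data (the features, class balance, and model settings are assumptions, not the team's actual setup):

```python
# Illustrative fraud-classification sketch: random forest on a synthetic, imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# ~3% positive class to mimic the rarity of fraudulent transactions (assumed ratio).
X, y = make_classification(n_samples=20000, n_features=10, weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```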

Posted 2 months ago

Apply

7.0 - 12.0 years

35 - 40 Lacs

Mumbai

Work from Office

Entity: Accenture Strategy & Consulting
Job location: Mumbai

About S&C - Global Network:
Accenture Global Network - Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition.

WHAT'S IN IT FOR YOU?
The Accenture CFO & EV team under the Data & AI practice has a comprehensive suite of capabilities in Risk, Fraud, Financial Crime, and Finance. Within the risk realm, our focus revolves around model development, model validation, and auditing of models. Additionally, our work extends to ongoing performance evaluation, vigilant monitoring, meticulous governance, and thorough documentation of models.
• Get to work with top financial clients globally.
• Access resources enabling you to utilize cutting-edge technologies, fostering innovation with the world's most recognizable companies.
• Accenture will continually invest in your learning and growth and will support you in expanding your knowledge.
• You'll be part of a diverse and vibrant team collaborating with talented individuals from various backgrounds and disciplines, continually pushing the boundaries of business capabilities and fostering an environment of innovation.

What you would do in this role
Engagement Execution
• Work independently/with minimal supervision in client engagements that may involve model development, validation, governance, strategy, transformation, implementation, and end-to-end delivery of risk solutions for Accenture's clients.
• Manage a workstream of large projects / small projects, with responsibility for the quality of deliverables from junior team members.
• Demonstrated ability to manage day-to-day interactions with client stakeholders.
Practice Enablement
• Guide junior team members.
• Support development of the practice by driving innovations and initiatives.
• Develop thought capital and disseminate information around current and emerging trends in risk.

Qualifications - Who we are looking for
• 7-12 years of relevant risk analytics experience at one or more financial services firms, or in professional services / risk advisory, with significant exposure to one or more of the following areas:
• Development, validation, and audit of: credit risk PD/LGD/EAD models; CCAR/DFAST loss forecasting and revenue forecasting models; IFRS9/CECL loss forecasting models across retail and commercial portfolios.
• Credit acquisition/behavior/collections/recovery modeling and strategies, credit policies, limit management, acquisition fraud, and collections agent matching/channel allocations across retail and commercial portfolios.
• Regulatory capital and economic capital models.
• Liquidity risk - liquidity models, stress testing models, Basel liquidity reporting standards.
• Anti-money laundering - AML scenarios/alerts, network analysis.
• Operational risk - AMA modeling, operational risk reporting.
• Conceptual understanding of Basel/CCAR/DFAST/CECL/IFRS9 and other risk regulations.
• Experience in conceptualizing and creating risk reporting and dashboarding solutions.
• Experience in modeling with statistical techniques such as linear regression, logistic regression, GLM, GBM, XGBoost, CatBoost, neural networks, time series (ARMA/ARIMA), ML interpretability and bias algorithms, etc.
• Programming languages: SAS, R, Python, Spark, Scala, etc.; tools such as Tableau, QlikView, Power BI, SAS VA, etc.
• Strong understanding of the risk function and the ability to apply it in client discussions and project implementation.

Academic Requirements:
• Master's degree in a quantitative discipline (mathematics, statistics, economics, financial engineering, operations research, or a related field) or an MBA from a top-tier university.
• Strong academic credentials and publications, if applicable.
• Industry certifications such as FRM, PRM, or CFA preferred.
• Excellent communication and interpersonal skills.
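As a hedged illustration of the credit-risk modeling listed above (not Accenture's methodology), a probability-of-default (PD) model is often prototyped as a logistic regression. The borrower features and default labels below are synthetic assumptions used only to show the workflow and an AUC-style evaluation.

```python
# Illustrative PD-model sketch: logistic regression on synthetic borrower features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10000
X = np.column_stack([
    rng.normal(650, 50, n),   # assumed feature: credit score
    rng.uniform(0, 1, n),     # assumed feature: utilization ratio
    rng.integers(0, 5, n),    # assumed feature: past delinquencies
])
# Synthetic default flag generated from a known logistic relationship.
logit = -0.01 * (X[:, 0] - 650) + 2.0 * X[:, 1] + 0.5 * X[:, 2] - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pd_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, pd_model.predict_proba(X_te)[:, 1]))
```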

Posted 2 months ago

Apply