
285 XGBoost Jobs - Page 3

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

1.5 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


🚀 AI/ML Solution Engineer Internship (On-site, Ahmedabad)
Start Your Real Tech Career: Build. Deploy. Scale.

Location: Ahmedabad (Office-based Only)
Internship Duration: 8 Months Training + 1.5 Year Post-Training Bond

About the Opportunity:
Are you a recent graduate eager to move beyond online courses and actually build and deploy real AI models? Whatmaction is offering a career-launching opportunity as an AI/ML Solution Engineer Intern: not just an internship, but a full-fledged deep-tech journey. If you're serious about working with production-grade AI, this is where your transformation begins.

What You'll Experience:
🧠 8 Months of Intense Training: like a bootcamp for your brain, with hands-on learning, project-based tasks, and direct mentoring from senior engineers.
⚙️ Real-World AI/ML Projects: build and deploy production-level AI systems, including resume-parsing tools, chatbots, and smart automation systems.
📦 Model Development, From Scratch & Prebuilt: you'll train models on open-source datasets, fine-tune pre-trained models, and learn how to build custom models from scratch, the real way.
🚀 Production Deployment: you'll gain hands-on experience in model deployment, REST API integration, and making models accessible in real-time applications.
Key Skills & Tech Stack:
- Languages: Python (strong foundation required), Bash/Shell scripting (basic)
- AI/ML Libraries: Scikit-learn, SpaCy, TensorFlow / PyTorch, Keras, Transformers (Hugging Face), XGBoost / LightGBM, OpenCV (for CV tasks), Pandas / NumPy, Matplotlib / Seaborn
- Tools & Platforms: Jupyter Notebooks / Google Colab, Git & GitHub, Postman, Docker (for packaging ML apps), MLflow (for model tracking and versioning), Streamlit / FastAPI / Flask (for ML APIs), Firebase / AWS / Render (for model hosting), PostgreSQL / MongoDB (data handling)
- Core Concepts: Supervised / Unsupervised Learning, Data Preprocessing & Feature Engineering, Natural Language Processing (NLP), Model Evaluation & Hyperparameter Tuning, Working with Pre-trained Embeddings, ML Model Deployment (API-first mindset)
- Bonus Skills (Good to Have): Working with OpenAI / Hugging Face APIs, Building AI Chatbots, Basic Backend API skills (FastAPI preferred), Version control with Git, Using LangChain / RAG techniques (optional)

Who Should Apply:
- Fresh graduates (2023–2025) from B.Tech / MCA / BCA / M.Sc IT or equivalent.
- Passionate about AI, ML, and solving real-world problems.
- Willing to commit to 8 months of full-time training and a 1.5-year project bond.
- Looking to build an actual career, not just do tasks.

📩 How to Apply:
Send your updated resume to hr@whatmaction.com with the subject "AI/ML Internship Application – [Your Name]"

🔒 Note: This is an on-site only opportunity. Remote applications will not be considered.

This is your launchpad. Build, deploy, and ship AI products that matter, with us at Whatmaction.
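As a concrete flavor of the core concepts this internship lists (supervised learning, model evaluation), here is a minimal train/evaluate sketch. It uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost (the fit/predict API is nearly identical); the synthetic dataset and parameters are illustrative only.

```python
# Minimal supervised-learning loop: synthetic data, train/test split,
# gradient-boosted trees, held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for an open-source dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=42)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

Swapping in `xgboost.XGBClassifier` keeps the same three lines of fit/score code, which is why the two libraries are often listed together.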

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Greater Kolkata Area

On-site


Join our Team

About This Opportunity
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

What You Will Do
- Develop and deploy machine learning models for applications and techniques including chatbots, XGBoost, random forests, NLP, computer vision, and generative AI.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Write SQL to query and analyze large datasets.
- Use Docker and Kubernetes for containerization and orchestration of applications.
- Apply basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability.
- Preferably, work with Google Cloud Platform (GCP) services for data storage, processing, and deployment.
- Analyse complex problems and translate them into algorithms.
- Build backend REST APIs using Flask or FastAPI.
- Deploy with CI/CD pipelines.
- Handle datasets and data pre-processing through PySpark.
- Write queries against Cassandra and PostgreSQL databases.
- Apply design principles in application development.

The Skills You Bring
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred.
- 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or a related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Bangalore
Req ID: 763993
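The posting above emphasizes SQL for querying and analyzing large datasets. As a minimal, runnable illustration, this sketch uses the stdlib sqlite3 module in place of the PostgreSQL/Cassandra stores mentioned; the table and column names are hypothetical.

```python
# A per-group aggregation of the kind a telecom data scientist runs
# before modeling: average call duration per cell.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calls (cell_id TEXT, duration_s REAL)")
conn.executemany(
    "INSERT INTO calls VALUES (?, ?)",
    [("A", 120.0), ("A", 60.0), ("B", 300.0)],
)

rows = conn.execute(
    "SELECT cell_id, AVG(duration_s) FROM calls GROUP BY cell_id ORDER BY cell_id"
).fetchall()
print(rows)  # [('A', 90.0), ('B', 300.0)]
```

The same query text works against PostgreSQL via any DB-API driver; only the connection line changes.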

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Join our Team

About This Opportunity
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

What You Will Do
- Develop and deploy machine learning models for applications and techniques including chatbots, XGBoost, random forests, NLP, computer vision, and generative AI.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Write SQL to query and analyze large datasets.
- Use Docker and Kubernetes for containerization and orchestration of applications.
- Apply basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability.
- Preferably, work with Google Cloud Platform (GCP) services for data storage, processing, and deployment.
- Analyse complex problems and translate them into algorithms.
- Build backend REST APIs using Flask or FastAPI.
- Deploy with CI/CD pipelines.
- Handle datasets and data pre-processing through PySpark.
- Write queries against Cassandra and PostgreSQL databases.
- Apply design principles in application development.

The Skills You Bring
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred.
- 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or a related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Bangalore
Req ID: 763993

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Dear Associates,

Greetings from TATA Consultancy Services!! Thank you for expressing your interest in exploring a career possibility with the TCS family.

Hiring for: Python AI/ML, MLOps
Must have: Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, Python, SQL, Snowflake
Experience: 5+ yrs
Location: Mumbai / Pune

If interested, kindly fill in the details below and send your resume to nitu.sadhukhan@tcs.com.
Note: Only eligible candidates with relevant experience will be contacted further.

Name:
Contact No:
Email id:
Current Location:
Preferred Location:
Highest Qualification (part time / correspondence is not eligible):
Year of Passing (Highest Qualification):
Total Experience:
Relevant Experience:
Current Organization:
Notice Period:
Current CTC:
Expected CTC:
PAN Number:
Gap in years, if any (Education / Career):
Updated CV attached (Yes / No)?
If attended any interview with TCS in the last 6 months:
Available for walk-in drive on 14th June _Pune:

Thanks & Regards,
Nitu Sadhukhan
Talent Acquisition Group
Tata Consultancy Services
Let's Connect: linkedin.com/in/nitu-sadhukhan-16a580179
Nitu.sadhukhan@tcs.com

Posted 1 week ago

Apply

5.0 years

6 - 15 Lacs

Thrissur

On-site

Senior AI Engineer
Location: Infopark, Thrissur, Kerala
Employment Type: Full-Time
Experience Required: Minimum 5 years providing AI solutions, including expertise in ML/DL projects

About Us
JK Lucent is a growing IT services provider headquartered in Melbourne, Australia, with an operations centre at Infopark, Thrissur, Kerala. We specialize in software development, software testing, game development, RPA, data analytics, and AI solutions. At JK Lucent, we are driven by innovation and committed to delivering cutting-edge technology services that solve real-world problems and drive digital transformation.

About the Role
We are looking for a highly skilled and experienced Senior AI Engineer to lead the development and deployment of advanced AI systems. This role requires deep expertise in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs). The successful candidate will work on complex AI initiatives, contribute to production-ready systems, and mentor junior engineers. A strong command of professional English and the ability to communicate technical concepts clearly are essential.

Roles and Responsibilities
- Design and develop scalable AI and ML models for real-world applications.
- Build, fine-tune, and implement Large Language Models (LLMs) for use cases such as chatbots, summarization, and document intelligence.
- Work with deep learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
- Collaborate with cross-functional teams to translate business problems into AI solutions, with necessary visualizations using tools like Tableau or Power BI.
- Deploy models to production environments and implement monitoring and model-retraining pipelines.
- Stay up to date with the latest research and trends in AI, especially in LLMs and generative models.
- Guide and mentor junior AI engineers, reviewing code and providing technical leadership.
- Contribute to technical documentation, architecture design, and solution strategies.
- Ensure models are developed and used ethically and comply with data privacy standards.

Requirements
- Minimum 5 years of experience in AI/ML development, with hands-on expertise in model design, development, and deployment.
- Strong experience working with LLMs and generative AI tools such as Hugging Face Hub, LangChain, Haystack, LLaMA, GPT, BERT, and T5.
- Proficiency in Python and ML/DL libraries such as TensorFlow, PyTorch, XGBoost, scikit-learn, and Hugging Face Transformers.
- Solid understanding of mathematics, statistics, and applied data science principles.
- Experience deploying models using Docker, FastAPI, MLflow, or similar tools.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their AI/ML services.
- Demonstrated experience working on end-to-end AI solutions in production environments.
- Excellent English communication skills (verbal and written) and the ability to present technical topics.
- Strong leadership skills and experience mentoring junior developers or leading small technical teams.
- Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline.

Job Type: Full-time
Pay: ₹600,000.00 - ₹1,500,000.00 per year
Schedule: Day shift, Monday to Friday

Application Question(s):
1. Are you able to commute daily to Infopark, Koratty, Thrissur? (Yes/No)
2. How many years of total IT experience do you have? (Numeric)
3. How many years of experience do you have in AI/ML development? (Numeric)
4. How many years of experience do you have working with Large Language Models (LLMs)? (Numeric)
5. Are you proficient in Python? (Yes/No)
6. Have you used frameworks like TensorFlow, PyTorch, or Hugging Face? (Yes/No)
7. Have you deployed AI/ML models to production environments? (Yes/No)
8. Have you worked with cloud platforms like AWS, Azure, or GCP? (Yes/No)
9. Do you have professional-level proficiency in English? (Yes/No)
10. What is your current notice period in days? (Numeric)
11. What is your expected salary in LPA? (Numeric)
12. What is your current or last drawn salary in LPA? (Numeric)

Work Location: In person
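Model deployment, which this role lists alongside Docker, FastAPI, and MLflow, usually starts with serializing a trained model so a service can load it at startup. A minimal sketch using the stdlib pickle module and a scikit-learn model; the file path and model choice are illustrative, not from the posting.

```python
# Train a model, serialize it, reload it, and confirm the round trip
# reproduces the original predictions exactly.
import os
import pickle
import tempfile

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression(max_iter=500).fit(X, y)

path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)          # what a training job would write

with open(path, "rb") as f:
    restored = pickle.load(f)      # what an API server would load

same = bool((model.predict(X) == restored.predict(X)).all())
print("round-trip ok:", same)
```

In practice joblib or MLflow's model registry replaces raw pickle, but the save/load contract is the same.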

Posted 1 week ago

Apply

6.0 years

5 - 6 Lacs

Cochin

On-site

Joining Gadgeon offers a dynamic and rewarding career experience that fosters both personal and professional growth. Our collaborative culture encourages innovation, empowering team members to contribute their ideas and expertise to cutting-edge projects.

APPLIED AI SCIENTIST
The Applied AI Scientist will lead the design, tuning, and evaluation of GenAI solutions. This role focuses on creating high-quality, business-relevant AI models, including RAG pipelines, LLM integrations, and prompt optimization strategies.

Key Duties / Responsibilities
- Design, fine-tune, and evaluate large language model (LLM) pipelines for internal tools and customer-facing use cases.
- Lead prompt engineering experiments to improve output quality and reduce hallucination.
- Establish model evaluation frameworks using tools like Promptfoo, TruLens, and LangSmith.
- Collaborate with enablement and systems engineers to integrate AI models into reusable components.
- Contribute to innovation showcases and internal learning sessions.
- Design, develop, and evaluate traditional ML models (e.g., classification, regression, clustering) for use cases such as resume matching, anomaly detection, or recommendation systems.
- Perform feature engineering, preprocessing, and model training on structured datasets using tools like scikit-learn, XGBoost, or TensorFlow.
- Collaborate with data and engineering teams to operationalize ML models via APIs or batch workflows.

Leadership Skills
- Strong analytical thinking with the ability to guide AI solution design.
- Able to mentor engineers on GenAI concepts and best practices.
- Collaboration across engineering and innovation teams.

Required Technical Skills
- Proficiency with Hugging Face Transformers, LangChain, LlamaIndex, and RAG architectures.
- Experience with Python, PyTorch, the OpenAI API, and model evaluation tools.
- Familiarity with vector databases (e.g., FAISS, Weaviate, Pinecone, or an equivalent vector store).
- Prompt evaluation using LangSmith, Promptfoo, or similar frameworks.
Qualification
- Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
- Minimum 6 years of industry experience, with at least 2 years hands-on with GenAI applications in a production or product-context environment.

Experience: 6+ years
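The RAG pipelines this role centers on begin with a retrieval step: score stored documents against a query and pass the best match to the LLM. A library-free sketch of that step using TF-IDF from scikit-learn instead of a vector database like FAISS or Pinecone; the documents and query are invented for illustration.

```python
# Minimal RAG-style retrieval: vectorize documents, score a query
# by cosine similarity, return the best-matching document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Invoices are processed within five business days.",
    "The chatbot escalates unresolved tickets to a human agent.",
    "Model retraining runs every Sunday night.",
]
query = "When does model retraining happen?"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)
scores = cosine_similarity(vec.transform([query]), doc_matrix)[0]

best = docs[scores.argmax()]
print("retrieved:", best)
```

A production pipeline swaps TF-IDF for learned embeddings and an approximate-nearest-neighbor index, but the score-then-select shape is identical.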

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Greater Kolkata Area

On-site


About Ericsson
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. Our innovative solutions empower individuals, businesses, and societies to explore their full potential in the Networked Society. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

Key Responsibilities
- Develop and deploy machine learning models for applications and techniques including chatbots, XGBoost, random forests, NLP, computer vision, and generative AI.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Write SQL to query and analyze large datasets.
- Use Docker and Kubernetes for containerization and orchestration of applications.
- Apply basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability.
- Preferably, work with Google Cloud Platform (GCP) services for data storage, processing, and deployment.

Qualifications
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field. A Master's degree or PhD is preferred.
- 8-12 years of experience in data science and machine learning roles, preferably within the telecommunications or a related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Noida
Req ID: 767292

Posted 1 week ago

Apply

1.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


This is NewCold
NewCold is a service provider in cold chain logistics with a focus on the development and operation of large, highly automated cold stores. NewCold strives to be crucial in the cold chain of leading food companies by offering advanced logistic services worldwide. NewCold is one of the fastest-growing companies (over 2,000 employees) in cold chain logistics and is expanding its teams to support this growth. It uses the latest technology that empowers people to handle food responsibly and guarantee food safety in a sustainable way. NewCold challenges the industry, believes in long-term partnerships, and delivers solid investment opportunities that enable next-generation logistic solutions. NewCold has market-leading in-house expertise in designing, engineering, developing, and operating state-of-the-art automated cold stores: the result of successfully developing and operating over 15 automated warehouses across three continents. With the prospect of many new construction projects around the world in the very near future, this vacancy offers an interesting opportunity to join an internationally growing and ambitious organization.

Job Title: AI Associate - Machine Learning
Location: Bengaluru
Experience: 1-2 years
Compensation: Up to 15 Lakhs PA

Position: AI Associate – Machine Learning
Join our growing AI team as an AI Associate (ML) and build real-world machine learning solutions that directly impact business performance. This is an exciting opportunity to work with experienced professionals, apply your skills to live data problems, and grow your career in a fast-paced, collaborative environment.

Your Role:
You'll help design, train, and deploy ML models that power applications in supply chain, logistics, finance, and operations. From predicting delivery times to detecting anomalies in large datasets, your work will drive smarter decision-making.
What You'll Do:
- Build ML models for forecasting, classification, clustering, and anomaly detection using real business data.
- Work on the full ML lifecycle: data prep, feature engineering, model selection, evaluation, and deployment.
- Collaborate with cross-functional teams (engineering, data, operations) to understand business needs and build ML-driven solutions.
- Deploy and monitor models in production environments using APIs and MLOps tools.
- Document experiments and contribute to reusable ML components and workflows.

What We're Looking For:
- B.Tech / M.Tech in Computer Science, Engineering, or related fields.
- 1-2 years of experience applying machine learning in real-world scenarios.
- Strong programming skills in Python and familiarity with ML libraries (scikit-learn, XGBoost, LightGBM, PyTorch, etc.).
- Hands-on experience working with structured and semi-structured data.
- Familiarity with cloud platforms (Azure preferred) and tools for ML deployment.
- Bonus: Experience with time series, anomaly detection, or working in supply chain/logistics.

Why Join Us?
- Be part of a growing AI team solving real industry challenges.
- Work on high-impact projects with supportive mentors and a strong learning culture.
- Gain experience in production ML pipelines and cloud deployment at scale.
- Access opportunities to grow in computer vision, LLMs, or advanced MLOps.
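Anomaly detection on logistics data, which this role calls out, can be sketched with scikit-learn's IsolationForest. The numbers below are synthetic stand-ins for warehouse timing data; the contamination rate is an illustrative choice, not from the posting.

```python
# Flag outliers in simulated minutes-to-pick data: 200 normal samples
# around 45 minutes, plus two obvious anomalies appended at the end.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal = rng.normal(loc=45, scale=5, size=(200, 1))
outliers = np.array([[150.0], [2.0]])
times = np.vstack([normal, outliers])

clf = IsolationForest(contamination=0.02, random_state=7).fit(times)
labels = clf.predict(times)  # -1 = anomaly, 1 = normal

print("flagged last two:", labels[-2:])
```

The same pattern scales to multivariate features (route, carrier, hour of day); only the input matrix changes.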

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Senior AI Engineer
Relevant Experience: 5 - 8 years
Work Location: Hyderabad
Working Days: 5 days
Notice Period: Immediate, or 15 days maximum

Job Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with expertise in Machine Learning (ML), Generative AI (GenAI), and Deep Learning (DL).

Mandatory Skills
- 5+ years of work experience writing code in Python
- Experience using various Python libraries like Pandas and NumPy
- Experience writing good-quality Python code and code refactoring techniques (e.g., IDEs – PyCharm, Visual Studio Code; libraries – Pylint, pycodestyle, pydocstyle, Black)
- Deep understanding of data structures and algorithms, and excellent problem-solving skills
- Experience in Python, Exploratory Data Analysis (EDA), feature engineering, and data visualisation
- Machine learning libraries like Scikit-learn and XGBoost
- Experience in CV, NLP, or time series
- Experience building models for ML tasks (regression, classification)
- Experience with LLMs, LLM fine-tuning, chatbots, RAG-pipeline chatbots, LLM solutions, multi-modal LLM solutions, GPT, prompts, prompt engineering, tokens, context windows, attention mechanisms, and embeddings
- Experience in model training and serving on any of the cloud environments (AWS, GCP, Azure)
- Experience in distributed training of models on Nvidia GPUs
- Familiarity with Dockerizing models and creating model endpoints (REST or gRPC)
- Strong working knowledge of source code control tools such as Git and Bitbucket
- Prior experience designing, developing, and maintaining a machine learning solution through its life cycle is highly advantageous
- Strong drive to learn and master new technologies and techniques
- Strong communication and collaboration skills
- Good attitude and self-motivated

Work environment: we have an environment to create an impact on the client's business and transform innovative ideas into reality.
Even our junior engineers get the opportunity to work on different product features in complex domains. Open communication, flat hierarchy, plenty of individual responsibility.
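The EDA and feature-engineering skills listed above are easiest to show in Pandas. A minimal sketch; the column names, dates, and reference date are invented for illustration.

```python
# Derive two features from raw columns: account age in days and a
# binary activity flag, a typical feature-engineering step before modeling.
import pandas as pd

df = pd.DataFrame({
    "signup_date": pd.to_datetime(["2024-01-01", "2024-03-15", "2024-06-30"]),
    "purchases": [3, 0, 7],
})

ref = pd.Timestamp("2024-12-31")
df["account_age_days"] = (ref - df["signup_date"]).dt.days
df["active"] = (df["purchases"] > 0).astype(int)

print(df[["account_age_days", "active"]])
```

Each derived column is vectorized, so the same two lines work unchanged on millions of rows.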

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About Us:
Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena's vision is to help students become the best version of themselves. Athena's transformative, holistic life coaching program embraces both depth and breadth, the sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others.

Learn more about Athena: https://www.athenaeducation.co.in/article.aspx

Role Overview
We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You'll help scholars explore the frontiers of AI, from machine learning models to generative AI systems, while coaching them in best practices and applied engineering.

Key Responsibilities:
- Guide scholars through the full AI/ML development cycle, from problem definition, data exploration, and model selection to evaluation and deployment.
- Teach and assist in building:
  - Supervised and unsupervised machine learning models.
  - Deep learning networks (CNNs, RNNs, Transformers).
  - NLP tasks such as classification, summarization, and Q&A systems.
- Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude.
- Teach the principles of few-shot, zero-shot, and chain-of-thought prompting.
- Experiment with fine-tuning and embeddings in LLM applications.
- Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows.
- Conduct internal training and code reviews, ensuring technical rigor in projects.
- Stay updated with the latest research, frameworks, and tools in the AI ecosystem.

Technical Requirements:
- Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy.
- Experience with deep learning frameworks: TensorFlow, PyTorch, Keras.
- Strong command of machine learning theory, including:
  - Bias-variance tradeoff, regularization, and model tuning.
  - Cross-validation, hyperparameter optimization, and ensemble techniques.
- Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).

Advanced AI & NLP
- Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA).
- Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face.
- Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG).
- Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).

Prompt Engineering & GenAI
- Proficiency in crafting effective prompts using:
  - Instruction tuning
  - Role-playing and system prompts
  - Prompt chaining tools like LangChain or LlamaIndex
- Understanding of AI safety, bias mitigation, and interpretability.

Required Qualifications:
- Bachelor's degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field.
- 2-5 years of relevant experience in ML/AI roles.
- Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
- Passion for education, mentoring, and working with high school scholars.
- Excellent communication skills, with the ability to convey complex concepts to a diverse audience.

Preferred Qualifications:
- Prior experience in student mentorship, teaching, or edtech.
- Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
- Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
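Cross-validation and hyperparameter optimization, which this role asks mentors to teach, fit in a few lines of scikit-learn. A minimal GridSearchCV sketch; the dataset and parameter grid are chosen only for illustration.

```python
# 5-fold cross-validated grid search over tree depth on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3, 4]},
    cv=5,  # stratified 5-fold cross-validation for classifiers
)
grid.fit(X, y)

print("best depth:", grid.best_params_["max_depth"])
print("cv accuracy: %.2f" % grid.best_score_)
```

`grid.best_estimator_` is refit on the full dataset, so the tuned model is ready to use directly after the search.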

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote


About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you're not just taking on one role: you're signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity
Join our network as a Data Scientist and help fast-growing startups transform data into actionable insights, predictive models, and intelligent decision-making tools. You'll work on real-world data challenges across domains like marketing, finance, healthtech, and AI, with full flexibility to work remotely and choose the engagements that best fit your goals.

Role Overview
As a Data Scientist, you will:
- Extract insights from data: analyze complex datasets to uncover trends, patterns, and opportunities.
- Build predictive models: develop, validate, and deploy machine learning models that solve core business problems.
- Communicate clearly: work with cross-functional teams to present findings and deliver data-driven recommendations.

What You'll Do
Analytics & Modeling:
- Explore, clean, and analyze structured and unstructured data using statistical and ML techniques.
- Build predictive and classification models using tools like scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Conduct A/B testing, customer segmentation, forecasting, and anomaly detection.

Data Storytelling & Collaboration:
- Present complex findings in a clear, actionable way using data visualizations (e.g., Tableau, Power BI, Matplotlib).
- Work with product, marketing, and engineering teams to integrate models into applications or workflows.

Technical Requirements & Skills
- Experience: 3+ years in data science, analytics, or a related field.
- Programming: proficient in Python (preferred), R, and SQL.
- ML frameworks: experience with scikit-learn, TensorFlow, PyTorch, or similar tools.
Data Handling: Strong understanding of data preprocessing, feature engineering, and model evaluation.
Visualization: Familiar with visualization tools like Matplotlib, Seaborn, Plotly, Tableau, or Power BI.
Bonus: Experience working with large datasets, cloud platforms (AWS/GCP), or MLOps practices.

What We're Looking For
A data-driven thinker who can go beyond numbers to tell meaningful stories.
A freelancer who enjoys solving real business problems using machine learning and advanced analytics.
A strong communicator with the ability to simplify complex models for stakeholders.

Why Join Us?
Immediate Impact: Work on projects that directly influence product, growth, and strategy.
Remote & Flexible: Choose your working hours and project commitments.
Future Opportunities: BeGig will continue matching you with data science roles aligned to your strengths.
Dynamic Network: Collaborate with startups building data-first, insight-driven products.

Ready to turn data into decisions? Apply now to become a key Data Scientist for our client and a valued member of the BeGig network!
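The modeling workflow this role describes (preprocess, train a tree-based classifier, evaluate on held-out data) can be sketched in a few lines of scikit-learn. This is an illustration only: the dataset is synthetic, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost library named in the posting.

```python
# Illustrative only: a minimal train/evaluate loop of the kind described above.
# GradientBoostingClassifier is a stand-in for XGBoost; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for real business data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Model evaluation on the hold-out split, as the role requirements describe
acc = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
```

In practice the same skeleton extends to the segmentation, forecasting, and anomaly-detection work listed above by swapping the estimator and the metric.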

Posted 1 week ago


3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Do you want to work hands-on with LangChain, CrewAI, and real-world AI agents solving complex problems at scale? If you're ready to level up your career in the world of autonomous AI and passionate about building the future of AI, we want to hear from you.

Title: AI Agent Developer - Python (LangChain + CrewAI)
Location: Noida, Sec-62 (On-site)
Initial Work Location: Patel Nagar, New Delhi (2–3 months)
Experience Required: 3+ years
Employment Type: Full-time

🚀 About the Role
We're looking for an experienced AI Agent Developer who thrives at the intersection of LLMs, agentic workflows, and autonomous systems. If you're excited by building intelligent, tool-using AI agents that interact, reason, and solve real-world problems, this is your place. You'll work on cutting-edge GenAI products, leveraging CrewAI, LangChain, and vector databases to build high-performance agent frameworks and end-to-end RAG pipelines.

🧠 What You'll Do
Architect and build modular, asynchronous Python applications following clean code principles.
Define and orchestrate agents using CrewAI – set up tasks, memory, roles, and coordination logic.
Develop custom chains and tools using LangChain (AgentExecutor, LLMChain, Memory, Structured Tools).
Design advanced prompts using Few-Shot, ReAct, and Chain-of-Thought techniques.
Integrate with LLM APIs like OpenAI, Anthropic, Mistral, and Hugging Face.
Build scalable RAG pipelines using vector stores (FAISS, Chroma, Pinecone, etc.).
Add tool-use capabilities: web scraping, API integration, PDF parsing, file management.
Design memory systems for persistent, context-aware agent behavior.
Optimize reasoning logic using DSA and algorithmic problem solving.
Package and deploy your work using Docker, Git, and Pipenv/Poetry.

🛠️ Required Skills
Strong in Python 3.x – async, modular structure, type hinting.
LangChain (LLMChain, AgentExecutor, Tools, Memory) – must-have.
CrewAI / LangGraph / AutoGen – hands-on experience required.
Prompt Engineering – ReAct, Few-Shot, Chain-of-Thought strategies.
Integration with LLM APIs – OpenAI, Hugging Face, Anthropic, Mistral.
Vector DBs – FAISS, Chroma, Pinecone, Weaviate.
Building end-to-end RAG pipelines using LangChain + a vector DB.
Agent Memory: BufferMemory, ConversationBufferMemory, VectorStoreMemory.
Async programming with asyncio and LangChain hooks.
Strong grasp of DSA/algorithms to optimize agent behavior.

🌟 Bonus Skills
Exposure to ML libraries: scikit-learn, XGBoost, basic TensorFlow.
Understanding of NLP foundations: embeddings, tokenization, similarity scoring.
Familiarity with DevOps: Docker, GitHub Actions, Pipenv/Poetry.

🎯 Why Join Us?
Work on real AI agents that execute autonomous reasoning.
Collaborate with a fast-paced, highly motivated AI team building from scratch.
Directly influence architecture, UX, and product impact.
Stay ahead by building on top of CrewAI, LangChain, LLM APIs, and the latest vector DB tools.
Opportunity to lead initiatives and ship innovative GenAI solutions.
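At the heart of every RAG pipeline mentioned above is a retrieval step: embed the documents, embed the query, and return the nearest documents by cosine similarity. The standard-library sketch below is illustrative only; it hand-rolls bag-of-words vectors in place of the learned embeddings and vector stores (FAISS, Chroma, Pinecone) a real pipeline would use, and the example documents are invented.

```python
# Illustrative sketch of the retrieval step of a RAG pipeline, stdlib only.
# Real systems use learned embeddings and a vector store; here we fake the
# "embedding" with a sparse bag-of-words term-frequency vector.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': term frequencies of lowercased whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "refund policy for damaged items",
    "shipping times for international orders",
    "how to reset your account password",
]
top = retrieve("when will my international order ship", docs)
print(top)  # the shipping document should rank first
```

The retrieved passages would then be stuffed into the LLM prompt, which is where the Few-Shot / ReAct prompt-engineering skills listed above come in.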

Posted 1 week ago


5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Function: Data Science
Job: Machine Learning Engineer
Position: Senior
Immediate manager (N+1 Job title and name): AI Manager
Additional reporting line to: Global VP Engineering
Position location: Mumbai, Pune, Bangalore, Hyderabad, Noida

1. Purpose of the Job – A simple statement to identify clearly the objective of the job.
The Senior Machine Learning Engineer is responsible for designing, implementing, and deploying scalable and efficient machine learning algorithms to solve complex business problems. The Machine Learning Engineer is also responsible for the lifecycle of models once deployed in production environments, monitoring their performance and evolution. The position is highly technical and requires an ability to collaborate with multiple technical and non-technical profiles (data scientists, data engineers, data analysts, product owners, business experts), and to actively take part in a large data science community.

2. Organization chart – Indicate schematically the position of the job within the organization. It is sufficient to indicate one hierarchical level above (including possible functional boss) and, if applicable, one below the position. In the horizontal direction, the other jobs reporting to the same superior should be indicated.
A Machine Learning Engineer reports to the AI Manager, who reports to the Global VP Engineering.

3. Key Responsibilities and Expected Deliverables – This details what actually needs to be done; the duties and expected outcomes.
Managing the lifecycle of machine learning models
Develop and implement machine learning models to solve complex business problems.
Ensure that models are accurate, efficient, reliable, and scalable.
Deploy machine learning models to production environments, ensuring that models are integrated with software systems.
Monitor machine learning models in production, ensuring that models are performing as expected and that any errors or performance issues are identified and resolved quickly.
Maintain machine learning models over time. This includes updating models as new data becomes available, retraining models to improve performance, and retiring models that are no longer effective.
Develop and implement policies and procedures for ensuring the ethical and responsible use of machine learning models. This includes addressing issues related to bias, fairness, transparency, and accountability.

Development of data science assets
Identify cross-use-case data science needs that could be mutualised in a reusable piece of code.
Design, contribute to, and participate in the implementation of Python libraries answering transversal data science needs that can be reused across several projects.
Maintain existing data science assets (timeseries forecasting asset, model monitoring asset).
Create documentation and a knowledge base on data science assets to ensure a good understanding among users.
Participate in asset demos to showcase new features to users.

Be an active member of the Sodexo Data Science Community
Participate in the definition and maintenance of engineering standards and good practices around machine learning.
Participate in data science team meetings and regularly share knowledge, ask questions, and learn from others.
Mentor and guide junior machine learning engineers and data scientists.
Participate in relevant internal or external conferences and meetups.

Continuous Improvements
Stay up to date with the latest developments in the field: read research papers, attend conferences, and participate in trainings to expand your knowledge and skills.
Identify and evaluate new technologies and tools that can improve the efficiency and effectiveness of machine learning projects.
Propose and implement optimizations for current machine learning workflows and systems. Proactively identify areas of improvement within the pipelines.
Make sure that created code is compliant with our set of engineering standards.
Collaboration with other data experts (Data Engineers, Platform Engineers, and Data Analysts)
Participate in pull request reviews coming from other team members. Ask for review and comments when submitting your own work.
Actively participate in the day-to-day life of the project (Agile rituals), the data science team (DS meeting), and the rest of the Global Engineering team.

4. Education & Experience – Indicate the skills, knowledge and experience that the job holder should require to conduct the role effectively
Engineering Master's degree or PhD in Data Science, Statistics, Mathematics, or related fields
5+ years' experience in a Data Scientist / Machine Learning Engineer role in large corporate organizations
Experience of working with ML models in a cloud ecosystem

Statistics & Machine Learning
Statistics: Strong understanding of statistical analysis and modelling techniques (e.g., regression analysis, hypothesis testing, time series analysis)
Classical ML: Very strong knowledge of classical ML algorithms for regression & classification, supervised and unsupervised machine learning, both theoretical and practical (e.g., using scikit-learn, XGBoost)
ML niche: Expertise in at least one of the following ML specialisations: Timeseries Forecasting / Natural Language Processing / Computer Vision
Deep Learning: Good knowledge of Deep Learning fundamentals (CNN, RNN, transformer architecture, attention mechanism, …) and one of the deep learning frameworks (PyTorch, TensorFlow, Keras)
Generative AI: Good understanding of Generative AI specificities; previous experience working with Large Language Models is a plus (e.g., with OpenAI, LangChain)

MLOps
Model strategy: Expertise in designing, implementing, and testing machine learning strategies.
Model integration: Very strong skills in integrating a machine learning algorithm into a data science application in production.
Model performance: Deep understanding of model performance evaluation metrics and existing libraries (e.g., scikit-learn, evidently)
Model deployment: Experience in deploying and managing machine learning models in production using a cloud platform, model serving frameworks, or containerization.
Model monitoring: Experience with model performance monitoring tools is a plus (Grafana, Prometheus)

Software Engineering
Python: Very strong coding skills in Python, including modularity, OOP, and data & config manipulation frameworks (e.g., pandas, pydantic)
Python ecosystem: Strong knowledge of tooling in the Python ecosystem, such as dependency management (venv, poetry), documentation frameworks (e.g., sphinx, mkdocs, jupyter-book), and testing frameworks (unittest, pytest)
Software engineering practices: Experience in putting in place good software engineering practices such as design patterns, testing (unit, integration), clean code, code formatting, etc.
Debugging: Ability to troubleshoot and debug issues within machine learning pipelines

Data Science Experimentation and Analytics
Data Visualization: Knowledge of data visualization tools such as plotly, seaborn, matplotlib, etc. to visualise, interpret, and communicate the results of machine learning models to stakeholders. Basic knowledge of Power BI is a plus
Data Cleaning: Experience with data cleaning and preprocessing techniques such as feature scaling, dimensionality reduction, and outlier detection (e.g., with pandas, scikit-learn).
Data Science Experiments: Understanding of experimental design and A/B testing methodologies

Data Processing
Databricks/Spark: Basic knowledge of PySpark for big data processing
Databases: Basic knowledge of SQL to query data in internal systems
Data Formats: Familiarity with different data storage formats such as Parquet and Delta

DevOps
Azure DevOps: Experience using a DevOps platform such as Azure DevOps (Boards, Repositories, Pipelines)
Git: Experience working with code versioning (git), branch strategies, and collaborative work with pull requests. Proficient with the most basic git commands.
CI/CD: Experience in implementing/maintaining pipelines for continuous integration (including execution of the testing strategy) and continuous deployment is preferable.

Cloud Platform
Azure Cloud: Previous experience with services like Azure Machine Learning and/or Azure Databricks is preferable.

Soft skills
Strong analytical and problem-solving skills, with attention to detail
Excellent verbal and written communication and pedagogical skills with technical and non-technical teams
Excellent teamwork and collaboration skills
Adaptability and reactivity to new technologies, tools, and techniques
Fluent in English

5. Competencies – Indicate which of the Sodexo core competencies and any professional competencies that the role requires
Communication & Collaboration
Adaptability & Agility
Analytical & technical skills
Innovation & Change
Rigorous Problem Solving & Troubleshooting
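Model monitoring of the kind this role describes (watching a deployed model's inputs drift away from what it was trained on) is often implemented with a Population Stability Index (PSI) check. The sketch below is a minimal stdlib-only illustration; the bin edges, sample data, and the implied alert thresholds are invented for the example, not Sodexo's actual tooling.

```python
# Minimal sketch of drift monitoring via the Population Stability Index (PSI),
# one common check in the model-monitoring work described above.
# Bins and data are illustrative assumptions; stdlib only.
import math

def psi(expected: list[float], actual: list[float], bins: list[float]) -> float:
    """PSI between a training-time feature sample and a production sample."""
    def proportions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(values), 1)
        # Small floor avoids log(0) for empty bins
        return [max(c / total, 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

bins = [0.0, 0.25, 0.5, 0.75, 1.0001]
train_sample = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]    # training-time distribution
prod_stable = [0.15, 0.22, 0.35, 0.45, 0.55, 0.65, 0.72, 0.85]
prod_drifted = [0.9, 0.91, 0.92, 0.95, 0.96, 0.97, 0.98, 0.99]

print(psi(train_sample, prod_stable, bins))   # low: no action needed
print(psi(train_sample, prod_drifted, bins))  # high: investigate drift
```

In production this check would run per feature on a schedule, with the result pushed to the monitoring stack (e.g., Grafana/Prometheus, as named above) rather than printed.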

Posted 1 week ago


5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Redefining ads, e-commerce, and the creator economy with AI: join the founding team of Renokon to make a positive impact on billions of lives.

Role: Senior AI/ML Engineer
Location: Kolkata (Onsite)
CTC: ₹12 LPA – ₹22 LPA (Team Leader in 5 years with ₹50 LPA+ CTC)
CTC = 60% cash + 40% equity, structured for exponential upside as Renokon redefines AI-driven commerce. Equity vests over 4 years.

It's a once-in-a-lifetime opportunity to join Renokon's founding team. We are assembling a team of 7 world-class engineers to scale Renokon at lightning speed.
Rupayan Das – Founder and CEO.

Renokon is rewriting the rules of e-commerce and the creator economy with AI. We're fixing the broken link between attention and action: seamlessly integrating an AI-powered, personalized, voice-first sales agent, immersive shopping, and native monetization for creators. We're looking for a world-class Full Stack AI Engineer to join our team and help scale Renokon's vision. If you thrive on building at scale, solving tough engineering challenges, and pushing the limits of AI-powered commerce, this is the place for you.

Renokon's vision:
To empower 200,000 Indian content creators to earn ₹10L/year by 2035.
To generate $20B in annual product sales for our brand partners by 2035.

Who are we looking for?
8+ years of hands-on experience.
Comfortable working in Kolkata (onsite).
Strong understanding of AI-driven recommendations, search intelligence, and real-time infrastructure.
Passion for creator monetization, commerce, and revolutionizing digital ads.

What You'll Work On
Build the world's first voice-native sales agent for voice-based shopping.
Optimize AI-powered search & recommendations with ElasticSearch + NLP models (BERT-based semantic search).
Develop high-throughput ad bidding engines using XGBoost, DeepFM & RL-based bidding agents to maximize ad conversions.
Build trust-driven commerce scoring models with Graph ML, TrustRank & behavioral scoring (XGBoost) to ensure high-quality transactions.
Develop creator scoring models using multi-factor ML models (virality, engagement, sales) to drive optimal brand-creator matching.
Train NLP-powered content discovery systems leveraging GPT-4 Turbo & custom LLMs via Hugging Face for AI-driven recommendations.
Ensure content moderation with CV + NLP models (Hugging Face + Perspective API) to maintain platform integrity.
Develop an LLM co-pilot fine-tuned on top creators, ad campaigns, and real-world commerce insights to assist brands in optimizing ads.
Implement advanced A/B testing & experimentation using LaunchDarkly / Unleash to refine AI models based on user behavior.
Optimize e-commerce personalization through Neural Collaborative Filtering (NCF) & DeepFM-based recommendation engines.

Preferred Educational Qualification:
Master's degree or PhD in Computer Science, AI, Machine Learning, Data Science, or a related field (preferred but not mandatory). Candidates with strong applied ML experience and contributions to open-source AI projects can be considered without a PhD.

Why join us?
Be part of building a trillion-dollar AI-driven commerce platform from scratch.
Solve real-world challenges at scale, from creator monetization to ad conversion.
Work alongside the founder, Rupayan Das, and a highly skilled team, the backbone of Renokon.
Shape the digital future in India.

If you are a world-class engineer who is extremely good at what you do and ready to build something that will impact millions of people's lives, join our founding team. Let's build the future together.

Posted 1 week ago


4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Purpose
Understand Business Processes & Data, and model the requirements to create Analytics Solutions.
Build Predictive Models & Recommendation Engines using state-of-the-art Machine Learning techniques to help Business Processes increase the efficiency and effectiveness of their outcomes.
Churn and analyze the data to discover actionable insights & patterns for Business use.
Assist the Function Head in Data Preparation & Modelling tasks as required.

Job Outline
Collaborate with Business and IT teams to understand and collect data.
Collect, collate, clean, process, and transform large volume(s) of primarily Tabular data (a blend of Numerical, Categorical & some Text).
Apply Data Preparation techniques like Data Filtering, Joining, Cleaning, Missing Value Imputation, Feature Extraction, Feature Engineering, Feature Selection, Dimensionality Reduction, Feature Scaling, Variable Transformation, etc.
Apply as required: basic algorithms like Linear Regression, Logistic Regression, ANOVA, KNN, Clustering (K-Means, Density, Hierarchical, etc.), SVM, Naïve Bayes, Decision Trees, Principal Components, Association Rule Mining, etc.
Apply as required: Ensemble Modeling algorithms like Bagging (Random Forest), Boosting (GBM, LGBM, XGBoost, CatBoost), Time-Series Modelling, and other state-of-the-art algorithms.
Apply as required: modelling concepts like Hyperparameter Optimization, Feature Selection, Stacking, Blending, K-Fold Cross-Validation, Bias & Variance, Overfitting, etc.
Build Predictive Models using state-of-the-art Machine Learning techniques for Regression, Classification, Clustering, Recommendation Engines, etc.
Perform Advanced Analytics of the Business Data to find hidden patterns & insights and explanatory causes, and make strategic business recommendations based on the same.

Knowledge / Education
BE / B.Tech – Any Stream

Skills
Strong expertise in Python libraries like Pandas & Scikit-learn, along with the ability to code according to the requirements stated in the Job Outline above.
Experience with Python editors like PyCharm and/or Jupyter Notebooks (or other editors) is a must. Ability to organize code into Modules, Functions, and/or Objects is a must.
Knowledge of using ChatGPT for ML will be preferred.
Familiarity with basic SQL for Querying & Excel for Data Analysis is a must.
Should understand basics of Statistics like Distributions, Hypothesis Testing, Sampling Techniques, etc.

Work Experience
At least 4 years of solving Business Problems through Data Analytics, Data Science, and Modelling.
Should have experience as a full-time Data Scientist for at least 2 years.
Experience of at least 3 projects in ML model building which were used in production by the Business or other clients.

Skills/Experience Preferred but not compulsory
Familiarity with using ChatGPT, LLMs, out-of-the-box models, etc. for Data Preparation & Model building.
Kaggle experience.
Familiarity with R.

Job Interface/Relationships:
Internal: Work with different Business Teams to build Predictive Models for them.
External: None

Key Responsibilities and % Time Spent
Data Preparation for Modelling - Data Extraction, Cleaning, Joining & Transformation - 35%
Build ML/AI Models for various Business Requirements - 35%
Perform Custom Analytics for providing actionable insights to the Business - 20%
Assist the Function Head in Data Preparation & Modelling tasks as required - 10%

Any other additional input - will not be considered for selection:
Familiarity with Deep Learning Algorithms
Image Processing & Classification
Text Modelling using NLP Techniques
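The data-preparation steps in the Job Outline (missing value imputation, feature scaling) are typically chained with the model into a single scikit-learn Pipeline so the same transforms are applied consistently at train and predict time. The sketch below is illustrative only; the tiny dataset and column choices are synthetic placeholders.

```python
# Illustrative sketch of data preparation (imputation, scaling) chained into a
# scikit-learn Pipeline with a classifier. Data is a synthetic placeholder.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Tabular data with missing values (np.nan), as described in the Job Outline
X = np.array([[1.0, 200.0], [2.0, np.nan], [np.nan, 180.0], [4.0, 260.0],
              [5.0, 300.0], [6.0, np.nan], [7.0, 320.0], [8.0, 350.0]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill gaps with column median
    ("scale", StandardScaler()),                   # zero mean, unit variance
    ("model", LogisticRegression()),
])
pipe.fit(X, y)
print(pipe.predict([[7.5, 340.0]]))
```

Swapping the final step for a boosted ensemble (GBM, LGBM, XGBoost, CatBoost) and wrapping the pipeline in K-fold cross-validation covers most of the modelling concepts the outline lists.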

Posted 1 week ago


3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 19-Jun-2025

About the role
Please refer to the "You will be responsible for" section below.

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
Developing and leading a high-performing team, creating an environment for success by setting direction and coaching them to succeed through inspiring conversations every day. (Refer to the expectations of a manager at Tesco - the minimum standards.)
Promoting a culture of CI within their teams to drive operational improvements.
Accountable for achieving the team's objectives, stakeholder management, and escalation management.
Provides inputs that impact the function's plans and policies; influences the budget and resources in their scope.
Accountable to EA and market leadership for building the analytics road-map and improving the analytical maturity of partnering functions, with in-depth understanding of key priorities & outcomes.
Accountable to shape & own the analytics workplan, proactively spot sizeable opportunities, and successfully deliver programs that will result in disproportionate returns.
Thought leadership in scoping business problems and solutions, bringing disruptive / depth-oriented solutions to complex problems, and institutionalizing robust ways of working with business partners.
Partner with TBS and markets' finance teams to measure the value delivered through analytics initiatives.
Build impact-driven teams by creating an environment for success: set direction and objectives, mentor managers, and guide teams to craft analytical assets which will deliver value in a sustainable manner.
Be the voice of and represent Enterprise Analytics on internal and external forums.
Developing managers and colleagues to succeed through inspiring conversations every day.

You will need
Understanding of machine learning techniques:
Linear & Logistic Regression, Decision Trees, Random Forest, XGBoost, and Neural Networks
Knowledge of Python, SQL, Hive, and visualization tools (e.g., Tableau)
Retail expertise, partnership management, analytics
Conceptual application to the larger business context, storyboarding, managing managers

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business.
TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
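The "You will need" list names a standard modelling toolkit, from logistic regression to tree ensembles. As an illustration of how two of those techniques are compared side by side on the same data, here is a hedged sketch using synthetic data (not any Tesco dataset) and cross-validated accuracy as the yardstick:

```python
# Illustrative comparison of two techniques named above (Logistic Regression
# and Random Forest) on the same synthetic dataset; not Tesco code or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=8, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}
scores = {}
for name, model in models.items():
    # 5-fold cross-validated accuracy keeps the comparison honest
    scores[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```

The same loop extends naturally to XGBoost or a neural network by adding entries to the `models` dict.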

Posted 1 week ago


5.0 years

0 Lacs

Thrissur, Kerala

On-site


Senior AI Engineer
Location: Infopark, Thrissur, Kerala
Employment Type: Full-Time
Experience Required: Minimum 5 years providing AI solutions, including expertise in ML/DL projects

About Us
JK Lucent is a growing IT services provider headquartered in Melbourne, Australia, with an operations centre at Infopark, Thrissur, Kerala. We specialize in Software Development, Software Testing, Game Development, RPA, Data Analytics, and AI solutions. At JK Lucent, we are driven by innovation and committed to delivering cutting-edge technology services that solve real-world problems and drive digital transformation.

About the Role
We are looking for a highly skilled and experienced Senior AI Engineer to lead the development and deployment of advanced AI systems. This role requires deep expertise in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs). The successful candidate will work on complex AI initiatives, contribute to production-ready systems, and mentor junior engineers. A strong command of professional English and the ability to communicate technical concepts clearly are essential.

Roles and Responsibilities
Design and develop scalable AI and ML models for real-world applications.
Build, fine-tune, and implement Large Language Models (LLMs) for use cases such as chatbots, summarization, and document intelligence.
Work with deep learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
Collaborate with cross-functional teams to translate business problems into AI solutions, with necessary visualizations using tools like Tableau or Power BI.
Deploy models to production environments and implement monitoring and model retraining pipelines.
Stay up to date with the latest research and trends in AI, especially in LLMs and generative models.
Guide and mentor junior AI engineers, reviewing code and providing technical leadership.
Contribute to technical documentation, architecture design, and solution strategies.
Ensure models are developed and used ethically and comply with data privacy standards.

Requirements
Minimum 5 years of experience in AI/ML development with hands-on expertise in model design, development, and deployment.
Strong experience working with LLMs and Generative AI tools such as Hugging Face Hub, LangChain, Haystack, LLaMA, GPT, BERT, and T5.
Proficiency in Python and ML/DL libraries such as TensorFlow, PyTorch, XGBoost, scikit-learn, and Hugging Face Transformers.
Solid understanding of mathematics, statistics, and applied data science principles.
Experience deploying models using Docker, FastAPI, MLflow, or similar tools.
Familiarity with cloud platforms (AWS, Azure, or GCP) and their AI/ML services.
Demonstrated experience working on end-to-end AI solutions in production environments.
Excellent English communication skills (verbal and written) and the ability to present technical topics.
Strong leadership skills and experience mentoring junior developers or leading small technical teams.
Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline.

Job Type: Full-time
Pay: ₹600,000.00 - ₹1,500,000.00 per year
Schedule: Day shift, Monday to Friday

Application Question(s):
1. Are you able to commute daily to Infopark, Koratty, Thrissur? (Yes/No)
2. How many years of total IT experience do you have? (Numeric)
3. How many years of experience do you have in AI/ML development? (Numeric)
4. How many years of experience do you have working with Large Language Models (LLMs)? (Numeric)
5. Are you proficient in Python? (Yes/No)
6. Have you used frameworks like TensorFlow, PyTorch, or Hugging Face? (Yes/No)
7. Have you deployed AI/ML models to production environments? (Yes/No)
8. Have you worked with cloud platforms like AWS, Azure, or GCP? (Yes/No) *
9. Do you have professional-level proficiency in English? (Yes/No)
10. What is your current notice period in days? (Numeric)
11. What is your expected salary in LPA? (Numeric)
12. What is your current or last drawn salary in LPA? (Numeric)

Work Location: In person

Posted 1 week ago


5.0 years

0 Lacs

Chalakkudy, Kerala, India

On-site


Senior AI Engineer
Location: Infopark, Thrissur, Kerala
Employment Type: Full-Time
Experience Required: Minimum 5 years providing AI solutions, including expertise in ML/DL projects

About Us
JK Lucent is a growing IT services provider headquartered in Melbourne, Australia, with an operations centre at Infopark, Thrissur, Kerala. We specialize in Software Development, Software Testing, Game Development, RPA, Data Analytics, and AI solutions. At JK Lucent, we are driven by innovation and committed to delivering cutting-edge technology services that solve real-world problems and drive digital transformation.

About the Role
We are looking for a highly skilled and experienced Senior AI Engineer to lead the development and deployment of advanced AI systems. This role requires deep expertise in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs). The successful candidate will work on complex AI initiatives, contribute to production-ready systems, and mentor junior engineers. A strong command of professional English and the ability to communicate technical concepts clearly are essential.

Roles and Responsibilities
Design and develop scalable AI and ML models for real-world applications.
Build, fine-tune, and implement Large Language Models (LLMs) for use cases such as chatbots, summarization, and document intelligence.
Work with deep learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
Collaborate with cross-functional teams to translate business problems into AI solutions, with necessary visualizations using tools like Tableau or Power BI.
Deploy models to production environments and implement monitoring and model retraining pipelines.
Stay up to date with the latest research and trends in AI, especially in LLMs and generative models.
Guide and mentor junior AI engineers, reviewing code and providing technical leadership.
Contribute to technical documentation, architecture design, and solution strategies.
· Ensure models are developed and used ethically and comply with data privacy standards.  Requirements · Minimum 5 years of experience in AI/ML development with hands-on expertise in model design, development, and deployment. · Strong experience working with LLMs and Generative AI tools such as Hugging Face Hub, LangChain, Haystack, LLaMA, GPT, BERT, and T5. · Proficiency in Python and ML/DL libraries such as TensorFlow, PyTorch, XGBoost, scikit-learn, and Hugging Face Transformers. · Solid understanding of mathematics, statistics, and applied data science principles. · Experience deploying models using Docker, FastAPI, MLflow, or similar tools. · Familiarity with cloud platforms (AWS, Azure, or GCP) and their AI/ML services. · Demonstrated experience in working on end-to-end AI solutions in production environments. · Excellent English communication skills (verbal and written) and ability to present technical topics. · Strong leadership skills and experience mentoring junior developers or leading small technical teams. · Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline. Show more Show less
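As a rough illustration of the "model design, development, and deployment" fundamentals this role asks for (not JK Lucent's actual stack), here is a minimal logistic-regression classifier trained by batch gradient descent using only the standard library. The data, learning rate, and epoch count are arbitrary toy values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights w and bias b by batch gradient descent on log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi          # gradient of log-loss w.r.t. the logit
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy data: label is 1 when the feature values are large.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

Production frameworks like scikit-learn or PyTorch do this (and much more) for you; the sketch only shows the core loop that those libraries optimize.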

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Company Overview Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM). What you'll do You will play an important role in applying and implementing effective machine learning solutions, with a significant focus on Generative AI. You will work with product and engineering teams to contribute to data-driven product strategies, explore and implement GenAI applications, and deliver impactful insights. This position is an individual contributor role reporting to the Senior Manager, Data Science. 
Responsibilities
- Experiment with, apply, and implement DL/ML models, with a strong emphasis on Large Language Models (LLMs), agentic frameworks, and other generative AI techniques to predict user behavior, enhance product features, and improve automation
- Utilize and adapt various GenAI techniques (e.g., prompt engineering, RAG, fine-tuning existing models) to derive actionable insights, generate content, or create novel user experiences
- Collaborate with product, engineering, and other teams (e.g., Sales, Marketing, Customer Success) to build agentic systems that run campaigns at scale
- Conduct in-depth analysis of customer data, market trends, and user insights to inform the development and improvement of GenAI-powered solutions
- Partner with product teams to design, administer, and analyze the results of A/B and multivariate tests, particularly for GenAI-driven features
- Leverage data to develop actionable analytical insights and present findings, including the performance and potential of GenAI models, to stakeholders and team members
- Communicate models, frameworks (especially those related to GenAI), analysis, and insights effectively with stakeholders and business partners
- Stay updated on the latest advancements in generative AI and propose their application to relevant business problems
- Complete assignments with a sense of urgency and purpose, identify and help resolve roadblocks, and collaborate with cross-functional team members on GenAI initiatives

Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: minimum 2 days per week; may vary by team, but there is a weekly in-office expectation.) Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign.
Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.

What you bring

Basic
- Bachelor's or Master's degree in Computer Science, Physics, Mathematics, Statistics, or a related field
- 3+ years of hands-on experience in building data science applications and machine learning pipelines, with demonstrable experience in generative AI projects
- Experience with Python for research and software development purposes, including common GenAI libraries and frameworks
- Strong knowledge of common machine learning, deep learning, and statistics frameworks and concepts, with a specific understanding of Large Language Models (LLMs), transformer architectures, and their applications
- Experience with or exposure to prompt engineering and utilizing pre-trained LLMs (e.g., via APIs or open-source models)
- Experience with large datasets, distributed computing, and cloud computing platforms (e.g., AWS, Azure, GCP)
- Proficiency with relational databases (e.g., SQL)
- Experience in training, evaluating, and deploying machine learning models in production environments, with an interest in MLOps for GenAI
- Proven track record in contributing to ML/GenAI projects from ideation through to deployment and iteration
- Experience using machine learning and deep learning algorithms such as CatBoost, XGBoost, LightGBM, and feed-forward networks for classification, regression, and clustering problems, and an understanding of how these can complement GenAI solutions
- Experience as a Data Scientist, ideally in the SaaS domain with some focus on AI-driven product features

Preferred
- PhD in Statistics, Computer Science, or Engineering with specialization in machine learning, AI, or statistics, with research or projects in generative AI
- 5+ years of prior industry experience, with at least 1-2 years focused on GenAI applications
- Previous experience applying data science and GenAI techniques to customer success, product development, or user experience optimization
- Hands-on experience with fine-tuning LLMs or working with RAG methodologies
- Experience with or knowledge of experimentation platforms (like DataRobot) and other AI-related ones (like CrewAI)
- Experience with or knowledge of the software development lifecycle/agile methodology, particularly in AI product development
- Experience with or knowledge of GitHub and JIRA/Confluence
- Contributions to open-source GenAI projects or a portfolio of GenAI-related work
- Programming languages like Python and SQL; familiarity with R
- Ability to break down complex technical concepts (including GenAI) into simple terms to present to diverse, technical, and non-technical audiences

Life at Docusign

Working here
Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.

Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process please get in touch with our Talent organization at taops@docusign.com for assistance.

Applicant and Candidate Privacy Notice
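The role above involves analyzing A/B test results. A standard way to do this for conversion rates is a two-proportion z-test; the sketch below implements it with only the standard library (the conversion counts are made-up example numbers, not Docusign data).

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test. Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal tail, via erfc.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Variant B converts 13% vs. 10% for A on 2,000 users each.
z, p = ab_test_z(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
```

With these numbers the lift is significant at the usual 5% level; in practice you would also check sample-size requirements and multiple-testing corrections before shipping a variant.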

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Greater Kolkata Area

On-site


About This Opportunity
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

What You Will Do
- Develop and deploy machine learning models for applications including chatbots, NLP, computer vision, and generative AI, using algorithms such as XGBoost and random forests.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Use SQL for querying and analyzing large datasets.
- Work with Docker and Kubernetes for containerization and orchestration of applications.
- Apply basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability.
- Preferably, work with Google Cloud Platform (GCP) services for data storage, processing, and deployment.
- Analyze complex problems and translate them into algorithms.
- Build backend REST APIs using Flask and FastAPI.
- Deploy with CI/CD pipelines.
- Handle datasets and data pre-processing through PySpark.
- Write queries targeting Cassandra and PostgreSQL databases.
- Apply design principles in application development.
- Work with service-oriented architecture (SOA, web services, REST).
- Practice agile development and use GCP BigQuery.
- Use general tools and techniques such as Docker, Kubernetes, Git, and Argo Workflows.

The Skills You Bring
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field; a Master's degree or PhD is preferred.
- 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or a related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Noida
Req ID: 759817
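The posting above names XGBoost. Under the hood, XGBoost is gradient boosting: an ensemble built by repeatedly fitting small trees to the current residuals. The toy sketch below shows that core idea with one-feature regression stumps and the standard library only; it is an illustration of the technique, not the XGBoost API (which adds regularization, second-order gradients, and much more).

```python
def fit_stump(x, residuals):
    """Best single-split stump on 1-D data, minimizing squared error."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, threshold, lmean, rmean)
    _, t, lv, rv = best
    return lambda xi: lv if xi <= t else rv

def gradient_boost(x, y, n_rounds=50, lr=0.3):
    """Additive model: each round fits a stump to the current residuals."""
    base = sum(y) / len(y)
    stumps = []
    pred = [base] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 5.1, 4.8, 5.2]   # roughly a step function
model = gradient_boost(x, y)
```

Each round shrinks the residuals by a factor controlled by the learning rate, so after a few dozen rounds the ensemble closely fits the step structure in `y`.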

Posted 1 week ago

Apply


3.0 - 6.0 years

3 - 4 Lacs

Bengaluru

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description
We are seeking a passionate data analyst to transform data into actionable insights and support decision-making in a global organization focused on pricing and commercial strategy. This role spans business analysis, requirements gathering, data modeling, solution design, and visualization using modern tools. The analyst will also maintain and improve existing analytics solutions, interpret complex datasets, and communicate findings clearly to both technical and non-technical audiences.

Essential Functions of the Job:
- Analyze and interpret structured and unstructured data using statistical and quantitative methods to generate actionable insights and ongoing reports.
- Design and implement data pipelines and processes for data cleaning, transformation, modeling, and visualization using tools such as Power BI, SQL, and Python.
- Collaborate with stakeholders to define requirements, prioritize business needs, and translate problems into analytical solutions.
- Develop, maintain, and enhance scalable analytics solutions and dashboards that support pricing strategy and commercial decision-making.
- Identify opportunities for process improvement and operational efficiency through data-driven recommendations.
- Communicate complex findings in a clear, compelling, and actionable manner to both technical and non-technical audiences.
Analytical/Decision Making Responsibilities:
- Apply a hypothesis-driven approach to analyzing ambiguous or complex data and synthesizing insights to guide strategic decisions.
- Promote adoption of best practices in data analysis, modeling, and visualization, while tailoring approaches to meet the unique needs of each project.
- Tackle analytical challenges with creativity and rigor, balancing innovative thinking with practical problem-solving across varied business domains.
- Prioritize work based on business impact and deliver timely, high-quality results in fast-paced environments with evolving business needs.
- Demonstrate sound judgment in selecting methods, tools, and data sources to support business objectives.

Knowledge and Skills Requirements:
- Proven experience as a data analyst, business analyst, data engineer, or similar role.
- Strong analytical skills with the ability to collect, organize, analyze, and present large datasets accurately.
- Foundational knowledge of statistics, including concepts like distributions, variance, and correlation.
- Skilled in documenting processes and presenting findings to both technical and non-technical audiences.
- Hands-on experience with Power BI for designing, developing, and maintaining analytics solutions.
- Proficient in both Python and SQL, with strong programming and scripting skills.
- Skilled in using Pandas, T-SQL, and Power Query M for querying, transforming, and cleaning data.
- Hands-on experience in data modeling for both transactional (OLTP) and analytical (OLAP) database systems.
- Strong visualization skills using Power BI and Python libraries such as Matplotlib and Seaborn.
- Experience with defining and designing KPIs and aligning data insights with business goals.

Additional/Optional Knowledge and Skills:
- Experience with the Microsoft Fabric data analytics environment.
- Proficiency in using the Apache Spark distributed analytics engine, particularly via PySpark and Spark SQL.
- Exposure to implementing machine learning or AI solutions in a business context.
- Familiarity with Python machine learning libraries such as scikit-learn, XGBoost, PyTorch, or transformers.
- Experience with Power Platform tools (Power Apps, Power Automate, Dataverse, Copilot Studio, AI Builder).
- Knowledge of pricing, commercial strategy, or competitive intelligence.
- Experience with cloud-based data services, particularly in the Azure ecosystem (e.g., Azure Synapse Analytics or Azure Machine Learning).

Supervision Responsibilities:
- Operates with a high degree of independence and autonomy.
- Collaborates closely with cross-functional teams including sales, pricing, and commercial strategy.
- Mentors junior team members, helping develop technical skills and business domain knowledge.

Other Requirements:
- Collaborates with a team operating primarily in the Eastern Time Zone (UTC -4:00 / -5:00).
- Limited travel may be required for this role.

Job Requirements:

Education:
A bachelor’s degree in a STEM field relevant to data analysis, data engineering, or data science is required. Examples include (but are not limited to) computer science, statistics, data analytics, artificial intelligence, operations research, or econometrics.

Experience:
3-6 years of experience in data analysis, data engineering, or a closely related field, ideally within a professional services environment.

Certification Requirements:
No certifications are required for this role.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
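The analyst role above lists correlation among its foundational statistics concepts. As a quick illustration (with made-up pricing numbers, not EY data), the sample Pearson correlation coefficient can be computed directly from its definition:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical price vs. units sold: a strong negative relationship.
prices = [10, 12, 14, 16, 18]
units = [200, 180, 150, 120, 90]
r = pearson_r(prices, units)
```

In day-to-day work this is a one-liner in Pandas (`df["price"].corr(df["units"])`); writing it out once makes clear what the number actually measures.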

Posted 1 week ago

Apply

0 years

3 - 5 Lacs

Chennai

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position Name: ML Developer
Taleo ID:
Position Level: Staff
Employment Type: Permanent
Number of Openings: 1
Work Location: Kochi, Chennai, Noida, Bangalore, Pune, Kolkata, TVM

Position Details
As part of EY GDS Assurance Digital, you will be responsible for leveraging advanced machine learning techniques to develop innovative, high-impact models and solutions that drive growth and deliver significant business value. You will be helping EY’s sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. This is a full-time Machine Learning Developer role, responsible for building and deploying robust machine learning models to solve real-world business problems. You will be working on the entire ML lifecycle, including data analysis, feature engineering, model training, evaluation, and deployment.

Requirements (including experience, skills and additional qualifications)
A bachelor’s degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field with adequate industry experience.

Technical skills requirements
- Develop and implement machine learning models, including regression, classification (e.g., XGBoost, Random Forest), and clustering techniques.
- Conduct exploratory data analysis (EDA) to uncover insights and trends within data sets.
- Apply dimension reduction techniques to improve model performance and interpretability.
- Utilize statistical models to design and implement effective business solutions.
- Evaluate and validate models to ensure robustness and reliability.
- Solid background in Python.
- Familiarity with time series forecasting.
- Basic experience with cloud platforms such as AWS, Azure, or GCP.
- Exposure to MLOps tools and practices (e.g., MLflow, Airflow, Docker) is a plus.

Additional skill requirements:
- Proficient at quickly understanding complex machine learning concepts and utilizing technology for tasks such as data modeling, analysis, visualization, and process automation.
- Skilled in selecting and applying the most suitable standards, methods, tools, and frameworks for specific ML tasks and use cases.
- Capable of collaborating effectively within cross-functional teams, while also being able to work independently on complex ML projects.
- Demonstrates a strong analytical mindset and systematic approach to solving machine learning challenges.
- Excellent communication skills, able to present complex technical concepts clearly to both technical and non-technical audiences.

What we look for
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide. Opportunities to work with EY GDS Assurance practices globally with leading businesses across a range of industries.

What working at EY offers
At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments.
Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction, and advisory services, we’re using the finance products, expertise, and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
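The ML Developer role above covers model evaluation and validation. The standard tool for this is k-fold cross-validation: split the data into k disjoint folds, train on k-1 of them, score on the held-out fold, and average. A minimal stdlib sketch (the "majority-class model" is a deliberately trivial stand-in so the mechanics are visible):

```python
def k_fold_indices(n, k):
    """Split range(n) into k contiguous, disjoint folds of near-equal size."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, k, train_fn, score_fn):
    """Average held-out score over k folds."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for fold in folds:
        held_out = set(fold)
        X_tr = [X[i] for i in range(len(X)) if i not in held_out]
        y_tr = [y[i] for i in range(len(X)) if i not in held_out]
        model = train_fn(X_tr, y_tr)
        scores.append(score_fn(model, [X[i] for i in fold], [y[i] for i in fold]))
    return sum(scores) / k

# Trivial "model": always predict the majority class seen in training.
train = lambda X, y: max(set(y), key=y.count)
score = lambda m, X, y: sum(1 for yi in y if yi == m) / len(y)
X = [[i] for i in range(10)]
y = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
acc = cross_validate(X, y, k=5, train_fn=train, score_fn=score)
```

In practice you would shuffle (or stratify) before folding and use scikit-learn's `KFold`/`cross_val_score`; the sketch shows what those utilities compute.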

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site


Note: Looking for an immediate joiner.

Job Title: Machine Learning Engineer (GenAI and AI/ML)

Job Summary:

Primary Skills
• Master's degree in Computer Science, Mathematics, Statistics, or another related discipline
• Experience using statistical computing languages (Python, R, etc.) to manipulate data and draw insights from large data sets
• Knowledge of a variety of machine learning techniques (clustering, decision trees, boosting, artificial neural networks, etc.) and their real-world advantages/drawbacks
• Knowledge of popular ML and non-ML libraries: TensorFlow, PyTorch, scikit-learn, SciPy, XGBoost, etc.
• Knowledge of popular cloud infrastructure: Google Cloud, AWS, Microsoft Azure, etc.
• Excellent written and verbal communication skills for coordinating across teams
• Experience in code management using Git

Secondary Skills
• Experience in the e-commerce or advertising verticals is a plus
• Experience in NLP, time series forecasting, computer vision, and other related domains is a plus
• Experience with popular NLP and computer vision libraries is a plus: spaCy, NLTK, OpenCV, etc.
• Experience with parallel computing and GPU acceleration is a plus
• Experience with LLM utilization and fine-tuning, and GenAI solutioning along with RAG, is a must

Responsibilities
 Mine and analyze data from company databases to drive optimization and improvement for product development, marketing techniques, and business strategies.
 Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
 Transform data science prototypes and apply appropriate algorithms and tools.
 Develop custom ML and non-ML models and algorithms to apply to data sets.
 Define the evaluation approach for models.
 Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
 Coordinate with different functional teams to implement models and monitor outcomes.
 Keep abreast of developments in corresponding domains.
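This posting requires GenAI solutioning with RAG (retrieval-augmented generation). The "R" in RAG is a ranking step: find the passages most similar to the user's query and prepend them to the LLM prompt. The toy sketch below uses bag-of-words cosine similarity and the standard library; real systems use dense embeddings and a vector store, but the retrieval logic has the same shape.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts (a crude stand-in for an embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, top_k=1):
    """Rank passages by similarity to the query (the retrieval step of RAG)."""
    q = vectorize(query)
    ranked = sorted(passages, key=lambda p: cosine(q, vectorize(p)), reverse=True)
    return ranked[:top_k]

passages = [
    "The refund policy allows returns within 30 days of purchase.",
    "Our office is open Monday to Friday, nine to five.",
    "Shipping is free on orders above 50 dollars.",
]
context = retrieve("how do I get a refund for a purchase", passages)
# The retrieved context would then be prepended to the LLM prompt.
```

Swapping `vectorize` for an embedding model and the `sorted` call for a vector-database query turns this sketch into the retrieval layer of a production RAG pipeline.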

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description

Location: Mumbai / Bengaluru
Experience: 3-5 years
Industry: Banking / Financial Services (mandatory)

Why would you like to join us?
TransOrg Analytics specializes in Data Science, Data Engineering and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, the US, APAC and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to know more about us.

What do we expect from you?
- Build and validate credit risk models, including application scorecards and behavior scorecards (B-score), aligned with business and regulatory requirements.
- Use advanced machine learning algorithms such as logistic regression, XGBoost, and clustering to develop interpretable and high-performance models.
- Translate business problems into data-driven solutions using robust statistical and analytical methods.
- Collaborate with cross-functional teams including credit policy, risk strategy, and data engineering to ensure effective model implementation and monitoring.
- Maintain clear, audit-ready documentation for all models and comply with internal model governance standards.
- Track and monitor model performance, proactively suggesting recalibrations or enhancements as needed.

What do you need to excel at?
- Writing efficient and scalable code in Python, SQL, and PySpark for data processing, feature engineering, and model training.
- Working with large-scale structured and unstructured data in a fast-paced banking or fintech environment.
- Deploying and managing models using MLflow, with a strong understanding of version control and model lifecycle management.
- Understanding retail banking products, especially credit card portfolios, customer behavior, and risk segmentation.
- Communicating complex technical outcomes clearly to non-technical stakeholders and senior management.
- Applying a structured problem-solving approach and delivering insights that drive business value.

What are we looking for?
- Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field.
- 3-5 years of experience in credit risk modelling, preferably in retail banking or credit cards.
- Hands-on expertise in Python, SQL, and PySpark, and experience with MLflow or equivalent MLOps tools.
- Deep understanding of machine learning techniques including logistic regression, XGBoost, and clustering.
- Proven experience in developing application scorecards and behavior scorecards using real-world banking data.
- Strong documentation and compliance orientation, with an ability to work within regulatory frameworks.
- Curiosity, accountability, and a passion for solving real-world problems using data.
- Cloud knowledge, JIRA, GitHub (good to have).
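The scorecards this role builds typically sit on top of a logistic regression: the model's log-odds are rescaled into points using a points-to-double-the-odds (PDO) calibration. The sketch below uses a common but assumed calibration (600 points at 50:1 good:bad odds, PDO of 20); actual values are set by each lender's policy, not by the model.

```python
import math

def score_from_log_odds(log_odds, base_score=600, base_odds=50, pdo=20):
    """Convert logistic-model log-odds (of being good) into scorecard points.

    Calibration: base_score points at base_odds:1 good:bad odds,
    and every +pdo points doubles the odds.
    """
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * log_odds

# Doubling the odds from 50:1 to 100:1 adds exactly one PDO (20 points).
s50 = score_from_log_odds(math.log(50))    # -> base score
s100 = score_from_log_odds(math.log(100))  # -> base score + PDO
```

The same linear transform is usually pushed down to per-attribute points so that each scorecard characteristic contributes an auditable, documented number of points, which is what makes the model interpretable to regulators.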

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies