
3449 Data Scientist Jobs - Page 26

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Ghaziabad, Uttar Pradesh

On-site

You will be joining Teacurry Teas as a Data Scientist in Ghaziabad on a full-time basis. Your role will involve analyzing and interpreting complex data sets, developing statistical models, and leveraging data analytics to drive strategic business decisions. On a daily basis, you will be responsible for tasks like data collection, processing, visualization, and providing valuable insights to enhance operational efficiency and product quality. To excel in this position, you should possess proficiency in Data Science and Statistics, along with hands-on experience in Data Analytics, Data Visualization, and Data Analysis. Strong analytical and problem-solving skills are essential, along with a good command of programming languages such as Python or R. Effective written and verbal communication skills are crucial for this role, as you will be collaborating with various teams. Prior experience in the tea or food industry would be advantageous. Ideally, you should hold a Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field. Working effectively in a team environment and being able to apply your expertise to contribute to the organization's success are key aspects of this role. Join Teacurry Teas to be a part of a team dedicated to providing high-quality teas that offer both comfort and health benefits.

Posted 4 weeks ago

Apply

10.0 - 15.0 years

0 - 0 Lacs

Hyderabad

On-site

Responsibilities: Lead the development and implementation of advanced machine learning models and statistical analyses to solve complex business problems. Collaborate with business stakeholders to understand requirements and translate them into analytical solutions. Manage a team of business intelligence experts. Work closely with the technology teams to implement new techniques into the Bank's environments. Develop predictive models for risk assessment, fraud detection, customer behavior analysis or forecasting. Create and maintain detailed documentation of methodologies, models, and processes. Design and build scalable data pipelines and ETL workflows. Present findings and recommendations to senior management and stakeholders. Monitor model performance and implement improvements as needed. Ensure compliance with banking regulations and data governance policies.

Qualifications

Education: Master's degree or Ph.D. in Data Science, Statistics, Computer Science, Mathematics, or a related field. Professional certifications in relevant technologies or methodologies are a plus.

Experience: 10+ years of experience in data science, with at least 4 years in the banking/financial services sector. Proven track record of successfully implementing machine learning models in production. Minimum 3 years of experience managing and leading data science teams. Demonstrated experience in building and developing high-performing teams.

Technical Skills: Advanced knowledge of machine learning algorithms and statistical modeling. Strong expertise in Python, R, or similar programming languages. Proficiency in SQL and experience with big data technologies (Hadoop, Spark, Databricks). Experience with deep learning frameworks. Knowledge of cloud platforms (AWS, Azure, or GCP). Expertise in data visualization tools (Tableau, Power BI, Qlik). Strong understanding of version control systems (Git). Experience with MLOps and model deployments.

Business Skills: Proven people management and leadership abilities. Experience in resource planning and team capacity management. Excellent problem-solving and analytical thinking abilities. Strong communication and presentation skills. Ability to translate complex technical concepts for non-technical stakeholders. Experience in agile development methodologies. Understanding of the banking and financial services domain.

Domain Knowledge: Understanding of risk management, compliance, products and regulatory requirements in banking. Knowledge of financial markets and instruments is a plus.

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Odisha

On-site

Employment Information: Industry: Market Research / Human Resource / Management / Security Analyst. Open Positions: 1. Experience: 5 years. Job Type: Full Time. Location: Bhubaneswar, Odisha, India.

Job Description: Data Scientist – Newaetate, Bhubaneswar

About the Role: Newaetate is seeking a highly skilled and experienced Data Scientist to join our team in Bhubaneswar. This is a full-time position ideal for professionals with 5+ years of relevant industry experience. The role requires joining immediately or within 15 days.

Key Responsibilities: Design, implement, and optimize end-to-end data science solutions in alignment with business goals. Develop, test, and deploy machine learning models using Python (Pandas, NumPy, scikit-learn). Analyze large datasets using SQL and conduct statistical analyses to extract actionable insights. Perform advanced data modeling and design experiments for A/B testing and statistical evaluations. Build and maintain compelling data visualizations using tools such as Matplotlib, Seaborn, and Power BI. Deploy models using frameworks like Flask and FastAPI to integrate with production environments. Generate business intelligence reports and insights to support strategic decision-making. Collaborate continuously with cross-functional teams to identify opportunities for improvement and innovation.

Required Skills & Qualifications: Minimum 5 years of professional experience in data science or analytics roles. Strong proficiency in Python (Pandas, NumPy, scikit-learn). Experience in SQL programming and machine learning algorithms. Hands-on expertise in statistical analysis and data modeling. Proven ability in data visualization with Matplotlib, Seaborn, and Power BI. Experience in model deployment using Flask or FastAPI. Demonstrated track record in business intelligence and insight generation.

Location: Bhubaneswar. Job Type: Full-time. Joining: Immediate / Within 15 Days.
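For illustration, a minimal sketch of the Flask/FastAPI-style model deployment this posting asks for: a small FastAPI service that loads a pre-trained scikit-learn model and exposes a prediction endpoint. The model file and feature names are hypothetical placeholders.

```python
# Minimal sketch of serving a scikit-learn model behind FastAPI.
# "churn_model.joblib" and the feature schema are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("churn_model.joblib")  # pre-trained classifier (hypothetical file)

class Features(BaseModel):
    # hypothetical input schema; adjust to the real feature set
    tenure_months: float
    monthly_orders: float
    avg_order_value: float

@app.post("/predict")
def predict(features: Features):
    X = [[features.tenure_months, features.monthly_orders, features.avg_order_value]]
    prob = model.predict_proba(X)[0][1]
    return {"churn_probability": float(prob)}
```

Assuming the file is saved as app.py, it can be run locally with `uvicorn app:app --reload` and called by POSTing the JSON features to /predict.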

Posted 4 weeks ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Ahmedabad

Work from Office

Kraft Heinz Company is looking for an Analyst, Data Scientist to join our dynamic team and embark on a rewarding career journey. Responsibilities include conducting data analysis to support business decisions, generating reports and insights from data trends, collaborating with teams on process improvements, and ensuring the accuracy and integrity of data.

Posted 4 weeks ago

Apply

8.0 - 13.0 years

40 - 50 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Join us for a rewarding opportunity to lead and innovate in Card Marketing Analytics, driving impactful strategies and growth. As a Quant Analytics Manager within the Card Marketing Analytics team, you will lead the analytics strategy for marketing campaigns, focusing on customer engagement and retention. You will mentor junior analysts and drive innovation projects to optimize Chase's marketing spend. Job Responsibilities: Own analytics strategy, planning, and execution for Lifecycle marketing campaigns. Provide leadership and mentorship to junior analysts and associates. Consult on experimental design and develop new strategies with stakeholders. Drive forecasting automation and process improvements using Alteryx, Tableau, etc. Perform deep-dive customer profile and segmentation analysis. Interpret results and present to stakeholders and senior management. Maintain a rigorous controls environment for accurate and timely results. Required Qualifications, Capabilities, and Skills: Bachelor's and Master's degree in a quantitative discipline. 8+ years of experience applying statistical methods to real-world problems. Experience with SQL and visualization techniques (Tableau, Power BI). Exceptional communication and presentation skills. Highly organized and able to prioritize multiple tasks. Excellent communicator and relationship builder with stakeholders. Able to solve unstructured problems independently. 3+ years of experience managing a team of analysts. Preferred Qualifications, Capabilities, and Skills: Card Marketing analytics experience. Experience in Python.

Posted 4 weeks ago

Apply

3.0 - 8.0 years

50 - 60 Lacs

Gurugram

Work from Office

Data Scientist - FinBox

FinBox: Where Fintech Meets Fun! Welcome to FinBox, the buzzing hive of tech innovation and creativity! Since our inception in 2017, FinBox has built some of the most advanced technologies in the financial services space that help lenders like banks, NBFCs and large enterprises build and launch credit products within a matter of days, not months or years. FinBox is a Series A funded company which is expanding globally with offices in India, Vietnam, Indonesia and the Philippines. Our vision is to build the best-in-class infrastructure for lending products and help banks and financial services companies across the world scale and launch credit programs that set a new standard in the era of digital finance. So far, we've helped our customers disburse billions of dollars in credit across unsecured and secured credit including personal loans, working capital loans, business loans, mortgage and education loans. FinBox solutions are already being used by over 100 companies to deliver credit to over 5 million customers every month.

Why Should You be a FinBoxer: Innovative Environment: At FinBox, we foster a culture of creativity and experimentation, encouraging our team to push the boundaries of what's possible in fintech. Impactful Work: Your contributions will directly impact the lives of millions, helping to provide fair and accessible credit to individuals and businesses alike. Growth Opportunities: We are a Series A funded startup with ample opportunities for growth, professional development and career advancement. Collaborative Culture: Join a diverse and inclusive team of experts who are passionate about making a difference and supporting one another.

Who's a Great FinBoxer: At FinBox, we're on the lookout for exceptional folks who are all about innovation and impact. If you're excited to shake things up in the banking and financial services world, keep reading! Creative Thinkers: If your brain is always bubbling with out-of-the-box ideas and wild solutions, you're our kind of person. We love disruptors who challenge the norm and bring fresh perspectives to the table. Customer Heroes: Our customers are our champions, and we need heroes who can understand their needs, deliver magical experiences, and go above and beyond to keep them happy. Team Players: We believe in the power of "we". If you thrive in a collaborative environment, value different viewpoints, and enjoy being part of a spirited, supportive team, you'll fit right in.

How You'll Contribute to Our Data Science Team: Assist in analyzing structured and unstructured data to generate meaningful insights. Support the development of predictive models and statistical analyses to improve product decisions. Build dashboards and visualizations to monitor key business metrics. Work on data gathering, pre-processing, and cleaning to ensure high-quality datasets. Conduct exploratory data analysis and hypothesis testing to drive data-driven decision-making. Automate reports and create simple yet effective data visualizations. Document data processes and findings in a structured and presentable format. Collaborate with cross-functional teams to translate business requirements into data-driven solutions. Apply foundational machine learning techniques such as regression analysis, clustering, and decision trees.

Who You Are: 3+ years of experience in data science, analytics, or a related field. Strong analytical thinking and problem-solving skills. Proficiency in Python/R, SQL, and Excel for data manipulation and analysis. Experience with data visualization tools (Tableau, Power BI, or similar). Familiarity with AWS, Git, and version control practices is a plus. Understanding of statistical concepts and basic machine learning models. Eagerness to learn, experiment, and grow in a fast-paced startup environment.
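The "foundational machine learning techniques" this posting lists (clustering and decision trees) can be illustrated with a short scikit-learn sketch; all data below is synthetic and only meant to show the workflow.

```python
# Minimal sketch: unsupervised clustering plus a simple decision tree
# on synthetic data, mirroring the techniques the posting names.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))            # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic label

# Unsupervised: segment rows into clusters
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Supervised: fit and evaluate a shallow decision tree
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

print("cluster sizes:", np.bincount(segments))
print("tree accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```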

Posted 4 weeks ago

Apply

3.0 - 5.0 years

7 - 11 Lacs

Noida

Work from Office

About the company: CoreOps.AI is a new-age company founded by highly experienced leaders from the technology industry with a vision to be the most compelling technology company that modernizes enterprise core systems and operations. Website: https://coreops.ai CoreOps is building the AI operating system for enterprises, accelerating modernization by 50% and cutting costs by 25% through intelligent automation, data orchestration, and legacy transformation. At CoreOps.AI, we believe in the quiet power of transformation, like the dandelion that seeds change wherever it lands. Inspired by this symbol of resilience and growth, our enterprise AI solutions are designed to take root seamlessly, enrich core operations, and spark innovation across your business. Founded by industry veterans with deep B2B expertise and a track record of scaling global tech businesses, CoreOps.AI brings the power of agentic AI to modernize legacy systems, accelerate digital transformation, and shape the future of intelligent enterprises. We are seeking a highly motivated and experienced data scientist to lead our team of Gen AI engineers. You are required to lead and manage all processes from data extraction, cleaning, and pre-processing to training models and deploying them to production. The ideal candidate will be passionate about artificial intelligence and stay up-to-date with the latest developments in the field. Key Responsibilities: Utilize frameworks like Langchain for developing scalable and efficient AI solutions. Integrate vector databases such as Azure Cognitive Search, Weaviate, or Pinecone to support AI model functionalities. Work closely with cross-functional teams to define problem statements and prototype solutions leveraging generative AI. Ensure robustness, scalability, and reliability of AI systems by implementing best practices in machine learning and software development. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world. Demonstrable history of devising and overseeing data-centred projects. Verify data quality, and/or ensure it via data cleaning. Supervise the data acquisition process if more data is needed. Find available datasets online that could be used for training. Define validation strategies, feature engineering and data augmentation pipelines for a given dataset. Train models and tune their hyperparameters. Analyse the errors of the model and design strategies to overcome them. Deploy models to production. Qualifications and Education Requirements: Bachelor's/Master's degree in computer science, data science, mathematics or a related field. At least 3-10 years of experience in building Gen AI applications. Preferred Skills: Proficiency in statistical techniques such as hypothesis testing, regression analysis, clustering, classification, and time series analysis to extract insights and make predictions from data. Proficiency with a deep learning framework such as TensorFlow, PyTorch or Keras. Specialization in Deep Learning (NLP) and statistical machine learning. Strong Python skills. Experience with developing production-grade applications. Familiarity with the Langchain framework and vector databases like Azure Cognitive Search, Weaviate, or Pinecone. Understanding of and experience with retrieval algorithms.
Experience with big data platforms and technologies such as Apache Hadoop, Spark, Kafka, or Hive for processing and analyzing large volumes of data efficiently and at scale. Familiarity with developing and deploying applications on Ubuntu/Linux systems. Excellent communication, negotiation, and interpersonal skills.
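As a hedged illustration of the retrieval step behind the RAG-style applications this posting describes, the sketch below embeds documents, retrieves the most similar ones for a query by cosine similarity, and assembles a prompt. The embeddings are random placeholders; a real system would use an embedding model and a vector database such as those named above rather than plain NumPy.

```python
# Minimal sketch of retrieval-augmented prompting: rank documents by cosine
# similarity to a query embedding, then build a grounded prompt.
# Embeddings here are random stand-ins for real model outputs.
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    # cosine similarity between the query and each document vector
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

docs = ["Policy A covers refunds.", "Policy B covers shipping.", "Policy C covers returns."]
doc_vecs = np.random.rand(len(docs), 384)   # placeholder document embeddings
query_vec = np.random.rand(384)             # placeholder query embedding

top = cosine_top_k(query_vec, doc_vecs, k=2)
context = "\n".join(docs[i] for i in top)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What does Policy A cover?"
print(prompt)   # this prompt would then be sent to the chosen LLM
```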

Posted 4 weeks ago

Apply

4.0 - 6.0 years

10 - 14 Lacs

Bengaluru

Work from Office

What You Will Need: Bachelor's/Master's degree in computer science (or similar degrees). 4-7 years of experience as a Data Scientist in a fast-paced organization, preferably B2C. Familiarity with Neural Networks, Machine Learning, etc. Familiarity with tools like SQL, R, Python, etc. Strong understanding of Statistics and Linear Algebra. Strong understanding of hypothesis/model testing and ability to identify common model-testing errors. Experience designing and running A/B tests and drawing insights from them. Proficiency in machine learning algorithms. Excellent analytical skills to fetch data from reliable sources to generate accurate insights. Experience in tech and product teams is a plus. Bonus points for: Experience in working on personalization or other ML problems. Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift. Algorithms, Apache Spark, Data Science, ETL, Large Scale, Linear Algebra, ML, Neural Networks, Python, SQL
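The A/B testing experience this posting asks for often comes down to comparisons like the one sketched below: a two-sided z-test on conversion rates for a control and a variant. The counts are invented for illustration.

```python
# Minimal sketch of a two-sample proportion z-test for an A/B experiment.
# Conversion counts and sample sizes are hypothetical.
from math import sqrt
from scipy.stats import norm

conv_a, n_a = 420, 10_000   # control: conversions, users
conv_b, n_b = 470, 10_000   # variant: conversions, users

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))   # two-sided p-value
print(f"lift={p_b - p_a:.4f}, z={z:.2f}, p={p_value:.4f}")
```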

Posted 4 weeks ago

Apply

3.0 - 8.0 years

35 - 40 Lacs

Hyderabad

Work from Office

At Uber, we empower people to earn and transact on the platform, and to do this across the globe for millions of customers, we need to be compliant with local regulations and manage the risk associated with fraud losses. The Risk Intelligence team is responsible for keeping the platform safe from fraud losses while minimizing friction for legitimate customers. This team drives scalable solutions to address the latest modus operandi driving losses while ensuring a frictionless experience for legitimate users. We aim to maintain losses below target and create magical customer experiences by leveraging data and technology to capture insights, identify opportunities, and ultimately prioritize product and engineering initiatives. Does this sound exciting to you? Are you a tested teammate, strategic problem solver, and executor? We want to hear from you. What the Candidate Will Do: Own the loss metrics for the assigned line of business/region and design logic and scalable solutions to mitigate fraud-causing modus operandi. Own new risk solutions and related experimentation, including plan creation, roll-out, and monitoring. Be an invaluable partner to cross-functional teams such as engineering, product management, and various data teams to deploy data quality across critical pipelines and to set up processes to triage data issues. Develop and track metrics and reporting functions to measure and monitor risk products on our platform. Effectively and proactively communicate insights and drive projects towards team goals. Proactively seek out opportunities to build new solutions to tackle risk. Basic Qualifications: 3+ years of experience in a risk-focused role such as product analytics, business analytics, business operations, or data science. Education in Engineering, Computer Science, Math, Economics, Statistics or equivalent experience. Experience in modern programming languages (Matlab, Python) or statistical languages (SQL, SAS, R). Past experience with a Product / Tech company serving millions of customers on multiple platforms and countries. Preferred Qualifications: SQL mastery; write efficient and complex code in SQL. Experience in Python/R and experimentation, A/B testing, and statistical modelling. Experience in Risk in a Product / Tech company. Proven ability to handle and visualise large datasets, explore and utilize raw data feeds. Love of data - you just go get the data you need and turn it into an insightful story. A well-organized, structured approach to problem-solving. Strong sense of ownership, accountability, and entrepreneurial spirit. Great communicator, problem-solver and confident in decision making. Independent and autonomous, while still a strong teammate. Enthusiastic, self-starting and thrives in changing, agile environments. Liaise with Product and Engineering counterparts to launch and impact new products. *Accommodations may be available based on religious and/or medical conditions, or as required by applicable law.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

45 - 50 Lacs

Bengaluru

Work from Office

The professional will assist in the maintenance and rollout of the Deal Team's "Data-Driven transformation" project, with the main objective of implementing iDeal globally. This person will support the iDeal Global Rollout, aiding regional teams to integrate iDeal into their end-to-end deal-making process. Responsibilities: Assist in managing the iDeal global rollout project. Help in creating and updating the iDeal user support platform. Contribute ideas on how to apply AI in end-to-end deal-making processes. Assist in designing a better experience and journey to build the deal using iDeal. Analyze data generated and gathered by iDeal. Develop data-driven pre-approved deal frameworks. Provide reporting and deal performance assessments by geography, vertical, and sales orgs. Document best practices and lessons learned. Map external data (macroeconomic and market level). Day-to-day management of iDeal admin modules: Update drivers and parameters (e.g., actuals, acquiring fees, yields). Update Visa definitions and requirements (e.g., P&L breakouts, KPI calculations and thresholds). Manage users and entitlements (e.g., different profiles for users/admins). Qualifications: Bachelor's degree or higher in Computer Science, Statistics, Mathematics, or a related field. At least 6 years of experience in data analysis and modeling. Strong communication and presentation skills.

Posted 4 weeks ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Bengaluru

Work from Office

A Day in Your Life at MKS: As an Associate Data Scientist in our Global Service Team, you will partner with Engineering, Technical Support, Product Marketing, Quality, and Service Operations to drive data-driven improvements to the service solutions portfolio. You Will Make an Impact By: Collecting, processing, and performing statistical analysis on quality metrics to identify areas of improvement, patterns, and product trends across a broad portfolio. Designing, developing, and maintaining Power BI dashboards and databases to turn data into actionable insights. Generating regular and ad-hoc reports to articulate recommendations to cross-functional teams. Collaborating with technical support and engineering teams to document key items for continuous improvement of repair processes. Monitoring and ensuring data integrity, accuracy, and consistency across all dashboards, databases, and reports. Travel Requirements: Up to 15% travel is required. Skills You Bring: Bachelor of Science in Engineering degree or equivalent experience. Minimum 1 year of related experience in Power BI. Proven ability to analyze complex datasets and implement models to support business intelligence solutions. Proficiency in creating interactive dashboards, reports, and data visualizations using Power BI. Knowledge of and familiarity with programming languages such as Python for data analysis automation. Excellent verbal and written communication skills with a high level of attention to detail. Physical Demands & Working Conditions: Must be able to remain in a stationary position for 85% of the time. Constantly operates a computer and other office productivity machinery. This job operates in a professional office environment.

Posted 4 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Remote Work: Hybrid Overview: At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about locally and globally. We've only begun reimagining the future for our people, our customers, and the world. Let's create tomorrow together. We are seeking a highly skilled and motivated Data Scientist (LLM Specialist) to join our AI/ML team. This role is ideal for an individual passionate about Large Language Models (LLMs), workflow automation, and customer-centric AI solutions. You will be responsible for building robust ML pipelines, designing scalable workflows, interfacing with customers, and independently driving research and innovation in the evolving agentic AI space. Responsibilities: LLM Development & Optimization: Train, fine-tune, evaluate, and deploy Large Language Models (LLMs) for various customer-facing applications. Pipeline & Workflow Development: Build scalable machine learning workflows and pipelines that facilitate efficient data ingestion, model training, and deployment. Model Evaluation & Performance Tuning: Implement best-in-class evaluation metrics to assess model performance, optimize for efficiency, and mitigate biases in LLM applications. Customer Engagement: Collaborate closely with customers to understand their needs, design AI-driven solutions, and iterate on models to enhance user experiences. Research & Innovation: Stay updated on the latest developments in LLMs, agentic AI, reinforcement learning with human feedback (RLHF), and generative AI applications. Recommend novel approaches to improve AI-based solutions. Infrastructure & Deployment: Work with MLOps tools to streamline deployment and serve models efficiently using cloud-based or on-premise architectures, including Google Vertex AI for model training, deployment, and inference. Foundational Model Training: Experience working with open-weight foundational models, leveraging pre-trained architectures, fine-tuning on domain-specific datasets, and optimizing models for performance and cost-efficiency. Cross-Functional Collaboration: Partner with engineering, product, and design teams to integrate LLM-based solutions into customer products seamlessly. Ethical AI Practices: Ensure responsible AI development by addressing concerns related to bias, safety, security, and interpretability in LLMs. Programming Skills: Proficiency in Python and experience with ML frameworks like TensorFlow and PyTorch. LLM Expertise: Hands-on experience in training, fine-tuning, and deploying LLMs (e.g., OpenAI's GPT, Meta's LLaMA, Mistral, or other transformer-based architectures). Foundational Model Knowledge: Strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning techniques, hyperparameter optimization, and model distillation.
Data Pipeline Development: Strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow. Cloud & MLOps: Experience deploying ML models in cloud environments like AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes. Model Serving & Optimization: Proficiency in model quantization, pruning, and knowledge distillation to improve deployment efficiency and scalability. Research & Problem-Solving: Ability to conduct independent research, explore novel solutions, and implement state-of-the-art ML techniques. Strong Communication Skills: Ability to translate technical concepts into actionable insights for non-technical stakeholders. Version Control & Collaboration: Proficiency in Git, CI/CD pipelines, and working in cross-functional teams. Qualifications: Bachelor's degree required; an advanced degree (Master's or PhD) in Statistics, Mathematics, Data Science, Computer Science or a related discipline strongly preferred. 2-5 years of experience. Statistical modeling and algorithms. Machine learning experience, including deep learning, neural networks, genetic algorithms, etc. Working knowledge of Big Data technologies (Hadoop, Cassandra, Spark, R); hands-on experience preferred. Data mining, data visualization, and analysis tools, including R. Work/project experience in sensors, IoT, or the mobile industry highly preferred. Excellent verbal and written communication. Comfortable presenting to senior management and CxO-level executives. Self-motivated self-starter with a high degree of work ethic. To protect candidates from falling victim to online fraudulent activity involving fake job postings and employment offers, please be aware our recruiters will always connect with you via @zebra.com email accounts. Applications are only accepted through our applicant tracking system, and we only accept personal identifying information through that system. Our Talent Acquisition team will not ask you to provide personal identifying information via e-mail or outside of the system. If you are a victim of identity theft, contact your local police department.

Posted 4 weeks ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Its fun to work in a company where people truly BELIEVE in what they are doing! Were committed to bringing passion and customer focus to the business. Responsibilities: Design and implement advanced solutions utilizing Large Language Models (LLMs). Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions. Conduct research and stay informed about the latest developments in generative AI and LLMs. Develop and maintain code libraries, tools, and frameworks to support generative AI development. Participate in code reviews and contribute to maintaining high code quality standards. Engage in the entire software development lifecycle, from design and testing to deployment and maintenance. Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility. Possess strong analytical and problem-solving skills. Demonstrate excellent communication skills and the ability to work effectively in a team environment. Primary Skills: Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling , Q&A and chatbots, search, Document AI, summarization, and content generation. AND/OR Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis. Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct , agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities. Familiarity with Open-source LLMs, including tools like TensorFlow/ Pytorch and huggingface . Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization . Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred. Application Development: Proficiency in Python, Docker, FastAPI /Django/Flask, and Git. If you like wild growth and working with happy, enthusiastic over-achievers, youll enjoy your career with us! Not the right fit? Let us know youre interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!

Posted 4 weeks ago

Apply

5.0 - 6.0 years

7 - 8 Lacs

Chennai

Work from Office

Senior Data Scientist Chennai, India Who we are: INVIDI Technologies Corporation is the world's leading developer of software transforming television all over the world. Our two-time Emmy Award-winning technology is widely deployed by cable, satellite, and telco operators. We provide a device-agnostic solution delivering ads to the right household no matter what program or network you're watching, how you're watching, or whether you're in front of your TV, laptop, cell phone or any other device. INVIDI created the multi-billion-dollar addressable television business that today is growing rapidly globally. INVIDI is right at the heart of the very exciting and fast-paced world of commercial television; companies benefiting from our software include DirecTV, Dish Network, and Verizon, networks such as CBS/Viacom and A&E, advertising agencies such as Ogilvy and Publicis, and advertisers such as Chevrolet and Allstate. INVIDI's world-class technology solutions are known for their flexibility and adaptability. These traits allow INVIDI partners to transform their video content delivery network, revamping legacy systems without significant capital or hardware investments. Our clients count on us to provide superior capabilities, excellent service, and ease of use. The goal of developing a unified video ad tech platform is a big one, and the right Senior Data Scientist (like you) will flourish in INVIDI's creative, inspiring, and supportive culture. It is a demanding, high-energy, and fast-paced environment. About the role: As a Senior Data Scientist, you have a grounding in data analysis using tools in the Python ecosystem and AWS. You also have a business sense for asking and answering fundamental questions to help shape key strategic decisions. This role involves thinking critically and strategically about video ad delivery as a technology, as a business, and as an operation to help broadcasters, distributors, and media companies transform and evolve their advertising practices through the use of data. Using proven design patterns, you will help identify opportunities for INVIDI and our clients to operate more efficiently and produce innovative and actionable quantitative models and analyses to address the challenges of marketing effectiveness and measurement. As a data scientist, you will do more than just crunch the numbers. You will work with Engineers, Product Managers, Sales Associates and Marketing teams to adjust business practices according to your findings. Identifying the problem is only half the job; you also need to figure out the solution. You must be versatile, display leadership qualities and be enthusiastic to take on new problems as we continue to push technology forward. The position will report directly to the Technical Manager of Software Development and will be based in our Chennai, India office. Key responsibilities: Build business intelligence dashboards using AWS QuickSight, Tableau, Excel and Power BI. Design, develop and deploy machine learning models using linear regression, classification, neural networks, and time series forecasting.
Engage broadly within the organization to identify, prioritize, frame, and structure reporting problems. Help define analytical direction and influence the direction of data engineering and infrastructure work. Conduct end-to-end analyses, including data gathering, requirements specification, processing, analysis, ongoing deliverables, and presentations. Translate analysis results into business recommendations. Develop a comprehensive understanding of video content inventory, scheduling, customer segmentation, video distribution, viewership data structures and metrics. Recommend and implement strategies to solve business problems when availability of data is limited. Work with very large data sets to glean useful insights that are valuable to the business. You must have: Bachelor's degree in a computer science, information management systems, or data science discipline, or equivalent practical experience. 5-6 years of experience in statistical data analysis, linear models, multivariate analysis, stochastic models, and sampling methods. 4-5 years of experience in data modelling in SQL databases. 3-4 years of experience in data visualization tools like AWS QuickSight, Tableau, Power BI. 3-4 years of experience in EDA in notebook-based environments like Jupyter Notebook using Python Pandas, NumPy, Matplotlib, Seaborn. 3+ years of experience with AWS services like S3, IAM, Redshift, Athena, and AWS SageMaker. 2-3 years of experience designing data lakes and cloud data warehouses using Snowflake, Redshift and BigQuery. 2-3 years of experience with one or more cloud service providers like AWS and GCP. Strong experience in A/B testing, experimental design, and statistical inference. It would be very good if you have experience in: Building marketing analytics dashboards. Working with Scrum teams in an Agile way. MLOps practices and model deployment pipelines. The following domains: video content delivery, viewership measurement, advertising technology, digital advertising. Physical Requirements: INVIDI is a conscious, clean, well-organized, and supportive office environment. Prolonged periods of sitting at a desk and working on a computer are normal. Note: Final candidates must successfully clear INVIDI's background screening requirements. Final candidates must be legally authorized to work in India. INVIDI has reopened its offices on a flexible hybrid model. Ready to join our team? Apply today!
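As a rough illustration of the notebook-style EDA and time-series work this posting describes, the sketch below loads a viewership-style series with pandas, resamples it to daily totals, and overlays a 7-day rolling mean. The CSV path and column names are hypothetical.

```python
# Minimal sketch of notebook EDA on a time series: resample to daily totals
# and plot a rolling average. "impressions.csv" and its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("impressions.csv", parse_dates=["timestamp"])
daily = (
    df.set_index("timestamp")["impressions"]
      .resample("D").sum()
)

ax = daily.plot(label="daily impressions")
daily.rolling(7).mean().plot(ax=ax, label="7-day rolling mean")
ax.set_xlabel("date")
ax.set_ylabel("impressions")
ax.legend()
plt.show()
```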

Posted 4 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Pune, Chennai

Work from Office

Position Overview: We are seeking a Senior Data Scientist Engineer with experience bringing highly scalable enterprise SaaS applications to market. This is a uniquely impactful opportunity to help drive our business forward and directly contribute to long-term growth at Virtana. If you thrive in a fast-paced environment, take initiative, embrace proactivity and collaboration, and you're seeking an environment for continuous learning and improvement, we'd love to hear from you! Virtana is a remote-first work environment, so you'll be able to work from the comfort of your home while collaborating with teammates on a variety of connectivity tools and technologies. Role Responsibilities: Research and test machine learning approaches for analyzing large-scale distributed computing applications. Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness. Work closely with other functional teams to integrate implemented systems into the SaaS platform. Suggest innovative and creative concepts and ideas that would improve the overall platform. Job Location: Pune, Chennai or Remote. Qualifications: The ideal candidate must have the following qualifications: 6+ years of experience in practical implementation and deployment of large customer-facing ML-based systems. MS or M.Tech (preferred) in applied mathematics/statistics; CS or Engineering disciplines are acceptable but must come with strong quantitative and applied mathematical skills. In-depth working familiarity, beyond coursework, with classical and current ML techniques, both supervised and unsupervised learning techniques and algorithms. Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimization. Experience in modeling graph structures related to spatiotemporal systems. Programming skills in Python are a must. Experience in understanding and usage of LLM models and prompt engineering is preferred. Experience in developing and deploying on cloud (AWS, Google, or Azure). Good verbal and written communication skills. Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow.

Posted 4 weeks ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Job Description Skills : B.E./ B. Tech / M. Tech/ MCA in computer science, artificial intelligence, or a related field 6+ years of IT experience with a min of 3+ years in Data Science (AI/ML) Strong programming skills in Python Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) Hands-on AI/ML modeling experience of complex datasets combined with a strong understanding of the theoretical foundations of AI/ML(Research Oriented). Expertise in most of the following areas: supervised & unsupervised learning, deep learning, reinforcement learning, federated learning, time series forecasting, Bayesian statistics, and optimization. Hands-on experience on design, and optimizing LLM, natural language processing (NLP) systems, frameworks, and tools. Building RAG application independently using available open source LLM models. Comfortable working in the cloud and high-performance computing environments (e.g., AWS/Azure/GCP, Databricks). Key Skills : Data Science (AI/ML) ,python ,LLM ,any cloud environment (AWS/Azure/GCP, Databricks)

Posted 4 weeks ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Bengaluru

Remote

Role & responsibilities We are looking for an experienced Data Scientist with expertise in Generative AI to join our team. In this role, you will develop and optimize AI/ML models, fine-tune large language models (LLMs), and build innovative AI-driven solutions. You will work closely with cross-functional teams to apply Gen AI techniques for text, image, and speech generation, driving business impact through intelligent automation, personalization, and data-driven insights. - Design and develop and deploy algorithms for generative models using deep learning techniques for given business problem. - Collaborate with cross-functional teams to integrate generative AI solutions into existing workflow systems. - Research and stay up-to-date on the latest advancements in generative AI technologies and methodologies. - Optimize and fine-tune generative models for performance and efficiency. - Troubleshoot and resolve issues related to generative AI models and implementations - Create and maintain documentation for generative AI models and their applications. - Communicate complex technical concepts and findings to non-technical stakeholders.

Posted 4 weeks ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Develop, test and support future-ready data solutions for customers across industry verticals. Develop, test, and support end-to-end batch and near real-time data flows/pipelines. Demonstrate understanding of data architectures, modern data platforms, big data, analytics, cloud platforms, data governance and information management and associated technologies. Communicates risks and ensures understanding of these risks. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Graduate with a minimum of 5+ years of related experience required. Experience in modelling and business system designs. Good hands-on experience on DataStage, Cloud-based ETL Services. Have great expertise in writing TSQL code. Well-versed with data warehouse schemas and OLAP techniques. Preferred technical and professional experience Ability to manage and make decisions about competing priorities and resources. Ability to delegate where appropriate. Must be a strong team player/leader. Ability to lead Data transformation projects with multiple junior data engineers. Strong oral written and interpersonal skills for interacting throughout all levels of the organization. Ability to communicate complex business problems and technical solutions.

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

As a Data Scientist you will identify business trends and problems through complex big data analysis. You will interpret results from multiple sources using a variety of techniques, ranging from simple data aggregation via statistical analysis to complex data mining independently. You will design, develop and implement the most valuable business solutions for the organization. You will prepare big data, implements data modules and develops database to support the business solutions. Responsibilities: Work on existing Digital Products and understand and enhance the current intelligent models and innovatively improve them. Work with stakeholders to identify opportunities from data, drive innovation and a culture of Invention Disclosures. Translate data into actionable insights to empower confident decisions, product creation & development, drive optimization, marketing techniques, business strategies and outcomes. Research, develop, plan and implement predictive AI/ML/DL based algorithms, Optimization techniques for IoT, IIOT, Robotics and other digital products. Develop, manage and maintain Machine Learning and Deep Learning models and algorithms to apply to data sets. Assess the effectiveness, efficacy and accuracy of new data sources and data gathering techniques. Coordinate with different functional teams to implement models and monitor outcomes. Develop processes and tools to monitor and analyze model performance and data accuracy. Motivation and drive to seek out for new projects and opportunities. Qualifications: Masters Degree - A Ph.D. in Electrical Engineering, Computer Science, or related fields is required. Strong oral and written communication skills. A collaborative mindset to excel in a team environment. Ability to take complex problem objectives and come up with innovative and flexible solutions. Proven track record of driving changes. Ability to reconcile complex possibly conflicting tasks and come up with realistic solutions Experience with distributed data/computing and orchestration tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Beam, Druid, Airflow, Presto etc. Expertise with statistical techniques and their applications in business. Coding knowledge and experience with Python is a minimum. C, C++, Java, R and JavaScript is a plus. 3 years of experience in working with any of the deep learning frameworks like Pytorch, Tensorflow, Caffe, etc. 3 years of a proven track record of independent thinking and complex problem-solving. Work can include papers in reputed conferences or journals and/or patent applications. 3 years of a proven track record of R&D research in an industrial setting with multiple patents/patent applications. 3 years of experience in robotics i.e., automation through reinforcement learning. 3 years of experience in optimization theory like black-box optimization methods. 3 years of experience in designing algorithms for IoT devices with resource and power constraints. At Wesco, we build, connect, power and protect the world. As a leading provider of business-to-business distribution, logistics services and supply chain solutions, we create a world that you can depend on. Our Company’s greatest asset is our people. Wesco is committed to fostering a workplace where every individual is respected, valued, and empowered to succeed. We promote a culture that is grounded in teamwork and respect. With a workforce of over 20,000 people worldwide, we embrace the unique perspectives each person brings. 
Through comprehensive benefits and active community engagement, we create an environment where every team member has the opportunity to thrive. Learn more about Working at Wesco here and apply online today! Founded in 1922 and headquartered in Pittsburgh, Wesco is a publicly traded (NYSE: WCC) FORTUNE 500® company. Wesco International, Inc., including its subsidiaries and affiliates (“Wesco”) provides equal employment opportunities to all employees and applicants for employment. Employment decisions are made without regard to race, religion, color, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, or other characteristics protected by law. US applicants only, we are an Equal Opportunity Employer. Los Angeles Unincorporated County Candidates Only: Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance and the California Fair Chance Act.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Requirements

Key Skills & Competencies: Programming: Proficiency in Python and SQL, with experience in time series data manipulation, analysis, and automation. Data Analysis: Expertise in data wrangling, cleaning, transformation, and exploratory data analysis (EDA). Machine Learning: Hands-on experience with supervised and unsupervised learning, model evaluation, and hyperparameter tuning. Deep Learning: Basic understanding of neural networks and familiarity with frameworks like TensorFlow or PyTorch. Visualization: Ability to create insightful visualizations using tools like matplotlib, Plotly, or similar. Mathematics & Statistics: Strong foundation in probability, statistics, linear algebra, and hypothesis testing. Software Engineering: Familiarity with Git, modular coding practices, API integration, and basic CI/CD workflows. Cloud Platforms: Exposure to AWS, GCP, or Azure is a plus. Business & Communication: Ability to translate data insights into business value and communicate findings clearly to both technical and non-technical stakeholders.

Core Responsibilities: Develop models, algorithms, and analytics for assigned projects, ensuring technical soundness by applying solid engineering principles and adhering to business standards, procedures, and product/program requirements. Utilize state-of-the-art methodologies to perform tasks efficiently and effectively, including conducting research to explore and introduce new technologies in data acquisition and analysis. Rapidly prototype multiple approaches using Data Science, Artificial Intelligence, and Machine Learning concepts, applying sound judgment to select the most suitable method for full-scale development. Prepare comprehensive technical documentation throughout the development phase, aligned with engineering policies and procedures.
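The model evaluation and hyperparameter tuning this posting lists typically looks like the cross-validated grid search sketched below; the dataset is synthetic and the parameter grid is only an example.

```python
# Minimal sketch of cross-validated hyperparameter tuning with scikit-learn
# on a synthetic classification dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("held-out AUC:", search.score(X_te, y_te))
```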

Posted 4 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Details: Job Description: Stefanini Group is a multinational company with a global presence in 41 countries and 44 languages, specializing in technological solutions. We believe in digital innovation and agility to transform businesses for a better future. Our diverse portfolio includes consulting, marketing, mobility, AI services, service desk, field service, and outsourcing solutions. Job Requirements: Role: Data Scientist. Experience: 6-9 years. Location: Pune only. Interview: 2 rounds. Mandatory Skills: Experience in deep learning engineering (mostly MLOps). Strong NLP/LLM experience and processing text using LLMs. Proficient in PySpark/Databricks and Python programming. Building backend applications (data processing etc.) using Python and deep learning frameworks. Deploying models and building APIs (FastAPI, Flask). Experience working with GPUs. Working knowledge of vector databases such as Milvus, Azure Cognitive Search, Qdrant, etc. Experience with transformers and working with Hugging Face models like Llama, Mixtral, and embedding models. Good to Have: Knowledge and experience in Kubernetes, Docker, etc. Cloud experience working with VMs and Azure storage. Sound data engineering experience. Pune: Hybrid. Shift: 1 PM to 10 PM.

Posted 4 weeks ago

Apply

1.0 - 3.0 years

5 - 9 Lacs

Chennai

Work from Office

Experience in data science, machine learning, and AI model training. Strong skills in Python, TensorFlow, Hugging Face Transformers, and Google Vertex AI. Prior experience in RCM, healthcare data, or insurance analytics is a strong plus.

Posted 4 weeks ago

Apply

0 years

4 - 6 Lacs

Kasaragod, Kerala

Remote

Job Description: Data Scientist / AI Engineer ( Full-Time) Company: Ubon Dairy Products Location: Remote / Hybrid (Kasaragod preferred) Project Type: Logistics Optimization & AI Routing Engine Start Date: Immediate ✅ Responsibilities Design and build a custom AI engine for dynamic delivery route planning and sales optimization. Integrate external APIs (Google Maps, SAP, mobile app APIs). Implement predictive models for order forecasting, stock planning, and delivery performance. Clean and process delivery, outlet, and route data for training AI models. Collaborate with app developers and SAP integrators to deploy models into production. Track and improve model performance through live metrics and feedback. Must-Have Skills Strong experience in Python (Pandas, Scikit-learn, TensorFlow or PyTorch) Hands-on with logistics optimization (TSP, VRP, clustering) Experience using Google Maps API (Directions, Distance Matrix, Geocoding) Proficient in SQL and working with ERP data Understanding of AI model deployment and performance tuning Bonus Skills Familiarity with SAP (especially sales/inventory modules) Experience building route optimization engines for FMCG or dairy Prior experience with B2B delivery operations or mobile-based field apps How to Apply Send your resume and portfolio of past AI/logistics projects to: [email protected] 97445 55999 Job Type: Full-time Pay: ₹420,000.00 - ₹600,000.00 per year Work Location: In person
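As an illustrative sketch of the TSP-style route planning this posting describes, the code below builds a delivery route with a simple nearest-neighbour heuristic. The coordinates are invented; a production engine would use real travel times (e.g., from a distance-matrix API) and a proper VRP solver.

```python
# Minimal sketch of a nearest-neighbour delivery-route heuristic (TSP-style).
# Outlet coordinates are hypothetical placeholders.
import math

outlets = {
    "depot": (12.50, 75.00),
    "A": (12.52, 75.03),
    "B": (12.48, 75.05),
    "C": (12.55, 74.98),
}

def dist(p, q):
    # straight-line distance; a real engine would use road travel times
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(start="depot"):
    route, current = [start], start
    remaining = set(outlets) - {start}
    while remaining:
        nxt = min(remaining, key=lambda o: dist(outlets[current], outlets[o]))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

print(nearest_neighbour_route())   # e.g. ['depot', 'A', 'B', 'C']
```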

Posted 4 weeks ago

Apply

4.0 years

3 - 7 Lacs

Coimbatore

On-site

Industry: IT. Qualification: Master's in Data Science, AI, or ML; background in Computer Science or Electronics. Required Skills: AI/ML, Python, Cloud Platforms. Working Shift: 10am to 7pm IST. City: Coimbatore. Country: India. Name of the position: Data Scientist. Location: Coimbatore. No. of resources needed: 01. Mode: Full-time. Years of experience: 4+ years.

Overview: We are seeking a highly skilled and motivated Data Scientist to join our growing AI and Data team. In this role, you will design, develop, and deploy advanced AI and machine learning models using modern frameworks and cloud platforms. You will work closely with cross-functional teams to integrate AI-driven solutions into core business processes and stay abreast of the latest trends in generative AI, LLMs, and agentic AI.

Key Responsibilities: Design, develop, and deploy AI/ML models using technologies such as GenAI, FastAPI, and AWS/GCP services (e.g., Lambda, SageMaker, Microsoft AI Studio). Implement model versioning, monitoring, and performance tracking to maintain data accuracy and reliability. Collaborate with engineers, data scientists, and domain experts to integrate AI solutions into business operations. Build and maintain high-quality, secure, and scalable data pipelines. Continuously review and apply the latest advancements in agentic AI, LLMs, and AI research to improve model capabilities. Optimize machine learning workflows for cloud environments (AWS, GCP, or Azure). Contribute to AI architecture planning and best practices.

Required Skills: Minimum 4+ years of experience in data science and AI/ML. Strong programming skills in Python. Hands-on experience with cloud platforms (AWS, GCP, or Azure). Proficiency in machine learning algorithms, model training, and deployment. Experience with model versioning and monitoring tools (e.g., MLflow, Weights & Biases). Strong problem-solving and analytical capabilities. Experience with REST APIs and ML model integration. Domain experience in medicine or insurance. Familiarity with Large Language Models (LLMs) and Agentic AI systems. Exposure to backend technologies and microservice development. Advanced degree (Master's preferred) in Data Science, Artificial Intelligence, Machine Learning, Computer Science, or a related field.

Posted 4 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities Anchor ML development track in a client project Data collection, profiling, EDA & data preparation AI Model development, experimentation, tuning & validation Present findings to business users & project management teams Propose ML based solution approaches & estimates for new use cases Contribute to AI based modules in Infosys solutions development Explore new advances in AI continuously and execute PoCs Mentoring – Guide junior team members & evangelize AI in organization Technical skills Programming: Python, R, SQL ML algorithms: Statistical ML algorithms Deep Neural Network architectures Model ensembling Generative AI models ML for Responsible AI AI domains - NLP, speech, computer vision, structured data Learning patterns – supervised, unsupervised, reinforcement learning Tools for data analysis, auto ML, model deployment & scaling, model fine tuning Knowledge of different Model fine tuning approaches for Large models Knowledge of datasets, pre-built models available in open community and 3rd party providers Knowledge of software development & architectures Knowledge of hyperscalers & their AI capabilities – Azure, AWS, GCP Knowledge of Model Quantization and Pruning Past experience playing a Data Scientist role

Posted 4 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies