Home
Jobs

1452 Data Scientist Jobs - Page 19

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description
The Senior Software Engineer will work on a Balanced Product Team and collaborate with the Product Manager, Product Designer, and other Software Engineers to deliver analytic solutions. The Software Engineer will be responsible for the development and ongoing support/maintenance of the analytic solutions.
Product and Requirements Management: Participate in and/or lead the development of requirements, features, user stories, use cases, and test cases. Participate in stand-up operations meetings. Author process and design documents.
Design/Develop/Test/Deploy: Work with the Business Customer, Product Owner, Architects, Product Designer, Software Engineers, and Security Controls Champion on solution design, development, and deployment.
Operations: Generate metrics, perform user access authorization, perform password maintenance, and build deployment pipelines.
Incident, Problem, and Change/Service Requests: Participate in and/or lead incident, problem, change, and service request-related activities, including root cause analysis (RCA) and proactive problem management/defect prevention activities.
Responsibilities
Willingness to collaborate daily with team members. A strong curiosity about how to best use technology to amaze and delight our customers. Development experience in at least some technologies from each of the following categories, and the ability to apply them: Languages: Java / Kotlin / JS / TS / Python / Other. Frontend frameworks: Angular / React / Vue / Other. Backend frameworks: Spring / Node / Other. Proven experience understanding, practicing, and advocating for software engineering disciplines from eXtreme Programming (XP), Clean Code, Software Craftsmanship, and Lean, including: pair/extreme programming, test-first/Test-Driven Development (TDD), evolutionary design, Minimum Viable Product, and tooling such as FOSSA, SonarQube, 42Crunch, etc.
Qualifications
5+ years of experience in Software Engineering. Bachelor's degree in computer science, computer engineering, or a combination of education and equivalent experience. 2+ years of experience developing for and deploying to GCP cloud platforms. Highly effective in working with other technical experts, Product Managers, UI/UX Designers, and business stakeholders. Delivered products that include web front-end development: JavaScript, CSS, frameworks like Angular, etc. Comfortable with Continuous Integration/Continuous Delivery tools and pipelines, e.g. Tekton, Terraform, Jenkins, Cloud Build, etc. Experience with machine learning, mathematical modeling, and data analysis is a plus. Experience with CA Agile Central (Rally), backlogs, iterations, user stories, or similar Agile tools. Experience in the development of microservices. Understanding of fundamental data modeling. Strong analytical and problem-solving skills.

Posted 1 week ago

Apply

7.0 - 12.0 years

14 - 24 Lacs

Gurugram

Hybrid

Naukri logo

Gen AI + DS + ML Ops
Job Title: Generative AI and Data Science Engineer with MLOps Expertise
Location: Gurgaon, India
Employment Type: Full-time
About the Role: We are seeking a versatile and highly skilled Generative AI and Data Science Engineer with strong MLOps expertise. This role combines deep technical knowledge in data science and machine learning with a focus on designing and deploying scalable, production-level AI solutions. You will work with cross-functional teams to drive AI/ML projects from research and prototyping through to deployment and maintenance, ensuring model robustness, scalability, and efficiency.
Responsibilities:
Generative AI Development and Data Science: Design, develop, and fine-tune generative AI models for various applications such as natural language processing, image synthesis, and data augmentation. Perform exploratory data analysis (EDA) and statistical modeling to identify trends, patterns, and actionable insights. Collaborate with data engineering and product teams to create data pipelines for model training, testing, and deployment. Apply data science techniques to optimize model performance and address real-world business challenges.
Machine Learning Operations (MLOps): Implement MLOps best practices for managing and automating the end-to-end machine learning lifecycle, including model versioning, monitoring, and retraining. Build, maintain, and optimize CI/CD pipelines for ML models to streamline development and deployment processes. Ensure scalability, robustness, and security of AI/ML systems in production environments. Develop tools and frameworks for monitoring model performance and detecting anomalies post-deployment.
Research and Innovation: Stay current with advancements in generative AI, machine learning, and MLOps technologies and frameworks. Identify new methodologies, tools, and technologies that could enhance our AI and data science capabilities. Engage in R&D initiatives and collaborate with team members on innovative projects.
Requirements:
Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. PhD is a plus.
Technical Skills: Proficiency in Python and familiarity with machine learning libraries (e.g., TensorFlow, PyTorch, Keras, scikit-learn). Strong understanding of generative AI models (e.g., GANs, VAEs, transformers) and deep learning techniques. Experience with MLOps frameworks and tools such as MLflow, Kubeflow, Docker, and CI/CD platforms. Knowledge of data science techniques for EDA, feature engineering, statistical modeling, and model evaluation. Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure) for deploying and scaling AI/ML models.
Soft Skills: Ability to collaborate effectively across teams and communicate complex technical concepts to non-technical stakeholders. Strong problem-solving skills and the ability to innovate in a fast-paced environment.
Preferred Qualifications: Prior experience in designing and deploying large-scale generative AI models. Proficiency in SQL and data visualization tools (e.g., Tableau, Power BI). Experience with model interpretability and explainability frameworks.
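The posting above names MLflow for model versioning and lifecycle tracking. A minimal sketch of what that looks like in practice is shown below, assuming a local MLflow tracking setup and a scikit-learn model on synthetic data; the experiment name and hyperparameters are illustrative, not part of the listing.

```python
# Minimal MLflow model-versioning sketch (assumes mlflow and scikit-learn installed).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-clf")              # experiment name is a placeholder
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)       # record hyperparameters per run
    mlflow.log_metric("accuracy", acc)          # record evaluation metrics per run
    mlflow.sklearn.log_model(model, "model")    # store the model as a versioned artifact
```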

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai

On-site

GlassDoor logo

Ford/GDIA Mission and Scope: At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow's transportation. Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That's where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making. The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics.
About the Role: You will be part of the FCSD analytics team, playing a critical role in leveraging data science to drive significant business impact within Ford Customer Service Division. As a Data Scientist, you will translate complex business challenges into data-driven solutions. This involves partnering closely with stakeholders to understand problems, working with diverse data sources (including within GCP), developing and deploying scalable AI/ML models, and communicating actionable insights that deliver measurable results for Ford.
Qualifications: At least 3 years of relevant professional experience applying data science techniques to solve business problems, including demonstrated hands-on proficiency with SQL and Python. Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Mathematics, Engineering, Economics). Hands-on experience in conducting statistical data analysis (EDA, forecasting, clustering, hypothesis testing, etc.) and applying machine learning techniques (classification/regression, NLP, time-series analysis, etc.).
Technical Skills: Proficiency in SQL, including the ability to write and optimize queries for data extraction and analysis. Proficiency in Python for data manipulation (Pandas, NumPy), statistical analysis, and implementing machine learning models (Scikit-learn, TensorFlow, PyTorch, etc.). Working knowledge of a cloud environment (GCP, AWS, or Azure) is preferred for developing and deploying models. Experience with version control systems, particularly Git. Nice to have: exposure to Generative AI / Large Language Models (LLMs).
Functional Skills: Proven ability to understand and formulate business problem statements. Ability to translate business problem statements into data science problems. Strong problem-solving ability, with the capacity to analyze complex issues and develop effective solutions. Excellent verbal and written communication skills, with a demonstrated ability to translate complex technical information and results into simple, understandable language for non-technical audiences.
Strong business engagement skills, including the ability to build relationships, collaborate effectively with stakeholders, and contribute to data-driven decision-making. Build an in-depth understanding of the business domain and data sources, demonstrating strong business acumen. Extract, analyze, and transform data using SQL for insights. Apply statistical methods and develop ML models to solve business problems. Design and implement analytical solutions, contributing to their deployment, ideally leveraging Cloud environments. Work closely and collaboratively with Product Owners, Product Managers, Software Engineers, and Data Engineers within an agile development environment. Integrate and operationalize ML models for real-world impact. Monitor the performance and impact of deployed models, iterating as needed. Present findings and recommendations effectively to both technical and non-technical audiences to inform and drive business decisions.
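The workflow described above (statistical analysis plus classification/regression models in Python) follows a standard scikit-learn pattern. Below is a minimal sketch of that pattern, using synthetic data in place of the internal datasets referenced in the listing; the feature counts and model choice are assumptions for illustration only.

```python
# Minimal classification workflow sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)     # a simple, interpretable baseline model
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # precision/recall/F1 summary
```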

Posted 1 week ago

Apply

0 years

4 - 8 Lacs

Chennai

On-site

GlassDoor logo

Career Area: Technology, Digital and Data
Job Description: Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do – but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here – we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.
Role Definition: Performs analytical tasks and initiatives on huge amounts of data to support data-driven business decisions and development.
Responsibilities: Directing the data gathering, data mining, and data processing processes in huge volumes; creating appropriate data models. Exploring, promoting, and implementing semantic data capabilities through Natural Language Processing, text analysis, and machine learning techniques. Leading the definition of requirements and scope for data analyses; presenting and reporting possible business insights to management using data visualization technologies. Conducting research on data model optimization and algorithms to improve the effectiveness and accuracy of data analyses.
Skill Descriptors:
Business Statistics: Knowledge of the statistical tools, processes, and practices to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions. Level Working Knowledge: Explains the basic decision process associated with specific statistics. Works with basic statistical functions on a spreadsheet or a calculator. Explains reasons for common statistical errors, misinterpretations, and misrepresentations. Describes characteristics of sample size, normal distributions, and standard deviation. Generates and interprets basic statistical data.
Accuracy and Attention to Detail: Understanding of the necessity and value of accuracy; ability to complete tasks with high levels of precision. Level Working Knowledge: Accurately gauges the impact and cost of errors, omissions, and oversights. Utilizes specific approaches and tools for checking and cross-checking outputs. Processes limited amounts of detailed information with good accuracy. Learns from mistakes and applies lessons learned. Develops and uses checklists to ensure that information goes out error-free.
Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve these problems. Level Working Knowledge: Approaches a situation or problem by defining the problem or issue and determining its significance. Makes a systematic comparison of two or more alternative solutions. Uses flow charts, Pareto charts, fishbone diagrams, etc. to disclose meaningful data patterns. Identifies the major forces, events, and people impacting and impacted by the situation at hand. Uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions.
Machine Learning: Knowledge of principles, technologies, and algorithms of machine learning; ability to develop, implement, and deliver related systems, products, and services. Level Basic Understanding: Explains the definition and objectives of machine learning. Describes the algorithms and logic of machine learning. Distinguishes between machine learning and deep learning. Gives several examples of the implementation of machine learning.
Programming Languages: Knowledge of basic concepts and capabilities of programming; ability to use tools, techniques and platforms in order to write and modify programming languages. Level Basic Understanding: Describes the basic concepts of programming and program construction activities. Uses programming documentation including program specifications in order to maintain standards. Describes the capabilities of major programming languages. Identifies locally relevant programming tools. Query and Database Access Tools: Knowledge of data management systems; ability to use, support and access facilities for searching, extracting and formatting data for further use. Level Working Knowledge: Defines, creates and tests simple queries by using associated command language in a specific environment. Applies appropriate query tools used to connect to the data warehouse. Obtains and analyzes query access path information and query results. Employs tested query statements to retrieve, insert, update and delete information. Works with advanced features and functions including sorting, filtering and making simple calculations. Requirements Analysis: Knowledge of tools, methods, and techniques of requirement analysis; ability to elicit, analyze and record required business functionality and non-functionality requirements to ensure the success of a system or software development project. Level Working Knowledge: Follows policies, practices and standards for determining functional and informational requirements. Confirms deliverables associated with requirements analysis. Communicates with customers and users to elicit and gather client requirements. Participates in the preparation of detailed documentation and requirements. Utilizes specific organizational methods, tools and techniques for requirements analysis. Posting Dates: June 13, 2025 - June 15, 2025 Caterpillar is an Equal Opportunity Employer. Not ready to apply? Join our Talent Community.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Chennai

On-site

GlassDoor logo

The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The candidate should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.
Master's degree in Computer Science, Operational Research, Statistics, Applied Mathematics, or another engineering discipline. Proficient in querying and analyzing large datasets using BigQuery on GCP. Strong Python skills for data wrangling and automation. 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim. 2+ years of experience with both supervised and unsupervised machine learning techniques. 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or QlikSense. 1+ years of experience with the SQL programming language and relational databases.
Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making by the FCSD business team. Design and implement data analysis and ML models, hypotheses, algorithms, and experiments to support data-driven decision-making. Apply various analytics techniques like data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc., to analyze data and uncover meaningful patterns, relationships, and trends. Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation. Research, study, and stay updated in the domain of data science, machine learning, analytics tools and techniques, etc., and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness.
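The posting above calls for querying large datasets with BigQuery on GCP from Python. A minimal sketch of that pattern is shown below, assuming Application Default Credentials are configured; the project, dataset, and table names are hypothetical placeholders, not real Ford resources.

```python
# Minimal sketch: run a BigQuery SQL query from Python and load results into pandas.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")     # placeholder project id
sql = """
    SELECT vehicle_id, COUNT(*) AS event_count
    FROM `my-gcp-project.telemetry.events`              -- hypothetical table
    GROUP BY vehicle_id
    ORDER BY event_count DESC
    LIMIT 100
"""
df = client.query(sql).to_dataframe()   # requires the pandas/db-dtypes extras installed
print(df.head())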

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

About the Role: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring the accuracy and integrity of data from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person
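The scraping workflow described above (fetch a page, parse it, extract structured records) can be sketched with requests and BeautifulSoup as below. The URL and CSS selectors are placeholders; a real scraper would also add rate limiting, retries, and robots.txt / terms-of-service checks.

```python
# Minimal web-scraping sketch with requests + BeautifulSoup (placeholder URL/selectors).
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/listings"  # hypothetical page to scrape

resp = requests.get(URL, headers={"User-Agent": "data-pipeline/1.0"}, timeout=30)
resp.raise_for_status()                          # fail fast on HTTP errors

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for card in soup.select("div.listing"):          # placeholder selector for one record
    title = card.select_one("h2")
    price = card.select_one(".price")
    rows.append({
        "title": title.get_text(strip=True) if title else None,
        "price": price.get_text(strip=True) if price else None,
    })
print(f"scraped {len(rows)} records")            # rows can then be cleaned and stored
```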

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring data accuracy and integrity from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

About the Role: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring the accuracy and integrity of data from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: Strong proficiency in Python, especially in web scraping. Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. Familiarity with RESTful APIs, JSON, and XML. Experience working with databases (SQL or NoSQL). Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Experience handling large-scale scraping projects. Background in using version control systems (e.g., Git). Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹20,000.00 - ₹30,000.00 per month Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

1.0 years

0 - 0 Lacs

India

On-site

GlassDoor logo

Job Summary: We are looking for a skilled Python Developer with strong expertise in web scraping and data extraction. You will be responsible for designing and maintaining scalable scraping systems, handling large volumes of data, and ensuring data accuracy and integrity from various online sources. Responsibilities: Develop and maintain Python scripts for scraping structured and unstructured data from websites and APIs. Build robust, scalable, and efficient scraping solutions using libraries such as BeautifulSoup, Scrapy, Selenium, or Playwright. Monitor and optimize scraping performance and manage data pipelines. Handle website structure changes, anti-bot protections, and CAPTCHA bypassing when necessary. Store, clean, and normalize scraped data using databases (e.g., PostgreSQL, MongoDB) or cloud solutions. Collaborate with data analysts, engineers, and product managers to define data needs and deliver insights. Ensure compliance with legal and ethical standards of data collection. Required Skills: · Strong proficiency in Python, especially in web scraping. · Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and browser behavior. · Familiarity with RESTful APIs, JSON, and XML. · Experience working with databases (SQL or NoSQL). · Basic knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker) is a plus. Preferred Qualifications: · Bachelor's degree in Computer Science, Information Technology, or related field. · Experience handling large-scale scraping projects. · Background in using version control systems (e.g., Git). · Understanding of data privacy laws (e.g., GDPR, CCPA). Job Type: Full-time Pay: ₹25,000.00 - ₹30,000.00 per month Benefits: Health insurance Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Experience: Data visualization: 1 year (Required) Work Location: In person

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

JD for Data Science: We are seeking an experienced Data Scientist to join our growing analytics and AI team. This role will involve working closely with cross-functional teams to deliver actionable insights, build predictive models, and drive data-driven decision-making across the organization. The ideal candidate is someone who combines strong analytical skills with hands-on experience in statistical modeling, machine learning, and data engineering best practices.
Key Responsibilities:
Understand business problems and translate them into data science solutions.
Build, validate, and deploy machine learning models for prediction, classification, clustering, etc.
Perform deep-dive exploratory data analysis and uncover hidden insights.
Work with large, complex datasets from multiple sources; perform data cleaning and preprocessing.
Design and run A/B tests and experiments to validate hypotheses.
Collaborate with data engineers, business analysts, and product managers to drive initiatives from ideation to production.
Present results and insights to non-technical stakeholders in a clear, concise manner.
Contribute to the development of reusable code libraries, templates, and documentation.
Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
3–7 years of hands-on experience in data science, machine learning, or applied statistics.
Proficiency in Python or R, and hands-on experience with libraries such as scikit-learn, pandas, NumPy, XGBoost, and TensorFlow/PyTorch.
Solid understanding of machine learning algorithms, statistical inference, and data mining techniques.
Strong SQL skills; experience working with large-scale databases (e.g., Snowflake, BigQuery, Redshift).
Experience with data visualization tools like Power BI, Tableau, or Plotly.
Working knowledge of cloud platforms like AWS, Azure, or GCP is preferred.
Familiarity with MLOps tools and model deployment best practices is a plus.
Preferred Qualifications:
Exposure to time series analysis, NLP, or deep learning techniques.
Experience working in domains like healthcare, fintech, retail, or supply chain.
Understanding of version control (Git) and Agile development methodologies.
Why Join Us:
Opportunity to work on impactful, real-world problems.
Be part of a high-performing and collaborative team.
Exposure to cutting-edge technologies in data and AI.
Career growth and continuous learning environment.
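One responsibility above is designing and running A/B tests. A minimal sketch of evaluating such a test with a two-sample t-test is shown below; the synthetic per-user metric values stand in for real experiment logs, and group sizes and effect size are assumptions.

```python
# Minimal A/B test evaluation sketch using Welch's two-sample t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=10.0, scale=2.0, size=5000)    # e.g. minutes per session, group A
treatment = rng.normal(loc=10.2, scale=2.0, size=5000)  # e.g. minutes per session, group B

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
lift = treatment.mean() / control.mean() - 1
print(f"lift={lift:.2%}, t={t_stat:.2f}, p={p_value:.4f}")
```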

Posted 1 week ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Naukri logo

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements – viewing them through the lens of business acumen, and converting this knowledge into a data problem. You’ll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses. Responsibilities: Lead the development and implementation of AI/ML, Generative AI and LLM projects, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of Generative AI applications. Engage effectively with customers, showcasing and demonstrating the relevance and impact of Generative AI applications in their businesses. Collaborate with cross-functional teams to integrate AI/ML solutions into cloud environments (Azure, GCP, AWS, etc.). In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modelling techniques, you'll have the opportunity to create models that defy convention – models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally. Additionally, you're not just building and validating models – you’re deploying them as code to applications and processes, ensuring that the model(s) you've selected sustains its business value throughout its lifecycle. Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset – ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation. If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. 
However, equally as important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Experience: Minimum of 5 years of experience in Data Science and Machine Learning, with expertise in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. Familiarity with business processes in 1-2 domains (e.g., Financial, Telecom, Retail, Manufacturing) to quickly understand requirements and develop solutions. Graduate/Postgraduate in computer science, computer engineering, or equivalent, with a minimum of 10 years of experience in the IT industry. Past experience in responding to or solutioning for RFPs, customer proposals, and customer presentations/orals. Strong understanding of Transformers, LLMs, fine-tuning, agents, and RAG techniques. Experience using LLM models on cloud platforms (Azure OpenAI, AWS Bedrock, GCP Vertex AI). Experience in AI/ML, with a focus on Generative AI and Large Language Models. Proven track record in working with major cloud platforms (Azure, GCP, AWS). Understanding of how to deploy LLMs on cloud/on-premise and use APIs to build industry solutions. Strong knowledge of programming, specifically in Python / R. Well equipped to understand machine learning algorithms and versatile enough to implement them in pure NumPy, TensorFlow, or PyTorch as required. Good understanding of mathematics fundamentals such as statistics, probability, linear algebra, and calculus. Experience in data mining and providing data insights using visualization. Knowledge of MySQL or NoSQL databases will be an added advantage.
Preferred Technical and Professional Experience: Certification in one or more of the hyperscalers (Azure, AWS, Google Cloud Platform). Open Certified Data Scientist (Open CDS) – L2/L3. Familiarity with Agentic AI frameworks.
Being You: Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming! Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore.
About Us: Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming.
Recent Milestone: In pursuit of excellence, Eloelo reached a significant milestone by raising $22Mn in October 2023 from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors.
Why Eloelo? Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space. Rub shoulders with the stars! Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda from the Kapil Sharma Show; that is the level of celebrity collaboration you can expect. Work with a world-class, high-performance team that constantly pushes boundaries and limits and redefines what is possible. Have fun and work at the same place, with an amazing work culture, flexible timings, and a vibrant atmosphere.
We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You'll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team.
About the Role: We are looking for a highly skilled and creative Data Scientist to join our growing team and help drive data-informed decisions across our entertainment platforms. You will leverage advanced analytics, machine learning, and predictive modeling to unlock insights about our audience, content performance, and product engagement, ultimately shaping the way millions of people experience entertainment.
Key Responsibilities: Develop and deploy machine learning models to solve key business problems (e.g., personalization, recommendation systems, churn prediction). Analyze large, complex datasets to uncover trends in content consumption, viewer preferences, and engagement behaviors. Partner with product, marketing, engineering, and content teams to translate data insights into actionable strategies. Design and execute A/B and multivariate experiments to evaluate the impact of new features and campaigns. Build dashboards and visualizations to monitor key metrics and provide stakeholders with self-service analytics tools. Collaborate on the development of audience segmentation, lifetime value modeling, and predictive analytics. Stay current with emerging technologies and industry trends in data science and entertainment.
Qualifications: Master's or PhD in Computer Science, Statistics, Mathematics, Data Science, or a related field.
1+ years of experience as a Data Scientist, ideally within media, streaming, gaming, or entertainment tech. Proficiency in programming languages such as Python or R. Strong SQL skills and experience working with large-scale datasets and data warehousing tools (e.g., Snowflake, BigQuery, Redshift). Experience with machine learning libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Solid understanding of experimental design and statistical analysis techniques. Ability to clearly communicate complex technical findings to non-technical stakeholders. Preferred Qualifications: Experience building recommendation engines, content-ranking algorithms, or personalization models in an entertainment context. Familiarity with user analytics tools such as Mixpanel, Amplitude, or Google Analytics. Prior experience with data pipeline and workflow tools (e.g., Airflow, dbt). Background in natural language processing (NLP), computer vision, or audio analysis is a plus. Why Join Us: Shape the future of how audiences engage with entertainment through data-driven storytelling. Work with cutting-edge technology on high-impact, high-visibility projects. Join a collaborative team in a dynamic and fast-paced environment where creativity meets data science.

Posted 1 week ago

Apply

6.0 - 10.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Naukri logo

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. Tezo is seeking passionate AI Engineers who are excited about harnessing the power of Generative AI to transform our company and provide cutting-edge solutions for our clients. Join us in revolutionizing enterprises by building intelligent, generative solutions that leverage AI/ML. If you've ever dreamed of contributing to impactful projects on a large scale, this is the opportunity for you! In this role, you will be an integral part of the Machine Learning Platforms/Data Science team, focusing on developing, testing, and deploying generative AI models.
What Makes Our AI/ML Practice Unique: Purpose-driven: We actively respond to our customers' evolving needs with innovative solutions. Collaborative: We foster a positive and engaging work environment where collective ideas thrive. Accountable: We take ownership of our performance, both individually and as a team. Service Excellence: We maximize our potential through continuous learning and improvement. Trusted: We empower individuals to make informed decisions and take calculated risks.
Job Summary: We are looking for a dedicated Lead Data Scientist with a strong background in Generative AI to join our team. You will support product, leadership, and client teams by providing insights derived from advanced data analysis and generative modeling. In this role, you will collaborate closely with the development team, architects, and product owners to build efficient generative models and manage their lifecycle using the appropriate technology stack.
Core Requirements: At least 6 years of experience working with geographically distributed teams. 2+ years of experience working in a client-facing role on AI/ML. Demonstrable experience in leading a substantive area of work, or line management of a team. Proven experience in building production-grade Retrieval-Augmented Generation (RAG) solutions, with hands-on experience in advanced RAG techniques for retrieval, re-ranking, etc. Experience building GenAI applications using LangChain and LlamaIndex, and familiarity with vector stores and Large Language Models. Experience in fine-tuning Large Language Models (LLMs) for business use cases is preferred. Minimum of 4 years of experience in developing end-to-end classical machine learning and NLP projects. Demonstrated experience in deploying ML solutions in production using cloud services like Azure and AWS. Business understanding, stakeholder management, and team leadership skills. Strong practical expertise in Python and SQL needed for data science projects.
Join us at Tezo to be part of a dynamic team committed to driving innovation through Generative AI solutions!
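The RAG work described above boils down to a retrieve-then-augment loop: embed documents, fetch the most relevant ones for a query, and assemble an augmented prompt for an LLM. Below is a deliberately library-agnostic sketch of that loop; the embed() stub is a placeholder for a real embedding model, and a production system would use a vector store and an actual LLM call rather than printing the prompt.

```python
# Library-agnostic sketch of the retrieval step behind a RAG pipeline.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash characters into a fixed-size unit vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

docs = [
    "Tezo provides digital and AI solutions.",
    "RAG augments an LLM prompt with retrieved context.",
    "MLOps covers deployment and monitoring of models.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "How does retrieval-augmented generation work?"
scores = doc_vecs @ embed(query)                  # cosine similarity (unit vectors)
top = [docs[i] for i in np.argsort(scores)[::-1][:2]]  # keep the 2 best matches

prompt = ("Answer using the context below.\n\nContext:\n- " + "\n- ".join(top)
          + f"\n\nQuestion: {query}")
print(prompt)                                     # this prompt would be sent to an LLM
```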

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Naukri logo

2+ years of hands-on analytics experience. Hands-on experience with one or more data analytics tools, including Python, R, SAS, SQL, and Spark. Good understanding of the credit card and financial services industry. Proficiency in Tableau is a plus.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Kolkata, Pune, Bengaluru

Hybrid

Naukri logo

Job Description: Graduate degree in a quantitative field (CS, Statistics, Applied Mathematics, Machine Learning, or a related discipline).
• Good programming skills in Python with strong working knowledge of Python's numerical, data analysis, and AI frameworks such as NumPy, Pandas, Scikit-learn, etc.
• Experience with language models (Llama 1/2/3, T5, Falcon) and frameworks such as LangChain or similar.
• Candidate must be aware of the entire evolution of NLP (from traditional language models to modern Large Language Models), training data creation, training setup, and fine-tuning.
• Candidate must be comfortable interpreting research papers and architecture diagrams of language models.
• Candidate must be comfortable with LoRA, RAG, instruct fine-tuning, quantization, etc.
• Predictive modelling experience in Python (time series / multivariable / causal).
• Experience applying various machine learning techniques and understanding the key parameters that affect their performance.
• Experience building systems that capture and utilize large data sets to quantify performance via metrics or KPIs.
• Excellent verbal and written communication.
• Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Roles & Responsibilities: Lead a team of Data Engineers, Analysts, and Data Scientists to carry out the following activities:
• Connect with internal/external POCs to understand the business requirements.
• Coordinate with the right POCs to gather all relevant data artifacts, anecdotes, and hypotheses.
• Create project plans and sprints for milestones/deliverables.
• Spin up VMs and create and optimize clusters for data science workflows.
• Create data pipelines to ingest data effectively.
• Assure the quality of data with proactive checks and resolve the gaps.
• Carry out EDA and feature engineering, and define performance metrics prior to running relevant ML/DL algorithms.
• Research whether similar solutions have already been developed before building ML models.
• Create optimized data models to query relevant data efficiently.
• Run relevant ML/DL algorithms for business goal seeking.
• Optimize and validate these ML/DL models to scale.
• Create light applications, simulators, and scenario builders to help the business consume the end outputs.
• Create test cases and test the code pre-production for possible bugs, and resolve these bugs proactively.
• Integrate and operationalize the models in the client ecosystem.
• Document project artifacts and log failures and exceptions.
• Measure and articulate the impact of DS projects on business metrics and fine-tune the workflow based on feedback.
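The posting above asks for comfort with LoRA-style fine-tuning of LLMs. A minimal sketch of attaching LoRA adapters with the Hugging Face peft library is shown below; the base model name and target modules are assumptions that depend on the specific LLM (here a Llama-style model), and actual training code and data are omitted.

```python
# Minimal LoRA adapter setup sketch using Hugging Face transformers + peft.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"            # placeholder; requires access and GPU memory
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                     # low-rank dimension of the adapters
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections in Llama-style models
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # only the small adapter weights are trainable
```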

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 8 Lacs

Pune

Work from Office

Naukri logo

We are seeking a Data Scientist with 2 to 5 years of experience to join our AI/ML team. The ideal candidate will work under the guidance of a team lead to develop and implement machine learning solutions, with a focus on Natural Language Processing (NLP) and generative AI applications.
Key Responsibilities: Develop and maintain NLP models and chatbot solutions under the direction of the team lead. Implement machine learning algorithms and deep learning models using Python. Process, clean, and validate data to ensure high-quality model inputs. Create and optimize generative AI solutions for various business applications. Collaborate with team members to improve existing models and develop new solutions. Document technical processes and maintain the codebase. Participate in code reviews and team discussions. Debug and troubleshoot models to improve performance. Monitor model performance and implement necessary updates. Prepare briefings for client presentations.
Candidate Specification: Bachelor's degree in Computer Science, Data Science, or a related field. 2+ years of professional experience in data science or machine learning. Experience with large language models and transformer architectures. Familiarity with chatbot development using RASA (preferred) or other platforms. Knowledge of MLOps practices and tools. Familiarity with cloud platforms (AWS/Azure/GCP). Knowledge of REST APIs. Background in implementing production-grade ML solutions. Strong proficiency in Python programming. Hands-on experience with NLP libraries and frameworks (e.g., NLTK, spaCy, Transformers). Experience with machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Knowledge of chatbot development and conversational AI. Understanding of generative AI concepts and applications. Familiarity with version control systems (e.g., Git). Strong analytical and problem-solving skills.
Technical Skills: Programming Languages: Python (required). ML/AI Frameworks: TensorFlow, PyTorch, scikit-learn. NLP Tools: NLTK, spaCy, Hugging Face Transformers. Version Control: Git. Data Processing: Pandas, NumPy. Development Tools: VS Code.
Soft Skills: Excellent oral and written communication skills. Good at client interaction and communication. Presentation skills. Good comprehension and articulation. Strong team player with the ability to take direction from the lead. Detail-oriented approach to problem-solving. Ability to work in a fast-paced environment. Good time management skills. Eagerness to learn and adapt to new technologies.
Perks and Benefits: 5-day work week. Good work environment. Great learning opportunities. Work-life balance.
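The posting above names Hugging Face Transformers among its NLP tools. A minimal sketch of a common task with that library (text classification on user messages, the kind of signal a chatbot pipeline might use) is below; the default pipeline model is downloaded on first use, so model choice and latency are assumptions to revisit for production.

```python
# Minimal sketch of text classification with the Hugging Face Transformers pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model on first use
messages = [
    "The delivery was late and support never replied.",
    "Great experience, the bot resolved my issue quickly.",
]
for msg, result in zip(messages, classifier(messages)):
    print(f"{result['label']:>8} ({result['score']:.2f}): {msg}")
```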

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Note: Please apply only if you have 3 years or more of relevant experience in Data Science (excluding internships), are comfortable working 5 days a week from Gurugram, Haryana, and are an immediate joiner or currently serving your notice period.
About Eucloid: At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries, from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we're pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients.
What You'll Do: Analyze structured and unstructured datasets to identify actionable insights and trends. Develop and deploy machine learning models and statistical solutions for business challenges. Collaborate with data engineers to design scalable data pipelines and architectures. Translate complex data findings into clear, actionable recommendations for stakeholders. Contribute to building data-driven tools and dashboards for clients. Stay updated on emerging trends in AI/ML and apply learnings to projects.
What Makes You a Fit: Academic Background: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field. Technical Expertise: 3-5 years of hands-on experience in data science, machine learning, or analytics. Proficiency in Python/R and SQL for data analysis and modeling. Familiarity with ML frameworks (e.g., Scikit-learn, TensorFlow) and cloud platforms (AWS/GCP/Azure). Experience with data manipulation tools (Pandas, NumPy) and visualization tools (Tableau, Power BI). Basic understanding of deploying models and working with large-scale data platforms. Extra Skills: Strong problem-solving mindset and ability to thrive in agile environments. Excellent communication skills to convey technical concepts to non-technical audiences. Collaborative spirit, with experience working in cross-functional teams.
Why You'll Love It Here: Innovate with the Best Tech: Work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation. Impact Industry Giants: Deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products. Collaborate with a World-Class Team: Join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration. Accelerate Your Growth: Access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority. Work in a Culture of Excellence: Be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated.
About Our Leadership: Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad). Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow).
Key Benefits: Competitive salary and performance-based bonus.
Comprehensive benefits package, including health insurance and flexible work hours. Opportunities for professional development and career growth.
Location: Gurugram. Submit your resume to saurabh.bhaumik@eucloid.com with the subject line "Application: Data Scientist." Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Linkedin logo

The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers.
Position: Data Scientist
Location: Trivandrum (Remote or Hybrid)
Type: Full-time
Start Date: Immediate
Company: Turilytix.ai
About the Role: Join us as a Data Scientist and work on challenging ML problems across paper manufacturing, retail, food, and IT infrastructure. Use real-world data to drive predictive intelligence with BIG-AI.
Responsibilities:
• Clean, engineer, and model sensor & telemetry data
• Build ML models for prediction and classification
• Develop explainability using SHAP and LIME
• Collaborate with product/engineering to operationalize models
• Analyze raw data: assess quality, cleanse, and structure it for downstream processing
• Design accurate and scalable prediction algorithms
• Collaborate with the engineering team to bring analytical prototypes to production
• Generate actionable insights for business impact
Required Skills:
• Python, Pandas, Scikit-learn
• Time-series & anomaly detection
• SHAP / LIME / interpretable ML
• SQL, Jupyter Notebooks
• Familiarity with Big Data frameworks and visualization tools (Cassandra, Hadoop, Spark, Power BI)
• Bonus: DVC, Git, Airflow
Why Work With Us:
• Hands-on with real-world sensor data
• No red tape, just impact
• Remote work and global deployment
• Drive AI adoption without complexity
Email your resume/GitHub: hr@turilytix.ai
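The responsibilities above include building explainability with SHAP for models trained on sensor data. A minimal sketch of that is below; the synthetic "sensor channels" and the random-forest model are placeholders standing in for real telemetry and whatever model the team actually ships.

```python
# Minimal SHAP explainability sketch for a tree-based model on synthetic sensor-like data.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                        # six hypothetical sensor channels
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)                # efficient explainer for tree ensembles
shap_values = explainer.shap_values(X[:50])          # per-sample, per-feature contributions
print(np.abs(shap_values).mean(axis=0))              # rough global feature-importance estimate
```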

Posted 1 week ago

Apply

7.0 - 12.0 years

30 - 32 Lacs

Pune

Hybrid

Naukri logo

Experience: 7-12 years
Location and Work Mode: Pune
Job Description
About this Role: We are seeking a highly skilled and motivated Data Scientist to join our team. This role is ideal for individuals who are passionate about leveraging data to drive innovation and create intelligent, scalable solutions. The successful candidate will work across a variety of data science domains, including diagnostic, predictive, and prescriptive analytics, and will contribute to the development of autonomous cognitive systems. This is an exciting opportunity to be part of a forward-thinking team that values continuous learning and collaboration.
Who are you? A data enthusiast with a strong foundation in machine learning, deep learning, and statistical modelling. A collaborative problem-solver who thrives in cross-functional teams and is comfortable engaging with stakeholders. A continuous learner with a passion for exploring new tools, technologies, and methodologies. An effective communicator who can translate complex data insights into actionable business strategies. A self-starter with a proactive mindset and the ability to work independently in a dynamic environment.
What you will do: Manage and process large-scale data on both on-premise and cloud platforms (preferably AWS). Design and implement diagnostic models using decision theory and causal inference techniques (e.g., DAG, ADMG, deterministic SME). Build and productise diagnostic systems for scalable reuse across the organisation. Develop robust predictive and prescriptive analytics models using AI techniques such as ML, DL, NLP, ES, and RL. Create intelligent systems that determine the Next Best Action based on prescriptive analytics. Drive the development of autonomous cognitive systems that proactively recommend actions with minimal human intervention. Apply advanced deep learning techniques including CNNs, RNNs, and MLPs to solve complex business problems. Utilise machine learning and deep learning libraries such as TensorFlow, PyTorch, Scikit-learn, NumPy, Pandas, Statsmodels, Theano, and XGBoost. Collaborate with stakeholders to understand business needs and deliver data-driven solutions. Present insights and findings using visualisation tools like Tableau and Power BI.
What skills you need: Proficiency in programming languages: Python, R, SQL. Experience with frameworks: TensorFlow, Keras, Scikit-learn. Strong understanding of statistical modelling: regression, classification, clustering, time series. Hands-on experience with cloud platforms: AWS (preferred), Azure. Familiarity with big data technologies and environments. Excellent communication and stakeholder management skills. Strong analytical and problem-solving abilities. At least one AWS certification is preferred.
What skills you will learn: Advanced applications of causal inference and decision theory in real-world scenarios. Building and deploying scalable AI systems in cloud environments. Designing autonomous systems that integrate feedback loops for continuous learning. Enhancing stakeholder engagement through data storytelling and visualisation. Exposure to cutting-edge tools and frameworks in the AI and ML ecosystem.

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Responsibilities:
1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing.
2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (i.e., Retrieval-Augmented Generation (RAG)).
3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS).
4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows.
5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points.
6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure-as-code (e.g., Terraform, Pulumi).
7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities.
8. Focus on creating high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets.
9. Stay updated on the latest advancements in LLMs, indexing techniques, and cloud technologies to integrate them into cutting-edge applications.
10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure.
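To make the retrieval side of RAG described above concrete, the following is a minimal sketch, assuming the sentence-transformers `all-MiniLM-L6-v2` embedding model and a toy document set; it illustrates the indexing-and-retrieval pattern with FAISS rather than any specific production pipeline.

```python
# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# and fetch the nearest chunks for a user query. Model choice is illustrative.
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 3 business days.",
    "Refund requests must include the original order ID.",
    "Premium support is available 24/7 for enterprise plans.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
doc_vectors = encoder.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])     # exact L2 search index
index.add(doc_vectors)

query = "How long does invoice processing take?"
query_vec = encoder.encode([query], convert_to_numpy=True).astype("float32")

distances, ids = index.search(query_vec, k=2)
retrieved = [documents[i] for i in ids[0]]
print(retrieved)  # these chunks would be passed as context to an LLM prompt
```

In a full system the retrieved chunks would be inserted into an LLM prompt, and the index would typically live in a managed vector database such as Pinecone or Weaviate rather than an in-memory FAISS index.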

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu

On-site

SimplyHired logo

Career Area: Technology, Digital and Data

Job Description:
Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do – but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here – we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

Role Definition
Performs analytical tasks and initiatives on huge amounts of data to support data-driven business decisions and development.

Responsibilities
Directing the data gathering, data mining, and data processing processes at high volume; creating appropriate data models.
Exploring, promoting, and implementing semantic data capabilities through Natural Language Processing, text analysis, and machine learning techniques.
Leading the definition of requirements and scope of data analyses; presenting and reporting possible business insights to management using data visualization technologies.
Conducting research on data model optimization and algorithms to improve the effectiveness and accuracy of data analyses.

Skill Descriptors

Business Statistics: Knowledge of the statistical tools, processes, and practices to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions.
Level Working Knowledge: Explains the basic decision process associated with specific statistics. Works with basic statistical functions on a spreadsheet or a calculator. Explains reasons for common statistical errors, misinterpretations, and misrepresentations. Describes characteristics of sample size, normal distributions, and standard deviation. Generates and interprets basic statistical data.

Accuracy and Attention to Detail: Understanding the necessity and value of accuracy; ability to complete tasks with high levels of precision.
Level Working Knowledge: Accurately gauges the impact and cost of errors, omissions, and oversights. Utilizes specific approaches and tools for checking and cross-checking outputs. Processes limited amounts of detailed information with good accuracy. Learns from mistakes and applies lessons learned. Develops and uses checklists to ensure that information goes out error-free.

Analytical Thinking: Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve these problems.
Level Working Knowledge: Approaches a situation or problem by defining the problem or issue and determining its significance. Makes a systematic comparison of two or more alternative solutions. Uses flow charts, Pareto charts, fishbone diagrams, etc. to disclose meaningful data patterns. Identifies the major forces, events, and people impacting and impacted by the situation at hand. Uses logic and intuition to make inferences about the meaning of the data and arrive at conclusions.

Machine Learning: Knowledge of principles, technologies, and algorithms of machine learning; ability to develop, implement, and deliver related systems, products, and services.
Level Basic Understanding: Explains the definition and objectives of machine learning. Describes the algorithms and logic of machine learning. Distinguishes between machine learning and deep learning. Gives several examples of the implementation of machine learning.

Programming Languages: Knowledge of basic concepts and capabilities of programming; ability to use tools, techniques, and platforms in order to write and modify programming languages.
Level Basic Understanding: Describes the basic concepts of programming and program construction activities. Uses programming documentation, including program specifications, in order to maintain standards. Describes the capabilities of major programming languages. Identifies locally relevant programming tools.

Query and Database Access Tools: Knowledge of data management systems; ability to use, support, and access facilities for searching, extracting, and formatting data for further use.
Level Working Knowledge: Defines, creates, and tests simple queries using the associated command language in a specific environment. Applies appropriate query tools used to connect to the data warehouse. Obtains and analyzes query access path information and query results. Employs tested query statements to retrieve, insert, update, and delete information. Works with advanced features and functions including sorting, filtering, and making simple calculations.

Requirements Analysis: Knowledge of tools, methods, and techniques of requirements analysis; ability to elicit, analyze, and record required business functional and non-functional requirements to ensure the success of a system or software development project.
Level Working Knowledge: Follows policies, practices, and standards for determining functional and informational requirements. Confirms deliverables associated with requirements analysis. Communicates with customers and users to elicit and gather client requirements. Participates in the preparation of detailed documentation and requirements. Utilizes specific organizational methods, tools, and techniques for requirements analysis.

Posting Dates: June 13, 2025 - June 15, 2025

Caterpillar is an Equal Opportunity Employer. Not ready to apply? Join our Talent Community.
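As a small, hedged illustration of the "Business Statistics" working knowledge described above (sample size, means, standard deviation), the snippet below computes basic descriptive statistics with pandas on made-up sales figures; the data is invented for illustration only.

```python
# Minimal descriptive-statistics sketch: sample size, mean, and standard
# deviation per group, computed with pandas on illustrative data.
import pandas as pd

sales = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "APAC", "APAC"],
    "units":  [120, 135, 98, 110, 150, 142],
})

summary = sales.groupby("region")["units"].agg(["count", "mean", "std"])
print(summary)
```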

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Analyze complex data sets and develop AI/ML models using Python to drive actionable insights and predictive analytics. Apply statistical techniques and machine learning algorithms to solve business problems and optimize data-driven strategies.

Place: Chennai
Experience: 8-12 Years
No. of openings: 1
Skills: Python, AI/ML Models, Analytics

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics.

About the role:
Work with low to minimal supervision to solve business problems using data and analytics. Work in multiple business domain areas including Customer Experience and Service, Operations, Finance, Sales, and Marketing. Work with various business stakeholders to understand and document requirements. Design an analytical framework to provide insights into a business problem. Explore and visualize multiple data sets to understand the data available for problem solving. Build end-to-end data pipelines to handle and process data at scale. Build machine learning models and/or statistical solutions. Build predictive models. Use Natural Language Processing to extract insight from text. Design database models (if a data mart or operational data store is required to aggregate data for modeling). Design visualizations and build dashboards in Tableau and/or Power BI. Extract business insights from the data and models. Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards. Work collaboratively with other team members.

About you:
Overall 3+ years' experience in technology roles. Must have a minimum of 1 year of experience working in the data science domain. Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, and NLTK. Highly proficient in Python. Highly proficient in SQL. Experience with Tableau and/or Power BI. Has worked with Amazon Web Services and SageMaker. Ability to build data pipelines for data movement using tools such as Alteryx, AWS Glue, and Informatica. Proficient in machine learning, statistical modelling, and data science techniques. Experience with one or more of the following types of business analytics applications: predictive analytics for customer retention, cross-sales, and new customer acquisition; pricing optimization models; segmentation; recommendation engines. Experience in one or more of the following business domains: Customer Experience and Service, Finance, Operations. Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations. Excellent organizational, analytical, and problem-solving skills. Ability to communicate complex results in a simple and concise manner at all levels within the organization. Ability to excel in a fast-paced, startup-like environment.

What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
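As a hedged illustration of the "use Natural Language Processing to extract insight from text" responsibility in this role, here is a minimal text-classification sketch using Scikit-learn's TF-IDF features and logistic regression; the example texts and labels are invented for illustration and do not reflect Thomson Reuters data.

```python
# Minimal NLP sketch: TF-IDF features + logistic regression to classify
# short support notes. Texts and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Customer praised the quick onboarding experience",
    "Invoice was wrong for the second month in a row",
    "Support resolved my login issue within minutes",
    "Renewal price increase was not communicated clearly",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Billing keeps charging me twice"]))  # expected: negative
```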

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics.

About the role:
Work with low to minimal supervision to solve business problems using data and analytics. Work in multiple business domain areas including Customer Experience and Service, Operations, Finance, Sales, and Marketing. Work with various business stakeholders to understand and document requirements. Design an analytical framework to provide insights into a business problem. Explore and visualize multiple data sets to understand the data available for problem solving. Build end-to-end data pipelines to handle and process data at scale. Build machine learning models and/or statistical solutions. Build predictive models. Use Natural Language Processing to extract insight from text. Design database models (if a data mart or operational data store is required to aggregate data for modeling). Design visualizations and build dashboards in Tableau and/or Power BI. Extract business insights from the data and models. Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards. Work collaboratively with other team members.

About you:
Overall 3+ years' experience in technology roles. Must have a minimum of 1 year of experience working in the data science domain. Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, and NLTK. Highly proficient in Python. Highly proficient in SQL. Experience with Tableau and/or Power BI. Has worked with Amazon Web Services and SageMaker. Ability to build data pipelines for data movement using tools such as Alteryx, AWS Glue, and Informatica. Proficient in machine learning, statistical modelling, and data science techniques. Experience with one or more of the following types of business analytics applications: predictive analytics for customer retention, cross-sales, and new customer acquisition; pricing optimization models; segmentation; recommendation engines. Experience in one or more of the following business domains: Customer Experience and Service, Finance, Operations. Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations. Excellent organizational, analytical, and problem-solving skills. Ability to communicate complex results in a simple and concise manner at all levels within the organization. Ability to excel in a fast-paced, startup-like environment.

What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: AI/ML Engineer (with LLM, Azure, Python & PySpark expertise)

Job Description:
We are looking for a skilled and experienced AI/ML Engineer to join our data science and AI team. The ideal candidate will have a strong foundation in machine learning, artificial intelligence, and large language models (LLMs), along with deep proficiency in Python, PySpark, and Microsoft Azure services. You will be responsible for developing and deploying scalable AI solutions, working with big data frameworks, and leveraging cloud platforms to operationalize machine learning models.

Key Responsibilities:

Artificial Intelligence (AI) & Machine Learning (ML):
Design, develop, and optimize machine learning and AI models to solve business problems. Perform exploratory data analysis and feature engineering for model development. Use supervised, unsupervised, and reinforcement learning techniques where appropriate. Build AI pipelines and integrate models into production systems.

Large Language Models (LLM):
Fine-tune and deploy LLMs (e.g., OpenAI, Hugging Face, or custom-trained models). Develop prompt engineering strategies for LLM applications. Implement RAG (Retrieval-Augmented Generation) systems or LLMOps workflows. Evaluate LLM outputs for accuracy, bias, and performance.

Python Programming:
Write efficient, reusable, and testable Python code for data processing, modeling, and API services. Build automation scripts for data pipelines and model training workflows. Use popular libraries such as Scikit-learn, TensorFlow, PyTorch, Pandas, and NumPy.

PySpark and Big Data:
Work with large datasets using PySpark for data wrangling, transformation, and feature extraction. Optimize Spark jobs for performance and scalability. Collaborate with data engineering teams to implement end-to-end data pipelines.

Microsoft Azure:
Deploy models and applications using Azure ML, Azure Databricks, Azure Functions, and Azure Synapse. Manage compute resources, storage, and data security on Azure. Use Azure DevOps for CI/CD of ML pipelines and automation.

Cross-Functional Collaboration & Documentation:
Collaborate with data engineers, product managers, and business stakeholders to align technical solutions with business needs. Maintain clear documentation of models, code, and workflows. Present technical findings and model outcomes to both technical and non-technical audiences.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. 3+ years of experience in AI/ML and data engineering roles. Proficient in Python and PySpark. Experience with cloud platforms, especially Microsoft Azure. Hands-on experience with LLMs (e.g., GPT, BERT, Claude, etc.). Familiarity with ML frameworks like Scikit-learn, TensorFlow, or PyTorch. Solid understanding of the ML lifecycle, MLOps, and deployment strategies.

Nice to Have:
Experience with LLMOps and vector databases (e.g., FAISS, Pinecone). Knowledge of data governance and responsible AI practices. Azure certifications (e.g., Azure AI Engineer Associate, Azure Data Scientist Associate). Experience with REST APIs and containerization (Docker, Kubernetes).
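As a hedged sketch of the PySpark data-wrangling work listed above, the snippet below aggregates raw purchase events into per-customer features; the storage paths, columns, and event schema are assumptions, and an Azure deployment would more likely read from ADLS or Databricks-mounted storage than a local path.

```python
# Minimal PySpark sketch: read raw events and derive simple per-customer
# features. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("feature-prep").getOrCreate()

events = spark.read.parquet("/data/events/")   # assumed raw event location

features = (
    events
    .filter(F.col("event_type") == "purchase")
    .groupBy("customer_id")
    .agg(
        F.count("*").alias("purchase_count"),
        F.avg("amount").alias("avg_amount"),
    )
)

# Persist the derived features for downstream model training.
features.write.mode("overwrite").parquet("/data/features/customer/")
```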

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies