3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
On-site
Your Job
As a Data Analyst in Molex's Copper Solutions Business Unit software solution group, you will be responsible for extracting actionable insights from large and complex manufacturing datasets, identifying trends, optimizing production processes, improving operational efficiency, minimizing downtime, and enhancing overall product quality. You will collaborate closely with cross-functional teams to ensure the effective use of data in driving continuous improvement and achieving business objectives within the manufacturing environment.

Our Team
Molex's Copper Solutions Business Unit (CSBU) is a global team that works together to deliver exceptional products to worldwide telecommunication and data center customers. SSG under CSBU is one of the most technically advanced software solution groups within Molex. Our group leverages software expertise to enhance the concept, design, manufacturing, and support of high-speed electrical interconnects.

What You Will Do
1. Collect, clean, and transform data from various sources to support analysis and decision-making processes.
2. Conduct thorough data analysis using Python to uncover trends, patterns, and insights.
3. Create and maintain reports based on business needs.
4. Prepare comprehensive reports that detail analytical processes and outcomes.
5. Develop and maintain visualizations/dashboards.
6. Collaborate with cross-functional teams to understand data needs and deliver actionable insights.
7. Perform ad hoc analysis to support business decisions.
8. Write efficient and optimized SQL queries to extract, manipulate, and analyze data from various databases.
9. Identify gaps and inefficiencies in current reporting processes and implement improvements and new solutions.
10. Ensure data quality and integrity across all reports and tools.

Who You Are (Basic Qualifications)
B.E./B.Tech degree in Computer Science Engineering, Information Science, Data Science, or a related discipline.
3-5 years of progressive data analysis experience with Python (pandas, numpy, matplotlib, OpenPyXL, SciPy, Statsmodels, Seaborn).

What Will Put You Ahead
• Experience with Power BI, Tableau, or similar tools for creating interactive dashboards and reports tailored for manufacturing operations.
• Experience with predictive analytics, e.g., machine learning models (using Scikit-learn) to predict failures, optimize production, or forecast demand.
• Experience with big data tools like Hadoop, Apache Kafka, or cloud platforms (e.g., AWS, Azure) for managing and analyzing large-scale data.
• Knowledge of A/B testing and forecasting.
• Familiarity with typical manufacturing data (e.g., machine performance metrics, production line data, quality control metrics).

At Koch companies, we are entrepreneurs. This means we openly challenge the status quo, find new ways to create value, and get rewarded for our individual contributions. Any compensation range provided for a role is an estimate determined by available market data. The actual amount may be higher or lower than the range provided, considering each candidate's knowledge, skills, abilities, and geographic location. If you have questions, please speak to your recruiter about the flexibility and detail of our compensation philosophy.

Who We Are
At Koch, employees are empowered to do what they do best to make life better. Learn how our business philosophy helps employees unleash their potential while creating value for themselves and the company.
Additionally, everyone has individual work and personal needs. We seek to enable the best work environment that helps you and the business work together to produce superior results.
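By way of illustration only (not part of the posting): a minimal pandas sketch of the clean-transform-analyze workflow the role describes, assuming a hypothetical machine_metrics.csv export with timestamp, machine_id, and downtime_minutes columns.

```python
import pandas as pd

# Hypothetical export of machine performance metrics (file and columns are assumptions).
df = pd.read_csv("machine_metrics.csv", parse_dates=["timestamp"])

# Clean: drop duplicate readings and fill short sensor gaps.
df = df.drop_duplicates(subset=["machine_id", "timestamp"]).sort_values("timestamp")
df["downtime_minutes"] = df["downtime_minutes"].fillna(0)

# Transform: weekly downtime per machine, then flag machines trending worse.
weekly = (df.set_index("timestamp")
            .groupby("machine_id")["downtime_minutes"]
            .resample("W").sum()
            .reset_index())
trend = weekly.groupby("machine_id")["downtime_minutes"].apply(
    lambda s: s.tail(4).mean() - s.head(4).mean())
print(trend.sort_values(ascending=False).head())  # machines with rising downtime
```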
Posted 1 month ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Description
As a leading global investment management firm, AB fosters diverse perspectives and embraces innovation to help our clients navigate the uncertainty of capital markets. Through high-quality research and diversified investment services, we serve institutions, individuals, and private wealth clients in major markets worldwide. Our ambition is simple: to be our clients' most valued asset-management partner. With over 4,400 employees across 51 locations in 25 countries, our people are our advantage. We foster a culture of intellectual curiosity and collaboration to create an environment where everyone can thrive and do their best work. Whether you're producing thought-provoking research, identifying compelling investment opportunities, infusing new technologies into our business or providing thoughtful advice to clients, we're looking for unique voices to help lead us forward. If you're ready to challenge your limits and build your future, join us.

The Role
Day-to-day responsibilities will include:
Conduct asset allocation and manager evaluation research and create bespoke client portfolios.
Undertake bespoke requests for data analysis.
Build dashboards for data visualization (Python Dash).
Handle data collation, cleansing, and analysis (SQL, Python).
Create new databases using data from different sources, and set up infrastructure for their maintenance.
Clean and manipulate data, build models, and produce automated reports using Python.
Use statistical modelling and machine learning to address quantitative problems (Python).
Conduct and deliver top-notch research projects with quantitative applications to fundamental strategies.

Preferred Skill Sets
2+ years of experience in RDBMS database design, preferably on MS SQL Server.
2+ years of Python development experience. Advanced programming skills with Python libraries (pandas, numpy, statsmodels, dash, pypfopt, cvxpy, keras, scikit-learn); pandas, numpy, and statsmodels are must-haves.
Capable of manipulating large quantities of data.
High level of attention to detail and accuracy.
Working experience building quantitative models; experience with factor research, portfolio construction, and systematic models.
Academic qualification in Mathematics/Physics/Statistics/Econometrics/Engineering or a related field.
Understanding of company financial statements, accounting, and risk analysis would be an added advantage.
Strong (English) communication skills with proven ability to interact with global clients.

Pune, India
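As an editorial illustration of the portfolio-construction work mentioned above (not part of the posting): a toy long-only mean-variance sketch with cvxpy, one of the listed libraries; the expected returns and covariance matrix are randomly generated stand-ins.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
mu = rng.normal(0.05, 0.02, size=5)      # assumed expected returns
A = rng.normal(size=(5, 5))
sigma = A @ A.T / 5                      # assumed (PSD) covariance matrix

w = cp.Variable(5)
gamma = 5.0                              # risk-aversion parameter
objective = cp.Maximize(mu @ w - gamma * cp.quad_form(w, sigma))
constraints = [cp.sum(w) == 1, w >= 0]   # fully invested, long-only
cp.Problem(objective, constraints).solve()
print(np.round(w.value, 3))              # optimal portfolio weights
```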
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Job Description
Big Data Handling: Passion and attitude to learn new data practices and tools (ingestion, transformation, governance, security & privacy), both on-prem and in the cloud (AWS preferable). Influences and contributes to innovative ways of unlocking value through company-wide and external data.

Diagnostic Models: Experience with diagnostic systems using decision theory and causal models (including tools like probability, DAGs, ADMGs, deterministic SMEs, etc.) to predict the effects of an action and improve insight-led decisions. Able to productize the diagnostic systems built for reuse.

Predictive & Prescriptive Analytics Models: Expert in AI solutions: ML, DL, NLP, ES, RL, etc. Should be able to build robust prescriptive learning systems that are scalable and real-time. Should be able to determine the "Next Best Action" following prescriptive analytics.

Autonomous Cognitive Systems: Drive autonomous system utility and continuously improve precision by creating a stable learning environment. Should be able to build intelligent autonomous systems that prescribe proactive actions based on ML predictions and solicit feedback from the support functions with minimal human involvement.

Big Data Tech, Environments & Frameworks: Advanced applications of CNNs, RNNs, MLPs, and deep learning. Excellent application of machine learning and deep learning packages like TensorFlow, PyTorch, scikit-learn, NumPy, pandas, statsmodels, Theano, XGBoost, etc. Demonstrated expertise in deep learning algorithms/frameworks. At least one AWS certification is preferred.

Programming: Python, R, SQL
Frameworks: TensorFlow, Keras, Scikit-learn
Visualization: Tableau, Power BI
Cloud: AWS, Azure
Statistical Modeling: Regression, classification, clustering, time series
Soft Skills: Communication, stakeholder management, problem-solving

VOIS Equal Opportunity Employer Commitment India
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society.
We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
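Purely as an editorial illustration of the MLP-style deep learning work named in the job description's framework list (not part of the posting): a minimal Keras sketch trained on synthetic, made-up data.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for labelled customer data (purely illustrative).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, :3].sum(axis=1) > 0).astype("float32")

# A small MLP classifier of the kind the posting's stack (TensorFlow/Keras) supports.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```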
Posted 1 month ago
0 years
0 Lacs
India
Remote
Data Science Intern (Remote)
Company: Coreline Solutions
Location: Remote / Pune, India
Duration: 3 to 6 Months
Stipend: Unpaid (Full-time offer based on performance)
Work Mode: Remote

About Coreline Solutions
We're a tech and consulting company focused on digital transformation, custom software development, and data-driven solutions.

Role Overview
We're looking for a Data Science Intern to work on real-world data projects involving analytics, modeling, and business insights. This is a great opportunity for students or freshers to gain practical experience in the data science domain.

Key Responsibilities
Collect, clean, and analyze large datasets using Python, SQL, and Excel.
Develop predictive and statistical models using libraries like scikit-learn or statsmodels.
Visualize data and present insights using tools like Matplotlib, Seaborn, or Power BI.
Support business teams with data-driven recommendations.
Collaborate with data analysts, ML engineers, and developers.

Requirements
Pursuing or completed degree in Data Science, Statistics, CS, or a related field.
Proficient in Python, with a basic understanding of machine learning.
Familiarity with data handling tools (Pandas, NumPy) and SQL.
Good analytical and problem-solving skills.

Perks
Internship Certificate
Letter of Recommendation (Top Performers)
Mentorship & real-time project experience
Potential full-time role

To Apply
Email your resume to 📧 hr@corelinesolutions.site
Subject: "Application for Data Science Intern – [Your Full Name]"
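An illustrative sketch (not part of the posting) of the kind of entry-level scikit-learn modeling the responsibilities describe; the bundled demo dataset stands in for real project data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Public demo dataset stands in for real project data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a simple baseline classifier and report held-out accuracy.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```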
Posted 1 month ago
4.0 years
6 - 9 Lacs
Hyderābād
On-site
About Citco
Citco is a global leader in fund services, corporate governance and related asset services with staff across 80 offices worldwide. With more than $1.7 trillion in assets under administration, we deliver end-to-end solutions and exceptional service to meet our clients' needs. For more information about Citco, please visit www.citco.com

About the Team & Business Line:
Citco Fund Services is a division of the Citco Group of Companies and is the largest independent administrator of Hedge Funds in the world. Our continuous investment in learning and technology solutions means our people are equipped to deliver a seamless client experience. This position reports into the Loan Services Business Line. As a core member of our Loan Services Data and Reporting team, you will be working with some of the industry's most accomplished professionals to deliver award-winning services for complex fund structures that our clients can depend upon.

Your Role:
Develop and execute database queries and conduct data analyses.
Create scripts to analyze and modify data, import/export scripts, and execute stored procedures.
Model data by writing SQL queries/Python code to support data integration and dashboard requirements.
Develop data pipelines that provide fast, optimized, and robust end-to-end solutions.
Leverage and contribute to designing/building relational database schemas for analytics.
Handle and manipulate data in various structures and repositories (data cube, data mart, data warehouse, data lake).
Analyze, implement, and contribute to building APIs to improve the data integration pipeline.
Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion, etc.
Perform data integration through extracting, transforming and loading (ETL) data from various sources.
Identify opportunities to improve processes and strategies with technology solutions, and identify development needs in order to improve and streamline operations.
Create tabular reports, matrix reports, parameterized reports, and visual reports/dashboards in a reporting application such as Power BI Desktop/Cloud or Qlik. Integrating PBI/Qlik reports into other applications using embedded analytics like Power BI service (SaaS), or by API automation, is also an advantage.
Implement NLP techniques for text representation, semantic extraction techniques, data structures, and modelling.
Contribute to deployment and maintenance of machine learning solutions in production environments.
Build and design cloud applications using Microsoft Azure/AWS cloud technologies.

About You:
Background / Qualifications
Bachelor's Degree in technology/related field or equivalent work experience.
4+ years of SQL and/or Python experience is a must.
Strong knowledge of data concepts and tools, and experience in RDBMS such as MS SQL Server, Oracle, etc.
Well-versed in the concepts and techniques of Business Intelligence and Data Warehousing.
Strong database design and SQL skills, including database
objects development, performance tuning, and data analysis.
In-depth understanding of database management systems, OLAP & ETL frameworks.
Familiarity or hands-on experience working with REST or SOAP APIs.
Well-versed in concepts of API management and integration with various data sources in cloud platforms, to help with connecting to traditional SQL and new-age data sources such as Snowflake.
Familiarity with machine learning concepts like feature selection, deep learning, AI, and ML/DL frameworks (like TensorFlow or PyTorch) and libraries (like scikit-learn, StatsModels) is an advantage.
Familiarity with BI technologies (e.g. Microsoft Power BI, Oracle BI) is an advantage.
Hands-on experience with at least one ETL tool (SSIS, Informatica, Talend, Glue, Azure Data Factory) and associated data integration principles is an advantage.
Minimum 1+ years of experience with cloud platform technologies (AWS/Azure), including Azure Machine Learning, is desirable. The following AWS experience is a plus:
Implementing identity and access management (IAM) policies
Managing user accounts with IAM
Knowledge of writing infrastructure as code (IaC) using CloudFormation or Terraform
Implementing cloud storage using Amazon Simple Storage Service (S3)
Experience with serverless approaches using AWS Lambda, e.g. AWS SAM
Configuring Amazon Elastic Compute Cloud (EC2) instances

Previous Work Experience:
Experience querying databases and strong programming skills: Python, SQL, PySpark, etc.
Prior experience supporting ETL production environments and web technologies such as XML is an advantage.
Previous working experience on Azure Data Services including ADF, ADLS, Blob, Databricks, Hive, Python, Spark and/or features of Azure ML Studio, ML Services and ML Ops is an advantage.
Experience with dashboard and reporting applications like Qlik, Tableau, Power BI.

Other:
Well-rounded individual possessing a high degree of initiative.
Proactive person willing to accept responsibility with very little hand-holding.
A strong analytical and logical mindset.
Demonstrated proficiency in interpersonal and communication skills, including oral and written English.
Ability to work in fast-paced, complex Business & IT environments.
Knowledge of Loan Servicing and/or Loan Administration is an advantage.
Understanding of Agile/Scrum methodology as it relates to the software development lifecycle.

What We Offer:
A rewarding and challenging environment that spans multiple geographies and multiple business lines.
Great working environment, competitive salary and benefits, and opportunities for educational support.
Be part of an industry-leading global organisation, renowned for excellence.
Opportunities for personal and professional career development.

Our Benefits
Your well-being is of paramount importance to us, and central to our success. We provide a range of benefits, training and education support, and flexible working arrangements to help you achieve success in your career while balancing personal needs. Ask us about specific benefits in your location. We embrace diversity, prioritizing the hiring of people from diverse backgrounds. Our inclusive culture is a source of pride and strength, fostering innovation and mutual respect. Citco welcomes and encourages applications from people with disabilities. Accommodations are available upon request for candidates taking part in all aspects of the selection process.
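As an editorial illustration of the extract-transform-load duties described above (not from the posting): a minimal pandas/SQLAlchemy sketch; the connection string, table names, and column names are all assumptions.

```python
import pandas as pd
from sqlalchemy import create_engine

# Connection string, tables, and columns are illustrative assumptions.
engine = create_engine("postgresql://user:password@host:5432/loans")

# Extract: pull raw loan trades from the source database.
trades = pd.read_sql("SELECT loan_id, trade_date, notional FROM loan_trades", engine)

# Transform: normalise types, deduplicate, and aggregate for reporting.
trades["trade_date"] = pd.to_datetime(trades["trade_date"])
trades = trades.drop_duplicates(subset=["loan_id", "trade_date"])
daily = trades.groupby("trade_date", as_index=False)["notional"].sum()

# Load: write a reporting table that a Power BI / Qlik dashboard could consume.
daily.to_sql("daily_notional", engine, if_exists="replace", index=False)
```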
Posted 1 month ago
1.0 - 3.0 years
35 Lacs
Mumbai
Work from Office
Job Insights:
1. Develop and maintain AI models on time series and financial data for predictive modelling, including data collection, analysis, feature engineering, model development, evaluation, backtesting and monitoring.
2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements.
3. Work with expert colleagues, Quant and business representatives to examine the results and keep models grounded in reality.
4. Document each step of the development and inform decision makers by presenting them options and results.
5. Ensure the integrity and security of data.
6. Provide support for production models delivered by the Mumbai team, and potentially for other models, to any of the Asian/EU/US time zones.

Qualifications:
1. Bachelor's or Master's degree in a numeric subject with an understanding of economics and markets (e.g., Economics with a speciality in Econometrics, Finance, Computer Science, Applied Maths, Engineering, Physics).
2. Knowledge of key concepts in Statistics and Mathematics, such as statistical methods for machine learning, Probability Theory and Linear Algebra.
3. Knowledge of Monte Carlo Simulations, Bayesian modelling & Causal Inference.
4. Experience with Machine Learning & Deep Learning concepts, including data representations, neural network architectures and custom loss functions.
5. Proven track record of building AI models on time-series & financial data.
6. Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, PyMC, statsmodels).
7. Ability to write clear and concise code in Python.
8. Intellectually curious and willing to learn challenging concepts daily.
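For illustration only (not part of the posting): a minimal walk-forward backtest of the kind the role's evaluation and backtesting duties suggest, using statsmodels from the listed packages; the AR(1) "returns" series is synthetic.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic AR(1) "returns" series; real work would use market data.
rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t - 1] + rng.normal(scale=0.01)

# Walk-forward backtest: refit on an expanding window, forecast one step ahead.
preds, actuals = [], []
for t in range(250, 300):
    fit = ARIMA(y[:t], order=(1, 0, 0)).fit()
    preds.append(fit.forecast(1)[0])
    actuals.append(y[t])
print("out-of-sample MSE:", np.mean((np.array(preds) - np.array(actuals)) ** 2))
```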
Posted 1 month ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Associate Data Scientist
Location: Mumbai
Job Type: Full-time
Experience: 0-6 months

About The Role
We are seeking a highly motivated Associate Data Scientist with a strong passion for energy, technology, and data-driven decision-making. In this role, you will be responsible for developing and refining energy load forecasting models, analyzing customer demand patterns, and improving forecasting accuracy using advanced time series analysis and machine learning techniques. Your insights will directly support risk management, operational planning, and strategic decision-making across the company. If you thrive in a fast-paced, dynamic environment and enjoy solving complex data science challenges, we'd love to hear from you!

Key Responsibilities
Develop and enhance energy load forecasting models using time series forecasting, statistical modeling, and machine learning techniques.
Analyze historical and real-time energy consumption data to identify trends and improve forecasting accuracy.
Investigate discrepancies between forecasted and actual energy usage, providing actionable insights.
Automate data pipelines and forecasting workflows to streamline processes across departments.
Monitor day-over-day forecast variations and communicate key insights to stakeholders.
Work closely with internal teams and external vendors to refine forecasting methodologies.
Perform scenario analysis to assess seasonal patterns, anomalies, and market trends.
Continuously optimize forecasting models, leveraging techniques like ARIMA, Prophet, LSTMs, and regression-based models.

Qualifications & Skills
0-6 months of experience in data science, preferably in energy load forecasting, demand prediction, or a related field.
Strong expertise in time series analysis, forecasting algorithms, and statistical modeling.
Proficiency in Python, with experience using libraries such as pandas, NumPy, scikit-learn, statsmodels, and TensorFlow/PyTorch.
Experience working with SQL and handling large datasets.
Hands-on experience with forecasting models like ARIMA, SARIMA, Prophet, LSTMs, XGBoost, and random forests.
Familiarity with feature engineering, anomaly detection, and seasonality analysis.
Strong analytical and problem-solving skills with a data-driven mindset.
Excellent communication skills, with the ability to translate technical findings into business insights.
Ability to work independently and collaboratively in a fast-paced, dynamic environment.
Strong attention to detail, time management, and organizational skills.

Preferred Qualifications (Nice To Have)
Experience working with energy market data, smart meter analytics, or grid forecasting.
Knowledge of cloud platforms (AWS) for deploying forecasting models.
Experience with big data technologies such as Spark or Hadoop.
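An illustrative sketch (not from the posting) of the seasonal load-forecasting work described above, using SARIMA via statsmodels from the listed stack; the hourly load series with a daily cycle is synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly load with a daily cycle stands in for real meter data.
idx = pd.date_range("2024-01-01", periods=24 * 28, freq="h")
rng = np.random.default_rng(7)
load = 100 + 20 * np.sin(2 * np.pi * idx.hour / 24) + rng.normal(0, 3, len(idx))
series = pd.Series(load, index=idx)

# SARIMA with a 24-hour seasonal period, then a one-day-ahead forecast.
model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
result = model.fit(disp=False)
forecast = result.forecast(steps=24)
print(forecast.head())
```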
Posted 2 months ago
4.0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)

Company Overview:
Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.

Job Description:
We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical machine learning, deep learning, and generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.

Key Responsibilities:
ML/DL Solution Development & Deployment:
Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code.
Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
Ability to model in different marketing scenarios of a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk and churn) and data limitations (sparse or incomplete labels, single-class learning).

Large-Scale Data Handling & Processing:
Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).

Generative AI & Large Language Models (LLMs):
Leverage in-depth understanding of transformer architectures and the principles of Large and Small Language Models.
Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data.
Apply foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReAct, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and human-in-the-loop systems.

Experimentation, Analysis & System Design:
Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements.
Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time.

Collaboration, Communication & Mentorship:
Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members.
Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions.
Work closely with cross-functional teams and clients to deliver impactful solutions.

Prototyping & Impact Measurement:
Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment.
Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:
Core Machine Learning & Deep Learning:
In-depth knowledge of Artificial Neural Networks (ANNs); 1D, 2D, and 3D Convolutional Neural Networks (ConvNets); LSTMs; and Transformer models.
Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation.
Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches.
Advanced understanding and application of model explainability techniques.

Data Analysis & Processing:
Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis.
Ability to perform effective feature engineering by understanding business objectives.

ML/DL Frameworks & Tools:
Hands-on experience with ML/DL libraries such as Scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.

Natural Language Processing (NLP):
Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning.

Cloud & MLOps:
Experience with the AWS ML stack or equivalent cloud platforms.
Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).

Problem Solving & Research:
Strong logical and reasoning skills.
Good understanding of the Python ecosystem and experience implementing research papers.

Collaboration & Prototyping:
Ability to thrive in a fast-paced development and rapid prototyping environment.

Relevant to Have:
Expertise in claims data and a background in the pharmaceutical industry.
Awareness of best software design practices.
Understanding of backend frameworks like Flask.
Knowledge of recommender systems, representation learning, and PU learning.

Benefits and Perks:
Competitive ESOP grants.
Opportunity to work with Fortune 500 companies and world-class teams.
Support for publishing papers and attending academic/industry conferences.
Access to networking events, conferences, and seminars.
Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.

Appendix: Technical Skills (Must-Haves)
Deep understanding of the following:
Data Processing:
Wrangling: Some understanding of querying databases (MySQL, PostgresDB, etc.); very fluent in the usage of libraries such as Pandas, Numpy, Statsmodels, etc.
Visualization: Exposure to Matplotlib, Plotly, Altair, etc.
Machine Learning Exposure:
Machine learning fundamentals, e.g., PCA, correlations, statistical tests, etc.
Time series models, e.g., ARIMA, Prophet, etc.
Tree-based models, e.g., Random Forest, XGBoost, etc.
Deep learning models, e.g., understanding and experience of ConvNets, ResNets, UNets, etc.
GenAI-based models: Experience utilizing large-scale language models such as GPT-4 or other open-source alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom finetuning.
Code Versioning Systems: GitHub, Git

If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
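By way of illustration only (not from the posting): a minimal PySpark sketch of the large-scale aggregation work described above; the S3 path and claims column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-agg").getOrCreate()

# Hypothetical claims table; the path and column names are assumptions.
claims = spark.read.parquet("s3://bucket/claims/")

# Monthly patient counts and paid amounts per product, computed cluster-side.
monthly = (claims
           .withColumn("month", F.date_trunc("month", F.col("service_date")))
           .groupBy("product_id", "month")
           .agg(F.countDistinct("patient_id").alias("patients"),
                F.sum("paid_amount").alias("total_paid")))
monthly.write.mode("overwrite").parquet("s3://bucket/claims_monthly/")
```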
Posted 2 months ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – AI and Data – Statistical Modeler – Senior

As part of our EY GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

Technical Skills:
Statistical Programming Languages: Python, R
Libraries & Frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret
Data Manipulation Tools: SQL, Excel
Data Visualization Tools: Matplotlib, Seaborn, ggplot2
Machine Learning Techniques: Supervised and unsupervised learning, model evaluation (cross-validation, ROC curves)
5-7 years of experience in building statistical forecast models for the pharma industry
Deep understanding of patient flows and treatment journeys across both Onc and non-Onc TAs

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction and advisory services, we're using the finance products, expertise and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
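An illustrative sketch (not from the posting) of the model evaluation named in the technical skills above, using cross-validation with ROC AUC scoring; the classification data is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data stands in for patient-flow features (illustration only).
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)

# 5-fold cross-validation scored by area under the ROC curve.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc_scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("mean ROC AUC across folds:", auc_scores.mean().round(3))
```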
Posted 2 months ago
0 years
0 Lacs
Greater Bengaluru Area
On-site
Job Description:
We are looking for a Data Scientist with expertise in Python, Azure Cloud, NLP, forecasting, and large-scale data processing. The role involves enhancing existing ML models; optimising embeddings, LDA models, RAG architectures, and forecasting models; and migrating data pipelines to Azure Databricks for scalability and efficiency.

Key Responsibilities:
Model Development & Optimisation
Train and optimise models for new data providers, ensuring seamless integration.
Enhance models for dynamic input handling.
Improve LDA model performance to handle a higher number of clusters efficiently.
Optimise RAG (Retrieval-Augmented Generation) architecture to enhance recommendation accuracy for large datasets.
Upgrade Retrieval QA architecture for improved chatbot performance on large datasets.

Forecasting & Time Series Modelling
Develop and optimise forecasting models for marketing, demand prediction, and trend analysis.
Implement time series models (e.g., ARIMA, Prophet, LSTMs) to improve business decision-making.
Integrate NLP-based forecasting, leveraging customer sentiment and external data sources (e.g., news, social media).

Data Pipeline & Cloud Migration
Migrate the existing pipeline from Azure Synapse to Azure Databricks and retrain models accordingly (note: this is required only for the AUB role(s)).
Address space and time complexity issues in embedding storage and retrieval on Azure Blob Storage.
Optimise embedding storage and retrieval in Azure Blob Storage for better efficiency.

MLOps & Deployment
Implement MLOps best practices for model deployment on Azure ML, Azure Kubernetes Service (AKS), and Azure Functions.
Automate model training, inference pipelines, and API deployments using Azure services.

Experience:
Experience in Data Science, Machine Learning, Deep Learning and GenAI.
Design, architect and execute end-to-end data science pipelines, including data extraction, data preprocessing, feature engineering, model building, tuning and deployment.
Experience in leading a team and taking responsibility for project delivery.
Experience in building end-to-end machine learning pipelines, with expertise in developing CI/CD pipelines using Azure Synapse pipelines, Databricks, Google Vertex AI and AWS.
Experience in developing advanced natural language processing (NLP) systems, specializing in building RAG (Retrieval-Augmented Generation) models using LangChain, and deploying RAG models to production.
Expertise in building machine learning pipelines and deploying various models: forecasting models, anomaly detection models, market mix models, classification models, regression models and clustering techniques.
Maintaining GitHub repositories and cloud computing resources for effective and efficient version control, development, testing and production.
Developing proof-of-concept solutions and assisting in rolling these out to our clients.

Required Skills & Qualifications:
Hands-on experience with Azure Databricks, Azure ML, Azure Synapse, Azure Blob Storage, and Azure Kubernetes Service (AKS).
Experience with forecasting models, time series analysis, and predictive analytics.
Proficiency in Python (NumPy, Pandas, TensorFlow, PyTorch, Statsmodels, Scikit-learn, Hugging Face, FAISS).
Experience with model deployment, API optimisation, and serverless architectures.
Hands-on experience with Docker, Kubernetes, and MLflow for tracking and scaling ML models.
Expertise in optimising time complexity, memory efficiency, and scalability of ML models in a cloud environment.
Experience with LangChain (or equivalent), RAG, and multi-agent generation.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
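As an editorial illustration of the retrieval step in the RAG work described above (not from the posting): a minimal FAISS nearest-neighbour sketch; the random vectors stand in for text embeddings produced upstream.

```python
import faiss
import numpy as np

# Random vectors stand in for document embeddings from an upstream encoder.
d = 128
rng = np.random.default_rng(3)
corpus = rng.normal(size=(10_000, d)).astype("float32")
query = rng.normal(size=(1, d)).astype("float32")

index = faiss.IndexFlatL2(d)   # exact L2 search; swap for IVF/HNSW at scale
index.add(corpus)
distances, ids = index.search(query, 5)
print(ids[0])                  # indices of the 5 nearest documents
```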
Posted 2 months ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking an exceptional and highly motivated Lead Data Scientist with a PhD in Data Science, Computer Science, Applied Mathematics, Statistics, or a closely related quantitative field, to spearhead the design, development, and deployment of an automotive OEM's next-generation Intelligent Forecast Application. This pivotal role will leverage cutting-edge machine learning, deep learning, and statistical modeling techniques to build a robust, scalable, and accurate forecasting system crucial for strategic decision-making across the automotive value chain, including demand planning, production scheduling, inventory optimization, predictive maintenance, and new product introduction. The successful candidate will be a recognized expert in advanced forecasting methodologies, possess a strong foundation in data engineering and MLOps principles, and demonstrate a proven ability to translate complex research into tangible, production-ready applications within a dynamic industrial environment. This role demands not only deep technical expertise but also a visionary approach to leveraging data and AI to drive significant business impact for a leading automotive OEM.

Role Description
Strategic Leadership & Application Design:
Lead the end-to-end design and architecture of the Intelligent Forecast Application, defining its capabilities, modularity, and integration points with existing enterprise systems (e.g., ERP, SCM, CRM).
Develop a strategic roadmap for forecasting capabilities, identifying opportunities for innovation and the adoption of emerging AI/ML techniques (e.g., generative AI for scenario planning, reinforcement learning for dynamic optimization).
Translate complex business requirements and automotive industry challenges into well-defined data science problems and technical specifications.

Advanced Model Development & Research:
Design, develop, and validate highly accurate and robust forecasting models using a variety of advanced techniques, including:
Time Series Analysis: ARIMA, SARIMA, Prophet, Exponential Smoothing, state-space models.
Machine Learning: Gradient Boosting (XGBoost, LightGBM), Random Forests, Support Vector Machines.
Deep Learning: LSTMs, GRUs, Transformers, and other neural network architectures for complex sequential data.
Probabilistic Forecasting: Quantile regression and Bayesian methods to capture uncertainty.
Hierarchical & Grouped Forecasting: Managing forecasts across multiple product hierarchies, regions, and dealerships.
Incorporate diverse data sources, including historical sales, market trends, economic indicators, competitor data, internal operational data (e.g., production schedules, supply chain disruptions), external events, and unstructured data.
Conduct extensive exploratory data analysis (EDA) to identify patterns, anomalies, and key features influencing automotive forecasts.
Stay abreast of the latest academic research and industry advancements in forecasting, machine learning, and AI, actively evaluating and advocating for their practical application within the OEM.

Application Development & Deployment (MLOps):
Architect and implement scalable data pipelines for ingestion, cleaning, transformation, and feature engineering of large, complex automotive datasets.
Develop robust and efficient code for model training, inference, and deployment within a production environment.
Implement MLOps best practices for model versioning, monitoring, retraining, and performance management to ensure the continuous accuracy and reliability of the forecasting application.
Collaborate closely with Data Engineering, Software Development, and IT Operations teams to ensure seamless integration, deployment, and maintenance of the application.

Performance Evaluation & Optimization:
Define and implement rigorous evaluation metrics for forecasting accuracy (e.g., MAE, RMSE, MAPE, sMAPE, wMAPE, pinball loss) and business impact.
Perform A/B testing and comparative analyses of different models and approaches to continuously improve forecasting performance.
Identify and mitigate sources of bias and uncertainty in forecasting models.

Collaboration & Mentorship:
Work cross-functionally with various business units (e.g., Sales, Marketing, Supply Chain, Manufacturing, Finance, Product Development) to understand their forecasting needs and integrate solutions.
Communicate complex technical concepts and model insights clearly and concisely to both technical and non-technical stakeholders.
Provide technical leadership and mentorship to junior data scientists and engineers, fostering a culture of innovation and continuous learning.
Potentially contribute to intellectual property (patents) and present findings at internal and external conferences.

Qualifications
Education: PhD in Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, or a closely related quantitative field.
Experience: 5+ years of hands-on experience in a Data Scientist or Machine Learning Engineer role, with a significant focus on developing and deploying advanced forecasting solutions in a production environment. Demonstrated experience designing and developing intelligent applications, not just isolated models. Experience in the automotive industry or a similar complex manufacturing/supply chain environment is highly desirable.
Technical Skills:
Expert proficiency in Python (Numpy, Pandas, Scikit-learn, Statsmodels) and/or R. Strong proficiency in SQL.
Machine Learning/Deep Learning Frameworks: Extensive experience with TensorFlow, PyTorch, Keras, or similar deep learning libraries.
Forecasting-Specific Libraries: Proficiency with forecasting libraries like Prophet, Statsmodels, or specialized time series packages.
Data Warehousing & Big Data Technologies: Experience with distributed computing frameworks (e.g., Apache Spark, Hadoop) and data storage solutions (e.g., Snowflake, Databricks, S3, ADLS).
Cloud Platforms: Hands-on experience with at least one major cloud provider (Azure, AWS, GCP) for data science and ML deployments.
MLOps: Understanding and practical experience with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines).
Data Visualization: Proficiency with tools like Tableau, Power BI, or similar for creating compelling data stories and dashboards.
Analytical Prowess: Deep understanding of statistical inference, experimental design, causal inference, and the mathematical foundations of machine learning algorithms.
Problem Solving: Proven ability to analyze complex, ambiguous problems, break them down into manageable components, and devise innovative solutions.

Preferred Qualifications
Publications in top-tier conferences or journals related to forecasting, time series analysis, or applied machine learning.
Experience with real-time forecasting systems or streaming data analytics.
Familiarity with specific automotive data types (e.g., telematics, vehicle sensor data, dealership data, market sentiment).
Experience with distributed version control systems (e.g., Git).
Knowledge of agile development methodologies.

Soft Skills
Exceptional Communication: Ability to articulate complex technical concepts and insights to a diverse audience, including senior management and non-technical stakeholders.
Collaboration: Strong interpersonal skills and a proven ability to work effectively within cross-functional teams.
Intellectual Curiosity & Proactiveness: A passion for continuous learning, staying ahead of industry trends, and proactively identifying opportunities for improvement.
Strategic Thinking: Ability to see the big picture and align technical solutions with overall business objectives.
Mentorship: Desire and ability to guide and develop less experienced team members.
Resilience & Adaptability: Thrive in a fast-paced, evolving environment with complex challenges.

Benefits
Health Insurance
Paid leave
Technical training and certifications
Robust learning and development opportunities
Incentive
Toastmasters
Food Program
Fitness Program
Referral Bonus Program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that's shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
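For illustration only (not part of the posting): a small NumPy sketch of the accuracy metrics named above (MAE, RMSE, MAPE, pinball loss); the actual and forecast values are made up.

```python
import numpy as np

def mape(actual, forecast):
    # Mean absolute percentage error, in percent.
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def pinball_loss(actual, forecast, q):
    # Quantile (pinball) loss for a forecast at quantile level q.
    diff = actual - forecast
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

actual = np.array([120.0, 135.0, 150.0, 160.0])    # made-up actuals
forecast = np.array([118.0, 140.0, 149.0, 155.0])  # made-up forecasts

print("MAE: ", np.mean(np.abs(actual - forecast)))
print("RMSE:", np.sqrt(np.mean((actual - forecast) ** 2)))
print("MAPE:", mape(actual, forecast))
print("P90 pinball:", pinball_loss(actual, forecast, 0.9))
```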
Posted 2 months ago
5.0 - 9.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward, always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.

Do You Dream Big? We Need You.

Job Description
Job Title: Senior ML Engineer
Location: Bangalore
Reporting to: Director, Data Analytics

Purpose of the role
Anheuser-Busch InBev (AB InBev)'s Supply Analytics is responsible for building competitive, differentiated solutions that enhance brewery efficiency through data-driven insights. We optimize processes, reduce waste, and improve productivity by leveraging advanced analytics and AI-driven solutions.

As Senior MLE, you will be responsible for the end-to-end deployment of machine learning models on edge devices. You will take ownership of all aspects of edge deployment, including model optimization, scaling complexities, containerization, and infrastructure management, ensuring high availability and performance.

Key tasks & accountabilities
Lead the entire edge deployment lifecycle, from model training to deployment and monitoring on edge devices.
Develop and maintain a scalable Edge ML pipeline that enables real-time analytics at brewery sites.
Optimize and containerize models using Portainer, Docker, and Azure Container Registry (ACR) to ensure efficient execution in constrained edge environments.
Own and manage the GitHub repository, ensuring structured, well-documented, and modularized code for seamless deployments.
Establish robust CI/CD pipelines for continuous integration and deployment of models and services.
Implement logging, monitoring, and alerting for deployed models to ensure reliability and quick failure recovery.
Ensure compliance with security and governance best practices for data and model deployment in edge environments.
Document the thought process and create artifacts on the team repo/wiki that can be shared with business and engineering for sign-off.
Review code quality and design developed by peers.
Significantly improve the performance and reliability of our code to produce high-quality, reproducible results.
Develop internal tools/utilities that improve the productivity of the entire team.
Collaborate with other team members to advance the team's ability to ship high-quality code fast!
Mentor/coach junior team members to continuously upskill them.
Maintain basic developer hygiene, including but not limited to writing tests, using loggers, and keeping READMEs up to date.

Qualifications, Experience, Skills
Level of educational attainment required (one or more of the following):
Academic degree in, but not limited to, a Bachelor's or Master's in Computer Application, Computer Science, or any engineering discipline.

Previous Work Experience
5+ years of real-world experience developing scalable and high-quality ML models.
Strong problem-solving skills with an owner's mindset, proactively identifying and resolving bottlenecks.

Technical Skills Required
Proficiency with pandas, NumPy, SciPy, scikit-learn, statsmodels, TensorFlow.
Good understanding of statistical computing and parallel processing.
Experience with TensorFlow distributed training, NumPy, and joblib.
Good understanding of memory management and parallel processing in Python.
Profiling and optimization of production code.
Strong Python coding skills; exposure to working in IDEs such as VS Code or PyCharm.
Experience in code versioning using Git, maintaining a modularized code base for multiple deployments.
Experience working in an Agile environment.
In-depth understanding of Databricks (workflows, cluster creation, repo management).
In-depth understanding of machine learning solutions on the Azure cloud.
Best practices in coding standards, unit testing, and automation.
Proficiency in Docker, Kubernetes, Portainer, and container orchestration for edge computing.

Other Skills Required
Experience in real-time analytics and edge AI deployments.
Exposure to DevOps practices, including infrastructure automation and monitoring tools.
Contributions to OSS or Stack Overflow.
And above all of this, an undying love for beer!

We dream big to create a future with more cheers.
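As a purely editorial illustration of the joblib-based parallel processing listed above (not from the posting): a minimal sketch that fans batch scoring out across processes; score_batch is a hypothetical stand-in for per-batch model inference.

```python
import numpy as np
from joblib import Parallel, delayed

def score_batch(batch):
    # Hypothetical stand-in for per-batch model inference on an edge device.
    return float(np.tanh(batch).mean())

# Eight synthetic batches of sensor readings (illustrative data).
batches = [np.random.default_rng(i).normal(size=100_000) for i in range(8)]

# Fan the batches out across processes; n_jobs=-1 uses all available cores.
scores = Parallel(n_jobs=-1)(delayed(score_batch)(b) for b in batches)
print(scores)
```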
Posted 2 months ago
0.0 - 2.0 years
0 Lacs
Gurugram, Haryana
On-site
Position: AI / ML Engineer
Job Type: Full-Time
Location: Gurgaon, Haryana, India
Experience: 2 Years
Industry: Information Technology
Domain: Demand Forecasting in Retail/Manufacturing

Job Summary
We are seeking a skilled Time Series Forecasting Engineer to enhance existing Python microservices into a modular, scalable forecasting engine. The ideal candidate will have a strong statistical background, expertise in handling multi-seasonal and intermittent data, and a passion for model interpretability and real-time insights.

Key Responsibilities
Develop and integrate advanced time-series models: MSTL, Croston, TSB, Box-Cox.
Implement rolling-origin cross-validation and hyperparameter tuning.
Blend models such as ARIMA, Prophet, and XGBoost for improved accuracy.
Generate SHAP-based driver insights and deliver them to a React dashboard via GraphQL.
Monitor forecast performance with Prometheus and Grafana; trigger alerts based on degradation.

Core Technical Skills
Languages: Python (pandas, statsmodels, scikit-learn)
Time Series: ARIMA, MSTL, Croston, Prophet, TSB
Tools: Docker, REST API, GraphQL, Git-flow, Unit Testing
Database: PostgreSQL
Monitoring: Prometheus, Grafana
Nice-to-Have: MLflow, ONNX, TensorFlow Probability

Soft Skills
Strong communication and collaboration skills
Ability to explain statistical models in layman's terms
Proactive problem-solving attitude
Comfort working cross-functionally in iterative development environments

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year

Application Question(s):
Do you have at least 2 years of hands-on experience in Python-based time series forecasting?
Have you worked in retail or manufacturing domains where demand forecasting was a core responsibility?
Are you currently authorized to work in India without sponsorship?
Have you implemented or used ARIMA, Prophet, or MSTL in any of your projects?
Have you used Croston or TSB models for forecasting intermittent demand?
Are you familiar with SHAP for model interpretability?
Have you containerized a forecasting pipeline using Docker and exposed it through a REST or GraphQL API?
Have you used Prometheus and Grafana to monitor model performance in production?

Work Location: In person
Application Deadline: 05/06/2025
Expected Start Date: 05/06/2025
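For illustration only (not part of the posting): a minimal sketch of the rolling-origin cross-validation named in the responsibilities, using scikit-learn's TimeSeriesSplit with a naive baseline; the demand series is synthetic.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Synthetic demand series; real usage would plug in the store/SKU history.
demand = np.arange(100) + np.random.default_rng(0).normal(0, 5, 100)

# Rolling-origin evaluation: each fold trains on an expanding history
# and tests on the period immediately after it.
tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(tscv.split(demand)):
    forecast = demand[train_idx].mean()  # naive baseline: training mean
    mae = np.abs(demand[test_idx] - forecast).mean()
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)} MAE={mae:.1f}")
```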
Posted 2 months ago