Home
Jobs
Companies
Resume

27 Statsmodels Jobs

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Title: Manager - GBS Commercial
Location: Bangalore
Reporting to: Senior Manager - GBS Commercial

Purpose of the role
This role sits at the intersection of data science and revenue growth strategy, focused on developing advanced analytical solutions to optimize pricing, trade promotions, and product mix. The candidate will lead the end-to-end design, deployment, and automation of machine learning models and statistical frameworks that support commercial decision-making, predictive scenario planning, and real-time performance tracking. By leveraging internal and external data sources, including transactional, market, and customer-level data, this role will deliver insights into price elasticity, promotional lift, channel efficiency, and category dynamics. The goal is to drive measurable improvements in gross margin, ROI on trade spend, and volume growth through data-informed strategies.
Key tasks & accountabilities
- Design and implement price elasticity models using linear regression, log-log models, and hierarchical Bayesian frameworks to understand consumer response to pricing changes across channels and segments.
- Build uplift models (e.g., Causal Forests, XGBoost for treatment effects) to evaluate promotional effectiveness and isolate true incremental sales vs. base volume.
- Develop demand forecasting models using ARIMA, SARIMAX, and Prophet, integrating external factors such as seasonality, promotions, and competitor activity.
- Apply time-series clustering and k-means segmentation to group SKUs, customers, and geographies for targeted pricing and promotion strategies.
- Construct assortment optimization models using conjoint analysis, choice modeling, and market basket analysis to support category planning and shelf optimization.
- Use Monte Carlo simulations and what-if scenario modeling to assess revenue impact under varying pricing, promo, and mix conditions.
- Conduct hypothesis testing (t-tests, ANOVA, chi-square) to evaluate statistical significance of pricing and promotional changes.
- Create LTV (lifetime value) and customer churn models to prioritize trade investment decisions and drive customer retention strategies.
- Integrate Nielsen, IRI, and internal POS data to build unified datasets for modeling and advanced analytics in SQL, Python (pandas, statsmodels, scikit-learn), and Azure Databricks environments.
- Automate reporting processes and real-time dashboards for price pack architecture (PPA), promotion performance tracking, and margin simulation using advanced Excel and Python.
- Lead post-event analytics using pre/post experimental designs, including difference-in-differences (DiD) methods, to evaluate business interventions.
- Collaborate with Revenue Management, Finance, and Sales leaders to convert insights into pricing corridors, discount policies, and promotional guardrails.
- Translate complex statistical outputs into clear, executive-ready insights with actionable recommendations for business impact.
- Continuously refine model performance through feature engineering, model validation, and hyperparameter tuning to ensure accuracy and scalability.
- Provide mentorship to junior analysts, enhancing their skills in modeling, statistics, and commercial storytelling.
- Maintain documentation of model assumptions, business rules, and statistical parameters to ensure transparency and reproducibility.

Other Competencies Required
- Presentation Skills: Effectively presenting findings and insights to stakeholders and senior leadership to drive informed decision-making.
- Collaboration: Working closely with cross-functional teams, including marketing, sales, and product development, to implement insights-driven strategies.
- Continuous Improvement: Actively seeking opportunities to enhance reporting processes and insights generation to maintain relevance and impact in a dynamic market environment.
- Data Scope Management: Managing the scope of data analysis, ensuring it aligns with business objectives and insights goals.
- Act as a steadfast advisor to leadership, offering expert guidance on harnessing data to drive business outcomes and optimize customer experience initiatives.
- Serve as a catalyst for change by advocating for data-driven decision-making and cultivating a culture of continuous improvement rooted in insights gleaned from analysis.
- Continuously evaluate and refine reporting processes to ensure the delivery of timely, relevant, and impactful insights to leadership stakeholders, while fostering an environment of ownership, collaboration, and mentorship within the team.

Technical Skills - Must Have
- Data Manipulation & Analysis: Advanced proficiency in SQL, Python (Pandas, NumPy), and Excel for structured data processing.
- Data Visualization: Expertise in Power BI and Tableau for building interactive dashboards and performance tracking tools.
- Modeling & Analytics: Hands-on experience with regression analysis, time series forecasting, and ML models using scikit-learn or XGBoost.
- Data Engineering Fundamentals: Knowledge of data pipelines, ETL processes, and integration of internal/external datasets for analytical readiness.
- Proficient in Power BI, advanced MS Excel (pivots, calculated fields, conditional formatting, charts, dropdown lists, etc.), MS PowerPoint, SQL, and Python.

Business Environment
- Work closely with Zone Revenue Management teams.
- Work in a fast-paced environment.
- Provide proactive communication to stakeholders.
- This is an offshore role and requires comfort with working in a virtual environment (GCC is referred to as the offshore location).
- The role requires working collaboratively with Zone/country business heads and GCC commercial teams.
- Summarize insights and recommendations to be presented back to the business.
- Continuously improve, automate, and optimize the process.

Geographical Scope: Global

Qualifications, Experience, Skills
Level of Educational Attainment Required: Bachelor's or postgraduate degree in Business & Marketing, Engineering, or another equivalent field, or equivalent work experience.
Previous Work Experience: 5-8 years of experience in the Retail/CPG domain. Extensive experience solving business problems using quantitative approaches. Comfort with extracting, manipulating, and analyzing complex, high-volume, high-dimensionality data from varying sources.

And above all of this, an undying love for beer! We dream big to create a future with more cheer.
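The log-log price elasticity modeling named in this listing can be sketched with statsmodels: regressing log units on log price makes the price coefficient a direct elasticity estimate. The data below is synthetic and the column names are hypothetical; this is an illustration of the technique, not the company's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic weekly sales data (hypothetical): demand falls as price rises,
# generated with a true price elasticity of about -1.5.
rng = np.random.default_rng(42)
n = 200
price = rng.uniform(50, 150, n)
units = np.exp(8.0 - 1.5 * np.log(price) + rng.normal(0, 0.1, n))
df = pd.DataFrame({"price": price, "units": units})

# Log-log OLS: the coefficient on log(price) is the price elasticity.
model = smf.ols("np.log(units) ~ np.log(price)", data=df).fit()
elasticity = model.params["np.log(price)"]
print(f"Estimated price elasticity: {elasticity:.2f}")  # close to -1.5
```

An elasticity below -1 (as here) indicates demand is price-elastic: a 1% price increase reduces units sold by more than 1%, which is exactly the kind of input a pricing-corridor decision needs.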

Posted 3 hours ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Roles and Responsibilities:
- Analyze category performance across sales channels (D2C, marketplaces, offline).
- Track KPIs like revenue, ASP, margin, sell-through, stock cover, and inventory turns.
- Conduct pricing, discount, and profitability analysis at SKU and category levels.
- Identify top-performing or underperforming products and uncover performance drivers.
- Build dashboards and automated reports for category health and inventory planning.
- Collaborate with marketing, SCM, and category teams to inform business decisions.
- Perform trend, seasonality, and cohort analysis to improve demand forecasting.
- Use customer behavior data (views, clicks, conversions) to support assortment planning.
- Automate reporting workflows and optimize SQL/Python pipelines.
- Support new product launches with benchmarks and success prediction models.

Skills & Qualifications:
- 0-2 years of experience in a data analytics role, preferably in E-commerce or Retail.
- Proficiency in MySQL: writing complex queries, joins, window functions.
- Advanced Excel/Google Sheets: pivot tables, dynamic dashboards, conditional formatting.
- Experience in Python: Pandas, automation scripts, statsmodels/scikit-learn.
- Comfort with data visualization: Power BI / Tableau / Looker Studio.
- Understanding of product lifecycle, inventory metrics, pricing levers, and customer insights.
- Strong foundation in statistics: descriptive stats, A/B testing, forecasting models.
- Excellent problem-solving, data storytelling, and cross-functional collaboration skills.

Preferred / Bonus Skills:
- Experience with Shopify, Magento, or other e-commerce platforms.
- Familiarity with Google Analytics 4 (GA4).
- Knowledge of merchandising or visual analytics.
- Exposure to machine learning (e.g., clustering, success prediction).
- Experience with VBA or Google Apps Script for reporting automation.
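The A/B testing foundation this role asks for can be illustrated with statsmodels' two-proportion z-test, for example comparing conversion rates between two product-page variants. The counts below are fabricated for illustration only.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical conversion counts for two variants of a product page.
conversions = [320, 380]   # variant A, variant B
visitors = [10000, 10000]

# Two-sided z-test for a difference in conversion rates.
z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```

In practice one would fix the sample size and significance threshold before the experiment runs, rather than peeking at the p-value as data accumulates.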

Posted 7 hours ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will have expertise in Python, SQL, Tableau, and PySpark, with additional exposure to SAS, banking domain knowledge, and version control tools like Git and Bitbucket. The candidate will be responsible for developing and optimizing data pipelines, ensuring efficient data processing, and supporting business intelligence initiatives.

Key Responsibilities
- Design, build, and maintain data pipelines using Python and PySpark
- Develop and optimize SQL queries for data extraction and transformation
- Create interactive dashboards and visualizations using Tableau
- Implement data models to support analytics and business needs
- Collaborate with cross-functional teams to understand data requirements
- Ensure data integrity, security, and governance across platforms
- Utilize version control tools like Git and Bitbucket for code management
- Leverage SAS and banking domain knowledge to improve data insights

Required Skills
- Strong proficiency in Python and PySpark for data processing
- Advanced SQL skills for data manipulation and querying
- Experience with Tableau for data visualization and reporting
- Familiarity with database systems and data warehousing concepts

Preferred Skills
- Knowledge of SAS and its applications in data analysis
- Experience working in the banking domain
- Understanding of version control systems, specifically Git and Bitbucket
- Knowledge of pandas, numpy, statsmodels, scikit-learn, matplotlib, PySpark, SASPy

Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Excellent problem-solving and analytical skills
- Ability to work collaboratively in a fast-paced environment
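The extract-transform-load pattern behind the pipeline work described above can be sketched in pandas (PySpark's DataFrame API follows the same shape with `groupBy`/`agg`). The feed, account IDs, and column names below are hypothetical.

```python
import pandas as pd
from io import StringIO

# Hypothetical raw transactions feed; in practice this would come from a
# database query or files on object storage.
raw_csv = StringIO("""account_id,txn_date,amount
A1,2024-01-05,120.50
A1,2024-01-20,-40.00
A2,2024-01-11,300.00
A2,2024-02-02,95.25
""")

# Extract: parse dates on read.
txns = pd.read_csv(raw_csv, parse_dates=["txn_date"])

# Transform: monthly net amount per account.
txns["month"] = txns["txn_date"].dt.to_period("M")
monthly = (
    txns.groupby(["account_id", "month"], as_index=False)["amount"]
        .sum()
        .rename(columns={"amount": "net_amount"})
)

# Load: in a real pipeline this aggregate would be written to a warehouse table.
print(monthly)
```

Keeping each stage a pure DataFrame-to-DataFrame step makes the pipeline easy to test and to port between pandas and PySpark.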

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Before you apply to a job, select your language preference from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, drive, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary
UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role will also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. The position will work with multiple stakeholders across different levels of the organization to understand the business problem, then develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases into technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data from different data sources, utilizing platforms and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling.
- Actively drive modelling of business problems into ML/AI models; work closely with stakeholders for model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
- Expertise in Python and SQL. Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to use a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.
Bonus Qualifications
- NLP, Gen AI, LLM knowledge/experience.
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment; understanding of Lean Agile principles.

Contract Type: Permanent

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
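The clustering techniques listed in the qualifications can be illustrated with scikit-learn's KMeans, for example grouping points that might represent delivery-stop coordinates. The data is synthetic and the scenario is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D points: three well-separated groups of "delivery stops".
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.3, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.3, size=(50, 2)),
])

# Fit k-means with k=3 and 10 random restarts; each point gets a cluster label.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print("Cluster centers:\n", km.cluster_centers_.round(1))
```

With real operational data, k would typically be chosen via the elbow method or silhouette scores rather than fixed in advance.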

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Summary
UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role will also support debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. The position will work with multiple stakeholders across different levels of the organization to understand the business problem, then develop and help implement robust and scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases into technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data from different data sources, utilizing platforms and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data, engineering relevant features to be used for modelling.
- Actively drive modelling of business problems into ML/AI models; work closely with stakeholders for model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve any issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
- Expertise in Python and SQL. Experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive stats, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to use a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- NLP, Gen AI, LLM knowledge/experience.
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment; understanding of Lean Agile principles.

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 3 days ago

Apply


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India’s epicenter of innovation, has been selected as home to Takeda’s recently launched Innovation Capability Center (ICC). We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.

Objectives:
- Collaborate with different business users, mainly Supply Chain/Manufacturing, to understand the current state and identify opportunities to transform the business into a data-driven organization.
- Translate processes and requirements into analytics solutions and metrics, with an effective data strategy and with data quality and accessibility for decision making.
- Operationalize decision support solutions and drive user adoption, gathering feedback and Voice of Customer metrics to improve analytics services.
- Understand the analytics drivers and the data to be modeled, apply the appropriate quantitative techniques to provide the business with actionable insights, and ensure analytics models and data are accessible to end users to evaluate “what-if” scenarios and support decision making.
- Evaluate the data, analytical models, and experiments periodically to validate hypotheses, ensuring they continue to provide business value as requirements and objectives evolve.

Accountabilities:
- Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights.
- Works with internal and external partners to develop the analytics vision and programs to advance BI solutions and practices.
- Understands data and sources of data; strategizes with the IT development team and develops a process to collect, ingest, and deliver data along with proper data models for analytical needs.
- Interacts with business users to define pain points, problem statements, scope, and the analytics business case.
- Develops solutions with recommended data models and business intelligence technologies, including data warehouses, data marts, OLAP modeling, dashboards/reporting, and data queries.
- Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications.
- Collaborates with the Enterprise Data and Analytics Team to design data model and visualization solutions that synthesize complex data for data mining and discovery.
- Assists in defining requirements and facilitates workshops and prototyping sessions.
- Develops and applies technologies such as machine learning and deep learning algorithms to enable advanced analytics product functionality.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
- Bachelor’s degree from an accredited institution in Data Science, Statistics, Computer Science, or a related field.
- 3+ years of experience with statistical modeling (clustering, segmentation, multivariate analysis, regression, etc.) and analytics tools such as R, Python, Databricks, etc. required.
- Experience in developing and applying predictive and prescriptive modeling, deep learning, or other machine learning techniques a plus.
- Hands-on development of AI solutions that comply with industry standards and government regulations.
- Strong numerical and analytical skills, as well as working knowledge of Python analytics packages (Pandas, scikit-learn, statsmodels).
- Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources.
- Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline.
- Experience with BI and visualization tools (e.g. Qlik, Power BI), ETL, NoSQL, and proven design skills a plus.
- Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.
- Experience working with agile teams.

WHAT TAKEDA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
- Competitive salary + performance annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group term life insurance and group accident insurance programs
- Health & wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 3 days of leave every year for voluntary service, in addition to humanitarian leave
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- Reimbursements for home internet & mobile phone
- Employee referral program
- Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 calendar days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


It's about being what's next. What's in it for you? A Data Scientist for AI Products (Global) will be responsible for working in the Artificial Intelligence team, Linde's global corporate AI division, engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing AI products and building new ones for a vast number of use cases across Linde’s business and value chain. You'll collaborate across different business and corporate functions in an international team of Project Managers, Data Scientists, and Data and Software Engineers within Linde's global AI team.

At Linde, the sky is not the limit. If you’re looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Making an impact. What will you do?
- You will work directly with a variety of different data sources, types, and structures to derive actionable insights.
- Developing, customizing, and managing AI software products based on machine and deep learning backends will be among your tasks.
- Your role includes strong support for replicating existing products and pipelines to other systems and geographies.
- In addition, you will support architectural design and defining data requirements for new developments.
- It will be your responsibility to interact with business functions to identify opportunities with potential business impact, and to support development and deployment of models into production.

Winning in your role. Do you have what it takes?
• You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field
• You have a strong understanding of, and practical experience with, Multivariate Statistics, Machine Learning and Probability concepts
• You have experience articulating business questions and using quantitative techniques to arrive at a solution using available data
• You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets
• Preferably, you have work experience in an engineering or technology role
• You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.)
• You have sound knowledge of data architectures and concepts, and practical experience visualizing large datasets, e.g. with Tableau or Power BI
• A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation
• Very good English language skills are required
• As a plus, you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, and experience in MLflow or similar

Why you will love working for us!

Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which make our customers more successful and help to sustain and protect our planet. On 1 April 2020, Linde India Limited and Praxair India Private Limited formed a joint venture, LSAS Services Private Limited.
This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development championed by both legacy organizations. It also takes ahead the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
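The preprocessing and feature-engineering experience this posting asks for can be illustrated with a minimal sketch; the dataset, column names, and median-imputation choice below are invented for illustration only:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw dataset with a missing value and a categorical column
df = pd.DataFrame({
    "flow_rate": [1.2, None, 3.4, 2.8],
    "plant": ["A", "B", "A", "B"],
})

# Basic cleansing: impute missing numeric values with the column median
df["flow_rate"] = df["flow_rate"].fillna(df["flow_rate"].median())

# Feature engineering: one-hot encode the categorical column
df = pd.get_dummies(df, columns=["plant"], prefix="plant")

# Scale the numeric feature so downstream models treat features comparably
df[["flow_rate"]] = StandardScaler().fit_transform(df[["flow_rate"]])
```

Median imputation and one-hot encoding are just one reasonable default; a real pipeline would choose per-column strategies based on the data.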

Posted 5 days ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About Neo Group: Neo is a new-age, focused Wealth and Asset Management platform in India, catering to HNIs, UHNIs and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 Billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 Million (₹3,000 Cr.) Assets Under Management. We have recently partnered with Peak XV Partners via a USD 35 Million growth round. To know more, please visit: www.neo-group.in

Position: Senior Data Scientist
Location: Mumbai
Experience: 4 - 8 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, and building and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results clearly
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before submitting code
• Collaborating with other team members and stakeholders
• Taking ownership of features, with accountability

Requirements:
• 4+ years' experience in developing Data Models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Good experience with Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Built Acquisition Scorecard Models
• Built Behaviour Scorecard Models
• Created Threat Detection Models
• Created risk profiling or classification models
• Built Threat/Fraud Triggers from various sources of data
• Experience with Data Analysis Libraries – NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese Networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment – Flask, TensorFlow Serving, Lambda functions; Docker is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up-to-date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That's the kind of work you can expect to do at Neo.

Posted 6 days ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


About Neo Group: Neo is a new-age, focused Wealth and Asset Management platform in India, catering to HNIs, UHNIs and multi-family offices. Neo stands on its three pillars of unbiased advisory, transparency and cost-efficiency to offer comprehensive, trustworthy solutions. Founded by Nitin Jain (ex-CEO of Edelweiss Wealth), Neo has amassed over USD 3 Billion (₹25,000 Cr.) of Assets Under Advice within a short span of 2 years since inception, including USD 360 Million (₹3,000 Cr.) Assets Under Management. We have recently partnered with Peak XV Partners via a USD 35 Million growth round. To know more, please visit: www.neo-group.in

Position: Data Scientist
Location: Mumbai
Experience: 2-5 years

Job Description: You are a data pro with deep statistical knowledge and analytical aptitude. You know how to make sense of massive amounts of data and gather deep insights. You will use statistics, data mining, machine learning, and deep learning techniques to deliver data-driven insights for clients. You will dig deep to understand their challenges and create innovative yet practical solutions.

Responsibilities:
• Meeting with the business team to discuss user interface ideas and applications
• Selecting features, and building and optimizing classifiers using machine learning techniques
• Data mining using state-of-the-art methods
• Doing ad-hoc analysis and presenting results clearly
• Optimizing applications for maximum speed and scalability
• Ensuring that all user input is validated before submitting code
• Collaborating with other team members and stakeholders
• Taking ownership of features, with accountability

Requirements:
• 2+ years' experience in developing Data Models
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
• Excellent understanding of NLP and language processing
• Proficient understanding of Python or PySpark
• Basic understanding of Python and databases such as MongoDB or MySQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Built Acquisition Scorecard Models
• Built Behaviour Scorecard Models
• Created Threat Detection Models
• Created risk profiling or classification models
• Built Threat/Fraud Triggers from various sources of data
• Experience with Data Analysis Libraries – NumPy, Pandas, Statsmodels, Dask
• Good understanding of Word2vec, RNNs, Transformers, BERT, ResNet, MobileNet, U-Net, Mask R-CNN, Siamese Networks, Grad-CAM, image augmentation techniques, GANs, TensorBoard
• Ability to provide accurate estimates for tasks and detailed breakdowns for planning and managing sprints
• Deployment – Flask, TensorFlow Serving, Lambda functions; Docker is a plus
• Previous experience leading a DS team is a plus

Personal Qualities:
• An ability to perform well in a fast-paced environment
• Excellent analytical and multitasking skills
• Stays up-to-date on emerging technologies
• Data-oriented personality

Why join us? We will provide you with the opportunity to challenge yourself and learn new skills as you become an integral part of our growth story. We are a group of ambitious people who believe in building a business environment around new-age concepts, frameworks, and technologies built on a strong foundation of industry expertise. We promise you the prospect of being surrounded by smart, ambitious, motivated people, day in and day out. That's the kind of work you can expect to do at Neo.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, India's epicenter of innovation, has been selected as home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda's ICC we Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity: As a Data Scientist, you will have the opportunity to apply your analytical skills and expertise to extract meaningful insights from vast amounts of data. We are currently seeking a talented and experienced individual to join our team and contribute to our data-driven decision-making process.

Objectives:
• Collaborate with different business users, mainly in Supply Chain/Manufacturing, to understand the current state and identify opportunities to transform the business into a data-driven organization.
• Translate processes and requirements into analytics solutions and metrics, with an effective data strategy, data quality, and data accessibility for decision making.
• Operationalize decision-support solutions and drive user adoption, gathering Voice of Customer feedback and metrics to improve analytics services.
• Understand the analytics drivers and the data to be modeled, apply the appropriate quantitative techniques to provide the business with actionable insights, and ensure analytics models and data are accessible to end users for evaluating “what-if” scenarios and decision making.
• Evaluate the data, analytical models, and experiments periodically to validate hypotheses, ensuring they continue to provide business value as requirements and objectives evolve.

Accountabilities:
• Collaborates with business partners in identifying analytical opportunities and developing BI-related goals and projects that will create strategically relevant insights.
• Works with internal and external partners to develop an analytics vision and programs to advance BI solutions and practices.
• Understands data and sources of data; strategizes with the IT development team and develops a process to collect, ingest, and deliver data, along with proper data models for analytical needs.
• Interacts with business users to define pain points, problem statements, scope, and the analytics business case.
• Develops solutions with recommended data models and business intelligence technologies, including data warehouses, data marts, OLAP modeling, dashboards/reporting, and data queries.
• Works with DevOps and database teams to ensure proper design of system databases and appropriate integration with other enterprise applications.
• Collaborates with the Enterprise Data and Analytics Team to design data model and visualization solutions that synthesize complex data for data mining and discovery.
• Assists in defining requirements and facilitates workshops and prototyping sessions.
• Develops and applies technologies such as machine learning and deep learning algorithms to enable advanced analytics product functionality.

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:
• Bachelor's Degree from an accredited institution in Data Science, Statistics, Computer Science, or a related field.
• 3+ years of experience with statistical modeling (such as clustering, segmentation, multivariate analysis, regression, etc.) and analytics tools such as R, Python, Databricks, etc. required.
• Experience in developing and applying predictive and prescriptive modeling, deep learning, or other machine learning techniques a plus.
• Hands-on development of AI solutions that comply with industry standards and government regulations.
• Great numerical and analytical skills, as well as basic knowledge of Python analytics packages (Pandas, scikit-learn, statsmodels).
• Ability to build and maintain scalable and reliable data pipelines that collect, transform, manipulate, and load data from internal and external sources.
• Ability to use statistical tools to conduct data analysis and identify data quality issues throughout the data pipeline.
• Experience with BI and visualization tools (e.g. Qlik, Power BI), ETL, NoSQL and proven design skills a plus.
• Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.
• Experience working with agile teams.

WHAT TAKEDA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS: It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career.
Amongst our benefits are:
• Competitive Salary + Performance Annual Bonus
• Flexible work environment, including hybrid working
• Comprehensive Healthcare Insurance Plans for self, spouse, and children
• Group Term Life Insurance and Group Accident Insurance programs
• Health & Wellness programs, including annual health screening and weekly health sessions for employees
• Employee Assistance Program
• 3 days of leave every year for Voluntary Service, in addition to Humanitarian Leaves
• Broad variety of learning platforms
• Diversity, Equity, and Inclusion Programs
• Reimbursements – Home Internet & Mobile Phone
• Employee Referral Program
• Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)

ABOUT ICC IN TAKEDA: Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
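As a hedged illustration of the clustering and segmentation experience this role asks for, here is a minimal scikit-learn sketch; the site-level features and their values are invented for the example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical site-level features: [monthly throughput, defect rate]
X = np.array([[100, 0.10], [105, 0.12], [300, 0.50], [310, 0.55]])

# Segment sites into two groups, e.g. for targeted process improvement
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

On real manufacturing data, features would first be scaled and the number of clusters chosen by inspection (e.g. silhouette scores) rather than fixed in advance.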

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Praxair India Private Limited | Business Area: Digitalisation

Data Scientist for AI Products (Global)
Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348

It's about Being What's next. What's in it for you?

A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing AI products and building new ones for a vast number of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in Linde's Global AI team.

At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Team Making an impact. What will you do?
• You will work directly with a variety of data sources, types and structures to derive actionable insights
• You will develop, customize and manage AI software products based on Machine and Deep Learning backends
• Your role includes strong support on replicating existing products and pipelines to other systems and geographies
• You will support architectural design and the definition of data requirements for new developments
• You will interact with business functions to identify opportunities with potential business impact, and support the development and deployment of models into production

Winning in your role. Do you have what it takes?
• You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field
• You have a strong understanding of, and practical experience with, Multivariate Statistics, Machine Learning and Probability concepts
• You have experience articulating business questions and using quantitative techniques to arrive at a solution using available data
• You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets
• Preferably, you have work experience in an engineering or technology role
• You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.)
• You have sound knowledge of data architectures and concepts, and practical experience visualizing large datasets, e.g. with Tableau or Power BI
• A results-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation
• Very good English language skills are required
• As a plus, you have hands-on experience with DevOps and MS Azure, experience in Azure ML, Kedro or Airflow, and experience in MLflow or similar

Why you will love working for us!

Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which make our customers more successful and help to sustain and protect our planet. On 1 April 2020, Linde India Limited and Praxair India Private Limited formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development championed by both legacy organizations. It also takes ahead the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential, while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.

Posted 1 week ago

Apply

5.0 - 7.0 years

3 - 6 Lacs

Pune

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – AI and DATA – Statistical Modeler – Senior

As part of our EY GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail and Auto, Supply Chain, and Finance.

Technical Skills:
• Statistical programming languages: Python, R
• Libraries & frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret
• Data manipulation tools: SQL, Excel
• Data visualization tools: Matplotlib, Seaborn, ggplot2
• Machine learning techniques: supervised and unsupervised learning, model evaluation (cross-validation, ROC curves)
• 5-7 years of experience in building statistical forecast models for the pharma industry
• Deep understanding of patient flows and treatment journeys across both Oncology and non-Oncology therapeutic areas (TAs)

What we look for: A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.

What working at EY offers: At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are.
You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction and advisory services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Bengaluru

On-site

Your Job

As a Data Analyst in Molex's Copper Solutions Business Unit software solution group, you will extract actionable insights from large and complex manufacturing datasets, identify trends, optimize production processes, improve operational efficiency, minimize downtime, and enhance overall product quality. You will collaborate closely with cross-functional teams to ensure the effective use of data in driving continuous improvement and achieving business objectives within the manufacturing environment.

Our Team

Molex's Copper Solutions Business Unit (CSBU) is a global team that works together to deliver exceptional products to worldwide telecommunication and data center customers. SSG under CSBU is one of the most technically advanced software solution groups within Molex. Our group leverages software expertise to enhance the concept, design, manufacturing, and support of high-speed electrical interconnects.

What You Will Do
1. Collect, clean, and transform data from various sources to support analysis and decision-making processes.
2. Conduct thorough data analysis using Python to uncover trends, patterns, and insights.
3. Create and maintain reports based on business needs.
4. Prepare comprehensive reports that detail analytical processes and outcomes.
5. Develop and maintain visualizations/dashboards.
6. Collaborate with cross-functional teams to understand data needs and deliver actionable insights.
7. Perform ad hoc analysis to support business decisions.
8. Write efficient and optimized SQL queries to extract, manipulate, and analyze data from various databases.
9. Identify gaps and inefficiencies in current reporting processes and implement improvements and new solutions.
10. Ensure data quality and integrity across all reports and tools.

Who You Are (Basic Qualifications)
B.E./B.Tech Degree in Computer Science Engineering, Information Science, Data Science or a related discipline.
3-5 years of progressive data analysis experience with Python (pandas, numpy, matplotlib, OpenPyXL, SciPy, Statsmodels, Seaborn).

What Will Put You Ahead
• Experience with Power BI, Tableau, or similar tools for creating interactive dashboards and reports tailored for manufacturing operations.
• Experience with predictive analytics, e.g. machine learning models (using Scikit-learn) to predict failures, optimize production, or forecast demand.
• Experience with big data tools like Hadoop, Apache Kafka, or cloud platforms (e.g., AWS, Azure) for managing and analyzing large-scale data.
• Knowledge of A/B testing and forecasting.
• Familiarity with typical manufacturing data (e.g., machine performance metrics, production line data, quality control metrics).

At Koch companies, we are entrepreneurs. This means we openly challenge the status quo, find new ways to create value and get rewarded for our individual contributions. Any compensation range provided for a role is an estimate determined by available market data. The actual amount may be higher or lower than the range provided considering each candidate's knowledge, skills, abilities, and geographic location. If you have questions, please speak to your recruiter about the flexibility and detail of our compensation philosophy.

Who We Are

{Insert company language from Company Boilerplate Language Guide}

At Koch, employees are empowered to do what they do best to make life better. Learn how our business philosophy helps employees unleash their potential while creating value for themselves and the company. Additionally, everyone has individual work and personal needs. We seek to enable the best work environment that helps you and the business work together to produce superior results.
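The Python analysis this role describes often means aggregating raw production records into quality metrics with pandas. A minimal sketch follows; the line names, unit counts, and defect figures are invented for the example:

```python
import pandas as pd

# Hypothetical production-line records: line id, units produced, defects found
df = pd.DataFrame({
    "line": ["L1", "L1", "L2", "L2"],
    "units": [500, 520, 480, 300],
    "defects": [5, 4, 12, 9],
})

# Defect rate per line: a typical quality-control metric for a dashboard
summary = df.groupby("line").sum()
summary["defect_rate"] = summary["defects"] / summary["units"]
```

The resulting per-line table feeds naturally into a visualization layer such as matplotlib or Power BI.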

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Your Job

As a Data Analyst in Molex's Copper Solutions Business Unit software solution group, you will extract actionable insights from large and complex manufacturing datasets, identify trends, optimize production processes, improve operational efficiency, minimize downtime, and enhance overall product quality. You will collaborate closely with cross-functional teams to ensure the effective use of data in driving continuous improvement and achieving business objectives within the manufacturing environment.

Our Team

Molex's Copper Solutions Business Unit (CSBU) is a global team that works together to deliver exceptional products to worldwide telecommunication and data center customers. SSG under CSBU is one of the most technically advanced software solution groups within Molex. Our group leverages software expertise to enhance the concept, design, manufacturing, and support of high-speed electrical interconnects.

What You Will Do
• Collect, clean, and transform data from various sources to support analysis and decision-making processes.
• Conduct thorough data analysis using Python to uncover trends, patterns, and insights.
• Create and maintain reports based on business needs.
• Prepare comprehensive reports that detail analytical processes and outcomes.
• Develop and maintain visualizations/dashboards.
• Collaborate with cross-functional teams to understand data needs and deliver actionable insights.
• Perform ad hoc analysis to support business decisions.
• Write efficient and optimized SQL queries to extract, manipulate, and analyze data from various databases.
• Identify gaps and inefficiencies in current reporting processes and implement improvements and new solutions.
• Ensure data quality and integrity across all reports and tools.

Who You Are (Basic Qualifications)
B.E./B.Tech Degree in Computer Science Engineering, Information Science, Data Science or a related discipline.
3-5 years of progressive data analysis experience with Python (pandas, numpy, matplotlib, OpenPyXL, SciPy, Statsmodels, Seaborn). What Will Put You Ahead Experience with Power BI, Tableau, or similar tools for creating interactive dashboards and reports tailored for manufacturing operations. Experience with predictive analytics, e.g. machine learning models (e.g., using Scikit-learn) to predict failures, optimize production, or forecast demand. Experience with big data tools like Hadoop, Apache Kafka, or cloud platforms (e.g., AWS, Azure) for managing and analyzing large-scale data. Knowledge of A/B testing & forecasting. Familiarity with typical manufacturing data (e.g., machine performance metrics, production line data, quality control metrics). At Koch companies, we are entrepreneurs. This means we openly challenge the status quo, find new ways to create value and get rewarded for our individual contributions. Any compensation range provided for a role is an estimate determined by available market data. The actual amount may be higher or lower than the range provided considering each candidate's knowledge, skills, abilities, and geographic location. If you have questions, please speak to your recruiter about the flexibility and detail of our compensation philosophy. Who We Are At Koch, employees are empowered to do what they do best to make life better. Learn how our business philosophy helps employees unleash their potential while creating value for themselves and the company. Additionally, everyone has individual work and personal needs. We seek to enable the best work environment that helps you and the business work together to produce superior results.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Company Description As a leading global investment management firm, AB fosters diverse perspectives and embraces innovation to help our clients navigate the uncertainty of capital markets. Through high-quality research and diversified investment services, we serve institutions, individuals and private wealth clients in major markets worldwide. Our ambition is simple: to be our clients’ most valued asset-management partner. With over 4,400 employees across 51 locations in 25 countries, our people are our advantage. We foster a culture of intellectual curiosity and collaboration to create an environment where everyone can thrive and do their best work. Whether you're producing thought-provoking research, identifying compelling investment opportunities, infusing new technologies into our business or providing thoughtful advice to clients, we’re looking for unique voices to help lead us forward. If you’re ready to challenge your limits and build your future, join us. Describe The Role Day-to-day responsibilities will include: Conduct asset allocation and manager evaluation research and create bespoke client portfolios; Undertake bespoke requests for data analysis; Build dashboards for data visualization (Python Dash); Handle data collation, cleansing and analysis (SQL, Python); Create new databases using data from different sources, and set up infrastructure for their maintenance; Clean and manipulate data, build models and produce automated reports using Python; Use statistical modelling and Machine Learning to address quantitative problems (Python); Conduct and deliver top-notch research projects with quantitative applications to fundamental strategies. Preferred Skill Sets 2+ years of experience with RDBMS database design, preferably on MS SQL Server 2+ years of Python development experience. 
Advanced programming skills with Python libraries (pandas, numpy, statsmodels, dash, pypfopt, cvxpy, keras, scikit-learn) – must-haves: pandas/numpy/statsmodels Candidate should be capable of manipulating large quantities of data High level of attention to detail and accuracy Working experience building quantitative models; experience with factor research, portfolio construction, systematic models Academic qualification in Mathematics/Physics/Statistics/Econometrics/Engineering or related field Understanding of company financial statements, accounting and risk analysis would be an added advantage Strong (English) communication skills with proven ability to interact with global clients Pune, India

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About VOIS VO IS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VO IS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. About VOIS India In 2009, VO IS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VO IS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more. Job Description Big Data Handling: Passion and attitude to learn new data practices and tools (ingestion, transformation, governance, security & privacy), on both on-prem and Cloud (AWS preferable). Influences and contributes to innovative ways of unlocking value through companywide and external data. Diagnostic Models: Experience with diagnostic systems using decision theory and causal models (including tools like probability, DAGs, ADMGs, deterministic SMEs, etc.) to predict the effects of an action to improve insight-led decisions. Able to productize the diagnostic systems built for reuse. 
Predictive & Prescriptive Analytics Models: Expert in AI solutions - ML, DL, NLP, ES, RL etc. Should be able to build robust prescriptive learning systems that are scalable and real-time. Should be able to determine the "Next Best Action" following Prescriptive Analytics. Autonomous Cognitive Systems: Drive Autonomous system utility and continuously improve precision through creating a stable learning environment. Should be able to build Intelligent Autonomous Systems to prescribe proactive actions based on ML predictions and solicit feedback from the support functions with minimal human involvement. Big Data Tech, Environments & Frameworks: Advanced applications of CNNs, RNNs, MLPs, Deep learning. Excellent application of machine learning and deep learning packages like tensorflow, pytorch, scikit-learn, numpy, pandas, statsmodels, theano, XGBoost etc. Demonstrated expertise with deep learning algorithms/frameworks. At least 1 certification in AWS will be preferred. Programming: Python, R, SQL Frameworks: TensorFlow, Keras, Scikit-learn Visualization: Tableau, Power BI Cloud: AWS, Azure Statistical Modeling: Regression, classification, clustering, time series Soft Skills: Communication, stakeholder management, problem-solving VOIS Equal Opportunity Employer Commitment India VO IS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. 
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Data Science Intern (Remote) Company: Coreline Solutions Location: Remote / Pune, India Duration: 3 to 6 Months Stipend: Unpaid (Full-time offer based on performance) Work Mode: Remote About Coreline Solutions We’re a tech and consulting company focused on digital transformation, custom software development, and data-driven solutions. Role Overview We’re looking for a Data Science Intern to work on real-world data projects involving analytics, modeling, and business insights. Great opportunity for students or freshers to gain practical experience in the data science domain. Key Responsibilities Collect, clean, and analyze large datasets using Python, SQL, and Excel. Develop predictive and statistical models using libraries like scikit-learn or statsmodels. Visualize data and present insights using tools like Matplotlib, Seaborn, or Power BI. Support business teams with data-driven recommendations. Collaborate with data analysts, ML engineers, and developers. Requirements Pursuing or completed degree in Data Science, Statistics, CS, or related field. Proficient in Python and basic understanding of machine learning. Familiarity with data handling tools (Pandas, NumPy) and SQL. Good analytical and problem-solving skills. Perks Internship Certificate Letter of Recommendation (Top Performers) Mentorship & real-time project experience Potential full-time role To Apply Email your resume to 📧 hr@corelinesolutions.site Subject: “Application for Data Science Intern – [Your Full Name]”

Posted 1 week ago

Apply

4.0 years

6 - 9 Lacs

Hyderābād

On-site

About Citco Citco is a global leader in fund services, corporate governance and related asset services with staff across 80 offices worldwide. With more than $1.7 trillion in assets under administration, we deliver end-to-end solutions and exceptional service to meet our clients’ needs. For more information about Citco, please visit www.citco.com About the Team & Business Line: Citco Fund Services is a division of the Citco Group of Companies and is the largest independent administrator of Hedge Funds in the world. Our continuous investment in learning and technology solutions means our people are equipped to deliver a seamless client experience. This position reports to the Loan Services Business Line. As a core member of our Loan Services Data and Reporting team, you will be working with some of the industry’s most accomplished professionals to deliver award-winning services for complex fund structures that our clients can depend upon. Job Duties in Brief: Your Role: Develop and execute database queries and conduct data analyses Create scripts to analyze and modify data, import/export scripts and execute stored procedures Model data by writing SQL queries/Python code to support data integration and dashboard requirements Develop data pipelines that provide fast, optimized, and robust end-to-end solutions Leverage and contribute to design/building relational database schemas for analytics. Handle and manipulate data in various structures and repositories (data cube, data mart, data warehouse, data lake) Analyze, implement and contribute to building of APIs to improve data integration pipeline Perform data preparation tasks including data cleaning, normalization, deduplication, type conversion etc. Perform data integration through extracting, transforming and loading (ETL) data from various sources. 
Identify opportunities to improve processes and strategies with technology solutions and identify development needs in order to improve and streamline operations Create tabular reports, matrix reports, parameterized reports, visual reports/dashboards in a reporting application such as Power BI Desktop/Cloud or QLIK Integrating PBI/QLIK reports into other applications using embedded analytics like Power BI service (SaaS), or by API automation is also an advantage Implementation of NLP techniques for text representation, semantic extraction techniques, data structures and modelling Contribute to deployment and maintenance of machine learning solutions in production environments Building and designing cloud applications using Microsoft Azure/AWS cloud technologies. About You: Background / Qualifications Bachelor’s Degree in technology/related field or equivalent work experience 4+ Years of SQL and/or Python experience is a must Strong knowledge of data concepts and tools and experience in RDBMS such as MS SQL Server, Oracle etc. Well-versed with concepts and techniques of Business Intelligence and Data Warehousing. Strong database design and SQL skills: objects development, performance tuning and data analysis In-depth understanding of database management systems, OLAP & ETL frameworks Familiarity or hands-on experience working with REST or SOAP APIs Well versed with concepts for API Management and Integration with various data sources in cloud platforms, to help with connecting to traditional SQL and new age data sources, such as Snowflake Familiarity with Machine Learning concepts like feature selection/deep learning/AI and ML/DL frameworks (like Tensorflow or PyTorch) and libraries (like scikit-learn, StatsModels) is an advantage Familiarity with BI technologies (e.g. 
Microsoft Power BI, Oracle BI) is an advantage Hands-on experience in at least one ETL tool (SSIS, Informatica, Talend, Glue, Azure Data Factory) and associated data integration principles is an advantage Minimum 1+ year experience with Cloud platform technologies (AWS/Azure), including Azure Machine Learning, is desirable. The following AWS experience is a plus: Implementing identity and access management (IAM) policies Managing user accounts with IAM Knowledge of writing infrastructure as code (IaC) using CloudFormation or Terraform. Implementing cloud storage using Amazon Simple Storage Service (S3) Experience with serverless approaches using AWS Lambda, e.g. AWS SAM Configuring Amazon Elastic Compute Cloud (EC2) Instances Previous Work Experience: Experience querying databases and strong programming skills: Python, SQL, PySpark etc. Prior experience supporting ETL production environments & web technologies such as XML is an advantage Previous working experience on Azure Data Services including ADF, ADLS, Blob, Databricks, Hive, Python, Spark and/or features of Azure ML Studio, ML Services and MLOps is an advantage Experience with dashboard and reporting applications like Qlik, Tableau, Power BI Other: Well-rounded individual possessing a high degree of initiative Proactive person willing to accept responsibility with very little hand-holding A strong analytical and logical mindset Demonstrated proficiency in interpersonal and communication skills including oral and written English. 
Ability to work in fast-paced, complex Business & IT environments Knowledge of Loan Servicing and/or Loan Administration is an advantage Understanding of Agile/Scrum methodology as it relates to the software development lifecycle What We Offer: A rewarding and challenging environment that spans multiple geographies and multiple business lines Great working environment, competitive salary and benefits, and opportunities for educational support Be part of an industry-leading global organisation, renowned for excellence Opportunities for personal and professional career development Our Benefits Your well-being is of paramount importance to us, and central to our success. We provide a range of benefits, training and education support, and flexible working arrangements to help you achieve success in your career while balancing personal needs. Ask us about specific benefits in your location. We embrace diversity, prioritizing the hiring of people from diverse backgrounds. Our inclusive culture is a source of pride and strength, fostering innovation and mutual respect. Citco welcomes and encourages applications from people with disabilities. Accommodations are available upon request for candidates taking part in all aspects of the selection process.

Posted 1 week ago

Apply

1.0 - 3.0 years

35 Lacs

Mumbai

Work from Office

Naukri logo

Job Insights: 1. Develop and maintain AI models on time series & financial data for predictive modelling, including data collection, analysis, feature engineering, model development, evaluation, backtesting and monitoring. 2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements. 3. Work with expert colleagues, Quant and business representatives to examine the results and keep models grounded in reality. 4. Document each step of the development and inform decision makers by presenting them options and results. 5. Ensure the integrity and security of data. 6. Provide support for production models delivered by the Mumbai team, and potentially for other models, in any of the Asian/EU/US time zones. Qualifications: 1. Bachelor's or Master's degree in a numeric subject with understanding of economics and markets (e.g., Economics with a speciality in Econometrics, Finance, Computer Science, Applied Maths, Engineering, Physics). 2. Knowledge of key concepts in Statistics and Mathematics such as statistical methods for Machine Learning, Probability Theory and Linear Algebra. 3. Knowledge of Monte Carlo Simulations, Bayesian modelling & Causal Inference. 4. Experience with Machine Learning & Deep Learning concepts including data representations, neural network architectures, custom loss functions. 5. Proven track record of building AI models on time-series & financial data. 6. Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, PyMC, statsmodels). 7. Ability to write clear and concise code in Python. 8. Intellectually curious and willing to learn challenging concepts daily.

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Job Title: Associate Data Scientist Location: Mumbai Job Type: Full-time Experience: 0-6 months About The Role We are seeking a highly motivated Associate Data Scientist with a strong passion for energy, technology, and data-driven decision-making. In this role, you will be responsible for developing and refining energy load forecasting models, analyzing customer demand patterns, and improving forecasting accuracy using advanced time series analysis and machine learning techniques. Your insights will directly support risk management, operational planning, and strategic decision-making across the company. If you thrive in a fast-paced, dynamic environment and enjoy solving complex data science challenges, we’d love to hear from you! Key Responsibilities Develop and enhance energy load forecasting models using time series forecasting, statistical modeling, and machine learning techniques. Analyze historical and real-time energy consumption data to identify trends and improve forecasting accuracy. Investigate discrepancies between forecasted and actual energy usage, providing actionable insights. Automate data pipelines and forecasting workflows to streamline processes across departments. Monitor day-over-day forecast variations and communicate key insights to stakeholders. Work closely with internal teams and external vendors to refine forecasting methodologies. Perform scenario analysis to assess seasonal patterns, anomalies, and market trends. Continuously optimize forecasting models, leveraging techniques like ARIMA, Prophet, LSTMs, and regression-based models. Qualifications & Skills 0-6 months of experience in data science, preferably in energy load forecasting, demand prediction, or a related field. Strong expertise in time series analysis, forecasting algorithms, and statistical modeling. Proficiency in Python, with experience using libraries such as pandas, NumPy, scikit-learn, statsmodels, and TensorFlow/PyTorch. 
Experience working with SQL and handling large datasets. Hands-on experience with forecasting models like ARIMA, SARIMA, Prophet, LSTMs, XGBoost, and random forests. Familiarity with feature engineering, anomaly detection, and seasonality analysis. Strong analytical and problem-solving skills with a data-driven mindset. Excellent communication skills, with the ability to translate technical findings into business insights. Ability to work independently and collaboratively in a fast-paced, dynamic environment. Strong attention to detail, time management, and organizational skills. Preferred Qualifications (Nice To Have) Experience working with energy market data, smart meter analytics, or grid forecasting. Knowledge of cloud platforms (AWS) for deploying forecasting models. Experience with big data technologies such as Spark or Hadoop.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Bengaluru Area

On-site

Linkedin logo

Job Title: Senior Data Scientist (SDS 2) Experience: 4+ years Location: Bengaluru (Hybrid) Company Overview: Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors. Job Description: We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge Classical Machine Learning, Deep Learning, and Generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment. Key Responsibilities: ML/DL Solution Development & Deployment: Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code. Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions). Design and implement custom models and loss functions to address data nuances and specific labeling challenges. 
Ability to model different marketing scenarios of a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk and churn) and data limitations (sparse or incomplete labels, single-class learning) Large-Scale Data Handling & Processing: Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark). Generative AI & Large Language Models (LLMs): Leverage in-depth understanding of transformer architectures and the principles of Large and Small Language Models. Practical experience in building LLM-ready Data Management layers for large-scale structured and unstructured data. Apply foundational understanding of LLM Agents, multi-agent systems (e.g., Agent-Critique, ReACT, Agent Collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and Human-in-the-Loop systems. Experimentation, Analysis & System Design: Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements. Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real-time. Collaboration, Communication & Mentorship: Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members. Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions. Work closely with cross-functional teams and clients to deliver impactful solutions. Prototyping & Impact Measurement: Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment. Set up measurement pipelines to study the impact of solutions in different market scenarios. 
Must-Have Skills: Core Machine Learning & Deep Learning: In-depth knowledge of Artificial Neural Networks (ANN), 1D, 2D, and 3D Convolutional Neural Networks (ConvNets), LSTMs, and Transformer models. Expertise in modeling techniques such as promo mix modeling (MMM), PU Learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation. Strong proficiency in PU learning, single-class learning, representation learning, alongside traditional machine learning approaches. Advanced understanding and application of model explainability techniques. Data Analysis & Processing: Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis. Ability to perform effective feature engineering by understanding business objectives. ML/DL Frameworks & Tools: Hands-on experience with ML/DL libraries such as Scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models. Natural Language Processing (NLP): Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning. Cloud & MLOps: Experience with the AWS ML stack or equivalent cloud platforms. Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions). Problem Solving & Research: Strong logical and reasoning skills. Good understanding of the Python ecosystem and experience implementing research papers. Collaboration & Prototyping: Ability to thrive in a fast-paced development and rapid prototyping environment. Relevant to Have: Expertise in Claims data and a background in the pharmaceutical industry. Awareness of best software design practices. Understanding of backend frameworks like Flask. Knowledge of Recommender Systems, Representation Learning, PU learning. 
Benefits and Perks: Competitive ESOP grants. Opportunity to work with Fortune 500 companies and world-class teams. Support for publishing papers and attending academic/industry conferences. Access to networking events, conferences, and seminars. Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring. Appendix Technical Skills (Must Haves) Deep understanding of the following: Data Processing: Wrangling: Some understanding of querying databases (MySQL, PostgresDB, etc.), very fluent in the usage of the following libraries: Pandas, Numpy, Statsmodels, etc. Visualization: Exposure to Matplotlib, Plotly, Altair, etc. Machine Learning Exposure: Machine Learning Fundamentals, e.g. PCA, Correlations, Statistical Tests, etc. Time Series Models, e.g. ARIMA, Prophet, etc. Tree-Based Models, e.g. Random Forest, XGBoost, etc. Deep Learning Models, e.g. understanding and experience of ConvNets, ResNets, UNets, etc. GenAI-Based Models: Experience utilizing large-scale language models such as GPT-4 or other open-source alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom finetuning. Code Versioning Systems: GitHub, Git If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – AI and DATA – Statistical Modeler-Senior As part of our EY-GDS AI and Data team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance. Technical Skills: Statistical Programming Languages: Python, R Libraries & Frameworks: Pandas, NumPy, Scikit-learn, StatsModels, Tidyverse, caret Data Manipulation Tools: SQL, Excel Data Visualization Tools: Matplotlib, Seaborn, ggplot2 Machine Learning Techniques: Supervised and unsupervised learning, model evaluation (cross-validation, ROC curves) 5-7 years of experience in building statistical forecast models for the pharma industry Deep understanding of patient flows and treatment journeys across both Onc and Non-Onc TAs. What We Look For A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment What Working At EY Offers At EY, we’re dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. 
You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around Opportunities to develop new skills and progress your career The freedom and flexibility to handle your role in a way that’s right for you About EY As a global leader in assurance, tax, transaction and advisory services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 weeks ago


0 years

0 Lacs

Greater Bengaluru Area

On-site


Job Description: We are looking for a Data Scientist with expertise in Python, Azure Cloud, NLP, forecasting, and large-scale data processing. The role involves enhancing existing ML models, optimising embeddings, LDA models, RAG architectures, and forecasting models, and migrating data pipelines to Azure Databricks for scalability and efficiency.

Key Responsibilities:

Model Development & Optimisation
Train and optimise models for new data providers, ensuring seamless integration.
Enhance models for dynamic input handling.
Improve LDA model performance to handle a higher number of clusters efficiently.
Optimise RAG (Retrieval-Augmented Generation) architecture to enhance recommendation accuracy for large datasets.
Upgrade Retrieval QA architecture for improved chatbot performance on large datasets.

Forecasting & Time Series Modelling
Develop and optimise forecasting models for marketing, demand prediction, and trend analysis.
Implement time series models (e.g., ARIMA, Prophet, LSTMs) to improve business decision-making.
Integrate NLP-based forecasting, leveraging customer sentiment and external data sources (e.g., news, social media).

Data Pipeline & Cloud Migration
Migrate the existing pipeline from Azure Synapse to Azure Databricks and retrain models accordingly. Note: this is required only for the AUB role(s).
Address space and time complexity issues in embedding storage and retrieval on Azure Blob Storage.
Optimise embedding storage and retrieval in Azure Blob Storage for better efficiency.

MLOps & Deployment
Implement MLOps best practices for model deployment on Azure ML, Azure Kubernetes Service (AKS), and Azure Functions.
Automate model training, inference pipelines, and API deployments using Azure services.

Experience:
Experience in Data Science, Machine Learning, Deep Learning and Gen AI.
Design, architect and execute end-to-end data science pipelines, including data extraction, data preprocessing, feature engineering, model building, tuning and deployment.
Experience leading a team and taking responsibility for project delivery.
Experience building end-to-end machine learning pipelines, with expertise in developing CI/CD pipelines using Azure Synapse pipelines, Databricks, Google Vertex AI and AWS.
Experience developing advanced natural language processing (NLP) systems, specialising in building RAG (Retrieval-Augmented Generation) models using Langchain, and deploying RAG models to production.
Expertise in building machine learning pipelines and deploying models such as forecasting models, anomaly detection models, market mix models, classification models, regression models and clustering techniques.
Maintaining GitHub repositories and cloud computing resources for effective and efficient version control, development, testing and production.
Developing proof-of-concept solutions and assisting in rolling these out to our clients.

Required Skills & Qualifications:
Hands-on experience with Azure Databricks, Azure ML, Azure Synapse, Azure Blob Storage, and Azure Kubernetes Service (AKS).
Experience with forecasting models, time series analysis, and predictive analytics.
Proficiency in Python (NumPy, Pandas, TensorFlow, PyTorch, Statsmodels, Scikit-learn, Hugging Face, FAISS).
Experience with model deployment, API optimisation, and serverless architectures.
Hands-on experience with Docker, Kubernetes, and MLflow for tracking and scaling ML models.
Expertise in optimising time complexity, memory efficiency, and scalability of ML models in a cloud environment.
Experience with Langchain (or equivalent), RAG, and multi-agent generation.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago


5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking an exceptional and highly motivated Lead Data Scientist with a PhD in Data Science, Computer Science, Applied Mathematics, Statistics, or a closely related quantitative field to spearhead the design, development, and deployment of an automotive OEM’s next-generation Intelligent Forecast Application. This pivotal role will leverage cutting-edge machine learning, deep learning, and statistical modeling techniques to build a robust, scalable, and accurate forecasting system crucial for strategic decision-making across the automotive value chain, including demand planning, production scheduling, inventory optimization, predictive maintenance, and new product introduction.
The successful candidate will be a recognized expert in advanced forecasting methodologies, possess a strong foundation in data engineering and MLOps principles, and demonstrate a proven ability to translate complex research into tangible, production-ready applications within a dynamic industrial environment. This role demands not only deep technical expertise but also a visionary approach to leveraging data and AI to drive significant business impact for a leading automotive OEM.

Role Description

Strategic Leadership & Application Design:
Lead the end-to-end design and architecture of the Intelligent Forecast Application, defining its capabilities, modularity, and integration points with existing enterprise systems (e.g., ERP, SCM, CRM).
Develop a strategic roadmap for forecasting capabilities, identifying opportunities for innovation and the adoption of emerging AI/ML techniques (e.g., generative AI for scenario planning, reinforcement learning for dynamic optimization).
Translate complex business requirements and automotive industry challenges into well-defined data science problems and technical specifications.

Advanced Model Development & Research:
Design, develop, and validate highly accurate and robust forecasting models using a variety of advanced techniques, including:
Time series analysis: ARIMA, SARIMA, Prophet, exponential smoothing, state-space models.
Machine learning: gradient boosting (XGBoost, LightGBM), random forests, support vector machines.
Deep learning: LSTMs, GRUs, Transformers, and other neural network architectures for complex sequential data.
Probabilistic forecasting: quantile regression and Bayesian methods to capture uncertainty.
Hierarchical & grouped forecasting: managing forecasts across multiple product hierarchies, regions, and dealerships.
Incorporate diverse data sources, including historical sales, market trends, economic indicators, competitor data, internal operational data (e.g., production schedules, supply chain disruptions), external events, and unstructured data.
Conduct extensive exploratory data analysis (EDA) to identify patterns, anomalies, and key features influencing automotive forecasts.
Stay abreast of the latest academic research and industry advancements in forecasting, machine learning, and AI, actively evaluating and advocating for their practical application within the OEM.

Application Development & Deployment (MLOps):
Architect and implement scalable data pipelines for ingestion, cleaning, transformation, and feature engineering of large, complex automotive datasets.
Develop robust and efficient code for model training, inference, and deployment within a production environment.
Implement MLOps best practices for model versioning, monitoring, retraining, and performance management to ensure the continuous accuracy and reliability of the forecasting application.
Collaborate closely with Data Engineering, Software Development, and IT Operations teams to ensure seamless integration, deployment, and maintenance of the application.

Performance Evaluation & Optimization:
Define and implement rigorous evaluation metrics for forecasting accuracy (e.g., MAE, RMSE, MAPE, sMAPE, wMAPE, pinball loss) and business impact.
Perform A/B testing and comparative analyses of different models and approaches to continuously improve forecasting performance.
Identify and mitigate sources of bias and uncertainty in forecasting models.

Collaboration & Mentorship:
Work cross-functionally with various business units (e.g., Sales, Marketing, Supply Chain, Manufacturing, Finance, Product Development) to understand their forecasting needs and integrate solutions.
Communicate complex technical concepts and model insights clearly and concisely to both technical and non-technical stakeholders.
Provide technical leadership and mentorship to junior data scientists and engineers, fostering a culture of innovation and continuous learning.
Potentially contribute to intellectual property (patents) and present findings at internal and external conferences.

Qualifications

Education: PhD in Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, or a closely related quantitative field.

Experience:
5+ years of hands-on experience in a Data Scientist or Machine Learning Engineer role, with a significant focus on developing and deploying advanced forecasting solutions in a production environment.
Demonstrated experience designing and developing intelligent applications, not just isolated models.
Experience in the automotive industry or a similar complex manufacturing/supply chain environment is highly desirable.

Technical Skills:
Programming: expert proficiency in Python (NumPy, Pandas, Scikit-learn, Statsmodels) and/or R; strong proficiency in SQL.
Machine learning/deep learning frameworks: extensive experience with TensorFlow, PyTorch, Keras, or similar deep learning libraries.
Forecasting-specific libraries: proficiency with forecasting libraries such as Prophet, Statsmodels, or specialized time series packages.
Data warehousing & big data technologies: experience with distributed computing frameworks (e.g., Apache Spark, Hadoop) and data storage solutions (e.g., Snowflake, Databricks, S3, ADLS).
Cloud platforms: hands-on experience with at least one major cloud provider (Azure, AWS, GCP) for data science and ML deployments.
MLOps: understanding and practical experience with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines).
Data visualization: proficiency with tools like Tableau, Power BI, or similar for creating compelling data stories and dashboards.
Analytical prowess: deep understanding of statistical inference, experimental design, causal inference, and the mathematical foundations of machine learning algorithms.
Problem solving: proven ability to analyze complex, ambiguous problems, break them down into manageable components, and devise innovative solutions.

Preferred Qualifications
Publications in top-tier conferences or journals related to forecasting, time series analysis, or applied machine learning.
Experience with real-time forecasting systems or streaming data analytics.
Familiarity with specific automotive data types (e.g., telematics, vehicle sensor data, dealership data, market sentiment).
Experience with distributed version control systems (e.g., Git).
Knowledge of agile development methodologies.

Soft Skills
Exceptional communication: ability to articulate complex technical concepts and insights to a diverse audience, including senior management and non-technical stakeholders.
Collaboration: strong interpersonal skills and a proven ability to work effectively within cross-functional teams.
Intellectual curiosity & proactiveness: a passion for continuous learning, staying ahead of industry trends, and proactively identifying opportunities for improvement.
Strategic thinking: ability to see the big picture and align technical solutions with overall business objectives.
Mentorship: desire and ability to guide and develop less experienced team members.
Resilience & adaptability: thrive in a fast-paced, evolving environment with complex challenges.

Benefits
Health insurance
Paid leave
Technical training and certifications
Robust learning and development opportunities
Incentive
Toastmasters
Food program
Fitness program
Referral bonus program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture.
We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that’s shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
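The forecast-accuracy metrics this posting lists (MAE, RMSE, MAPE, sMAPE, pinball loss) have short closed-form definitions; a minimal NumPy sketch (illustrative helper names, not any specific library's API):

```python
import numpy as np

def forecast_metrics(actual, pred):
    """MAE, RMSE, MAPE (%) and sMAPE (%) for a point forecast."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    err = pred - actual
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE": np.mean(np.abs(err / actual)) * 100,
        "sMAPE": np.mean(2 * np.abs(err) / (np.abs(actual) + np.abs(pred))) * 100,
    }

def pinball_loss(actual, pred_q, q):
    """Pinball (quantile) loss for a forecast of the q-th quantile."""
    diff = np.asarray(actual, float) - np.asarray(pred_q, float)
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

m = forecast_metrics([100, 120, 80], [110, 115, 90])
print({k: round(v, 2) for k, v in m.items()})
```

MAPE is undefined when an actual value is zero and sMAPE is the usual workaround; pinball loss is the one metric here that scores a quantile forecast rather than a point forecast.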

Posted 3 weeks ago
